Providing practical support for people with autism spectrum disorder – supported living in the community
“The last few chapters managed to almost push me all the way towards giving the book one star. You can’t just claim in a book like this that very expensive and comprehensive support systems which you’re dreaming about are cost-effective without citing a single study, especially not in a context where you’ve just claimed that activities which usually end up costing a lot of money will end up saving money. If you envision a much more comprehensive support system, you can’t not address obvious cost drivers.
Some interesting stuff and important observations are included in the book, but the level of coverage is not high and you should not take my two star (‘ok’) rating to indicate that I am in agreement with the author. The main reason why I ended up finishing it was that it was easy to read, not that it was a good book.”
There are no inline citations, and the examples of things people with ASD might need help with, and ways to help them with these problems, seem to be derived from anecdotes rather than systematic research. The author repeatedly emphasizes that aid should be individualized and focused on the specific needs of the person with ASD. Although this makes a lot of sense, it also makes her recommendations very difficult to evaluate. It’s a bit like the situation in other areas of psychological research, where therapists will often ‘mix methods’ when dealing with specific individuals, making it impossible to figure out which components of the treatment regime are actually helpful and which are not; even if people were to try to figure this out, power issues would make it impossible to estimate the relevant interaction effects even in theory. It should be made clear, though, that the author makes no attempt at this kind of evaluation.
However, I did find some of the observations and specific points raised in the book interesting, and I’ll mention some of these in the coverage below.
“Professional support needs to be developed and executed in partnership with people and families. For support to be successful, all concerned need to be aware of its objectives and agree with the plan and strategies involved.”
I decided to start out the coverage with this quote because the book is full of postulates like these. Specific cases are often used to illustrate such points, but don’t expect any references to actual research on these topics – it’s not that kind of book. This approach makes the book incredibly hard for me to evaluate; some of the ideas are presumably sound, but it’s difficult to tell which ones, because the supporting research is never presented. In theory it’s sometimes easy to see how a given approach might lead to, or solve, specific problems, but you’ll often get the idea that there are tradeoffs at play which the advice does not take into account, meaning that in specific cases an alternative solution might lead to better outcomes once you weigh the problems associated with the proposed approach against the problems associated with the alternative. In some cases, for example, you might ideally prefer that the parents of an adult child living outside the parental home not have too much influence on the support strategies employed, even though they might traditionally have had a significant role to play in support provision, because the family’s approach to problem solving might be counterproductive; a support plan not supported by the parents might thus in some cases be preferable to one which would be. The emphasis on individualized care throughout the book is, it must be said, helpful for thinking about such potential problems, but you’re still left with the impression that a lot of the suggestions in the book are not based on anywhere near a sufficient amount of data or research, and although they’re often ‘common sense suggestions’, it’s quite clear from a lot of different areas of psychological research by now that common sense can sometimes deceive us.
A general problem I have with the book is that the author seems too confident about which support approaches/strategies/etc. might, or might not, work – and perhaps a key reason why she seems overconfident is that she has not provided in the book the research results which one would in my opinion need in order to draw conclusions like the ones she draws, regardless of whether such research actually exists. A related problem is that quite a few of the concluding statements in the book are at least partly normative statements (which I generally dislike encountering in non-fiction), not descriptive statements (which I do like to encounter). She repeatedly makes claims about what people with ASD are like without referring to research on these topics, so you’re left wondering how she knows these things, and whether those claims are actually true, or just true for a small subset of people with ASD whom she’s encountered or read about. Many of the observations seemed familiar to me (I’ve encountered them in other textbooks, or have personal experience with the issues mentioned), so I’d be inclined to grant that many of them are valid, but you’re sometimes wondering how she knows what she claims to know. A big problem is actually the way she organizes the material: she covers various topics in various chapters, but in a way that makes it relatively hard for a reader to know which parts of a given chapter might actually be useful for a specific individual curious about these things. Another way to do things might have been to split the coverage into chapters about support provision for people with low support requirements and chapters about support provision for people with high support requirements. It’s made clear in the book that needs differ across individuals, but you’re often left wondering which passages are most relevant to which groups of people with ASD.
One might argue that ‘people ought to be able to tell this on their own’, but then we get to the problems that people with ASD tend to be bad at asking for support, perhaps not realizing that they need it, and the problem that people without ASD who do not know much about ASD perhaps have a difficult time figuring out which types of help might be useful in a specific setting. This stuff is difficult as it is, but I don’t think the way the coverage is structured in this book is helping at all with solving these sorts of issues.
Oh well, let’s move on…:
“The ultimate aim of support should be to improve skills and develop strategies to enable the person with ASD to feel in control and better able to cope independently.”
“The fact is that extremely able people with ASD frequently struggle with day-to-day life skills. Very intelligent students cannot organize themselves to launder their clothes, and may get up to find they are all dirty or still wet in the machine from several days ago. This is one of those superficially trivial things that can be a major problem to the person it repeatedly happens to. On a practical domestic front, what may be a massive difficulty for a person with ASD, may be an easily solved problem for someone without it. […] People with ASD like to have regular routines. The ability to adhere to routine is an advantage in many situations, and this skill can be used productively. Structure and organization can be brought to running the household. As a plan is constructed, problems can be considered and systems put in place to deal with them. A planning session when the individual collaborates with support to work out a weekly menu and the necessary shopping plan, gives the person more autonomy, than having someone turn up to go shopping or cook with them. Having someone alongside is sometimes necessary, but has the disadvantage of creating dependence. The individual is empowered instead by being facilitated to complete tasks independently. […] The best support methods promote independence. […] The aspects of forward planning can be incredibly challenging for a person with ASD, regardless of their intellectual level. […] As people with ASD have great difficulty seeing consequences or planning ahead, they may find it hard to become motivated if the gratification is not instant. Things have to be broken down and explained in a practical way.”
“Most people instigate minor changes easily. It may be more convenient to vary a normal routine on a particular day, even pleasurable. I might decide that as it is a sunny day I will go out, and do the housework in the evening. As a supporter for someone with ASD it is vital to remember, that he will not have the flexibility of thought that people generally have and so may need routines to be more stringently adhered to. Such a simple adjustment may not be easy, and it may be preferable to stay with the usual unless there is a strong argument for change. The world becomes easier to interpret if as much as possible is held constant. […] Change is easier to manage if we know it is coming. The better prepared someone is for a change, generally the easier it is to cope. For people with ASD, it helps if the preparation can be as concrete as possible.”
“The paradoxical nature of ASD is demonstrated again in attention span. The person will be absolutely absorbed, blocking out the rest of the world, when he is engrossed in something of particular interest; but at other times his attention span can be low. Most people will recognize the experience of being called away to answer a phone call, or speaking to a visitor and completely forgetting that they were in the middle of doing something. This distractibility is a common experience for those with ASD. […] I often think that ASD is the source of the stereotype of the ‘absent-minded professor’.”
A personal remark on these topics is perhaps in order here, and I add it because it is my impression that mass media portrayals of individuals with these sorts of traits are, if anything, generally favourably inclined, in the sense that distractibility, forgetfulness and similar traits are in those contexts traits you smile about and find mildly funny. My impression is that the first word that springs to mind in these contexts is ‘amusing’, or something along those lines, not ‘annoying’. The downsides are usually to some extent neglected. However, I know from Real Life experience that things like forgetfulness and distractibility can be really annoying. Forgetting your key and locking yourself out of your flat (multiple times); forgetting to bring home your laptop from the university and having to go back and get it while worrying about whether it’s been stolen in the meantime (it fortunately wasn’t); getting caught up in an interesting exchange on the internet, causing you to forget that you turned on the stove an hour ago (or was it two hours ago? Time flies when you’re engaged in stuff that interests you…), so that you now have to spend an hour trying to clean the pot and separate the charred chunks of vegetables from the metal; getting a burn while taking something out of the oven because you were thinking about something else and didn’t pay sufficient attention to the task at hand – these things range from annoying to dangerous, as also noted in the book: “Depending on what we were doing, finding that we have left something in the middle of it can be anything from mildly annoying (left the kitchen half cleaned) to very distressing (left the pan on the hob and burnt the house down).” Similar observations might be made in the context of ‘clumsiness’ (not a diagnostic trait, but apparently often observed) and combinations of these traits.
The sorts of things people often find amusing when they happen to, say, cartoon characters are a lot less funny when they happen to you personally, especially if you are having difficulties finding ways to address the issues and other people are impacted by them as well. Problems like these may cause amusement among others, but I know from both personal experience and the experiences of a good friend of mine that they may also cause profound exasperation among the people around you.
“Difficulty with communication is a core problem for those with autism spectrum disorder (ASD). Some people have little or no speech, some have an extensive vocabulary, some make grammatical mistakes, some have a wide use of language – but all people with ASD have problems with communication. These problems are extremely complex, leading to much misunderstanding, confusion and stress. The more sophisticated the person’s language is the greater the problem may be. Ros Blackburn, a highly intelligent British woman with ASD who gives many talks on the subject, highlights that a person’s ability can also be their greatest disability. As a verbal, intellectually able woman, she finds that people do not appreciate the support that she needs in everyday and social situations. The power to have a seemingly normal conversation can cause many troubles for a person with ASD by giving a false impression of their comprehension. […] Care should be taken not to give too much information at one time. People with ASD generally process language slowly and have difficulty handling a lot of verbal input. […] People with ASD work through matters slowly, and speed of discussion is problematic. […] So time needs to be offered to assimilate information before a response is expected. […] For most people with ASD, it is easier to talk if there are fewer people in the group. In a large meeting there is too much to take in, and few silences in which to process what has been said. […] They almost always prefer one to one conversation to group discussion, and small intimate gatherings to parties.”
“We all make blunders in relationships. We misjudge what is acceptable in a situation, mistake another person’s intention or misinterpret someone’s meaning. We then feel upset, isolated and embarrassed. People with ASD are more prone to doing this sort of thing than most – and they do experience the same unpleasant aftermath. […] Coping well is a double-edged sword; the better a person manages, the more likely he is to be judged harshly when he does make a mistake. […] Some people with ASD are able to think their way through social situations. They teach themselves or have been taught to interpret non-verbal signals. They can use cognition to remember that the other person may feel differently to them, and to compute what their perception and emotions may be. This is a slow, cumbersome method compared to the automatic, rapid assimilation that those without ASD make. Even those who compensate well appear slow, stilted, awkward, and are liable to make significant mistakes.”
“Neurotypical people (NTs) are as lacking in empathy towards people with ASD as vice versa.” This is in my opinion a bold claim and I’m not sure it’s true, but I think she does have a point here. I think it’s likely that NTs often judge people with ASD based on the standards of NTs; standards which may well be impossible for the person with ASD to ever meet, regardless of the amount of effort the individual puts into meeting those standards. She however argues later on in the coverage that: “Most people are not unkind, but are unthinking or, because of lack of knowledge about disability, make incorrect assumptions.” This seems plausible.
“The rigidity of AS thinking and the tendency to obsess means that a worry can escalate and dominate a person’s life. […] As a basic rule of thumb, regular, familiar routines are better stress busters than a novel idea. A holiday, for example, is more likely to add to stress than relieve it.” (This sounds very familiar, and I’ll keep this quote in mind…)
“Many people with ASD remain more susceptible to parental influence than the majority of their peers. […] All people with ASD, including the highly intelligent, are susceptible to being led by others and it is very easy for the person offering support, either knowingly or unwittingly, to lead the person down a route, which is not the course he wants to follow.”
“Social inabilities create problems for people with autism spectrum disorder (ASD) in establishing peer relationships and so naturally accessing the support that evolves between members of groups, such as work colleagues, fellow students or regulars in the pub. Asking for assistance appropriately will be challenging for people with ASD. […] adults often only appear on the services ‘radar’ when they reach crisis point. Forty-nine per cent of adults with ASD are still living with their parents. […] Only 6 per cent of adults with ASD are in full-time employment [no sources provided, US]”
“It is not always possible to tell from meeting a person or even from having regular contact with him that he has autism spectrum disorder (ASD). Individuals therefore face the decision as to whether or not to disclose that they have the disorder. […] Generally disclosure is on a sliding scale. Most people tell close family; whilst it would probably be inappropriate to tell a casual stranger. Some will disclose to professionals, but prefer to keep the information from social contacts. […] There are no easy answers as to who and when to tell. Disclosure to professionals in formal situations appears advisable so that all are aware of the condition and any differences are accepted and planned for. Informal social situations are more fluid and difficult to read.”
“NAS statistics show that only six per cent of people with autism spectrum disorder (ASD) (12% of those with Asperger Syndrome (AS)) in the UK are in full-time employment. This compares with 49 per cent of people with general disabilities who are employed. […] Given the talents which many with ASD have, this is a great loss to the workforce. […] Traits common to ASD, such as conscientiousness, attention to detail, perseverance and loyalty, are great assets to an employer. […] People with ASD tend to be loyal, to stick to routines and dislike change. […] The characteristics of the disorder mean that the individual may not make a good impression at interview. Social skills will not be a forté. […] The employer needs to be aware of any ASD traits the person displays, such as lack of eye contact. Questions may be prepared with support so that they elicit the information needed, but are specific, factual and clear. Broad questions, such as, ‘Tell me about yourself ’, will leave the interviewee floundering. […] Interviews are not always the most appropriate way of assessing candidates, especially not those with ASD.”
The author does not address in the book the specific problems and tradeoffs related to the question of whether or not it’s optimal to disclose an autism spectrum disorder to a potential employer, but rather seems to take it for granted that the interviewee should always disclose, preferably beforehand. I’ve given this a lot of thought, and I’m really not convinced this is always the right approach.
i. A lecture on mathematical proofs:
ii. “In the fall of 1944, only seven percent of all bombs dropped by the Eighth Air Force hit within 1,000 feet of their aim point.”
From Wikipedia’s article on strategic bombing during World War II. The article has a lot of stuff. The ‘RAF estimates of destruction of “built up areas” of major German cities’ numbers in the article made my head spin – they didn’t bomb the Germans back to the stone age, but they sure tried. Here’s another observation from the article:
“After the war, the U.S. Strategic Bombing Survey reviewed the available casualty records in Germany, and concluded that official German statistics of casualties from air attack had been too low. The survey estimated that at a minimum 305,000 were killed in German cities due to bombing and estimated a minimum of 780,000 wounded. Roughly 7,500,000 German civilians were also rendered homeless.” (The German population at the time was roughly 70 million).
iii. Also war-related: Eddie Slovik:
“Edward Donald “Eddie” Slovik (February 18, 1920 – January 31, 1945) was a United States Army soldier during World War II and the only American soldier to be court-martialled and executed for desertion since the American Civil War.
Although over 21,000 American soldiers were given varying sentences for desertion during World War II, including 49 death sentences, Slovik’s was the only death sentence that was actually carried out.
During World War II, 1.7 million courts-martial were held, representing one third of all criminal cases tried in the United States during the same period. Most of the cases were minor, as were the sentences. Nevertheless, a clemency board, appointed by the Secretary of War in the summer of 1945, reviewed all general courts-martial where the accused was still in confinement. That Board remitted or reduced the sentence in 85 percent of the 27,000 serious cases reviewed. The death penalty was rarely imposed, and those cases typically were for rapes or murders. […] In France during World War I from 1917 to 1918, the United States Army executed 35 of its own soldiers, but all were convicted of rape and/or unprovoked murder of civilians and not for military offenses. During World War II in all theaters of the war, the United States military executed 102 of its own soldiers for rape and/or unprovoked murder of civilians, but only Slovik was executed for the military offense of desertion. […] of the 2,864 army personnel tried for desertion for the period January 1942 through June 1948, 49 were convicted and sentenced to death, and 48 of those sentences were voided by higher authority.”
What motivated me to read the article was mostly curiosity about how many people were actually executed for deserting during the war, a question I’d never encountered any answers to previously. The US number turned out to be, well, let’s just say it’s lower than I’d expected it would be. American soldiers who chose to desert during the war seem to have had much, much better chances of surviving the war than soldiers who did not. Slovik was not a lucky man. On a related note, given numbers like these I’m really surprised desertion rates were not much higher than they were; presumably community norms (‘desertion = disgrace’, which would probably rub off on other family members as well) played a key role here.
iv. Chess and infinity. I haven’t posted this link before even though the thread is a few months old, and I figured that, given that I just had a conversation on related matters in the comment section of SSC (here’s a link), I might as well repost some of this stuff here. Some key points from the thread (I had to make slight formatting changes to the quotes because wordpress had trouble displaying some of the numbers, but the content is unchanged):
“Shannon has estimated the number of possible legal positions to be about 10^43. The number of legal games is quite a bit higher, estimated by Littlewood and Hardy to be around 10^(10^5) (commonly cited as 10^(10^50), perhaps due to a misprint). This number is so large that it can’t really be compared with anything that is not combinatorial in nature. It is far larger than the number of subatomic particles in the observable universe, let alone stars in the Milky Way galaxy.
As for your bonus question, a typical chess game today lasts about 40 to 60 moves (let’s say 50). Let us say that there are 4 reasonable candidate moves in any given position. I suspect this is probably an underestimate if anything, but let’s roll with it. That gives us about 4^(2×50) ≈ 10^60 games that might reasonably be played by good human players. If there are 6 candidate moves, we get around 10^77, which is in the neighbourhood of the number of particles in the observable universe.”
“To put 10^(10^5) into perspective:
There are 10^80 protons in the Universe. Now imagine inside each proton, we had a whole entire Universe. Now imagine again that inside each proton inside each Universe inside each proton, you had another Universe. If you count up all the protons, you get (10^80)^3 = 10^240, which is nowhere near the number we’re looking for.
You have to have Universes inside protons all the way down to 1250 steps to get the number of legal chess games that are estimated to exist. […]
Imagine that every single subatomic particle in the entire observable universe was a supercomputer that analysed a possible game in a single Planck unit of time (10^-43 seconds, the time it takes light in a vacuum to travel 10^-20 times the width of a proton), and that every single subatomic particle computer was running from the beginning of time up until the heat death of the Universe, 10^1000 years ≈ 10^11 × 10^1000 seconds from now.
Even in these ridiculously favorable conditions, we’d only be able to calculate
10^80 × 10^43 × 10^11 × 10^1000 = 10^1134
possible games. Again, this doesn’t even come close to 10^(10^5) = 10^100000.
Basically, if we ever solve the game of chess, it definitely won’t be through brute force.”
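Incidentally, the exponent arithmetic in the quotes above is easy to sanity-check. Here’s a quick sketch in Python, using only the figures given in the thread (4 or 6 candidate moves per ply, a 50-move game, 10^80 particles, 10^43 games per second per particle, and the poster’s 10^11 × 10^1000-second horizon):

```python
import math

# A 50-move game is about 100 plies (half-moves)
plies = 2 * 50

# 4 candidate moves per ply -> roughly 10^60 plausible games
print(int(math.log10(4 ** plies)))  # 60

# 6 candidate moves per ply -> roughly 10^77
print(int(math.log10(6 ** plies)))  # 77

# Universes-nested-in-protons: k nesting levels of 10^80 protons each
# give 10^(80*k) protons; 1250 levels reaches the ~10^100000 legal games
assert 80 * 1250 == 100_000

# Brute-force bound: 10^80 particle-computers x 10^43 games/second
# x 10^11 * 10^1000 seconds until heat death -> add the exponents
exponent = 80 + 43 + 11 + 1000
print(exponent)  # 1134, nowhere near 100000
```

The point of the last line is the one the thread makes: even the absurdly generous upper bound of 10^1134 games examined falls short of 10^100000 by a factor of 10^98866.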
v. An interesting resource which a friend of mine recently shared with me and which I thought I should share here as well: Nature Reviews – Disease Primers.
vi. Here are some words I’ve recently encountered on vocabulary.com: augury, spangle, imprimatur, apperception, contrition, ensconce, impuissance, acquisitive, emendation, tintinnabulation, abalone, dissemble, pellucid, traduce, objurgation, lummox, exegesis, probity, recondite, impugn, viscid, truculence, appurtenance, declivity, adumbrate, euphony, educe, titivate, cerulean, ardour, vulpine.
i. “Calumny can injure you only if you reflect yourself in others and not in your conscience.” (Fausto Cercignani).
ii. “Emulation can be positive, if you succeed in avoiding imitation.” (-ll-).
iii. “Your identity is like your shadow: not always visible and yet always present.” (-ll-).
iv. “Sometimes moderation is a bad counselor.” (-ll-).
v. “It is error only, and not truth, that shrinks from inquiry.” (Thomas Paine)
vi. “A long habit of not thinking a thing wrong, gives it a superficial appearance of being right, and raises at first a formidable outcry in defense of custom.” (-ll-)
vii. “A body of men, holding themselves accountable to nobody, ought not to be trusted by any body.” (-ll-)
viii. “All national institutions of churches, whether Jewish, Christian, or Turkish, appear to me no other than human inventions set up to terrify and enslave mankind, and monopolize power and profit.” (-ll-)
ix. “Example has more followers than reason.” (Christian Nestell Bovee)
xii. “Education is an ornament for the prosperous, a refuge for the unfortunate.” (Democritus)
xiii. “There is no such thing as a Scientific Mind. Scientists are people of very dissimilar temperaments doing different things in very different ways. Among scientists are collectors, classifiers and compulsive tidiers-up; many are detectives by temperament and many are explorers; some are artists and others artisans. There are poet-scientists and philosopher-scientists and even a few mystics. What sort of mind or temperament can all these people be supposed to have in common? Obligative scientists must be very rare, and most people who are in fact scientists could easily have been something else instead.” (Peter Medawar)
xiv. “The purpose of scientific enquiry is not to compile an inventory of factual information, nor to build up a totalitarian world picture of natural Laws in which every event that is not compulsory is forbidden. We should think of it rather as a logically articulated structure of justifiable beliefs about nature.” (-ll-)
xv. “the spread of secondary and latterly tertiary education has created a large population of people, often with well-developed literary and scholarly tastes, who have been educated far beyond their capacity to undertake analytical thought.” (-ll-)
xvi. “If a person a) is poorly, b) receives treatment intended to make him better, and c) gets better, no power of reasoning known to medical science can convince him that it may not have been the treatment that restored his health.” (-ll-)
xvii. “I once spoke to a human geneticist who declared that the notion of intelligence was quite meaningless, so I tried calling him unintelligent. He was annoyed, and it did not appease him when I went on to ask how he came to attach such a clear meaning to the notion of lack of intelligence. We never spoke again.” (-ll-)
xviii. “There is no feeling so simple that it is not immediately complicated and distorted by introspection.” (André Gide)
xix. “Men need history; it helps them to have an idea of who they are.” (V. S. Naipaul)
xx. “There is a great deal of difference between the eager man who wants to read a book, and the tired man who wants a book to read.” (G. K. Chesterton)
Here’s a link to the first post in this series. The quotes below are from the book Full Moon, which is one of the books in Wodehouse’s Blandings Castle series. I have not read a book in that series which I did not enjoy reading.
“I really am feeling astoundingly well. It’s what I’ve always said – alcohol’s a tonic. Where most fellows go wrong is that they don’t take enough of it. […] He never drank tea, having always had a prejudice against the stuff since his friend Buffy Struggles back in the nineties had taken to it as a substitute for alcohol and had perished miserably as a result. (Actually what had led to the late Mr Struggles’s turning in his dinner pail had been a collision in Piccadilly with a hansom cab, but Gally had always felt that this could have been avoided if the poor dear old chap had not undermined his constitution by swilling a beverage whose dangers are recognized by every competent medical authority.)”
“Some little while later Veronica, starting the conversational ball rolling once more, said that she had been bitten on the nose that afternoon by a gnat. Tipton, shuddering at this, said that he had never liked gnats. Veronica said that she too, did not like gnats, but that they were better than bats. Yes, assented Tipton, oh, sure, yes a good deal better than bats. Of cats Veronica said she was fond, and Tipton agreed that cats as a class were swell. On the subject of rats they were also as one, both holding strong views regarding their lack of charm.
The ice thus broken, the talk flowed pretty easily until Veronica said that perhaps they had better be going in now. Tipton said, “Oh, shoot!” and Veronica said, “I think we’d better,” and Tipton said, “Well, okay, if we must.” His heart was racing and bounding as he accompanied her to the drawing-room. If there had ever been any doubt in his mind that this girl and he were twin souls, it no longer existed. It seemed to him absolutely amazing that two people should think so alike on everything – on gnats, bats, cats, rats, in fact absolutely everything.”
“Tipton removed his gaze from the cow. As a matter of fact, he had seen about as much of it as he wanted to see. A fine animal, but, as is so often the case with cows, not much happening.”
“‘Look here, Guv’nor, will you do something for me?’
‘What?’ asked Lord Emsworth, cautiously.
‘What were you thinking of buying Vee?’
‘I had in mind some little inexpensive trinket, such as girls like to wear. A wrist watch was your aunt’s suggestion.’
‘Good. That fits my plans like the paper on the wall. Go to Aspinall’s in Bond Street. They have wrist watches of all descriptions. And when you get there, tell them that you are empowered to act for F. Threepwood. I left Aggie’s necklace with them to be cleaned, and at the same time ordered a pendant for Vee. Tell them to send the necklace to … Are you following me, Guv’nor?’
‘No,’ said Lord Emsworth.
‘It’s quite simple. On the one hand, the necklace; on the other, the pendant. Tell them to send the necklace to Aggie at the Ritz Hotel, Paris—‘
‘Who’, asked Lord Emsworth, mildly interested, ‘is Aggie?’
‘Come, come, Guv’nor. This is not the old form. My wife.’
‘I thought your wife’s name was Frances.’
‘Well, it isn’t. It’s Niagara.’
‘What a peculiar name.’
‘Her parents spent their honeymoon at the Niagara Falls hotel.’
‘Niagara is a town in America, is it not?’
‘Not so much a town as a rather heavy downpour.’
‘A town, I always understood.’
‘You were misled by your advisers, Guv’nor. But do you mind if we get back to the res. Time presses. Tell these Aspinall birds to mail the necklace to Aggie at the Ritz Hotel, Paris, and bring back the pendant with you. Have no fear that you will be left holding the baby—‘
Again Lord Emsworth was interested. This was the first he’d heard of this.
‘Have you a baby? Is it a boy? How old is he? What do you call him? Is he at all like you?’ he asked, with a sudden pang of pity for the unfortunate suckling.
‘I was speaking figuratively, Guv’nor,’ said Freddie patiently. ‘When I said, “Have no fear that you will be left holding the baby,” I meant, “Entertain no alarm lest they may shove the bill off on you.” The score is all paid up. Have you got it straight?’
‘Let me hear the story in your own words.’
‘There is a necklace and a pendant—‘
‘Don’t go getting them mixed.’
‘I never get anything mixed. You wish me to have the pendant sent to your wife and to bring back—‘
‘No, no, the other way round.’
‘Or, rather, as I was just about to say, the other way round. It is all perfectly clear. Tell me,’ said Lord Emsworth, returning to the subject which really interested him, ‘why is Frances nicknamed Niagara?’
‘Her name isn’t Frances, and she isn’t.’
‘You told me she was. Has she taken the baby to Paris with her?’
Freddie produced a light blue handkerchief from his sleeve and passed it over his forehead.
‘Look here, Guv’nor, do you mind if we call the whole thing off? Not the necklace and pendant sequence, but all this stuff about Frances and babies—‘
‘I like the name Frances.’
‘Me, too. Music to the ears. But shall we just let it go, just forget all about it? We shall both feel easier and happier.’
Lord Emsworth uttered a pleased exclamation.
‘Not Niagara. Chicago. This is the town I was thinking of. There is a town in America called Chicago.'”
“‘I’ve got it,’ he said, returning. ‘The solution came to me in a flash. We will put the pig in Veronica’s room.’
A rather anxious expression stole across Freddie’s face. Of the broad general principle of putting pigs in girls’ rooms he of course approved, but he did not like that word ‘we’. […]
‘What’s the good of putting pigs in Vee’s room?’
‘My dear fellow, have you no imagination? What happens when a girl finds a pig in her room?’
‘I should think she’d yell her head off.’
‘Precisely. I confidently expect Veronica to raise the roof. Whereupon, up dashes young Plimsoll to her rescue. If you can think of a better way to bring two young people together, I should be interested to hear it.'”
“‘Is he wanted by the police?’
‘No, he is not wanted by the police.’
‘How I sympathize with the police,’ said Lady Hermione. ‘I know just how they feel.'”
I’ve been reading Wodehouse lately. I read some of his books on my Kindle as well (8, according to my updates on goodreads – it’s hard to keep track), but it’s harder to take pictures of those – for a complete list, go here or here.
Wodehouse’s novels are nice because you can pretty much read one each day even if you have other stuff going on as well, at least if you have a few hours each day you don’t know what to do with. According to one estimate from Statistics Denmark which I’ve blogged before, the average Dane spends something like 3 hours and 20 minutes per day watching TV; if they spent that time reading books like these instead, there’d be a lot more Danes reading more than 100 books per year than there are.
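The TV-time arithmetic above can be sketched quickly. The novel length and reading speed used below are my own illustrative assumptions, not figures from this post or from Statistics Denmark:

```python
# Rough check of the 'more than 100 books per year' claim.
# Assumed figures (not from the post): novel length and reading speed.
WORDS_PER_NOVEL = 75_000   # a typical light novel
WORDS_PER_MINUTE = 250     # a common estimate of average adult reading speed

tv_minutes_per_day = 3 * 60 + 20            # 3 hours 20 minutes of TV
minutes_per_year = tv_minutes_per_day * 365

minutes_per_novel = WORDS_PER_NOVEL / WORDS_PER_MINUTE  # 300 minutes per book
books_per_year = minutes_per_year / minutes_per_novel

print(round(books_per_year))  # 243
```

Even if the assumed reading speed or book length is off by a factor of two, the result stays comfortably above 100 books per year.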
Over the last year or two I’ve in general limited my blogging of fiction to a minimum, and I’ve also dedicated a lot of effort to making this blog as mind-numbingly boring and irrelevant as possible. So of course it feels terrible to have to take this step now; to suddenly start blogging books which have a strong tendency to make their readers laugh and enjoy themselves. But there’s no way around it – this is stuff that’s easy to blog, and a very plausible alternative seems to me to be ‘no blogging’. I hope that by blogging books like these I’ll be able to sustain a relatively regular blogging schedule in the period to come. There’s no longer any work involved in reading the books, which should be very helpful; I have already read them, and I now have more than 20 to choose from in terms of what to cover. Wodehouse’s books are really funny, and my impression is that they’ll be easy for me to blog, in the sense that there’s a lot of funny stuff in them and you can get away with quoting from them without spoiling anything much. On the other hand, as the picture illustrates, these are mostly paper books, which are not as easy to blog as e-books are; I may find that these posts actually take so much time and effort that not much work is saved by switching (at least partially) to this sort of coverage. We’ll see how it goes.
I should mention that although I only discovered Wodehouse earlier this year, he’s already on my top five list of fiction authors (Terry Pratchett and Agatha Christie also belong on such a list, as do probably George R. R. Martin and Jasper Fforde – but it’s hard; there are a lot of good authors…).
The first book I’ll cover is Big Money, which I gave 4 stars on goodreads. Below I have added some quotes from the book to illustrate how Wodehouse writes and what he writes about.
“‘I wish I could find some way of making a bit of money,’ he said, resuming his remarks. ‘I don’t seem able to do it, racing. And I don’t seem able to do it at Bridge. But there must be some method. Look at all the wealthy blighters you see running around. They’ve managed to find it. I read a book the other day where a bloke goes up to another bloke in the street – and whispers in his ear – the first bloke does – “A word with you, sir!” Addressing the second bloke, you understand. “A word with you, sir. I know your secret!” Upon which, the second bloke turns ashy white and supports him in luxury for the rest of his life. I thought there might be something in it.’
‘About seven years, I should think.'”
“A low moan escaped Mr Frisby. His face, which was rather like that of a horse, twisted in pain. Of the broad principle of his sister going to Japan he approved, Japan being further away than New York. What rived his very soul was that she should be squandering her cash to tell him so [over the telephone]. A picture postcard from Tokyo, with a cross and a ‘This is my room’ against one of the windows of a hotel, would have met the case. […]
‘Do you know what she did last week?’
Mr Frisby gave a lifelike imitation of a man who has just discovered that he is sitting on an ant’s nest. ‘How the devil should I know what she did last week? Do you think I’m a clairvoyant?'”
“Lord Hoddesdon gasped.
‘You don’t imagine I would be fool enough to go touching Frisby?’
‘Wasn’t that your idea?’
‘Of course not. Certainly not. I was thinking – er – I was wondering – well, to tell you the truth, it crossed my mind that you might possibly be willing to part with a trifle.’
‘It did, eh?’
‘I don’t see why you shouldn’t,’ said Lord Hoddesdon plaintively. ‘You must have plenty. There’s a lot of money in this chaperoning business. When you took on that Argentine girl three years ago you got a couple of thousand pounds.’
‘I got fifteen hundred,’ corrected his sister. ‘In a moment of weakness – I can’t imagine what I was thinking of – I lent you the rest.’
‘Er – well, yes,’ said Lord Hoddesdon, not unembarrassed. ‘That is, in a measure, true. It comes back to me now.’
‘It didn’t come back to me – ever,’ said Lady Vera”.
“Ever since she had read in her paper that morning the plain, blunt statement that she was engaged to be married, she had been feeling oddly pensive. […] A sudden thirst for information seized her. She leaned towards her host.
‘Tell me about Godfrey,’ she said abruptly.
‘Eh?’ said Lord Hoddesdon, blinking. […] ‘What about him?’
It was a question which Ann found difficult to answer. ‘What sort of man is he?’ she would have liked to say. But when you have agreed to marry a man, it seems silly to ask what sort of man he is.
‘Well, what was he like as a little boy?’ she said, feeling that that was safe. […]
‘Boyish and vivacious,’ […] ‘Full of spirits. But always,’ he said impressively, ‘good.’
‘Good?’ said Ann with a slight shiver.
‘Always the soul of honour,’ said Lord Hoddesdon solemnly. Ann shivered again. Clarence Dumphry had been the soul of honour. She had often caught him at it.”
“A man who has so recently become engaged to be married as Lord Biskerton has, of course, no right to stare appreciatively at strange girls. But this is what Biscuit found himself doing. The fact that Ann Moon had accepted his hand had done nothing to impair his eyesight”.
“There are two schools of thought concerning the correct method of dealing with small boys who throw stones at their elders and betters in the public street. Some say they should be kicked, others that they should be smacked on the head. Lord Hoddesdon, no bigot, did both.”
“‘Biscuit,’ said Berry, ‘the most extraordinary thing has happened. There’s a girl …’
‘A girl, eh?’ said the Biscuit, interested. He began to see daylight. ‘Who is she?’
‘What?’ asked Berry, whose attention had wandered.
‘I said, who is she?’
‘I don’t know.’
‘What’s her name?’
‘I don’t know.’
‘Where does she live?’
‘I don’t know.’
‘You aren’t an Encyclopedia, old boy, are you?’ said the Biscuit. […]
‘Either a man clicks or he does not click,’ said the Biscuit firmly. ‘There are no half measures. You did?’
‘I think she was – pleased to see me.’
‘Ah! Well, then, of course you proceeded to ask her name?’
‘I hadn’t time.’
‘Did you ask her where she lived?’
‘Did she ask you your name?’
‘Did she ask you where you lived?’
‘What the dickens did you talk about?’ asked the Biscuit, curiously. ‘The situation in Russia?'”
“Mr Robbins, of Robbins, Robbins, Robbins, and Robbins, solicitors and Commissioners for Oaths, was just the sort of man you would have expected him to be after hearing his voice on the telephone.”
“‘I can’t stand Paris. I hate the place. Full of people talking French.’”
“Few things in life are more embarrassing than the necessity of having to inform an old friend that you have just got engaged to his fiancée.”
“‘We’re engaged,’ he said.
‘Fine!’ said the Biscuit. ‘So you’re engaged? Well, well!’
‘Just to this one girl, I suppose?’
‘What do you mean?’
‘You always were a prudent, level-headed fellow who knew where to stop,’ said the Biscuit enviously. ‘I’m engaged to two girls.’
The Biscuit sighed.
‘Yes, two. And I’m hoping that you may have a word of advice to offer on the subject. Otherwise, I see a slightly tangled future ahead of me.’
‘Two?’ said Berry, dazed.
‘Two,’ said the Biscuit. ‘I’ve counted them over and over again, but that’s what the sum keeps working out at. I started, if you remember, with one. So far, so good. A steady, conservative policy. But complications have now arisen.”
Before I move on to the book coverage, I thought I should mention that people reading along here should expect few updates in the next month or two. I have considered simply taking a break from blogging for a month because I really need to focus on my work, but this seems a bit too radical an approach; I think what I’ll do instead is occasionally blog one of the Wodehouse novels which I’ve been reading during the spring – this shouldn’t take too much time or effort, and ‘lazy blogging’ like that may well be all I can justify doing. Maybe I’ll talk about a textbook or two, but don’t expect much ‘serious’ blogging in the near future.
Okay, let’s move on to the book. I’ve read 25 of the 30 chapters, and the coverage will pick up where I left off in my second post.
“Few scholars today claim that there is a direct relationship between environmental scarcity and violent conflict. Accordingly, empirical research increasingly discusses and attempts to identify plausible intervening variables, notably social, political, demographic, or economic mechanisms that together with environmental scarcity may increase the risk of violent conflict. […] frequently suggested intervening variables include food security and migration […]. For instance, in sub-Saharan Africa, where inter- and intra-annual rainfall variation is extensive, almost 90 percent of total food production comes from rain-fed agriculture […], implying high social and economic vulnerability to volatile resource supplies.”
“Taken together, this broad literature [on environmental change and armed conflict] offers mixed evidence for a causal relationship. The majority of studies of civil wars and major armed conflict conclude that resource scarcity, population pressure, and weather patterns exhibit weak influences on conflict risk, compared to structural economic and institutional features. Moreover, those that report a significant correlation disagree on the direction and magnitude of the effect.”
“Children recruited into armed groups in one conflict often end up fighting in other regional conflicts as ‘floating warriors’ capitalising on porous borders to travel wherever there was a market for their newly learned trade. In certain regions of recurrent conflict, large pools of ex-combatants as well as children exist as potential recruits for armed groups lured by the opportunity to share in the spoils of war. […] Such dynamics underline the problem of regional zones of instability or ‘conflict complexes’. War economies spread beyond borders and networks of mercenaries, illegal trading and organised crime spread instability.”
“Scholars of civil war often mistake the causes of the onset of armed conflict with the factors which explain the continuation of war. Many studies seem to implicitly argue that when understanding its causes, we understand the continuation of war. […] War may [however] break out for one set of issues but might continue for a completely different and changing set of reasons. As a result of interaction between the belligerents new reasons and stimuli for conflict develop. […] These developments can significantly complicate the picture that civil war presents and do not necessarily make it easier to work towards resolution. […] Two important causal mechanisms can be distinguished that hold explanatory power for the continuation of conflict. […] For the continuation of violence one observed causal mechanism is the provocation trap. An important theory developed by insurgents since the nineteenth century aims to play on the calculations of the political decision-makers by provoking violence from the state, which generally acts as a forceful recruiting mechanism for insurgent groups […] The second mechanism can be called the counter-measure imperative. The counter-measure imperative is the commonly observable chain of events after an attack against unarmed and unwitting targets. A public outcry occurs and political decision-makers feel forced to respond. Doing nothing is often not an option in terms of political capital and electoral consequences, at least in most democratic societies. James Fearon has called this “audience costs” in the context of international crises […] Weakness in times of crisis can be political – or electoral – suicide. Therefore, there is a strong tendency to institute one stringent measure after another. Repression, the use of force and police action are just a few of the instruments that can be used […] These mechanisms trigger state violence both from a push and pull perspective and are very powerful to propel a struggle forward. 
Discontinuing civil war by not buying into the provocation trap and counter-measure imperative is extremely difficult, given the primary demands made of the state to uphold its monopoly of force and to protect its population.”
“Most studies looking into the dynamics or continuation of conflict see the increase or decrease in capabilities as an important explanatory factor for the continuation or discontinuation of civil war. […] The termination of civil war has in several studies been strongly linked to cutting off the capabilities and supplies of belligerents. Paul Staniland concludes that “the best offense is a fence” (Staniland 2006; see also Record 2007). When capabilities are compromised by cutting off the replenishment of men and material that are necessary to continue the struggle, wars wither down.”
“For those gathering conflict data, obtaining accurate numbers of fatalities is one of the most complicated and difficult tasks due to a plethora of problems, including misuse of the terms “casualties” and “fatalities,” political reasons for either the under-reporting or exaggeration of fatalities, and either a lack of information or the presence of conflicting information in the available sources […] In addressing the sources of bias in fatality statistics Gohdes and Price (2012: 9) note that the higher the visibility of the act of violence, the more likely it (and its fatalities) will be reported. Visibility can be reflected in the magnitude of armed conflict, wars are more visible than minor disputes; but visibility can also be related to the types of participants or fatalities, with deaths of those in uniform, whose job it is to fight being more visible than deaths of civilians. Visibility leads to a greater likelihood that fatalities will be reported, thus making them more reliable. As Lacina and Gleditsch note (2012: 3) the tallies provided by military agencies of personnel killed in action are very credible data. It was considerations such as these that led COW to make different choices than UCDP, in ways in which it codifies and gathers data about armed conflict: focusing primarily on higher fatality levels (war), using the war as the primary unit of analysis, and counting deaths only among combatants (rather than combatants and civilians).”
“Utilizing the COW datasets on war, one gains a perspective on the trends in warfare that varies significantly from those that utilize UCDP/PRIO data […] A fundamental difference is merely the timeframe covered, with COW examining wars after 1815 and UCDP/PRIO focusing upon the post-World War II era. An analysis of trends in all COW wars types for the period 1816 to 2007 […] concluded that there is a relative constancy over time in war behavior. […] Intra-state wars are the most numerous of the four major COW categories, constituting 52 percent of all of the COW wars [and there has been a] significant increase in intra-state wars since the end of World War II. […] Of the 192 years in the 1816–2007 period, there is an average number of 1.6 civil war onsets per year, and only 52 years (27 percent) experienced no civil war onsets. […] If one looks at the number of civil wars experienced by the various regions of the world […], the numbers look fairly comparable […] All in all, this analysis does not promote optimism about the trends in civil war for the remainder of the twenty-first century. The Human Security Report’s (2011) emphasis on the decline in civil war since the end of the Cold War ignores the fact that civil war onsets (even after the highpoints of 1989 and 1991) are at historically high levels with an average of 2.8 civil war onsets per year from 1992 to 2007 (compared to the yearly average of 1.6 onsets from 1816 to 2007). These figures hardly portend the end of civil war.”
“There are multiple ways to distinguish types of civil wars: whether they are ethnically motivated […], whether they are driven by attempts at secession or control of the central government […], or whether they involve lootable resources […]. Another way to distinguish different types of civil wars is to examine the military tactics used by each side in the conflict. […] Kalyvas and Balcells (2010) […] identify three technologies of rebellion that are used in civil war: irregular warfare, conventional warfare, and symmetric non-conventional warfare. Irregular war, or insurgency, occurs when the state’s military capabilities exceed those of the rebels. Conventional civil war occurs when both the state and the rebels are militarily matched at a high level, and symmetric non-conventional war happens when both the state and the rebels are militarily matched, but at a lower level. […] [They] show that irregular wars comprise just over half of the civil wars between 1944 and 2004 [and that] the end of the Cold War resulted in a decrease in the percentage of conflicts that were irregular […] from about two-thirds during the Cold War to about one-quarter after 1991 […] irregular wars last longer and are more likely to be won by the incumbent as compared to both conventional wars and symmetric non-conventional wars.”
“Findley and Young (2012) […] find that a majority of terrorist acts occur in the context of civil war, which suggests that this is an important tactic in the context of the larger struggle between state and non-state actors. […] Lake (2002), among others, has argued that terrorism is often used in conflicts to provoke a disproportionate response from the state. […] Kydd and Walter (2006) argue that terrorism can be used to spoil potential peace among moderate factions in a civil war and empirical evidence supports this claim (Findley and Young, 2013).”
“While sexual violence against civilians in conflict is pervasive, it is not ubiquitous. There are conflicts where systematic sexual violence is completely absent, showing that, contrary to popular belief, sexual violence is not an inherent component of conflict […] Sexual violence in conflict creates disorder in communities by violating social norms and dissolving social bonds through humiliation, shame, and terror […]. The breakdown of the rule of law and social norms has an impact upon the whole community, not just the victims of the violence. Formal and informal social controls are diminished during civil war and communities in conflict lack a functional formal system to maintain order. […] Whether sexual violence is primarily a consequence of the strategy or tactic of leaders or the lack of control of militaries is an ongoing debate.”
“forced migration is not simply a function of conflict and insecurity. Rather, security concerns interact with economic “push” factors in sending regions and “pull” factors in receiving areas. […] it is difficult to disentangle security motives from economic ones […] When governments deliberately target political or ethnic opponents, people are more likely to cross borders as compared with general turmoil in civil wars and dissident violence. […] In addition, better economic conditions and political stability in neighboring states make it more likely that individuals will cross an international border, demonstrating the importance of pull factors in receiving countries. […] Proximity to the conflict country exerts a very large effect on destination choice as does the presence of a large diaspora population. […] Bohra-Mishra and Massey (2011) find that low levels of violence actually discourage migration, perhaps because unsafe travel conditions make it more likely that people will hunker down and stay at home to protect their assets. Only at a high threshold of violence are people willing to leave. Engel and Ibáñez (2007) find that owning land interacts with violence. People with more land are less willing to move since they would lose a fixed asset, but at the same time, large landowners are more likely to be threatened with violence. Confronted with low levels of violence, landowners are more likely to stay put, but become increasingly likely to flee as violence gets worse. […] Greenhill (2010) examines the strategic use of forced migration as a negotiating tactic in interstate relations. In many cases, sending states have “engineered” refugee flows in such a way so as to extract concessions from migrant-receiving states. […] several themes have emerged in the literature on the causes of forced migration. 
First, refugees are not choice-less, but are strategic actors who weigh the various options available to them, even if choice is in the context of extreme violence. Second, forced migration and economic migration are not mutually exclusive categories; rather, security, economics, and social networks all shape migration decisions to a greater or lesser degree. Finally, perpetrators of violence understand the effects of forced migration and displacement, and use refugee flows and “cleansing” as a way to further their political aims.”
“Refugee communities can also foster conflict in host countries, either through mobilization into militant factions, or by the mere presence of ethnically different “foreigners” and economic competitors. Salehyan and Gleditsch (2006) start with the observation that civil wars often cluster in space – when one country experiences civil war its neighbors are significantly more likely to fall into conflict themselves. They then argue that refugee migration facilitates the transnational spread of militant networks as well as presents negative externalities for receiving areas – such as ethnic competition or economic burdens – increasing the risk of conflict in refugee hosts. Through statistical testing they demonstrate that hosting a large number of refugees does indeed raise the risk of conflict. […] scholars have [also] noted a link between civil war and international conflict: countries that are faced with domestic unrest are more likely to become involved in disputes with their neighbors […] Refugee flows are one potential source of friction between states and can become a cause of international armed conflict. […] Statistically, Salehyan (2008) confirms a general pattern that refugee flows between two countries are associated with militarized interstate disputes (MIDs). Controlling for an array of factors known to be associated with international conflict, hosting 100,000 refugees from another country raises the probability that the host will initiate a conflict against the sender by 96 percent. On the flip side, the sending state is over 90 percent more likely to launch an MID against the host. Therefore, while international relations scholars have focused on variables such as the power balance, democracy, and territorial issues, a significant share of interstate conflict stems from the external effects of domestic unrest and refugee flows.”
“[One] typological approach to understanding violence against civilians is to focus on the military capacity of the actors, usually with a dyadic approach which identifies the relative strength of actors. A general finding is that relatively weak actors are more likely to target civilians. […] Regarding government violence, Valentino et al. (2004) show that governments who face strong rebel groups with a strong civilian base are more likely to engage in mass killings. […] While selective violence is useful for controlling a population (in areas where such control is feasible to uphold through violence), indiscriminate violence seems to follow a logic of weakening the adversary in their strongholds. […] A few studies have examined to what extent violence against civilians occurs as a response to violence against civilians by the adversary. […] Taken together, these studies suggest that there is some evidence for a cycle-of-violence dynamic. […] Hultman (2007) shows that when rebels lose on the battlefield, they tend to shift strategy towards more targeting of civilians and less targeting of government forces. Wood et al. (2012) also focus on shifts in relative power, showing that exogenously imposed power shifts through armed interventions into civil wars increase the level of violence against civilians by the actor that is disadvantaged by the intervention. Hence, rather than concluding that weak actors are more likely to target civilians, these findings show that actors are more likely to target civilians in response to being weakened as a consequence of the war.”
“Numbers from the Uppsala Conflict Data Program (UCDP) Conflict Termination Project (Kreutz 2010) reveal the average length of civil wars episodes from 1946 to 2009 is approximately 1647 days […] Fearon (2004) links war type to civil war duration. Coups and revolutions seek quick outright victories. Failing this, coup organizers will likely face imprisonment, death, or exile. The strategy in territorial wars – which are usually fought on the periphery – is to continue the fight to win more concessions at the bargaining table. Peripheral wars do not necessarily need outright military victory to realize important goals. Rebels in these wars have more time. […] There were approximately 30 military coups between 1946 and 2009 identified in the Uppsala Conflict Termination data […] Peripheral/territorial wars tend to endure and are unlikely to end conclusively with peace agreements or military victories. According to the Uppsala Conflict Termination data, the mean duration of territorial wars from 1946 to 2010 is 1826.7 days […] Wars over control of government, on the other hand, typically do not last as long […] we find that the mean duration of these wars is 1456.7 days between 1946 and 2010.”
“Whereas several scholars [have noted] that ethnic/secessionist wars are more intractable than wars over government, there is not a wealth of empirical evidence directly linking war type to recurrence. […] The duration of peace after a civil war has been shown to have a negative impact on recurrence […]. In other words if peace has lasted 20 years after a war has ended, the probability of war in a future year is quite low. […] Civil war duration tends to increase when credible commitment is lacking, the war is ethnic/ peripheral, there are lootable natural resources the rebels can exploit, there are spoilers and a good number of veto players, and when there is third-party intervention. “Reversing” these factors makes for shorter wars. The factors are cumulative in that an ethnic war in the presence of lootable resources, low credible commitment, and spoilers will be expected to last a very long time. Wars over government with no spoilers or lootables will be expected to be shorter. Civil wars are more likely to recur if the war is ethnic/peripheral, credible commitment is lacking, the outcome is one of negotiated settlement, the war did not see an exceptionally high death rate, there are factors conducive to rebel recruitment such as low democracy and a weak economy at war’s end, there are valuable natural resources present in the rebel territory, the war is not mediated, and there is no effective peacekeeping operation.”
i. “A drawback to success in life is that failure, when it does come, acquires an exaggerated importance.” (P. G. Wodehouse).
ii. “Truth is the cry of all, but the game of the few.” (George Berkeley).
iii. “It is always the best policy to speak the truth, unless, of course, you are an exceptionally good liar.” (Jerome K. Jerome).
iv. “I don’t believe any man ever existed without vanity, and if he did he would be an extremely uncomfortable person to have anything to do with. He would, of course, be a very good man, and we should respect him very much. He would be a very admirable man—a man to be put under a glass case and shown round as a specimen—a man to be stuck upon a pedestal and copied, like a school exercise—a man to be reverenced, but not a man to be loved, not a human brother whose hand we should care to grip. Angels may be very excellent sort of folk in their way, but we, poor mortals, in our present state, would probably find them precious slow company. Even mere good people are rather depressing. It is in our faults and failings, not in our virtues, that we touch one another and find sympathy. We differ widely enough in our nobler qualities. It is in our follies that we are at one.” (Jerome K. Jerome).
v. “A shy man’s lot is not a happy one. The men dislike him, the women despise him, and he dislikes and despises himself. […] A shy man means a lonely man—a man cut off from all companionship, all sociability. He moves about the world, but does not mix with it. Between him and his fellow-men there runs ever an impassable barrier—a strong, invisible wall that, trying in vain to scale, he but bruises himself against. He sees the pleasant faces and hears the pleasant voices on the other side, but he cannot stretch his hand across to grasp another hand. He stands watching the merry groups, and he longs to speak and to claim kindred with them. But they pass him by, chatting gayly to one another, and he cannot stay them. He tries to reach them, but his prison walls move with him and hem him in on every side. In the busy street, in the crowded room, in the grind of work, in the whirl of pleasure, amid the many or amid the few—wherever men congregate together, wherever the music of human speech is heard and human thought is flashed from human eyes, there, shunned and solitary, the shy man, like a leper, stands apart. His soul is full of love and longing, but the world knows it not. The iron mask of shyness is riveted before his face, and the man beneath is never seen.” (Jerome K. Jerome).
vi. “We cannot tell the precise moment when friendship is formed. As in filling a vessel drop by drop, there is at last a drop which makes it run over; so in a series of kindnesses there is at last one which makes the heart run over.” (James Boswell).
vii. “Men might as well project a voyage to the Moon as attempt to employ steam navigation against the stormy North Atlantic Ocean.” (Dr. Dionysus Lardner (1793-1859). Many more quotes of a similar nature here).
viii. “We pity in others only those evils which we have ourselves experienced.” (Jean-Jacques Rousseau).
ix. “All that time is lost which might be better employed.” (-ll-).
x. “Virtue is a state of war, and to live in it means one always has some battle to wage against oneself.” (-ll-).
xi. “Remorse sleeps during a prosperous period but wakes up in adversity.” (-ll-).
xii. “Hatred, as well as love, renders its votaries credulous.” (-ll-).
xiii. “He that is choice of his time will be choice of his company, and choice of his actions.” (Jeremy Taylor).
xiv. “To say that a man is vain means merely that he is pleased with the effect he produces on other people. A conceited man is satisfied with the effect he produces on himself.” (Max Beerbohm).
xv. “Moderation is the silken string running through the pearl chain of all virtues.” (Joseph Hall).
xvi. “If you make people think they’re thinking, they’ll love you; but if you really make them think, they’ll hate you.” (Donald Marquis).
xvii. “Some luck lies in not getting what you thought you wanted but getting what you have, which once you have got it you may be smart enough to see is what you would have wanted had you known.” (Garrison Keillor).
xviii. “Once I believed that sooner or later I would come across a really wise person; today I couldn’t even say what wisdom is.” (Fausto Cercignani).
xix. “If you are living in the past or in the future, you will never find a meaning in the present.” (-ll-).
xx. “A secret remains a secret until you make someone promise never to reveal it.” (-ll-).
Update: According to the category count, this is the 150th post of quotes here on this blog (the category cloud seems to be slow to update the number, but I assume it’ll do it eventually).
It’s probably worth pointing out to new readers in particular that if you like this post and perhaps have liked a few of the previous posts in the series, you can access a collection of all the other posts in the series simply by clicking the blue category link, ‘quotes’, at the bottom of this post, or by clicking the ‘quotes’ link provided in the category cloud in the sidebar to the right.
[Warning: Long post].
I’ve blogged data related to the data covered in this post before here on the blog, but back then I only provided coverage in Danish. Part of my motivation for providing some coverage in English here (which is a slightly awkward and time-consuming thing to do, as all the source material is in Danish) is that this is the sort of data you probably won’t ever get to know about if you don’t understand Danish, and some of it seems worth knowing about even for people who do not live in Denmark. Another reason for posting in English is of course that I dislike writing a blog post which I know beforehand some of my regular readers will not understand. I should perhaps note that some of the data is at least peripherally related to my academic work at the moment.
The report which I’m covering in this post (here’s a link to it) deals primarily with various metrics collected in order to evaluate whether treatment goals set centrally are being met by the Danish regions, one of whose primary political responsibilities is health care service delivery. To take an example from the report, a goal has been set that at least 95% of patients with known diabetes in the Danish regions should have their Hba1c (an important variable in the treatment context) measured at least once per year. The report of course doesn’t just contain a list of goals etc. – it also presents a lot of data collected throughout the country in order to figure out to what extent the various goals have been met at the local level. Hba1c is just an example; there are also goals relating to hypertension, regular eye screenings, regular kidney function tests, regular foot examinations, and regular tests for hyperlipidemia, among others.
Testing is just one aspect of what’s being measured; other goals relate to treatment delivery. There’s for example a goal that the proportion of (known) type 2 diabetics with an Hba1c above 7.0% who are not receiving anti-diabetic treatment should be at most 5% within regions. A thought that occurred to me while reading the report was that some interesting incentive problems might pop up here if these numbers were more important in the decision-making context than I assume they are. Adding this specific variable without also adding a goal for ‘finding diabetics who do not know they are sick’ – and no such goal is included in the report, as far as I’ve been able to ascertain – might lead to problems. In theory a region that did well in terms of identifying undiagnosed type 2 patients, of whom there are many, might get punished for this: the larger patient population in treatment resulting from better identification might lead to binding capacity constraints at various treatment levels, constraints which would not affect regions that are worse at identifying (non-)patients at risk, because of the tradeoff between resources devoted to search/identification and resources devoted to treatment. To the extent that such a tradeoff exists and the current evaluation structure informs decision-making at the regional level, the structure favours treatment over identification – which might or might not be problematic from a cost-benefit point of view.
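The incentive problem can be made concrete with a toy calculation. All numbers below are made up for illustration; nothing here is from the report:

```python
# Toy illustration (made-up numbers): how a treatment-coverage metric can
# penalize a region that is better at finding undiagnosed diabetics, when
# treatment capacity is fixed.

def coverage(diagnosed, capacity):
    """Fraction of diagnosed patients who can actually be treated."""
    return min(diagnosed, capacity) / diagnosed

# Both hypothetical regions have 1000 true diabetics and capacity to treat 800.
capacity = 800

# Region A identifies 75% of its diabetics, region B 95%.
diagnosed_a = 750
diagnosed_b = 950

print(coverage(diagnosed_a, capacity))  # 1.0  -> A meets a 95% treatment goal
print(round(coverage(diagnosed_b, capacity), 2))  # 0.84 -> B fails, despite treating more people
```

Region B treats 800 patients to region A’s 750, yet scores worse on the metric – exactly because its denominator reflects its better case-finding.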
I find it somewhat puzzling that no goals relate to case-finding/diagnostics, because a lot of the goals only really make sense if the people who are sick actually get diagnosed so that they can receive treatment in the first place; that, say, 95% of diabetics with a diagnosis receive treatment option X is much less impressive if, say, a third of all people with the disease do not have a diagnosis. Considering the relatively small amount of variation in some of the metrics included, you’d expect a variable of this sort to be included here – at least I did.
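The arithmetic behind that point is simple enough to write out. The numbers are illustrative, not from the report:

```python
# Back-of-the-envelope: a high coverage rate among *diagnosed* patients can
# coexist with mediocre coverage of everyone with the disease, if many cases
# are undiagnosed. Numbers are illustrative only.
treated_share_of_diagnosed = 0.95
diagnosed_share_of_all = 2 / 3   # i.e. a third of cases undiagnosed

effective = treated_share_of_diagnosed * diagnosed_share_of_all
print(round(effective, 2))  # 0.63 -> only ~63% of all diabetics receive the treatment
```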
The report has an appendix with some interesting information about the sex ratios, age distributions, how long people have had diabetes, whether they smoke, what their BMIs and blood pressures are like, how well they’re regulated (in terms of Hba1c), what they’re treated with (insulin, antihypertensive drugs, etc.), their cholesterol levels and triglyceride levels, etc. I’ll talk about these numbers towards the end of the post – if you want to get straight to this coverage and don’t care about the ‘main coverage’, you can just scroll down until you reach the ‘…’ point below.
The report has 182 pages with a lot of data, so I’m not going to talk about all of it. It is based on very large data sets which include more than 37,000 Danish diabetes patients from specialized diabetes units (diabetesambulatorier) (these are usually located in hospitals and provide ambulatory care only) as well as 34,000 diabetics treated by their local GPs – the aim is to eventually include all Danish diabetics in the database, and more are added each year, but even as it is, a very large proportion of all patients are ‘accounted for’ in the data. Other sources also provide additional details; for example there’s a separately collected database on children and young diabetics. Most of the diabetics who are not included here are patients treated by their local GPs, and there’s still a substantial amount of uncertainty related to this group; approximately 90% of all patients connected to the diabetes units are assumed at this point to be included in the database, but the report also notes that approximately 80% of diabetics are assumed to be treated in general practice. Coverage of this patient population is currently improving rapidly, and it seems likely that most diabetics in Denmark will be included in the database within the next few years. They speculate in the report that the inclusion of more patients treated in general practice may be part of the explanation why goal achievement seems to have decreased slightly over time; this seems to me a likely explanation considering the data they present, as the diabetes units are in general better at achieving the goals set than are the GPs. The data is up to date – as some of you might have inferred from the presumably partly unintelligible words in the parenthesis in the title, the report deals with data from the period 2013-2014. I decided early on not to copy tables into this post directly, as it’s highly annoying to have to translate terms in such tables; instead I’ve tried to give you the highlights.
I may or may not have succeeded in doing that, but you should be aware, especially if you understand Danish, that the report has a lot of details, e.g. in terms of intraregional variation, which are excluded from this coverage. Although I cover far from all the data, I do cover most of the main topics dealt with in the publication in at least a little detail.
The report concludes in the introduction that for most treatment indicators no clinically significant differences in the quality of the treatment provided to diabetics are apparent when you compare the different Danish regions – so if you’re looking at the big picture, it doesn’t matter all that much for a Danish diabetic whether he lives in Jutland or in Copenhagen. However, some significant intra-regional differences do exist. In the following I’ll talk in a bit more detail about some of the data included in the report.
When looking at the Hba1c goal (95% should be tested at least once per year), they evaluate the groups treated in the diabetes units and the groups treated in general practice separately; so you have one metric for patients treated in diabetes units living in the north of Jutland (North Denmark Region) and another for patients treated in general practice living in the north of Jutland – this breakdown of the data makes it possible not only to compare people across regions but also to investigate whether there are important differences between the care provided by diabetes units and the care provided by general practitioners. Among patients receiving ambulatory care from the diabetes units all regions meet the goal, but in Copenhagen (Capital Region of Denmark, CRD) only 94% of patients treated in general practice had their Hba1c measured within the last year – this was the only region which did not meet the goal for the patient population treated in general practice. I would have thought beforehand that all diabetes units would have 100% coverage here, but that’s actually only the case in the region in which I live (Central Denmark Region) – on the other hand, in most other regions, aside from Copenhagen again, the number is 99%, which seems reasonable, as I’m assuming a substantial proportion of the remainder is explained by patient noncompliance, which is difficult to avoid completely.
I speculate that patient compliance differences between patient populations treated at diabetes units and patient populations treated by their GP might also be part of the explanation for the lower goal achievement of the general practice population; as far as I’m aware, diabetes units can deny care in the case of non-compliance whereas GPs cannot, so you’d sort of expect the most ‘difficult’ patients to end up in general practice. This is speculation to some extent and I’m not sure it’s a big effect, but it’s worth keeping in mind when analyzing this data that not all differences you observe necessarily relate to service delivery inputs (whether or not a doctor reminds a patient that it’s time to get his eyes checked, for example); the two main groups analyzed are likely also to differ in patient population composition. Differences in patient population composition may of course also drive some of the intraregional variation observed. They mention in their discussion of the results for the Hba1c variable that they’re planning on changing the standard here to one which relates to the distribution of Hba1c values, not just whether the test was done, which seems like a good idea. As it is, the great majority of Danish diabetics have their Hba1c measured at least annually, which is good news given the importance of this variable in the treatment context.
In the context of hypertension, there’s a goal that at least 95% of diabetics should have their blood pressure measured at least once per year. For patients treated in the diabetes units, all regions achieve the goal and the national average for this patient population is 97% (once again the region in which I live is the only one that achieved 100% coverage), but among patients treated in general practice only one region (North Denmark Region) managed to get to 95%, and the national average is 90%. In most regions, one in ten diabetics treated in general practice do not have their blood pressure measured once per year, and again Copenhagen (CRD) is doing worst, with a coverage of only 87%. As mentioned in the general comments above, some of the intraregional variation is actually quite substantial, and this may be a good example, because not all hospitals are doing great on this variable. Sygehus Sønderjylland, Aabenraa (in southern Jutland), one of the diabetes units, had a coverage of only 67%, and the coverage at Hillerød Hospital in Copenhagen (CRD), another diabetes unit, was likewise quite low, with 83% of patients having had their blood pressure measured within the last year. These hospitals are however the exceptions to the rule. Evaluating whether patients have been tested for hypertension is different from evaluating whether hypertension is actually treated once it has been discovered, and here the numbers are less impressive; among type 1 patients treated in the diabetes units, roughly one third (31%) of patients with a blood pressure higher than 140/90 are not receiving treatment for hypertension (the goal was at most 20%). The picture was much better for type 2 patients (11% at the national level) and patients treated in general practice (13%).
They note that the picture has not improved over the last years for the type 1 patients and that this is not in their opinion a satisfactory state of affairs. A note of caution is that the variable only includes patients who have had a blood pressure measured within the last year which was higher than 140/90 and that you can’t use this variable as an indication of how many patients with high blood pressure are not being treated; some patients who are in treatment for high blood pressure have blood pressures lower than 140/90 (achieving this would in many cases be the point of treatment…). Such an estimate will however be added to later versions of the report. In terms of the public health consequences of undertreatment, the two patient populations are of course far from equally important. As noted later in the coverage, the proportion of type 2 patients on antihypertensive agents is much higher than the proportion of type 1 diabetics receiving treatment like this, and despite this difference the blood pressure distributions of the two patient populations are reasonably similar (more on this below).
Screening for albuminuria: The goal here is that at least 95% of adult diabetics are screened within a two-year period (there are slightly different goals for children and young adults, but I won’t go into those). For patients treated in the diabetes units, the North Denmark Region and Copenhagen/RH failed to achieve the goal, with a coverage slightly below 95% – the other regions achieved the goal, although not by much; the national average for this patient population is 96%. For patients treated in general practice none of the regions achieve the goal, and the national average for this patient population is 88%. Region Zealand was doing worst with 84%, whereas the region in which I live, Region Midtjylland, was doing best with 92% coverage. Of the diabetes units, Rigshospitalet, “one of the largest hospitals in Denmark and the most highly specialised hospital in Copenhagen”, seems also to be the worst performing hospital in Denmark in this respect, with only 84% of patients being screened – which to me seems exceptionally bad considering that, for example, not a single hospital in the region in which I live is below 95%. Nationally, roughly 20% of patients with micro- or macroalbuminuria are not on ACE inhibitors/angiotensin II receptor antagonists.
Eye examination: The main process goal here is at least one eye examination every second year for at least 90% of the patients, and a requirement that the treating physician knows the result of the eye examination. This latter requirement is important in the context of the interpretation of the results (see below). For patients treated in diabetes units, four out of five regions achieved the goal, but there were also what to me seemed like large differences across regions. In Southern Denmark, the goal was not met and only 88% had had an eye examination within the last two years, whereas the number was 98% in Region Zealand. Region Zealand was a clear outlier here and the national average for this patient population was 91%. For patients treated in general practice no regions achieved the goal, and this variable provides a completely different picture from the previous variables in terms of the differences between patients treated in diabetes units and patients treated in general practice: In most regions, the coverage here for patients in general practice is in the single digits and the national average for this patient population is just 5%. They note in the report that this number has decreased over the years through which this variable has been analyzed, and they don’t know why (but they’re investigating it). It seems to be a big problem that doctors are not told about the results of these examinations, which presumably makes coordination of care difficult.
The report also has numbers on how many patients have had their eyes checked within the last 4 years, rather than within the last two, and this variable makes it clear that more infrequent screening does not explain the differences between the patient populations; for patients treated in general practice the numbers are still in the single digits. They mention that data security requirements imposed on health care providers are likely the reason why the numbers are low in general practice, as it seems common that the GP is not informed of the results of screenings taking place, so that the only people who get to know the results are the ophthalmologists doing them. A new variable recently included in the report is whether newly-diagnosed type 2 diabetics are screened for eye damage within 12 months of receiving their diagnosis – here they have received the numbers directly from the ophthalmologists, so uncertainty about information sharing doesn’t enter the picture (well, it does, but the variable doesn’t care; it just measures whether an eye screen has been performed or not). Although the standard set is 95% (at most one in twenty should not have their eyes checked within a year of diagnosis), at the national level only half of patients actually do get an eye screen within the first year (95% CI: 46-53%). Uncertainty about the date of diagnosis makes it slightly difficult to interpret some of the specific results, but the chosen standard is not achieved anywhere, and this once again underlines how diabetic eye care is one of the areas where things are not going as well as the people setting the goals would like them to.
The rationale for screening people within the first year of diagnosis is of course that many type 2 patients have complications at diagnosis – “30–50 per cent of patients with newly diagnosed T2DM will already have tissue complications at diagnosis due to the prolonged period of antecedent moderate and asymptomatic hyperglycaemia.” (link).
The report does include estimates of the number of diabetics who receive eye screenings regardless of whether the treating physician knows the results or not; at the national level, according to this estimate 65% of patients have their eyes screened at least once every second year, leaving more than a third of patients in a situation where they are not screened as often as is desirable. They mention that they have had difficulties with the transfer of data and many of the specific estimates are uncertain, including two of the regional estimates, but the general level – 65% or something like that – is based on close to 10,000 patients and is assumed to be representative. Approximately 1% of Danish diabetics are blind, according to the report.
Foot examinations: Just like most of the other variables: at least 95% of patients, at least once every second year. For diabetics treated in diabetes units, the national average here is 96%, and the goal was not achieved in Copenhagen (CRD) (94%) and northern Jutland (91%). There are again remarkable differences within regions; at Helsingør Hospital only 77% were screened (95% CI: 73-82%) (a drop from 94% the year before), and at Hillerød Hospital the number was even lower, 73% (95% CI: 70-75), again a drop from the previous year, where the coverage was 87%. Both these numbers are worse than the regional averages for all patients treated in general practice, even though none of the regions meet the goal. Actually I thought the year-to-year changes at these two hospitals were almost as interesting as the intraregional differences, because I have a hard time explaining them; how do you even set up a screening programme such that a coverage drop of more than 10 percentage points from one year to the next is possible? For those who don’t know, diabetic feet are very expensive and do not seem to get the research attention one might, from a cost-benefit perspective, assume they would (link, point iii). Going back to the patients in general practice, on average 81% of these patients have a foot examination at least once every second year. The regions here vary from 79% to 84%. The worst covered patients are patients treated in general practice in the Vordingborg sygehus catchment area in the Zealand Region, where only roughly two out of three (69%, 95% CI: 62-75%) patients have regular foot examinations.
Aside from all the specific indicators they’ve collected and reported on, the authors have also constructed a combined ‘all-or-none’ indicator, which measures the proportion of patients who did not miss any of the process measures: Hba1c measurement, foot examination, blood pressure measurement, kidney function tests, etc. They do not include the eye screening variable in this metric because of the problems associated with it, but this is the only process variable not included, and the variable is thus an indicator of how many patients are actually getting all of the care they’re supposed to get. As patients treated in general practice are generally less well covered than patients treated in the diabetes units at the hospitals, I was interested to know how much these differences ‘added up to’ in the end. For the diabetes units, 11% of patients failed on at least one metric (i.e. did not have their feet checked/Hba1c measured/blood pressure measured/etc.), whereas a third of patients in general practice failed on at least one (i.e. 67% received all of the measured care). Summed up like that, it seems to me that if you’re a Danish diabetes patient and you want to avoid having some variable neglected in your care, it matters whether you’re treated by your local GP or by the local diabetes unit, and you’re probably going to be better off receiving care from the diabetes unit.
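The construction of an all-or-none indicator of this kind is straightforward; a minimal sketch (the metric names below are illustrative, not the report’s exact variable list):

```python
# A patient counts as covered only if every process metric was met.
patients = [
    {"hba1c": True,  "blood_pressure": True,  "feet": True,  "kidney": True},
    {"hba1c": True,  "blood_pressure": False, "feet": True,  "kidney": True},
    {"hba1c": True,  "blood_pressure": True,  "feet": True,  "kidney": False},
]

def all_or_none(patients):
    """Proportion of patients who passed on every metric."""
    passed = sum(all(p.values()) for p in patients)
    return passed / len(patients)

print(round(all_or_none(patients), 2))  # 0.33 -> one of three patients got everything
```

Note how unforgiving the indicator is: a patient missing a single test counts the same as a patient missing all of them, which is part of why the general practice numbers look so much worse here than on any individual metric.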
Some descriptive statistics from the appendix (p. 95 onwards):
Sex ratio: For this variable they have multiple reports based on data derived from different databases. In the first database, including 16,442 people, 56% are male and 44% female. In the next database (n=20,635), including only type 2 diabetics, the sex ratio is more skewed; 60% are males and 40% females. In a database including only patients in general practice (n=34,359), as in the first database, 56% of the diabetics are males and 44% females. For the patient population of children and young adults included (n=2,624), the sex ratio is almost equal (51% males and 49% females). The last database, Diabase, based on evaluation of eye screening and including only adults (n=32,842), has 55% males and 45% females. It seems to me based on these results that the sex ratio is slightly skewed in most patient populations, with slightly more males than females having diabetes – and it seems not improbable that this is due to a higher male prevalence of type 2 diabetes (the children/young adult database and the type 2 database both seem to point in this direction – the children/young adult group mainly consists of type 1 patients, as 98% of this sample is type 1). The fact that the prevalence of autoimmune disorders is in general higher in females than in males also seems to support this interpretation; to the extent that the sex ratio is skewed in favour of males, you’d expect lifestyle factors to be behind this.
Next, the age distribution. In the first database (n=16,442), the average and the median age is 50, the standard deviation is 16, the youngest individual is 16 and the oldest is 95. It is worth remembering in this part of the reporting that the oldest individual in the sample is not a good estimate of ‘how long a diabetic can expect to live’ – for all we know, the 95-year-old in the database got diagnosed at the age of 80. You need diabetes duration before you can begin to speculate about that variable. Anyway, in the next database, of type 2 patients (n=20,635), the average age is 64 (median=65), the standard deviation is 12 and the oldest individual is 98. In both of the databases mentioned so far, some regions do better than others in terms of the oldest individual, but it seems to me that this may just be a function of sample size and ‘random stuff’ (95+-year-olds are rare events); Northern Jutland doesn’t have a lot of patients, so the oldest patient in that group is not as old as the oldest patient from Copenhagen – this is probably just what you’d expect. In the general practice database (n=34,359), the average age is 68 (median=69) and the standard deviation is 11; the oldest individual there is 102. In the Diabase database (n=32,842), the average age is 62 (median=64), the standard deviation is 15 and the oldest individual is 98. It’s clear from these databases that most diabetics in Denmark are type 2 diabetics (this is no surprise) and that a substantial proportion of them are at or close to retirement age.
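The sample-size point is worth making explicit: the maximum of a sample grows with sample size even when the underlying populations are identical. A quick simulation with made-up parameters (ages drawn from one shared distribution, with the ‘small region’ simply a subset of the draws):

```python
# The sample maximum is a poor summary statistic: it tends to grow with
# sample size alone, even with identical underlying age distributions.
import random

random.seed(0)
ages = [random.gauss(64, 12) for _ in range(20000)]  # one shared population
small_region = ages[:500]   # a hypothetical small region's patients
large_region = ages         # a hypothetical region with 40x the patients

print(round(max(small_region)))
print(round(max(large_region)))  # typically several years older, by sample size alone
```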
The appendix has a bit of data on diabetes type, but I think the main thing to take away from the tables that break this variable down is that type 1 is overrepresented in the databases compared to the true prevalence – in the Diabase database for example almost half of patients are type 1 (46%), despite the fact that type 1 diabetics are estimated to make up only 10% of the total in Denmark (see e.g. this (Danish source)). I’m sure this is to a significant extent due to lack of coverage of type 2 diabetics treated in general practice.
Diabetes duration: In the first data set, including 16,442 individuals, the patients have a median diabetes duration of 21.2 years. The 10% cutoff is 5.4 years, the 25% cutoff is 11.3 years, the 75% cutoff is 33.5 years, and the 90% cutoff is 44.2 years. High diabetes durations are more likely to be observed in type 1 patients, as they’re in general diagnosed earlier; in the next database, involving only type 2 patients (n=20,635), the median duration is 12.9 years and the corresponding cutoffs are 3.8 years (10%), 7.4 years (25%), 18.6 years (75%), and 24.7 years (90%). In the database involving patients treated in general practice, the median duration is 6.8 years and the cutoffs reported for the various percentiles are 2.5 years (10%), 4.0 (25%), 11.2 (75%) and 15.6 (90%). One note not directly related to the data, but which I thought might be worth adding here: if one were to try to use these data to estimate the risk of complications as a function of diabetes duration, it would be important to bear in mind that there’s probably often a substantial amount of uncertainty associated with the diabetes duration variable, because many type 2 diabetics are diagnosed after a substantial amount of time with sub-optimal glycemic control; i.e. although diabetes duration is lower in type 2 populations than in type 1 populations, I’d assume that the type 2 estimates of duration are still biased downwards compared to type 1 estimates, causing some potential issues in terms of how to interpret associations found here.
Next, smoking. In the first database (n=16,442), 22% of diabetics smoke daily and another 22% are ex-smokers who have not smoked within the last 6 months. According to the resource to which you’re directed when you’re looking for data on that kind of stuff on Statistics Denmark, the percentage of daily smokers was 17% in 2013 in the general population (based on n=158,870 – this is a direct link to the data), which seems to indicate that the trend (this is a graph of the percentage of Danes smoking daily as a function of time, going back to the ’70s) I commented upon (Danish link) a few years back has not reversed or slowed down much. If we go back to the appendix and look at the next source, dealing with type 2 diabetics, 19% of them are smoking daily and 35% of them are ex-smokers (again, 6 months). In the general practice database (n=34,359) 17% of patients smoke daily and 37% are ex-smokers.
BMI. Here’s one variable where type 1 and type 2 look very different. The first source deals with type 1 diabetics (n=15,967), and here the median BMI is 25.0, which is comparable to the population median (if anything it’s probably lower than the population median) – see e.g. page 63 here. Relevant percentile cutoffs are 20.8 (10%), 22.7 (25%), 28.1 (75%), and 31.3 (90%). Numbers are quite similar across regions. For the type 2 data, the first source (n=20,035) has a median BMI of 30.7 (almost equal to the 1-in-10 cutoff for type 1 diabetics), with relevant cutoffs of 24.4 (10%), 27.2 (25%), 34.9 (75%), and 39.4 (90%). According to this source, one in four type 2 diabetics in Denmark is ‘severely obese’, and more diabetics are obese than are not. It’s worth remembering that using these numbers to implicitly estimate the risk of type 2 diabetes associated with overweight is problematic, as especially some of the people in the lower end of the distribution are quite likely to have experienced weight loss post-diagnosis. For type 2 patients treated in general practice (n=15,736), the median BMI is 29.3 and cutoffs are 23.7 (10%), 26.1 (25%), 33.1 (75%), and 37.4 (90%).
Distribution of Hba1c. The descriptive statistics included also have data on the distribution of Hba1c values among some of the patients who have had this variable measured. I won’t go into the details here except to note that the differences between type 1 and type 2 patients in terms of the Hba1c values achieved are smaller than I’d perhaps expected; the median Hba1c among type 1s was estimated at 62, based on 16,442 individuals, whereas the corresponding number for type 2s was 59, based on 20,635 individuals. Curiously, a second data source finds a median Hba1c of only 48 for type 2 patients treated in general practice; the difference between this one and the type 1 median is definitely large enough to matter in terms of the risk of complications (it’s more questionable how big the effect of a jump from 59 to 62 is, especially considering measurement error and the fact that the type 1 distribution seems denser than the type 2 distribution, so that there aren’t that many more exceptionally high values in the type 1 dataset), but I wonder if this actually quite impressive level of metabolic control in general practice may be due to biased reporting, with GPs doing well in terms of diabetes management also being more likely to report to the databases; it’s worth remembering that most patients treated in general practice are still not accounted for in these datasets.
Oral antidiabetics and insulin. In one sample of 20,635 type 2 patients, 69% took oral antidiabetics, and in another sample of 34,359 type 2 patients treated in general practice the number was 75%. 3% of type 1 diabetics in a sample of 16,442 individuals also took oral antidiabetics, which surprised me. In the first-mentioned sample of type 2 patients, 69% also took insulin (the same percentage, but not the same individuals – this was not a reporting error), so there seems to be a substantial number of patients on both treatments. In the general practice sample, the number of patients on insulin was much lower, as only 14% of type 2 patients were on insulin – again, concerns about reporting bias may play a role here, but even taking this number at face value and extrapolating out of sample you reach the conclusion that the majority of patients on insulin are probably type 2 diabetics, as only roughly one patient in 10 is type 1.
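The extrapolation in that last sentence can be made explicit with a minimal back-of-the-envelope sketch. The 14% figure and the rough 1-in-10 population split are from the text; the assumption that essentially all type 1 patients are on insulin is mine (type 1 diabetes is insulin-dependent by definition), and using the other sample’s 69% instead of 14% would only strengthen the conclusion:

```python
# Back-of-the-envelope check: are most insulin users type 2 diabetics?
# Shares below are rough figures from the post; the 100% insulin share
# for type 1 patients is my assumption (type 1 is insulin-dependent).

type1_share = 0.10          # roughly 1 in 10 diabetics is type 1
type2_share = 1 - type1_share

type1_on_insulin = 1.0      # assumed: essentially all type 1 patients
type2_on_insulin = 0.14     # general-practice sample, at face value

# Fraction of ALL diabetics who are (type 1 and on insulin) vs
# (type 2 and on insulin):
t1_insulin = type1_share * type1_on_insulin    # 0.10
t2_insulin = type2_share * type2_on_insulin    # ~0.126

# Even under the conservative 14% figure, type 2 patients on insulin
# outnumber type 1 patients on insulin.
print(t2_insulin > t1_insulin)  # True
```

So even the lowest reported insulin share for type 2 patients, combined with their much larger population share, is enough to make them the majority of insulin users.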
Antihypertensive treatment and treatment for hyperlipidemia: Although, as mentioned above, there seems to be less focus on hypertension in type 1 patients than in type 2 patients, it’s still the case that roughly half (48%) of all patients in the type 1 sample (n=16,442) were on antihypertensive treatment. In the first type 2 sample (n=20,635), 82% of patients were receiving treatment against hypertension, and this number was similar in the general practice sample (81%). The proportions of patients in treatment for hyperlipidemia are roughly similar (46% of type 1, and 79% and 73% in the two type 2 samples, respectively).
Blood pressure. The median level of systolic blood pressure among type 1 diabetics (n=16,442) was 130, with the 75% cutoff intersecting the hypertension level (140) and 10% of patients having a systolic blood pressure above 151. These numbers are almost identical to those in the sample of type 2 patients treated in general practice; however, as mentioned earlier, this blood pressure level is achieved with a lower proportion of patients in treatment for hypertension. In the second sample of type 2 patients (n=20,635), the numbers were slightly higher (median: 133, 75% cutoff: 144, 90% cutoff: 158). The median diastolic blood pressure was 77 in the type 1 sample, with 75% and 90% cutoffs of 82 and 89; the data in the type 2 samples are almost identical.
Here’s my first post about the book. In this post I’ll continue my coverage where I left off in the first post. A few of the chapters covered below I did not think very highly of, but other parts of the coverage are about as good as you could expect (given problems such as e.g. limited data). Some of the stuff I found quite interesting. As people will note in the coverage below, the book does address the religious dimension to some extent, though in my opinion far from to the extent the variable deserves. An annoying aspect of the chapter on religion was to me that although the author of the chapter includes data which to me cannot but lead to some very obvious conclusions, he seems very careful to avoid drawing those conclusions explicitly. It’s understandable, but still annoying. For related reasons I also got annoyed at him for presumably deliberately disregarding what seems, in the context of his own coverage, to be an actually very important component of Huntington’s thesis: that conflict at the micro level very often seems to be between muslims and ‘the rest’. Here’s a relevant quote from Clash…, p. 255:
“ethnic conflicts and fault line wars have not been evenly distributed among the world’s civilizations. Major fault line fighting has occurred between Serbs and Croats in the former Yugoslavia and between Buddhists and Hindus in Sri Lanka, while less violent conflicts took place between non-Muslim groups in a few other places. The overwhelming majority of fault line conflicts, however, have taken place along the boundary looping across Eurasia and Africa that separates Muslims from non-Muslims. While at the macro or global level of world politics the primary clash of civilizations is between the West and the rest, at the micro or local level it is between Islam and the others.”
This point – that conflict at the local level, which seems to be the type of conflict you’re particularly interested in if you’re researching civil wars (as also argued in previous chapters in the coverage), according to Huntington is very islam-centric – is completely overlooked (ignored?) in the handbook chapter. If you haven’t read Huntington and your only exposure to him is through the chapter in question, you’ll probably conclude that Huntington was wrong, because that seems to be the conclusion the author draws, arguing that other models are more convincing (I should add here that these other models do seem useful, at least in terms of providing (superficial) explanations; the point is just that I feel the author is misrepresenting Huntington, and I dislike this). Although there are parts of the coverage in that chapter where I feel it’s obvious the author and I do not agree, I should note that the fact that he talks about the data and the empirical research makes up for a lot of other stuff.
Anyway, on to the coverage – it’s perhaps worth noting, in light of the introductory remarks above, that the post has stuff on a lot of things besides religion, e.g. the role of natural resources, regime types, migration, and demographics.
“Elites seeking to end conflict must: (1) lead followers to endorse and support peaceful solutions; (2) contain spoilers and extremists and prevent them from derailing the process of peacemaking; and (3) forge coalitions with more moderate members of the rival ethnic group(s) […]. An important part of the two-level nature of the ethnic conflict is that each of the elites supporting the peace process be able to present themselves, and the resulting terms of the peace, as a “win” for their ethnic community. […] A strategy that a state may pursue to resolve ethnic conflict is to co-opt elites from the ethnic communities demanding change […]. By satisfying elites, it reduces the ability of the aggrieved ethnic community to mobilize. Such a process of co-option can also be used to strengthen ethnic moderates in order to undermine ethnic extremists. […] the co-opted elites need to be careful to be seen as still supporting ethnic demands or they may lose all credibility in their respective ethnic community. If this occurs, the likely outcome is that more extreme ethnic elites will be able to capture the ethnic community, possibly leading to greater violence.
It is important to note that “spoilers,” be they an individual or a small sub-group within an ethnic community, can potentially derail any peace process, even if the leaders and masses support peace (Stedman, 2001).”
“Three separate categories of international factors typically play into identity and ethnic conflict. The first is the presence of an ethnic community across state boundaries. Thus, a single community exists in more than one state and its demands become international. […] This division of an ethnic community can occur when a line is drawn geographically through a community […], when a line is drawn and a group moves into the new state […], or when a diaspora moves a large population from one state to another […] or when sub-groups of an ethnic community immigrate to the developed world […] When ethnic communities cross state boundaries, the potential for one state to support an ethnic community in the other state exists. […] There is also the potential for ethnic communities to send support to a conflict […] or to lobby their government to intervene […]. Ethnic groups may also form extra-state militias and cross international borders. Sometimes these rebel groups can be directly or indirectly sponsored by state governments, leading to a very complex situation […] A second set of possible international factors is non-ethnic international intervention. A powerful state may decide to intervene in an ethnic conflict for a variety of reasons, ranging from humanitarian support, to peacekeeping, to outright invasion […] The third and last factor is the commitment of non-governmental organizations (NGOs) or third-party mediators to a conflict. […] The record of international interventions in ethnic civil wars is quite mixed. There are many difficulties associated with international action [and] international groups cannot actually change the underlying root of the ethnic conflict (Lake and Rothchild, 1998; Kaufman, 1996).”
“A relatively simple way to think of conflict onset is to think that for a rebellion to occur two conditions need to be satisfactorily fulfilled: There must be a motivation and there must be an opportunity to rebel.3 First, the rebels need a motive. This can be negative – a grievance against the existing state of affairs – or positive – a desire to capture resource rents. Second, potential rebels need to be able to achieve their goal: The realization of their desires may be blocked by the lack of financial means. […] Work by Collier and Hoeffler (1998, 2004) was crucial in highlighting the economic motivation behind civil conflicts. […] Few conflicts, if any, can be characterized purely as “resource conflicts.” […] It is likely that few groups are solely motivated by resource looting, at least in the lower rank level. What is important is that valuable natural resources create opportunities for conflicts. To feed, clothe, and arm its members, a rebel group needs money. Unless the rebel leaders are able to raise sufficient funds, a conflict is unlikely to start no matter how severe the grievances […] As a consequence, feasibility of conflict – that is, valuable natural resources providing opportunity to engage in violent conflict – has emerged as a key to understanding the relation between valuable resources and conflict.”
“It is likely that some natural resources are more associated with conflict than others. Early studies on armed civil conflict used resource measures that aggregated different types of resources together. […] With regard to financing conflict start-up and warfare the most salient aspect is probably the ease with which a resource can be looted. Lootable resources can be extracted with simple methods by individuals or small groups, are easy to transport, and can be smuggled across borders with limited risks. Examples of this type of resources are alluvial gemstones and gold. By contrast, deep-shaft minerals, oil, and natural gas are less lootable and thus less likely sources of financing. […] Using comprehensive datasets on all armed civil conflicts in the world, natural resource production, and other relevant aspects such as political regime, economic performance, and ethnic composition, researchers have established that at least some high-value natural resources are related to higher risk of conflict onset. Especially salient in this respect seem to be oil and secondary diamonds […] The results regarding timber […] and cultivation of narcotics […] are inconclusive. […] [An] important conclusion is that natural resources should be considered individually and not lumped together. Diamonds provide an illustrative example: the geological form of the diamond deposit is related to its effect on conflict. Secondary diamonds – the more lootable form of two deposit types – makes conflict more likely, longer, and more severe. Primary diamonds on the other hand are generally not related to conflict.”
“Analysis on conflict duration and severity confirm that location is a salient factor: resources matter for duration and severity only when located in the region where the conflict is taking place […] That the location of natural resources matters has a clear and important implication for empirical conflict research: relying on country-level aggregates can lead to wrong conclusions about the role of natural resources in armed civil conflict. As a consequence of this, there has been effort to collect location-specific data on oil, gas, drug cultivation, and gemstones”.
“a number of prominent studies of ethnic conflict have suggested that when ethnic groups grow at different rates, this may lead to fears of an altered political balance, which in turn might cause political instability and violent conflict […]. There is ample anecdotal evidence for such a relationship [but unfortunately little quantitative research…]. The civil war in Lebanon, for example, has largely been attributed to a shift in the delicate ethnic balance in that state […]. Further, in the early 1990s, radical Serb leaders were agitating for the secession of “Serbian” areas in Bosnia-Herzegovina by instigating popular fears that Serbs would soon be outnumbered by a growing Muslim population heading for the establishment of a Shari’a state”.
“[One] part of the demography-conflict literature has explored the role of population movements. Most of this literature […] treats migration and refugee flows as a consequence of conflict rather than a potential cause. Some scholars, however, have noted that migration, and refugee migration in particular, can spur the spread of conflict both between and within states […]. Existing work suggests that environmentally induced migration can lead to conflict in receiving areas due to competition for scarce resources and economic opportunities, ethnic tensions when migrants are from different ethnic groups, and exacerbation of socioeconomic “fault lines” […] Salehyan and Gleditsch (2006) point to spill-over effects, in the sense that mass refugee migration might spur tensions in neighboring or receiving states by imposing an economic burden and causing political stability [sic]. […] Based on a statistical analysis of refugees from neighboring countries and civil war onset during the period 1951–2001, they find that countries that experience an influx of refugees from neighboring states are significantly more likely to experience wars themselves. […] While the youth bulge hypothesis [large groups of young males => higher risk of violence/war/etc.] in general is supported by empirical evidence, indicating that countries and areas with large youth cohorts are generally at a greater risk of low-intensity conflict, the causal pathways relating youth bulges to increased conflict propensity remain largely unexplored quantitatively. When it comes to the demographic factors which have so far received less attention in terms of systematic testing – skewed sex ratios, differential ethnic growth, migration, and urbanization – the evidence is somewhat mixed […] a clear challenge with regard to the study of demography and conflict pertains to data availability and reliability. 
[…] Countries that are undergoing armed conflict are precisely those for which we need data, but also those in which census-taking is hampered by violence.”
“Most research on the duration of civil war find that civil wars in democracies tend to be longer than other civil wars […] Research on conflict severity finds some evidence that democracies tend to see fewer battle deaths and are less likely to target civilians, suggesting that democratic institutions may induce some important forms of restraints in armed conflict […] Many researchers have found that democratization often precedes an increase in the risk of the onset of armed conflict. Hegre et al. (2001), for example, find that the risk of civil war onset is almost twice as high a year after a regime change as before, controlling for the initial level of democracy […] Many argue that democratic reforms come about when actors are unable to rule unilaterally and are forced to make concessions to an opposition […] The actual reforms to the political system we observe as democratization often do not suffice to reestablish an equilibrium between actors and the institutions that regulate their interactions; and in its absence, a violent power struggle can follow. Initial democratic reforms are often only partial, and may fail to satisfy the full demands of civil society and not suffice to reduce the relevant actors’ motivation to resort to violence […] However, there is clear evidence that the sequence matters and that the effect [the increased risk of civil war after democratization, US] is limited to the first election. […] civil wars […] tend to be settled more easily in states with prior experience of democracy […] By our count, […] 75 percent of all annual observations of countries with minor or major armed conflicts occur in non-democracies […] Democracies have an incidence of major armed conflict of only 1 percent, whereas nondemocracies have a frequency of 5.6 percent.”
“Since the Iranian revolution in the late 1970s, religious conflicts and the rise of international terror organizations have made it difficult to ignore the facts that religious factors can contribute to conflict and that religious actors can cause or participate in domestic conflicts. Despite this, comprehensive studies of religion and domestic conflict remain relatively rare. While the reasons for this rarity are complex there are two that stand out. First, for much of the twentieth century the dominant theory in the field was secularization theory, which predicted that religion would become irrelevant and perhaps extinct in modern times. While not everyone agreed with this extreme viewpoint, there was a consensus that religious influences on politics and conflict were a waning concern. […] This theory was dominant in sociology for much of the twentieth century and effectively dominated political science, under the title of modernization theory, for the same period. […] Today supporters of secularization theory are clearly in the minority. However, one of their legacies has been that research on religion and conflict is a relatively new field. […] Second, as recently as 2006, Brian Grim and Roger Finke lamented that “religion receives little attention in international quantitative studies. Including religion in cross-national studies requires data, and high-quality data are in short supply” […] availability of the necessary data to engage in quantitative research on religion and civil wars is a relatively recent development.”
“[Some] studies [have] found that conflicts involving actors making religious demands – such as demanding a religious state or a significant increase in religious legislation – were less likely to be resolved with negotiated settlements; a negotiated settlement is possible if the settlement focused on the non-religious aspects of the conflict […] One study of terrorism found that terror groups which espouse religious ideologies tend to be more violent (Henne, 2012). […] The clear majority of quantitative studies of religious conflict focus solely on inter-religious conflicts. Most of them find religious identity to influence the extent of conflict […] but there are some studies which dissent from this finding”.
“Terror is most often selected by groups that (1) have failed to achieve their goals through peaceful means, (2) are willing to use violence to achieve their goals, and (3) do not have the means for higher levels of violence.”
“the PITF dataset provides an accounting of the number of domestic conflicts that occurred in any given year between 1960 and 2009. […] Between 1960 and 2009 the modified dataset includes 817 years of ethnic war, 266 years of genocides/politicides, and 477 years of revolutionary wars. […] Cases were identified as religious or not religious based on the following categorization:
1 Not Religious.
2 Religious Identity Conflict: The two groups involved in the conflict belong to different religions or different denominations of the same religion.
3 Religious Wars: The two sides of the conflict belong to the same religion but the description of the conflict provided by the PITF project identifies religion as being an issue in the conflict. This typically includes challenges by religious fundamentalists to more secular states. […]
The results show that both numerically and as a proportion of all conflict, religious state failures (which include both religious identity conflicts and religious wars) began increasing in the mid-1970s. […] As a proportion of all conflict, religious state failures continued to increase and became a majority of all state failures in 2002. From 2002 onward, religious state failures were between 55 percent and 62 percent of all state failures in any given year.”
“Between 2002 and 2009, eight of 12 new state failures were religious. All but one of the new religious state failures were ongoing as of 2009. These include:
• 2002: A rebellion in the Muslim north of the Ivory Coast (ended in 2007)
• 2003: The beginning of the Sunni–Shia violent conflict in Iraq (ongoing)
• 2003: The resumption of the ethnic war in the Sudan [97% muslims, US] (ongoing)
• 2004: Muslim militants challenged Pakistan’s government in South and North Waziristan. This has been followed by many similar attacks (ongoing)
• 2004: Outbreak of violence by Muslims in southern Thailand (ongoing)
• 2004: In Yemen [99% muslims, US], followers of dissident cleric Husain Badr al-Din al-Huthi create a stronghold in Saada. Al-Huthi was killed in September 2004, but serious fighting begins again in early 2005 (ongoing)
• 2007: Ethiopia’s invasion of southern Somalia causes a backlash in the Muslim (ethnic-Somali) Ogaden region (ongoing)
• 2008: Islamist militants in the eastern Trans-Caucasus region of Russia bordering on Georgia (Chechnya, Dagestan, and Ingushetia) reignited their violent conflict against Russia (ongoing)” [my bold]
“There are few additional studies which engage in this type of longitudinal analysis. Perhaps the most comprehensive of such studies is presented in Toft et al.’s (2011) book God’s Century based on data collected by Toft. They found that religious conflicts – defined as conflicts with a religious content – rose from 19 percent of all civil wars in the 1940s to about half of civil wars during the first decade of the twenty-first century. Of these religious conflicts, 82 percent involved Muslims. This analysis includes only 135 civil wars during this period. The lower number is due to a more restrictive definition of civil war which includes at least 1,000 battle deaths. This demonstrates that the findings presented above also hold when looking at the most violent of civil wars.” [my bold]
“This comprehensive new Handbook explores the significance and nature of armed intrastate conflict and civil war in the modern world.
Civil wars and intrastate conflict represent the principal form of organised violence since the end of World War II, and certainly in the contemporary era. These conflicts have a huge impact and drive major political change within the societies in which they occur, as well as on an international scale. The global importance of recent intrastate and regional conflicts in Afghanistan, Pakistan, Iraq, Somalia, Nepal, Côte d’Ivoire, Syria and Libya – amongst others – has served to refocus academic and policy interest upon civil war. […] This volume will be of much interest to students of civil wars and intrastate conflict, ethnic conflict, political violence, peace and conflict studies, security studies and IR in general.”
I’m currently reading this handbook. One observation I’ll make here before moving on to the main coverage is that although I’ve read more than 100 pages, and although every single one of the conflicts argued in the introduction above to be motivating study into these topics – aside from one, the exception being Nepal – involves muslims, the word ‘islam’ has been mentioned exactly once in the coverage so far (an updated list would arguably include yet another muslim country, Yemen, as well). I noted while doing the text search that they seem to take up the topic of religion and religious motivation later on, so I sort of want to withhold judgment for now, but if they don’t deal more seriously with this topic later on than they have so far, I’ll have great difficulty giving this book a high rating, despite the coverage so far being in general quite interesting, detailed and well written. Chapter 7, on so-called ‘critical perspectives’, is in my opinion a load of crap [a few illustrative quotes/words/concepts from that chapter: “Frankfurt School-inspired Critical Theory”, “approaches such as critical constructivism, post-structuralism, feminism, post-colonialism”, “an openly ethical–normative commitment to human rights, progressive politics”, “labelling”, “dialectical”, “power–knowledge structures”, “conflict discourses”, “Foucault”, “an abiding commitment to being aware of, and trying to overcome, the Eurocentric, Orientalist and patriarchal forms of knowledge often prevalent within civil war studies”, “questioning both morally and intellectually the dominant paradigm”… I read the chapter very fast, to the point of almost only skimming it, and I have not quoted from it in my coverage below, for reasons which should be obvious – I was reminded of Poe’s Corollary while reading the chapter, as I briefly started wondering along the way if it was an elaborate joke which had somehow made it into the publication, and I was also briefly reminded of the Sokal affair, mostly because of the unbelievable amount of meaningless buzzwords], but that’s just one chapter and most of the others so far have been quite okay. A few of the points in the problematic chapter are actually arguably worth having in mind, but there’s so much bullshit included as well that you have a really hard time taking any of it seriously.
Some observations from the first 100 pages:
“There are wide differences of opinion across the broad field of scholars who work on civil war regarding the basis of legitimate and scientific knowledge in this area, on whether cross-national studies can generate reliable findings, and on whether objective, value-free analysis of armed conflict is possible. All too often – and perhaps increasingly so, with the rise in interest in econometric approaches – scholars interested in civil war from different methodological traditions are isolated from each other. […] even within the more narrowly defined empirical approaches to civil war studies there are major disagreements regarding the most fundamental questions relating to contemporary civil wars, such as the trends in numbers of armed conflicts, whether civil wars are changing in nature, whether and how international actors can have a role in preventing, containing and ending civil wars, and the significance of [various] factors”.
“In simplest terms civil war is a violent conflict between a government and an organized rebel group, although some scholars also include armed conflicts primarily between non-state actors within their study. The definition of a civil war, and the analytical means of differentiating a civil war from other forms of large-scale violence, has been controversial […] The Uppsala Conflict Data Program (UCDP) uses 25 battle-related deaths per year as the threshold to be classified as armed conflict, and – in common with other datasets such as the Correlates of War (COW) – a threshold of 1,000 battle-related deaths for a civil war. While this is now widely endorsed, debate remains regarding the rigor of this definition […] differences between two of the main quantitative conflict datasets – the UCDP and the COW – in terms of the measurement of armed conflict result in significant differences in interpreting patterns of conflict. This has led to conflicting findings not only about absolute numbers of civil wars, but also regarding trends in the numbers of such conflicts. […] According to the UCDP/PRIO data, from 1946 to 2011 a total of 102 countries experienced civil wars. Africa witnessed the most with 40 countries experiencing civil wars between 1946 and 2011. During this period 20 countries in the Americas experienced civil war, 18 in Asia, 13 in Europe, and 11 in the Middle East […]. There were 367 episodes (episodes in this case being separated by at least one year without at least 25 battle-related deaths) of civil wars from 1946 to 2009 […]. The number of active civil wars generally increased from the end of the Cold War to around 1992 […]. Since then the number has been in decline, although whether this is likely to be sustained is debatable. In terms of onset of first episode by region from 1946 to 2011, Africa leads the way with 75, followed by Asia with 67, the Western Hemisphere with 33, the Middle East with 29, and Europe with 25 […]. 
As Walter (2011) has observed, armed conflicts are increasingly concentrated in poor countries. […] UCDP reports 137 armed conflicts for the period 1989–2011. For the overlapping period 1946–2007, COW reports 179 wars, while UCDP records 244 armed conflicts. As most of these conflicts have been fought over disagreements relating to conditions within a state, it means that civil war has been the most common experience of war throughout this period.”
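To make the dataset definitions quoted above a bit more concrete, here’s a minimal sketch of how UCDP-style battle-death thresholds partition conflict-years. The two cutoffs (25 and 1,000 battle-related deaths per year) are taken from the quote; the function name and the example death counts are invented for illustration and are obviously not how the actual coding projects work in detail:

```python
# Illustrative sketch (not from the handbook) of the quoted UCDP-style
# battle-death thresholds. Cutoffs are from the text; everything else
# is made up for illustration.

ARMED_CONFLICT_THRESHOLD = 25    # battle-related deaths per year
CIVIL_WAR_THRESHOLD = 1000       # battle-related deaths per year

def classify(battle_deaths: int) -> str:
    """Classify a single conflict-year by the quoted thresholds."""
    if battle_deaths >= CIVIL_WAR_THRESHOLD:
        return "civil war"
    if battle_deaths >= ARMED_CONFLICT_THRESHOLD:
        return "armed conflict"
    return "below threshold"

# Hypothetical yearly death counts for three conflict-years:
for deaths in (10, 250, 4000):
    print(deaths, classify(deaths))
```

The point of the quote – that small definitional differences between datasets (e.g. where exactly these cutoffs sit, or how episodes are separated) produce conflicting counts and trends – follows directly from how sensitive such a classification is to the chosen thresholds.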
“There were 3 million deaths from civil wars with no international intervention between 1946 and 2008. There were 1.5 million deaths in wars where intervention occurred. […] In terms of region, there were approximately 350,000 civil war-related deaths in both Europe and the Middle East from the years 1946 to 2008. There were 467,000 deaths in the Western Hemisphere, 1.2 million in Africa, and 3.1 million in Asia for the same period […] In terms of historical patterns of civil wars and intrastate armed conflict more broadly, the most conspicuous trend in recent decades is an apparent decline in absolute numbers, magnitude, and impact of armed conflicts, including civil wars. While there is wide – but not total – agreement regarding this, the explanations for this downward trend are contested. […] the decline seems mainly due not to a dramatic decline of civil war onsets, but rather because armed conflicts are becoming shorter in duration and they are less likely to recur. While this is undoubtedly welcome – and so is the tendency of civil wars to be generally smaller in magnitude – it should not obscure the fact that civil wars are still breaking out at a rate that has been fairly static in recent decades.”
“there is growing consensus on a number of findings. For example, intrastate armed conflict is more likely to occur in poor, developing countries with weak state structures. In situations of weak states the presence of lootable natural resources and oil increase the likelihood of experiencing armed conflict. Dependency upon the export of primary commodities is also a vulnerability factor, especially in conjunction with drastic fluctuations in international market prices which can result in economic shocks and social dislocation. State weakness is relevant to this – and to most of the theories regarding armed conflict proneness – because such states are less able to cushion the impact of economic shocks. […] Authoritarian regimes as well as entrenched democracies are less likely to experience civil war than societies in-between […] Situations of partial or weak democracy (anocracy) and political transition, particularly a movement towards democracy in volatile or divided societies, are also strongly correlated to conflict onset. The location of a society – especially if it has other vulnerability factors – in a region which has contiguous neighbors which are experiencing or have experienced armed conflict is also an armed conflict risk.”
“Military intervention aimed at supporting a protagonist or influencing the outcome of a conflict tends to increase the intensity of civil wars and increase their duration […] It is commonly argued that wars ending with military victory are less likely to recur […]. In these terminations one side no longer exists as a fighting force. Negotiated settlements, on the other hand, are often unstable […] The World Development Report 2011 notes that 90 percent of the countries with armed conflicts taking place in the first decade of the 2000s also had a major armed conflict in the preceding 30 years […] of the 137 armed conflicts that were fought after 1989, 100 had ended by 2011, while 37 were still ongoing”
“Cross-national, aggregated, analysis has played a leading role in strengthening the academic and policy impact of conflict research through the production of rigorous research findings. However, the […] aggregation of complex variables has resulted in parsimonious findings which arguably neglect the complexity of armed conflict; simultaneously, differences in the codification and definition of key concepts result in contradictory findings. The growing popularity of micro-studies is therefore an important development in the field of civil war studies, and one that responds to the demand for more nuanced analysis of the dynamics of conflict at the local level.”
“Jason Quinn, University of Notre Dame, has calculated that the number of scholarly articles on the onset of civil wars published in the first decade of the twenty-first century is larger than the previous five decades combined”.
“One of the most challenging aspects of quantitative analysis is transforming social concepts into numerical values. This difficulty means that many of the variables used to capture theoretical constructs represent crude indicators of the real concept […] econometric studies of civil war must account for the endogenising effect of civil war on other variables. Civil war commonly lowers institutional capacity and reduces economic growth, two of the primary conditions that are consistently shown to motivate civil violence. Scholars have grown more capable of modelling this process […], but still too frequently fail to capture the endogenising effect of civil conflict on other variables […] the problems associated with the rare nature of civil conflict can [also] cause serious problems in a number of econometric models […] Case-based analysis commonly suffers from two fundamental problems: non-generalisability and selection bias. […] Combining research methods can help to enhance the validity of both quantitative and qualitative research. […] the combination of methods can help quantitative researchers address measurement issues, assess outliers, discuss variables omitted from the large-N analysis, and examine cases incorrectly predicted by econometric models […] The benefits of mixed methods research designs have been clearly illustrated in a number of prominent studies of civil war […] Yet unfortunately the bifurcation of conflict studies into qualitative and quantitative branches makes this practice less common than is desirable.”
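The “rare nature of civil conflict” point above can be made concrete with a toy simulation of my own (not from the book; the 1.7% annual onset probability below is an assumed order of magnitude for illustration, not a cited figure). With outcomes this rare, a substantial share of modest samples contain no onsets at all, which is part of why standard maximum-likelihood models struggle:

```python
# Toy illustration of the 'rare events' problem (my own sketch; the
# onset probability is an assumption for illustration, not a figure
# from the book). With rare outcomes, many modest samples contain no
# events at all, leaving standard models with very little to work with.
import random

random.seed(0)

P_ONSET = 0.017     # assumed probability of civil-war onset per country-year
N_OBS = 100         # country-years per sample
N_SAMPLES = 10_000  # number of simulated samples

zero_event_samples = 0
rates = []
for _ in range(N_SAMPLES):
    onsets = sum(random.random() < P_ONSET for _ in range(N_OBS))
    if onsets == 0:
        zero_event_samples += 1
    rates.append(onsets / N_OBS)

share_empty = zero_event_samples / N_SAMPLES
mean_rate = sum(rates) / len(rates)
print(f"share of samples with zero onsets: {share_empty:.1%}")
print(f"mean estimated onset rate: {mean_rate:.4f}")
```

Roughly one sample in five here contains no onsets whatsoever, even though the underlying rate is perfectly well defined; a model conditioning on covariates gets essentially no information from such samples.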
“Ethnography has elicited a lively critique from within and without anthropology. […] Ethnographers stand accused of argument by ostension (pointing at particular instances as indicative of a general trend). The instances may not even be true. This is one of the reasons that the economist Paul Collier rejected ethnographic data as a source of insight into the causes of civil wars (Collier 2000b). According to Collier, the ethnographer builds on anecdotal evidence offered by people with good reasons to fabricate their accounts. […] The story fits the fact. But so might other stories. […] [It might be categorized as] a discipline that still combines a mix of painstaking ethnographic documentation with brilliant flights of fancy, and largely leaves numbers on one side.”
“While macro-historical accounts convincingly argue for the centrality of the state to the incidence and intensity of civil war, there is a radical spatial unevenness to violence in civil wars that defies explanation at the national level. Villages only a few miles apart can have sharply contrasting experiences of conflict and in most civil wars large swathes of territory remain largely unaffected by violence. This unevenness presents a challenge to explanations of conflict that treat states or societies as the primary unit of analysis. […] A range of databases of disaggregated data on incidences of violence have recently been established and a lively publication programme has begun to explore sub-national patterns of distribution and diffusion of violence […] All of these developments testify to a growing recognition across the social sciences that spatial variation, territorial boundaries and bounding processes are properly located at the heart of any understanding of the causes of civil war. It suggests too that sub-national boundaries in their various forms – whether regional or local boundaries, lines of control established by rebels or no-go areas for state security forces – need to be analysed alongside national borders and in a geopolitical context. […] In both violent and non-violent contention local ‘safe territories’ of one kind or another are crucial to the exercise of power by challengers […] the generation of violence by insurgents is critically affected by logistics (e.g. roads), but also shelter (e.g. forests) […] Schutte and Weidmann (2011) offer a […] dynamic perspective on the diffusion of insurgent violence. Two types of diffusion are discussed; relocation diffusion occurs when the conflict zone is shifted to new locations, whereas escalation diffusion corresponds to an expansion of the conflict zone. 
They argue that the former should be a feature of conventional civil wars with clear frontlines, whereas the latter should be observed in irregular wars, an expectation that is borne out by the data.”
“Research on the motivation of armed militants in social movement scholarship emphasises the importance of affective ties, of friendship and kin networks and of emotion […] Sageman’s (2004, 2008) meticulous work on Salafist-inspired militants emphasises that mobilisation is a collective rather than individual process and highlights the importance of inter-personal ties, networks of friendship, family and neighbours. That said, it is clear that there is a variety of pathways to armed action on the part of individuals rather than one single dominant motivation”.
“While it is often difficult to conduct real experiments in the study of civil war, the micro study of violence has seen a strong adoption of quasi-experimental designs and in general, a more careful thinking about causal identification”.
“Condra and Shapiro (2012) present one of the first studies to examine the effects of civilian targeting in a micro-level study. […] they show that insurgent violence increases as a result of civilian casualties caused by counterinsurgent forces. Similarly, casualties inflicted by the insurgents have a dampening effect on insurgent effectiveness. […] The conventional wisdom in the civil war literature has it that indiscriminate violence by counterinsurgent forces plays into the hands of the insurgents. After being targeted collectively, the aggrieved population will support the insurgency even more, which should result in increased insurgent effectiveness. Lyall (2009) conducts a test of this relationship by examining the random shelling of villages from Russian bases in Chechnya. He matches shelled villages with those that have similar histories of violence, and examines the difference in insurgent violence between treatment and control villages after an artillery strike. The results clearly disprove conventional wisdom and show that shelling reduces subsequent insurgent violence. […] Other research in this area has looked at alternative counterinsurgency techniques, such as aerial bombings. In an analysis that uses micro-level data on airstrikes and insurgent violence, Kocher et al. (2011) show that, counter to Lyall’s (2009) findings, indiscriminate violence in the form of airstrikes against villages in the Vietnam war was counterproductive […] Data availability […] partly dictates what micro-level questions we can answer about civil war. […] not many conflicts have datasets on bombing sorties, such as the one used by Kocher et al. (2011) for the Vietnam war.”
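As a toy sketch of the matched treatment/control logic behind quasi-experimental designs like Lyall’s (entirely synthetic data with an assumed treatment effect — this illustrates the general method, not his actual Chechnya dataset):

```python
# Toy sketch (synthetic data, assumed effect size) of nearest-neighbour
# matching as used in quasi-experimental designs like Lyall (2009):
# match each 'shelled' village to the control village with the most
# similar pre-strike violence history, then compare post-strike violence.
import random

random.seed(1)

def make_village(treated):
    pre = random.randint(0, 20)        # pre-period insurgent attacks
    effect = -3 if treated else 0      # assumed treatment effect (toy)
    post = max(0, pre + random.randint(-2, 2) + effect)
    return {"treated": treated, "pre": pre, "post": post}

treated = [make_village(True) for _ in range(50)]
controls = [make_village(False) for _ in range(200)]

# Nearest-neighbour matching on the pre-period violence count.
diffs = []
for v in treated:
    match = min(controls, key=lambda c: abs(c["pre"] - v["pre"]))
    diffs.append(v["post"] - match["post"])

att = sum(diffs) / len(diffs)  # average treatment effect on the treated
print(f"estimated effect on post-period violence: {att:.2f}")
```

The estimate comes out close to the assumed effect because matching on the pre-period history removes most of the baseline differences between treated and control villages; the real methodological work in studies like this is arguing that, after matching, treatment is as-good-as-random.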
i. Lock (water transport). Zumerchik and Danver’s book covered this kind of stuff as well, sort of, and I figured that since I’m not going to blog the book – for reasons provided in my goodreads review here – I might as well add a link or two here instead. The words ‘sort of’ above are in my opinion justified because the book coverage is so horrid you’d never even know what a lock is used for from reading that book; you’d need to look that up elsewhere.
On a related note there’s a lot of stuff in that book about the history of water transport etc. which you probably won’t get from these articles, but having a look here will give you some idea of the sorts of topics many of the chapters of the book deal with. Also, stuff like this and this. The book’s coverage of the latter topic is incidentally much, much more detailed than that wiki article’s, and the article – as well as many other articles about related topics (economic history, etc.) on the wiki, to the extent that they even exist – could clearly be improved greatly by adding content from books like this one. However, I’m not going to be the guy doing that.
ii. Congruence (geometry).
iii. Geography and ecology of the Everglades. I’d note that this is a topic which seems to be reasonably well covered on wikipedia; there’s for example also a ‘good article’ on the Everglades and a featured article about the Everglades National Park. A few quotes and observations from the article:
“The geography and ecology of the Everglades involve the complex elements affecting the natural environment throughout the southern region of the U.S. state of Florida. Before drainage, the Everglades were an interwoven mesh of marshes and prairies covering 4,000 square miles (10,000 km2). […] Although sawgrass and sloughs are the enduring geographical icons of the Everglades, other ecosystems are just as vital, and the borders marking them are subtle or nonexistent. Pinelands and tropical hardwood hammocks are located throughout the sloughs; the trees, rooted in soil inches above the peat, marl, or water, support a variety of wildlife. The oldest and tallest trees are cypresses, whose roots are specially adapted to grow underwater for months at a time.”
“A vast marshland could only have been formed due to the underlying rock formations in southern Florida. The floor of the Everglades formed between 25 million and 2 million years ago when the Florida peninsula was a shallow sea floor. The peninsula has been covered by sea water at least seven times since the earliest bedrock formation. […] At only 5,000 years of age, the Everglades is a young region in geological terms. Its ecosystems are in constant flux as a result of the interplay of three factors: the type and amount of water present, the geology of the region, and the frequency and severity of fires. […] Water is the dominant element in the Everglades, and it shapes the land, vegetation, and animal life of South Florida. The South Florida climate was once arid and semi-arid, interspersed with wet periods. Between 10,000 and 20,000 years ago, sea levels rose, submerging portions of the Florida peninsula and causing the water table to rise. Fresh water saturated the limestone, eroding some of it and creating springs and sinkholes. The abundance of fresh water allowed new vegetation to take root, and through evaporation formed thunderstorms. Limestone was dissolved by the slightly acidic rainwater. The limestone wore away, and groundwater came into contact with the surface, creating a massive wetland ecosystem. […] Only two seasons exist in the Everglades: wet (May to November) and dry (December to April). […] The Everglades are unique; no other wetland system in the world is nourished primarily from the atmosphere. […] Average annual rainfall in the Everglades is approximately 62 inches (160 cm), though fluctuations of precipitation are normal.”
“Between 1871 and 2003, 40 tropical cyclones struck the Everglades, usually every one to three years.”
“Islands of trees featuring dense temperate or tropical trees are called tropical hardwood hammocks. They may rise between 1 and 3 feet (0.30 and 0.91 m) above water level in freshwater sloughs, sawgrass prairies, or pineland. These islands illustrate the difficulty of characterizing the climate of the Everglades as tropical or subtropical. Hammocks in the northern portion of the Everglades consist of more temperate plant species, but closer to Florida Bay the trees are tropical and smaller shrubs are more prevalent. […] Islands vary in size, but most range between 1 and 10 acres (0.40 and 4.05 ha); the water slowly flowing around them limits their size and gives them a teardrop appearance from above. The height of the trees is limited by factors such as frost, lightning, and wind: the majority of trees in hammocks grow no higher than 55 feet (17 m). […] There are more than 50 varieties of tree snails in the Everglades; the color patterns and designs unique to single islands may be a result of the isolation of certain hammocks. […] An estimated 11,000 species of seed-bearing plants and 400 species of land or water vertebrates live in the Everglades, but slight variations in water levels affect many organisms and reshape land formations.”
“Because much of the coast and inner estuaries are built by mangroves—and there is no border between the coastal marshes and the bay—the ecosystems in Florida Bay are considered part of the Everglades. […] Sea grasses stabilize sea beds and protect shorelines from erosion by absorbing energy from waves. […] Sea floor patterns of Florida Bay are formed by currents and winds. However, since 1932, sea levels have been rising at a rate of 1 foot (0.30 m) per 100 years. Though mangroves serve to build and stabilize the coastline, seas may be rising more rapidly than the trees are able to build.”
iv. Chang and Eng Bunker. Not a long article, but interesting:
“Chang (Chinese: 昌; pinyin: Chāng; Thai: จัน, Jan, rtgs: Chan) and Eng (Chinese: 恩; pinyin: Ēn; Thai: อิน In) Bunker (May 11, 1811 – January 17, 1874) were Thai-American conjoined twin brothers whose condition and birthplace became the basis for the term “Siamese twins”.”
I loved some of the implicit assumptions in this article: “Determined to live as normal a life as they could, Chang and Eng settled on their small plantation and bought slaves to do the work they could not do themselves. […] Chang and Adelaide [his wife] would become the parents of eleven children. Eng and Sarah [‘the other wife’] had ten.”
A ‘normal life’ indeed… The women the twins married were incidentally sisters who ended up disliking each other (I can’t imagine why…).
v. Genie (feral child). This is a very long article, and you should be warned that many parts of it may not be pleasant to read. From the article:
“Genie (born 1957) is the pseudonym of a feral child who was the victim of extraordinarily severe abuse, neglect and social isolation. Her circumstances are prominently recorded in the annals of abnormal child psychology. When Genie was a baby her father decided that she was severely mentally retarded, causing him to dislike her and withhold as much care and attention as possible. Around the time she reached the age of 20 months Genie’s father decided to keep her as socially isolated as possible, so from that point until she reached 13 years, 7 months, he kept her locked alone in a room. During this time he almost always strapped her to a child’s toilet or bound her in a crib with her arms and legs completely immobilized, forbade anyone from interacting with her, and left her severely malnourished. The extent of Genie’s isolation prevented her from being exposed to any significant amount of speech, and as a result she did not acquire language during childhood. Her abuse came to the attention of Los Angeles child welfare authorities on November 4, 1970.
In the first several years after Genie’s early life and circumstances came to light, psychologists, linguists and other scientists focused a great deal of attention on Genie’s case, seeing in her near-total isolation an opportunity to study many aspects of human development. […] In early January 1978 Genie’s mother suddenly decided to forbid all of the scientists except for one from having any contact with Genie, and all testing and scientific observations of her immediately ceased. Most of the scientists who studied and worked with Genie have not seen her since this time. The only post-1977 updates on Genie and her whereabouts are personal observations or secondary accounts of them, and all are spaced several years apart. […]
Genie’s father had an extremely low tolerance for noise, to the point of refusing to have a working television or radio in the house. Due to this, the only sounds Genie ever heard from her parents or brother on a regular basis were noises when they used the bathroom. Although Genie’s mother claimed that Genie had been able to hear other people talking in the house, her father almost never allowed his wife or son to speak and viciously beat them if he heard them talking without permission. They were particularly forbidden to speak to or around Genie, so what conversations they had were therefore always very quiet and out of Genie’s earshot, preventing her from being exposed to any meaningful language besides her father’s occasional swearing. […] Genie’s father fed Genie as little as possible and refused to give her solid food […]
In late October 1970, Genie’s mother and father had a violent argument in which she threatened to leave if she could not call her parents. He eventually relented, and later that day Genie’s mother was able to get herself and Genie away from her husband while he was out of the house […] She and Genie went to live with her parents in Monterey Park. Around three weeks later, on November 4, after being told to seek disability benefits for the blind, Genie’s mother decided to do so in nearby Temple City, California and brought Genie along with her.
On account of her near-blindness, instead of the disabilities benefits office Genie’s mother accidentally entered the general social services office next door. The social worker who greeted them instantly sensed something was not right when she first saw Genie and was shocked to learn Genie’s true age was 13, having estimated from her appearance and demeanor that she was around 6 or 7 and possibly autistic. She notified her supervisor, and after questioning Genie’s mother and confirming Genie’s age they immediately contacted the police. […]
Upon admission to Children’s Hospital, Genie was extremely pale and grossly malnourished. She was severely undersized and underweight for her age, standing 4 ft 6 in (1.37 m) and weighing only 59 pounds (27 kg) […] Genie’s gross motor skills were extremely weak; she could not stand up straight nor fully straighten any of her limbs. Her movements were very hesitant and unsteady, and her characteristic “bunny walk”, in which she held her hands in front of her like claws, suggested extreme difficulty with sensory processing and an inability to integrate visual and tactile information. She had very little endurance, only able to engage in any physical activity for brief periods of time. […]
Despite tests conducted shortly after her admission which determined Genie had normal vision in both eyes she could not focus them on anything more than 10 feet (3 m) away, which corresponded to the dimensions of the room she was kept in. She was also completely incontinent, and gave no response whatsoever to extreme temperatures. As Genie never ate solid food as a child she was completely unable to chew and had very severe dysphagia, completely unable to swallow any solid or even soft food and barely able to swallow liquids. Because of this she would hold anything which she could not swallow in her mouth until her saliva broke it down, and if this took too long she would spit it out and mash it with her fingers. She constantly salivated and spat, and continually sniffed and blew her nose on anything that happened to be nearby.
Genie’s behavior was typically highly anti-social, and proved extremely difficult for others to control. She had no sense of personal property, frequently pointing to or simply taking something she wanted from someone else, and did not have any situational awareness whatsoever, acting on any of her impulses regardless of the setting. […] Doctors found it extremely difficult to test Genie’s mental age, but on two attempts they found Genie scored at the level of a 13-month-old. […] When upset Genie would wildly spit, blow her nose into her clothing, rub mucus all over her body, frequently urinate, and scratch and strike herself. These tantrums were usually the only times Genie was at all demonstrative in her behavior. […] Genie clearly distinguished speaking from other environmental sounds, but she remained almost completely silent and was almost entirely unresponsive to speech. When she did vocalize, it was always extremely soft and devoid of tone. Hospital staff initially thought that the responsiveness she did show to them meant she understood what they were saying, but later determined that she was instead responding to nonverbal signals that accompanied their speaking. […] Linguists later determined that in January 1971, two months after her admission, Genie only showed understanding of a few names and about 15–20 words. Upon hearing any of these, she invariably responded to them as if they had been spoken in isolation. Hospital staff concluded that her active vocabulary at that time consisted of just two short phrases, “stop it” and “no more”. Beyond negative commands, and possibly intonation indicating a question, she showed no understanding of any grammar whatsoever. […] Genie had a great deal of difficulty learning to count in sequential order. During Genie’s stay with the Riglers, the scientists spent a great deal of time attempting to teach her to count. 
She did not start to do so at all until late 1972, and when she did her efforts were extremely deliberate and laborious. By 1975 she could only count up to 7, which even then remained very difficult for her.”
“From January 1978 until 1993, Genie moved through a series of at least four additional foster homes and institutions. In some of these locations she was further physically abused and harassed to extreme degrees, and her development continued to regress. […] Genie is a ward of the state of California, and is living in an undisclosed location in the Los Angeles area. In May 2008, ABC News reported that someone who spoke under condition of anonymity had hired a private investigator who located Genie in 2000. She was reportedly living a relatively simple lifestyle in a small private facility for mentally underdeveloped adults, and appeared to be happy. Although she only spoke a few words, she could still communicate fairly well in sign language.“
i. World Happiness Report 2013. A few figures from the publication:
ii. Searching for Explanations: How the Internet Inflates Estimates of Internal Knowledge (Fisher, Goddu & Keil). From the abstract:
“As the Internet has become a nearly ubiquitous resource for acquiring knowledge about the world, questions have arisen about its potential effects on cognition. Here we show that searching the Internet for explanatory knowledge creates an illusion whereby people mistake access to information for their own personal understanding of the information. Evidence from 9 experiments shows that searching for information online leads to an increase in self-assessed knowledge as people mistakenly think they have more knowledge “in the head,” even seeing their own brains as more active as depicted by functional MRI (fMRI) images.”
A little more from the paper:
“If we go to the library to find a fact or call a friend to recall a memory, it is quite clear that the information we seek is not accessible within our own minds. When we go to the Internet in search of an answer, it seems quite clear that we are consciously seeking outside knowledge. In contrast to other external sources, however, the Internet often provides much more immediate and reliable access to a broad array of expert information. Might the Internet’s unique accessibility, speed, and expertise cause us to lose track of our reliance upon it, distorting how we view our own abilities? One consequence of an inability to monitor one’s reliance on the Internet may be that users become miscalibrated regarding their personal knowledge. Self-assessments can be highly inaccurate, often occurring as inflated self-ratings of competence, with most people seeing themselves as above average [here’s a related link] […] For example, people overestimate their own ability to offer a quality explanation even in familiar domains […]. Similar illusions of competence may emerge as individuals become immersed in transactive memory networks. They may overestimate the amount of information contained in their network, producing a “feeling of knowing,” even when the content is inaccessible […]. In other words, they may conflate the knowledge for which their partner is responsible with the knowledge that they themselves possess (Wegner, 1987). And in the case of the Internet, an especially immediate and ubiquitous memory partner, there may be especially large knowledge overestimations. As people underestimate how much they are relying on the Internet, success at finding information on the Internet may be conflated with personally mastered information, leading Internet users to erroneously include knowledge stored outside their own heads as their own.
That is, when participants access outside knowledge sources, they may become systematically miscalibrated regarding the extent to which they rely on their transactive memory partner. It is not that they misattribute the source of their knowledge, they could know full well where it came from, but rather they may inflate the sense of how much of the sum total of knowledge is stored internally.
We present evidence from nine experiments that searching the Internet leads people to conflate information that can be found online with knowledge “in the head.” […] The effect derives from a true misattribution of the sources of knowledge, not a change in understanding of what counts as internal knowledge (Experiment 2a and b) and is not driven by a “halo effect” or general overconfidence (Experiment 3). We provide evidence that this effect occurs specifically because information online can so easily be accessed through search (Experiment 4a–c).”
iii. Some words I’ve recently encountered on vocabulary.com: hortatory, adduce, obsequious, enunciate, ineluctable, guerdon, chthonic, condign, philippic, coruscate, exceptionable, colophon, lapidary, rubicund, frumpish, raiment, prorogue, sonorous, metonymy.
v. I have no idea how accurate this test of chess strength is (some people in this thread argue that there are probably some calibration issues at the low end), but I thought I should link to it anyway. I’d be very cautious about drawing strong conclusions about over-the-board strength without knowing how they’ve validated the tool. In over-the-board chess you have at minimum a couple of minutes per move on average, whereas this tool never gives you more than 30 seconds, so some slow players will probably suffer using it (I’d imagine this is why u/ViktorVamos got such a low estimate). For what it’s worth, my Elo estimate was 2039 (95% CI: 1859, 2220).
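For context on what a number like 2039 means, here is the standard Elo expected-score formula (the usual logistic model, nothing specific to the linked test; the 2400 opponent rating below is just my assumed typical IM-level rating):

```python
# Standard Elo expected-score formula (the usual logistic model; not
# anything specific to the linked test). Expected score = probability
# of a win plus half the probability of a draw.
def elo_expected_score(rating_a, rating_b):
    """Expected score for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A 2039-rated player against an assumed 2400-rated IM:
print(f"expected score: {elo_expected_score(2039, 2400):.3f}")
```

A rating gap of around 360 points corresponds to an expected score of roughly 0.11, i.e. about one point per nine games — which is why the occasional blitz or bullet win against an IM is plausible even for a much lower-rated player.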
In related news, I recently defeated my first IM – Pablo Garcia Castro – in a blitz (3 minutes/player) game. It actually felt a bit anticlimactic, and afterwards I was thinking that it would probably have felt like a bigger deal if I hadn’t lately been getting used to winning the occasional bullet game against IMs on the ICC. Actually I think my two wins against WIM Shiqun Ni during that same bullet session felt like a bigger accomplishment, because the session was played during the Women’s World Chess Championship, and I realized while looking up my opponent that she was actually stronger than one of the contestants who made it to the quarter-finals of that event (Meri Arabidze). On the other hand bullet isn’t really chess, so…
Here’s the first post about the book. This post will cover some of the stuff included in the remaining chapters of the book.
“It’s not easy to get an accurate or reliable picture of children’s curiosity at school. To begin with, the data are, almost by definition, descriptive. We can watch to see how many questions children ask, how often they tinker, open, take apart, or watch — but it’s virtually impossible to track the thoughts of twenty-three children during a classroom activity. However, we can measure how much curiosity children express while they are in school. […] We wanted to find out whether children expressed curiosity when they began grade school, and how different things looked by the time children were finished. We recorded ten hours in each of five kindergarten classrooms and five fifth-grade classrooms. Each time we visited, we recorded the children for two hours. […] Three students were trained to code the data, and achieved a high rate of inter-coder reliability. It turned out it’s not all that hard to spot curiosity in action. But what we found took us aback. Or rather what we didn’t find. On average, in any given kindergarten classroom, there were 2.36 episodes of curiosity in a two-hour stretch. Expressions of curiosity were even scarcer in the older grades. The average number of episodes in a fifth-grade classroom was 0.48. In other words, on average, classroom activity over a two-hour stretch included less than one expression of curiosity. In the schools we studied, the expression of curiosity was, at best, infrequent. Nine of the ten classrooms had at least one two-hour stretch where there were no expressions of curiosity. In other words, we rarely saw children take things apart, ask questions about topics either children or adults had raised, watch interesting phenomena unfold in front of their eyes, or in any way show signs that there were things they were eager to know more about, much less actually follow up with any visible sort of investigation, whether in words or actions.
The easiest interpretation is that children are simply less curious by the time they are in kindergarten and grow even less so by the end of grade school. However, the data don’t support that conclusion. For one thing, we saw as much variation between classrooms as we did between grade levels.”
“Our discovery, that there is little curiosity in grade school, is confirmed by the work others have done. Recall that Tizard and Hughes fitted preschoolers with tape recorders to get a picture of how many questions they asked at home with their parents (the answer […] is that preschoolers ask a lot of questions). However, Tizard and Hughes also recorded those same children when they went to preschool (1984). Once inside a school building, the picture changes dramatically. While the preschoolers they studied asked, on average, twenty-six questions per hour at home, that rate dropped to two per hour when the children were in school. […] One striking feature […] was how curious children were about anything that seemed exotic to them. Topics that led to a series of eager questions included the Rocky Mountains, Pangaea, Venus flytraps, unusual geometric shapes, trips to Mexico, and the Australopithecus Lucy’s descendants. But their episodes of curiosity were brief, often fleeting. Some 78 percent of the curiosity episodes involved fewer than four conversational turns. We also timed these sequences, since we were interested in nonverbal inquiry. Not one episode lasted longer than six minutes, and all but three lasted less than three minutes. We never saw an episode of curiosity that led to a more structured classroom activity, or that redirected a classroom discussion for more than a few moments.”
“Our impression was that most of the time teachers had very specific objectives for each stretch of time, and that a great deal of effort was put into keeping children on task and in reaching those objectives. […] Mastery rather than inquiry seemed to be the dominant goal for almost all the classrooms in which we observed. Often it seemed that finishing specific assignments (worksheets, writing assignments) was an even more salient goal than actually learning the material. In other words, the structure of the classroom made it clear that the educational activities we saw were not designed to encourage curiosity — nor were teachers using the children’s curiosity as a guide to what and how to teach. […] in the classrooms we visited, there was little or no evidence that an implicit or explicit goal of the curriculum was to help children pose questions. […] an important but easily overlooked distinction [is] between children’s engagement and children’s curiosity. A teacher can be talking about things that captivate the students, and the students can be deeply interested in a topic — quite engaged in a discussion or activity. But that in and of itself doesn’t mean the children are asking questions, or that their questions reflect curiosity. […] a key finding of our research so far [is that often] the reason children ask few questions, and fail to examine objects or tinker with things, is that the teacher feels such exploration would get in the way of learning. I have even heard teachers say as much. […] “I can’t answer questions right now. Now it’s time for learning.” […] A student and I sent out surveys to 114 teachers. In one part of the survey, they were asked to list the five skills or attributes they most wanted to instill or encourage in their students over the course of the school year. In the second part of the survey they were asked to circle five such desirable attributes from a list of ten. 
The list included words like “polite,” “cooperative,” “thoughtful,” “knowledgeable,” and also “curious.” Some 77 percent of the teachers surveyed circled “curious” as one of their top five. However, when asked to come up with their own ideas, only twenty-three listed curiosity. […] The impediments to curiosity in school consist of more than just the absence of enthusiasm for it. There are also powerful, somewhat invisible forces working against the expression and cultivation of curiosity in classrooms. Two primary impediments are the way in which plans and scripts govern what happens in most classrooms, and the pressure to get a lot of things “done” each day. […] Once children get to school, they exhibit a lot less curiosity. They ask fewer questions, examine objects less frequently and less thoroughly, and in general seem less inclined to persevere in sating their appetite for information.”
“When children have trouble learning, we think we need to teach it in a different way, or impress upon them the importance or usefulness of what they are learning. We encourage them to try harder, or spend more time trying to learn, even though it’s usually more effective to elicit their interest in the material. […] Several studies confirm the commonsense idea that children remember text better, and understand it more fully, when it has piqued their interest in one way or another (Silvia 2006; Knobloch et al. 2004).”
“Some would argue that the work of researchers like Robert Bjork (Bjork and Linn 2006) and Nate Kornell (Kornell and Bjork 2008) demonstrates that difficulty is key to learning. In what is now a large series of studies, researchers have shown that when students struggle a bit with the material they are learning, they learn it better.”
“Though researchers and teachers must deal with the fact that there are significant individual differences in what stirs a child’s interest or urge to know more, it is also possible to identify some general qualities that seem to make an object or a topic more or less intriguing to the majority of students. […] In the observations of curiosity that my students and I have done in classrooms, we have noticed one […] topic that consistently sparked children’s curiosity — intellectual exotica. […] Often what ignited a line of questioning was a reference to something outside the children’s zone of familiarity — unfamiliar places, historically distant times. […] children are often as curious about things they cannot see, touch, or directly experience as they are about what is going on right around them. […] the more unknown and unfamiliar a topic, and the denser with details its presentation, the more it may invite learning. […] The characteristics that fuel curiosity are not mysterious. Adults who use words and facial expressions to encourage children to explore; access to unexpected, opaque, and complex materials and topics; a chance to inquire with others; and plenty of suspense . . . these turn out to be the potent ingredients.”
“children are frequently privy to language not directed at them. The conversations adults have with one another influence how children talk and think. […] By the time children are four or so, they not only listen to their parents talk about other people — they also begin, in fledgling form, to gossip themselves. […] Daniela O’Neill and her colleagues tape-recorded the snack-time conversations of twenty-five preschoolers over a period of twenty-five weeks. Over 77 percent of the conversations children initiated with one another referenced other people, and nearly 30 percent mentioned people’s mental states. […] Peggy Miller’s work (Miller et al. 1992) shows that by the time children are five, more of their stories include information not just about themselves, but about themselves in relation to other people.”
“Sandra Hofferth and John Sandberg (2001) drew subjects from the 1997 Child Health Development Supplement to the Panel Study of Income Dynamics, a thirty-year longitudinal survey of a representative sample of families. […] While three-to-five-year-olds spent approximately seventeen hours a week in free play, most of them spent less than one hour a week outside, and less than two hours a week reading. By the time children were nine years old, they spent no more time outside, and far less time in free play (just under nine hours a week). They spent even less time reading (one and a quarter hours per week).”
“In an examination of how adults use the Internet to pursue a recreational interest in genealogy, Crystal Fulton (2009) found a link between amount of pleasure and effective persistent information-foraging strategies. The key to her argument is the role of time — she points out that when students feel pressured to complete an assignment, they experience less pleasure, and also engage in less thorough search behavior. That finding is replicated in a wide range of studies of online foraging.”
“The children who will get the most out of opportunities to work on their own (deciding what to tackle, and what to concentrate on) are the ones who can stay focused, stick with a question, and plan how to solve whatever problem intrigues them. In other words, at their best, autonomy and self-regulation go hand in hand. But in the world of real classrooms, every teacher must figure out how to balance the two. If a child doesn’t seem to have a great deal of perseverance, focus, or self-control, the teacher must decide whether to give him more autonomy so that he has a chance to develop self-regulation, or whether to make autonomy the prize for self-control. […] This book for the most part has not focused on fleeting moments of curiosity, but the kind of curiosity that persists, unfolding over time and leading to sustained action (inquiry, discovery, tinkering, question asking, observation, research, reflection). Such sustained inquiry may be more likely to blossom when children have free time, and some time alone.”
“Many teachers […] discourage uncertainty, emphasizing instead what they know, or feel the students should know. They are more comfortable encouraging students to learn trustworthy information than to explore questions to which they themselves do not know the answer. Instead of using school as a place to formalize and extend the power of a young child’s zest for tackling the unknown or uncertain, teachers tend to squelch curiosity. They don’t do this out of meanness, or small-mindedness. They do it in the interests of making sure children master certain skills and established facts. While an emphasis on acquiring knowledge is reasonable, discouraging the disposition that leads to gaining new knowledge squanders a child’s most formidable learning tool. […] curiosity takes time to unfold, and even more time to bear fruit. In order to help children build on their curiosity, teachers have to be willing to spend time doing so. Nurturing curiosity takes time, but also saturation. It cannot be confined to science class. […] Teachers should provide children with interesting materials, seductive details, and desirable difficulty. Instead of presenting children with material that has been made as straightforward and digested as possible, teachers should make sure their students encounter objects, texts, environments, and ideas that will draw them in and pique their curiosity. […] to cultivate students’ curiosity, teachers need to give them both time to seek answers and guidance about various routes to getting answers, such as looking things up in reliable sources or testing hypotheses.”
“Few teachers readily see that they’re discouraging students’ questions, just as few parents readily see that they’re short-tempered with their children. […] One of the key findings of research is that children are heavily influenced not only by what adults say to them, but also by how the adults themselves behave. If schools value children’s curiosity, they’ll need to hire teachers who are curious. It is hard to fan the flames of a drive you yourself rarely experience. Many principals hire teachers who seem smart, who like children, and who have the kind of drive that supports academic achievement. They know that teachers who possess these qualities will foster the same in their students. Why not put curiosity at the top of the list of criteria for good teachers? […] in order to flourish, curiosity needs to be cultivated.”
“I will […] argue that curiosity is a fragile seed — for some the seed bears fruit, and for others, it shrivels and dies all too soon. By the time a child is five years old, his curiosity has been carved to reflect his personality, family life, daily encounters, and school experience. By the time that five-year-old is twenty-two, the intensity and object of his curiosity has become a defining, though often invisible part of who he is — something that will shape much of his future life. But the journey curiosity takes, from a universal and ubiquitous characteristic, one that accompanies much of the infant’s daily experience, to a quality that defines certain adults and barely exists in others, is subtle. In the chapters that follow, I’ll try to show that there are several sources of individual variation, and each has its developmental moment. Attachment in toddlerhood, language in the three-year-old, and a succession of environmental limitations and open doors all contribute to a person’s particular kind and intensity of curiosity. […] This book is about why some children remain curious and others do not, and how we can encourage more curiosity in everyone.”
“I’d expected more from a Harvard University Press publication. The book has too many personal anecdotes and too much speculation, and not enough data; also, the coverage would have benefited from the author being more familiar with ethological research such as e.g. some of the stuff included in Natural Conflict Resolution. However it was interesting enough for me to read it to the end, despite the format, and I assume many people who don’t mind reading popular science books might like the book.”
I’ve mentioned before how my expectations depend, a bit, on who the publisher is; I have one set of (implicit) criteria for books published by academic publishers, and a different set of (implicit) criteria for books published by other publishing companies. Over the last couple of years I’ve pretty much exclusively read academic publications (I think I read two or three non-academic non-fiction publications last year, out of 72), but at least I’m aware there’s an argument to be made for having different standards for different kinds of books. I gave this book two stars, and part of the reason why it did not get a higher rating is that it’s exactly the kind of publication I’m actively trying to avoid by sticking to academic publications. I don’t care about reading anecdotes about somebody’s grandmother, and I don’t need two-page-long anecdotes used to introduce readers to relatively simple concepts which could be covered in a paragraph by a skilled textbook author. I consider much of the fluff in normal popular science publications to be a waste of my time, and I get annoyed and confused when I find that kind of stuff in supposedly academic publications (this book was published by Harvard University Press). The book is not bad and it has some interesting ideas, but there’s way too much fluff for my taste. In the post I’ll talk a little about some of the ideas presented in the first four chapters of the book.
This observation from the book, made early in the coverage, might arguably be one of the most important things to take away from the book: “People who are curious learn more than people who are not, and people learn more when they are curious than when they are not.”
Attention is an important variable in the learning context, and curiosity helps with that; the author notes both that it’s quite obvious that curiosity helps children (much of the book is about the curiosity of children) learn, but also that we don’t actually know a great deal about how to make children curious about stuff in order to help them learn – this is not something people have researched very much. I find this, curious. An important observation in that context is however that we do know that curiosity is not what you might term dimension-less; people are curious about different things, and children are most curious when they are given the opportunity to inquire about things that mystify them or attract their attention. Research indicates that children are very curious early on in their lives (babies, toddlers), and that curiosity then seems to decline later on. One way to think about this is that babies don’t yet have good working models of what to expect will happen in the world around them given specific input, in part because they don’t have a lot of experience, so they’re often surprised; later on, they come to expect certain things to happen in specific ways (gravity causes both the plate and the cup (and the cutlery…) to drop to the floor if you pick them up and throw them – my example, derived from avuncular experience…), and as their working models improve habituation kicks in and removes the need to attend to the inputs which previously demanded their attention, freeing up mental resources which can then be devoted to other purposes. Actually, adults wouldn’t be very well off if they were all as curious as two-year-olds, because the need to constantly react to new stimuli presenting themselves would likely mean they’d never get anything done (the author does not bring this up, but it’s also not really important in the context of the coverage).
As put in the book: “during the first three years, children are gathering the material they need to establish, and then enrich, the schemas that help them navigate the physical, psychological, and social worlds. Key to this mastery of pattern and order is their alertness to novelty. This fundamental characteristic of early development explains why toddlers seem practically voracious in their appetite for new information.”
Curiosity has multiple faces, but a working definition presented early in the work is that “curiosity is an expression, in words or behaviors, of the urge to know more — an urge that is typically sparked when expectations are violated.” Breadth and depth are important variables, as is persistence. Even if there’s sort of an identifiable general trajectory for the variable during childhood, with much curiosity early on and then lower values later, you still as argued in the quote above have a lot of interpersonal variation, and the book spends some time trying to figure out why it is that some people end up a lot more curious than others and how they might be different. It seems to be the case that differences present quite early, and as usual Bowlby’s name pops up. It pops up because although exploration of the unknown may have positive consequences, it also involves taking a risk – anxiety is argued to be an important curiosity-mediator, so that children who are worried about abandonment may be less likely to go exploring than are children who have a secure attachment bond and feel that they have a safe haven to which they can retreat without much risk. Longitudinal research has indicated that at least for one curiosity conceptualization (a so-called ‘curiosity box’-setup), individuals who were securely attached at the age of 2 were more curious two to three years later than were individuals who were not securely attached at baseline. A study on monkeys done more than fifty years ago likewise found that monkeys raised without an attachment figure were more fearful and that fear prevented the animals from exploring their environment. Not impressive, but it seems plausible. This is incidentally one of the only (if not the only?
Can’t remember…) monkey studies included in the coverage, and if I had to explain my annoyance in my goodreads review at the absence of such research, the main reason was that the author in my opinion early on in the coverage pushes the ‘humans are exceptional’-point further than it can be supported, which is the sort of behaviour that always tends to make me irritated.
It seems likely that feedback processes start early and may be important; if you explore and have positive experiences doing it early on, you’ll probably be more likely to explore in the future; and if you’re too fearful to go look behind that curtain, you may never realize it wasn’t dangerous. Although trait variables matter, environmental mediation also seems really important and there’s quite a bit of stuff about this in the book. There’s incidentally some research suggesting that too little inhibition may not be desirable, but too much will certainly contribute to a lack of curiosity.
Although it’s very obvious that children in what might be termed the ‘asks a lot of questions’-age are incredibly curious, it’s become clear from research on these matters that they’re actually quite curious even before that time, if you know where to look for this curiosity; in a series of experiments it’s been shown that children will point at objects to get information about them long before they learn how to verbally form questions, and it’s clear both that children point more often at unfamiliar objects and events than familiar ones, and that they’re more likely to point when they’re in the presence of someone they consider to be a knowledgeable informant (e.g. a mother). When they do reach the asks-a-lot-of-questions age, they, well, ask a lot of questions, and it turns out that some people have actually collected data on this stuff. One really neat sample mentioned in the book involved four children followed for almost four years, from when they were fourteen months old until they were five years and one month old; the recordings covered 24,741 questions across 229.5 hours of conversation, and the children asked an average of 107 questions per hour. That’s an average, and it hides a huge variation among the individuals even in that small sample; one of the children asked an average of close to 200 questions per hour, whereas another asked only slightly less than 70. I’d suggest these numbers are higher than average due to selection bias and perhaps also due to Hawthorne effects, but I find it quite incredible that data such as this is even available in the first place, and the numbers do sort of illustrate what kind of level we’re talking about.
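The reported per-hour average can be checked against the quoted totals; a minimal sketch of the arithmetic, assuming (my assumption, not stated in the book) that the 107-per-hour figure is simply the pooled total across the four children divided by the total hours recorded:

```python
# Figures as quoted: four children, 24,741 recorded questions
# over 229.5 hours of conversation.
total_questions = 24_741
total_hours = 229.5

avg_per_hour = total_questions / total_hours
print(f"Average questions per hour: {avg_per_hour:.1f}")  # ≈ 107.8, consistent with the book's ~107
```

The pooled average comes out at roughly 107.8 questions per hour, which matches the figure quoted, so the numbers at least hang together internally.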
It’s obvious from the conversational strategies the children employ at that point in time that they aren’t just asking questions to get their parents’ attention or in order to monopolize their time (though this may be a convenient side-effect); children act differently depending on how questions are answered and question-sequences display path-dependence, indicating that they use the questions to gather knowledge about the world around them, rather than e.g. just to train their language skills.
Most children acquire language in roughly the same sequence. They point long before they start talking in sentences, and after pointing they begin to use an object to represent another object. After that they realize that objects have names, and at that point they start learning new words very fast. While their vocabulary is still developing very rapidly during this first learning-new-words phase, they start combining words in orderly ways; i.e. they start speaking in sentences.
In diary studies the data seem to indicate that children who hear the adults in their environment ask many questions are more likely to get their own questions answered (causality is iffy, though). How many questions they ask depends on what they consider to constitute a satisfactory answer, but in general such children are more likely to continue asking questions than are children who rarely see other people ask informational questions and who are not rewarded with satisfactory answers when they ask questions. The data suggest that three-year-olds generally ask more questions than seven-year-olds, but also that there are already at that point (at the age of three) important differences in terms of how many questions are asked by different children; interindividual differences can be spotted quite early, and the feedback processes involved may be one mechanism leading to those differences growing over time.
Small children depend a great deal on their parents and other adults to interpret stuff in the world around them, and they don’t quickly outgrow this dependence on adults; however as children age the range of responses towards specific stimuli expands. A toddler might want to know whether or not a fear response is proper in a specific context and so will observe the parents before reacting to a new stimulus to learn what’s the proper response; but as the child ages and the cognitive abilities increase the child might also have to make a decision, implicitly or explicitly, of e.g. whether or not to play with (how many of?) the toys on the floor. In a study on this stuff they tried to manipulate the curiosity of the mother of a child by asking her either to manipulate objects lying on a table, look towards the corner of the table, or talk to another adult elsewhere in the room, with the child observing through a one-way mirror – the child was then later let into the room, and it turned out that children who had observed their mother manipulating the objects were not only more likely to manipulate the toys in manners similar to how their mothers had done, but they were also more likely to explore the toys in other ways. How parents (and other adults) behave will be noticed by children whether or not the parents know they’re being observed, and I think many parents might be surprised to learn how much observed behaviours, as opposed to verbally communicated behavioural norms, matter. A quote from the coverage:
“To sum up so far, from infancy until at least the elementary school years, children look to adults for cues about how to respond to objects and events, how to interpret the things they witness and experience, and how to interact with the world. The cues children take from adults are powerful in the moment, but have long-term impact as well. Moreover, the influence extends beyond problem solving. Children also learn from the adults around them what kind of stance they can or should take toward the objects and events they encounter as the day unfolds. This is particularly important when it comes to inquiry. Because, as should be clear by now, inquiry does not bubble up simply because a child is intrinsically curious. Nor does it simply erupt when something in the environment is particularly intriguing. Whether a child has the impulse, day in and day out, to find out more, ebbs and flows as a result of the adults who surround her.” [my emphasis].
Parents aren’t the only adults with whom children interact, and multiple studies have indicated that when preschoolers receive informative answers from their teachers they ask significantly more questions. In a curiosity-box setup (basic setup: Leave a box with lots of drawers, each one including a small item, in a classroom and then observe how many children approach it, how fast they approach it, how often they do, etc.), “there was a direct link between how much the teacher smiled and talked in an encouraging manner and the level of curiosity [as measured by box-related behaviours] the children in the room expressed.” Even subtle adult behaviours like encouraging nods and smiles by a teacher may affect behaviours/curiosity.
A very important point in the context of social modelling is that many of the behaviours adults display are not necessarily geared towards the children, but that these behaviours still matter:
“Parents and teachers are not always gearing their behavior directly toward the children they are with. They are to a great degree just being themselves. They lift lids, tinker, look things up, watch things carefully, and ask questions. Or they don’t. In fact, many adults do not express much curiosity in their everyday lives. There are plenty of adults who rarely want to find out about something new, or probe beneath the surface. Why wouldn’t this have an impact on children? […] children watch and learn from adult behavior in the short run and in the long run. And now we have some evidence that the same is true when it comes to children’s interest in finding out more. When parents give their children some freedom to wander, explore, and tinker, it makes a difference. When parents express fear or disapproval of inquiry, that too has an effect. But parents are just the beginning. When it comes to their urge to know more, children at least as old as nine continue to be extremely susceptible to the behavior of adults. And here it’s worth remembering that children learn a lot at home from behaviors not directed toward them, and that at school the same is true.”
Here’s a previous post in the series covering this book. There’s a lot of stuff in these chapters, so the stuff below’s just some of the things I thought were interesting and worth being aware of. I’ve covered three chapters in this post: One about skin, nails and hair, one about the eye, and one about infectious and tropical diseases. I may post one more post about the book later on, but I’m not sure if I’ll do that or not at this point so this may be the last post in the series.
Okay, on to the book – skin, nails and hair (my coverage mostly deals with the skin):
“The skin is a highly specialized organ that covers the entire external surface of the body. Its various roles include protecting the body from trauma, infection and ultraviolet radiation. It provides waterproofing and is important for fluid and temperature regulation. It is essential for the detection of some sensory stimuli. […] Skin problems are extremely common and are responsible for 10–15 per cent of all consultations in general practice. […] Given that there are around 2000 dermatological conditions described, only common and important conditions, including some that might be especially relevant in the examination setting, can be covered here.”
“Urticaria is characterized by the development of red dermal swellings known as weals […]. Scaling is not seen and the lesions are typically very itchy. The lesions result from the release of histamine from mast cells. An important clue to the diagnosis is that individual lesions come and go within 24 hours, although new lesions may be appearing at other sites. Another associated feature is dermographism: a firm scratch of the skin with an orange stick will produce a linear weal within a few minutes. Urticaria is common, estimated to affect up to 20 per cent of the population at some point in their lives.”
“Stevens–Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN) are thought to be two ends of a spectrum of the same condition. They are usually attributable to drug hypersensitivity, though a precipitant is not always identified. The latent period following initiation of the drug tends to be longer than seen with a classical maculopapular drug eruption. The disease is termed:
* SJS when 10 per cent or less of the body surface area epidermis detaches
* TEN when greater than 30 per cent detachment occurs.
Anything in between is designated SJS/TEN overlap. Following a prodrome of fever, an erythematous eruption develops. Macules, papules, or plaques may be seen. Some or all of the affected areas become vesicular or bullous, followed by sloughing off of the dead epidermis. This leads to potentially widespread denudation of skin. […] The affected skin is typically painful rather than itchy. […] The risk of death relates to the extent of epidermal loss and can exceed 30 per cent. […] A widespread ‘drug rash’ that is very painful should ring alarm bells.”
“Various skin problems arise in patients with diabetes mellitus. Bacterial and fungal infections are more common, due to impaired immunity. Vascular disease and neuropathy lead to ulceration on the feet, which can sometimes be very deep and there may be underlying osteomyelitis. Granuloma annulare […] and necrobiosis lipoidica have also been associated with diabetes, though many cases are seen in non-diabetic patients. The former produces smooth papules in an annular configuration, often coalescing into a ring. The latter usually occurs over the shins giving rise to yellow-brown discoloration, with marked atrophy and prominent telangiectasia. There is often an annular appearance, with a red or brown border. Acanthosis nigricans, velvety thickening of the flexural skin […], is seen with insulin resistance, with or without frank diabetes. […] Diabetic bullae are also occasionally seen and diabetic dermopathy produces hyperpigmented, atrophic plaques on the legs. The aetiology of these is unknown.”
“Malignant melanoma is one of the commonest cancers in young adults [and it] is responsible for almost three-quarters of skin cancer deaths, despite only accounting for around 4 per cent of skin cancers. Malignant melanoma can arise de novo or from a pre-existing naevus. Most are pigmented, but some are amelanotic. The most important prognostic factor for melanoma is the depth of the tumour when it is excised – Breslow’s thickness. As most malignant melanomas undergo a relatively prolonged radial (horizontal) growth phase prior to invading vertically, there is a window of opportunity for early detection and management, while the prognosis remains favourable. […] ‘Red flag’ findings […] in pigmented lesions are increasing size, darkening colour, irregular pigmentation, multiple colours within the same lesion, and itching or bleeding for no reason. […] In general, be suspicious if a lesion is rapidly changing.”
“Most ocular surface diseases […] are bilateral, whereas most serious pathology (usually involving deeper structures) is unilateral […] Any significant reduction of vision suggests serious pathology [and] [s]udden visual loss always requires urgent investigation and referral to an ophthalmologist. […] Sudden loss of vision is commonly due to a vascular event. These may be vessel occlusions giving rise to ischaemia of vision-serving structures such as the retina, optic nerve or brain. Alternatively there may be vessel rupture and consequent bleeding which may either block transmission of light as in traumatic hyphaema (haemorrhage into the anterior chamber) and vitreous haemorrhage, or may distort the retina as in ‘wet’ age-related macular degeneration (AMD). […] Gradual loss of vision is commonly associated with degenerations or depositions. […] Transient loss of vision is commonly due to temporary or subcritical vascular insufficiency […] Persistent loss of vision suggests structural changes […] or irreversible damage”.
There are a lot of questions one might ask here, and I actually found it interesting to know how much can be learned simply by asking some questions which might help narrow things down – the above are just examples of variables to consider, and there are others as well, e.g. whether or not there is pain (“Painful blurring of vision is most commonly associated with diseases at the front of the eye”, whereas “Painless loss of vision usually arises from problems in the posterior part of the eye”), whether there’s discharge, just how the vision is affected (a blind spot, peripheral field loss, floaters, double vision, …), etc.
“Ptosis (i.e. drooping lid) and a dilated pupil suggest an ipsilateral cranial nerve III palsy. This is a neuro-ophthalmic emergency since it may represent an aneurysm of the posterior communicating artery. […] In such cases the palsy may be the only warning of impending aneurysmal rupture with subsequent subarachnoid haemorrhage. One helpful feature that warns that a cranial nerve III palsy may be compressive is pupil involvement (i.e. a dilated pupil).”
“Although some degree of cataract (loss of transparency of the lens) is almost universal in those >65 years of age, it is only a problem when it is restricting the patient’s activity. It is most commonly due to ageing, but it may be associated with ocular disease (e.g. uveitis), systemic disease (e.g. diabetes), drugs (e.g. systemic corticosteroids) or it may be inherited. It is the commonest cause of treatable blindness worldwide. […] Glaucoma describes a group of eye conditions characterized by a progressive optic neuropathy and visual field loss, in which the intraocular pressure is sufficiently raised to impair normal optic nerve function. Glaucoma may present insidiously or acutely. In the more common primary open angle glaucoma, there is an asymptomatic sustained elevation in intraocular pressure which may cause gradual unnoticed loss of visual field over years, and is a significant cause of blindness worldwide. […] Primary open angle glaucoma is asymptomatic until sufficiently advanced for field loss to be noticeable to the patient. […] Acute angle closure glaucoma is an ophthalmic emergency in which closure of the drainage angle causes a sudden symptomatic elevation of intraocular pressure which may rapidly damage the optic nerve.”
“Age-related macular degeneration is the commonest cause of blindness in the older population (>65 years) in the Western world. Since it is primarily the macula […] that is affected, patients retain their peripheral vision and with it a variable level of independence. There are two forms: ‘dry’ AMD accounts for 90 per cent of cases and the more dramatic ‘wet’ (also known as neovascular) AMD accounts for 10 per cent. […] Treatments for dry AMD do not alter the course of the disease but revolve around optimizing the patient’s remaining vision, such as using magnifiers. […] Treatments for wet AMD seek to reverse the neovascular process”.
“Diabetes is the commonest cause of blindness in the younger population (<65 years) in the Western world. Diabetic retinopathy is a microvascular disease of the retinal circulation. In both type 1 and type 2 diabetes glycaemic control and blood pressure should be optimized to reduce progression. Progression of retinopathy to the proliferative stage is most commonly seen in type 1 diabetes, whereas maculopathy is more commonly a feature of type 2 diabetes. […] Symptoms
*Bilateral.
*Usually asymptomatic until either maculopathy or vitreous haemorrhage. [This is part of why screening programs for diabetic eye disease are so common – the first sign of eye disease may well be catastrophic and irreversible vision loss, despite the fact that the disease process may take years or decades to develop to that point]
*Gradual loss of vision – suggests diabetic maculopathy (especially if distortion) or cataract.
*Sudden loss of vision – most commonly vitreous haemorrhage secondary to proliferative diabetic retinopathy.”
Recap of some key points made in the chapter:
“*For uncomfortable/red eyes, grittiness, itchiness or a foreign body sensation usually indicate ocular surface problems such as conjunctivitis.
*Severe ‘aching’ eye pain suggests serious eye pathology such as acute angle closure glaucoma or scleritis.
*Photophobia is most commonly seen with acute anterior uveitis or corneal disease (ulcers or trauma). [it’s also common in migraine]
*Sudden loss of vision is usually due to a vascular event (e.g. retinal vessel occlusions, anterior ischaemic optic neuropathy, ‘wet’ AMD).
*Gradual loss of vision is common in the ageing population. It is frequently due to cataract […], primary open angle glaucoma (peripheral field loss) or ‘dry’ AMD (central field loss).
*Recent-onset flashes and floaters should be presumed to be retinal tear/detachment.
*Double vision may be monocular (both images from the same eye) or binocular (different images from each eye). Binocular double vision is serious, commonly arising from a cranial nerve III, IV or VI palsy. […]
the following presentations are sufficiently serious to warrant urgent referral to an ophthalmologist: sudden loss of vision, severe ‘aching’ eye pain, new-onset flashes and floaters, [and] new-onset binocular diplopia.”
Infectious and tropical diseases:
“Patients with infection (and inflammatory conditions or, less commonly, malignancy) usually report fever […] Whatever the cause, body temperature generally rises in the evening and falls during the night […] Fever is often lower or absent in the morning […]. A sensation of ‘feeling hot’ or ‘feeling cold’ is unreliable – healthy individuals often feel these sensations, as may those with menopausal flushing, thyrotoxicosis, stress, panic, or migraine. The height and duration of fever are important. Rigors (chills or shivering, often uncontrollable and lasting for 20–30 minutes) are highly significant, and so is a documented temperature over 37.5 °C taken with a reliable oral thermometer. Drenching sweats are also highly significant. Rigors generally indicate serious bacterial infections […] or malaria. An oral temperature >39 °C has the same significance as rigors. Rigors generally do not occur in mild viral infections […] malignancy, connective tissue diseases, tuberculosis and other chronic infections. […] Anyone with fever lasting longer than a week should have lost weight – if a patient reports a prolonged fever but no weight loss, the ‘fever’ usually turns out to be of no consequence. […] untouched meals indicate ongoing illness; return of appetite is a reliable sign of recovery.”
“Bacterial infections are the most common cause of sepsis, but other serious infections (e.g. falciparum malaria) or inflammatory states (e.g. pancreatitis, pre-eclamptic toxaemia, burns) can cause the same features. Below are listed the indicators of sepsis – the more abnormal the result, the more severe is the patient’s condition.
*Check if it [the temperature] is above 38 °C or below 36 °C.
*Simple viral infections seldom exceed 39 °C.
*Temperatures (from any cause) are generally higher in the evening than in the early morning.
*As noted above, rigors (uncontrollable shivering) are important indicators of severe bacterial infection or malaria. […] A heart rate greater than 90 beats/min is abnormal, and in severe sepsis a pulse of 140/min is not unusual. […] Peripheries (fingers, toes, nose) are often markedly cooler than central skin (trunk, forehead) with prolonged capillary refill time […] Blood pressure (BP) is low in the supine position (systolic BP <90 mmHg) and falls further when the patient is repositioned upright. In septic shock sometimes the BP is unrecordable on standing, and the patient may faint when they are helped to stand up […] The first sign [of respiratory disturbance] is a respiratory rate greater than 20 breaths/min. This is often a combination of two abnormalities: hypoxia caused by intrapulmonary shunts, and lactic acidosis. […] in hypoxia, the respiratory pattern is normal but rapid. Acidotic breathing has a deep, sighing character (also known as Kussmaul’s respiration). […] Also called toxic encephalopathy or delirium, confusion or drowsiness is often present in sepsis. […] Sepsis is always severe when it is accompanied by organ dysfunction. Septic shock is defined as severe sepsis with hypotension despite adequate fluid replacement.”
“Involuntary neck stiffness (‘nuchal rigidity’) is a characteristic sign of meningitis […] Patients with meningitis or subarachnoid haemorrhage characteristically lie still and do not move the head voluntarily. Patients who complain about a stiff neck are often worried about meningitis; patients with meningitis generally complain of a sore head, not a sore neck – thus neck stiffness is a sign, not a symptom, of meningitis.”
“General practitioners are generally correct when they say an infection is ‘a virus’, but the doctor needs to make an accurate assessment to be sure of not missing a serious bacterial infection masquerading as ‘flu’. […]
*Influenza is highly infectious, so friends, family, or colleagues should also be affected at the same time – the incubation period is short (1–3 days). If there are no other cases, question the diagnosis.
*The onset of viraemic symptoms is abrupt and often quite severe, with chills, headache, and myalgia. There may be mild rigors on the first day, but these are not sustained.
*As the next few days pass, the fever improves each day, and by day 3 the fever is settling or absent. A fever that continues for more than 3 days is not uncomplicated ’flu, and nor is an illness with rigors after the first day.
*As the viraemia subsides, so the upper respiratory symptoms become prominent […] The patient experiences a combination of: rasping sore throat, dry cough, hoarseness, coryza, red eyes, congested sinuses. These persist for a long time (10 days is not unusual) and the patient feels ‘miserable’ but the fever is no longer prominent.”
“Several infections cause a similar picture to ‘glandular fever’. The commonest is EBV [Epstein–Barr Virus], with cytomegalovirus (CMV) a close second; HIV seroconversion may look clinically identical, and acute toxoplasmosis similar (except for the lack of sore throat). Glandular fever in the USA is called ‘infectious mononucleosis’ […] The illness starts with viraemic symptoms of fever (without marked rigors), myalgia, lassitude, and anorexia. A sore throat is characteristic, and the urine often darkens (indicating liver involvement). […] Be very alert for any sign of stridor, or if the tonsils meet in the middle or are threatening to obstruct (a clue is that the patient is unable to swallow their saliva and is drooling or spitting it out). If there are any of these signs of upper airway obstruction, give steroids, intravenous fluids, and call the ENT surgeons urgently – fatal obstruction occasionally occurs in the middle of the night. […] Be very alert for a painful or tender spleen, or any signs of peritonism. In glandular fever the spleen may rupture spontaneously; it is rare, but tragic. It usually begins as a subcapsular haematoma, with pain and tenderness in the left upper quadrant. A secondary rupture through the capsule then occurs at a later date, and this is often rapidly fatal.”
(This was a review lecture for me, as I read a textbook on these topics a few months back that went into quite a lot more detail – the post I link to has some relevant links if you’re curious to explore this topic further).
A few relevant links: Group (featured), symmetry group, Cayley table, Abelian group, Symmetry groups of Platonic solids, dual polyhedron, Lagrange’s theorem (group theory), Fermat’s little theorem. I think he was perhaps trying to cover a little bit too much ground in too little time by bringing up the RSA algorithm towards the end, but I’m sort of surprised how many people disliked the video; I don’t think it’s that bad.
The beginning of the lecture has a lot of remarks about Fourier’s life which are in some sense not ‘directly related’ to the mathematics, so if the mathematics is what you’re most interested in knowing more about you can probably skip the first 11 minutes or so of the lecture without missing out on much. The lecture is very non-technical compared to coverage like this, this, and this (…or this).
I think one thing worth mentioning here is that the lecturer is the author of a rather amazing book on the topic he talks about in the lecture.
I noted in my last post about the book that although I’d initially thought I’d cover the rest of the book in that post, in the end I found myself unable to do so because the post would have ended up being too long; this post will cover the remaining chapters and points of interest and will be the last post about the book.
The first of the remaining chapters is a chapter about ‘Maintaining Relationships’; as usual most of the coverage focuses on romantic relationships. Some quotes:
“The most frequent focus of maintenance research has been the identification of behaviors or interactions that relational partners can enact to sustain their relationship […]. Numerous typologies of such behaviors exist […] Stafford and Canary’s (1991) initial research on the topic generated five positive and proactive maintenance strategies, which have become widely used […] Positivity refers to attempts to make interactions pleasant. These include acting nice and cheerful when one does not feel that way, performing favors for the partner, and withholding complaints. Openness involves direct discussion about the relationship, including talk about the history of the involvement, rules made, and personal disclosure. Assurances involve support of the partner, comforting the partner, and making one’s commitment clear. Social networks refers to relying on friends and family to support the relationship (e.g., having dinner every Sunday at the in-laws). Finally, sharing tasks refers to doing one’s fair share of household chores […] Early on, Duck (1988) questioned the extent to which maintenance behaviors are intentionally enacted. This issue is central because it addresses whether maintenance as a process requires effort and planning or occurs as a by-product of relating. […] some behaviors might start as strategies but over time become routine […] Dainton and Aylor (2002) found that the same behaviors are used intentionally and unintentionally […] [They] speculated that maintenance might be performed routinely until something happens to disrupt the routine. At that point, relational partners might turn to strategic maintenance enactment. As such, routine maintenance might be used during times when preferred levels of satisfaction and commitment are experienced, and strategic maintenance might be enacted during times of perceived uncertainty.”
“One popular axiom is that relationships are easy to get into and hard to get out of, and evidence exists to support this axiom. Attridge (1994) reviewed various “barriers” to dissolving romantic relationships […] Attridge noted that both internal and external barriers prevent people from treating marriages like blind dates and that smart relational partners would make use of barriers to keep their relationships intact (e.g., remind the partner of religious premises of marriage). In terms of internal barriers that Attridge (1994) reviewed, the first is commitment. […] Next, one’s religious beliefs regarding the sanctity of marriage compel people to remain. Also, one’s self-identity – that is, viewing oneself in terms of the relationship – acts as a barrier to dissolution. Next, irretrievable personal investments (such as spending time with the partner) work against dissolution. Finally, Attridge argued that the presence of children acted as an internal barrier, especially for women; women who have children are more likely to remain in a marriage than are women without children.
In terms of external barriers, Attridge (1994) cited several. Not surprisingly, these include legal barriers, financial obligations, and social networks that promote the bond. In addition to these, we would add a perception of a lack of alternatives. Both Rusbult and Johnson’s models indicate that having no perceived alternatives increases one’s commitment to the partner. Both Johnson (2001) and Rusbult and Martz (1995) have shown that abused women remain in these marriages because they perceive that they have no alternative associations or resources that they can leverage to leave their unhappy state. Conversely, Heaton and Albrecht (1991) found that “social contact – whether having potential sources of help, receiving help, or spending social and recreational time away from home – is positively associated with instability” […] Relationships with barriers are probably stable, but they do not necessarily contain characteristics that demarcate a high-quality relationship. To ensure the continuation of such qualities, one needs to engage in individual and relational strategies that help create and sustain liking, love, commitment, and so forth.”
“research shows that maintenance strategies provide the bases for increases in intimacy […]. That is, the use of maintenance behaviors helps dating partners develop their involvements. Moreover, people who do not engage in maintenance behaviors are more likely to de-escalate or terminate their relationships […] Yet the functional utility of maintenance behaviors does not endure for long. […] Canary, Stafford, and Semic (2002) conducted a panel study examining married partners’ maintenance activity and relational characteristics (liking, commitment, and control mutuality) at three points in time, each a month apart. They found that maintenance behaviors are strongly associated with relational characteristics concurrently, but that the effects completely fade within a month’s time (when controlling for the previous months’ reports). Thus, it appears that maintenance strategies must be used continuously if they are to sustain desired relational characteristics. Being positive, assuring the partner of one’s love and commitment, sharing tasks, and so forth represent proactive relational behaviors to be sure, but they must be enacted on a regular basis to matter.”
“Rusbult (1987) identified variations in the way that people respond to their partners during troubled times. These tendencies to accommodate reflect two dimensions: passive versus active and constructive versus destructive. Exit is an active and destructive behavior that includes threats to leave the partner; Voice is an active and constructive strategy that involves discussing the problem without hostility; Loyalty is a passive and constructive approach that involves giving in to the partner; and Neglect is a passive and destructive approach that includes passive–aggressive reactions. Several studies have shown that committed individuals are more likely to engage in the more civil forms of accommodation – voice and loyalty – and that these behaviors have more positive associations with relational quality than do neglect or exit. […] Tests of Rusbult’s model have largely endorsed its basic tenets, as reported elsewhere (Canary & Zelley, 2000).”
“a longstanding assumption is that in established relationships much communication involves taken-for-granted presumptions and expectations, and “habits of adjustment to the other person become perfected and require less participation of the consciousness” (Waller, 1951, p. 311). This would imply that over time maintenance would be achieved routinely rather than strategically. […] Research supports these presuppositions.”
The next chapter is called ‘The Treatment of Relationship Distress: Theoretical Perspectives and Empirical Findings’ – a few observations from the chapter:
“distressed married couples are more prone than nondistressed couples to aversive, destructive patterns of communication […] distressed couples are more likely to engage in exchanges in which one person’s hurtful comment is reciprocated with greater intensity by the receiving partner. […] Studies of couples’ conversations have shown that distressed partners are more likely to respond negatively to each other’s expressions of negative affect than are members of nondistressed couples (negative reciprocity); furthermore, these expressions of negative affect are not as likely to be offset by high levels of positive affect as they are in nondistressed relationships […] social learning theory emphasizes that a spouse’s behavior is both learned and influenced by the other partner’s behavior. Over time, spouses’ influence on each other becomes a stronger predictor of current behavior than the influences of previous close relationships.”
CBCT [Cognitive–Behavioral Couple Therapy] researchers have identified five major types of cognitions involved in couple relationship functioning […] The first three cognitions involve evaluations of specific events. Selective attention involves how each member of a couple idiosyncratically notices, or fails to notice, particular aspects of relationship events. Selective attention contributes to distressed couples’ low rates of agreement about the occurrence and quality of specific events, as well as negative biases in perceptions of each other’s messages […] Attributions are inferences made about the determinants of partners’ positive and negative behaviors. The tendency of distressed partners to attribute each other’s negative actions to global, stable traits has been referred to as “distress-maintaining attributions” because they leave little room for future optimism that one’s partner will behave in a more pleasing manner in other situations […] Expectancies, or predictions that each member of the couple makes about particular relationship events in the immediate or more distant future, are the last type of cognitions involving specific events. Negative relationship expectancies have been associated with lower [relationship] satisfaction […] The fourth and fifth categories of cognition are forms of what cognitive therapists have referred to as basic or core beliefs shaping one’s experience of the world. 
These include (a) assumptions, or beliefs that each individual holds about the characteristics of individuals and intimate relationships, and (b) standards, or each individual’s personal beliefs about the characteristics that an intimate relationship and its members “should” have […] Couples’ assumptions and standards are associated with current relationship distress, either when these beliefs are unrealistic or when the partners are not satisfied with how their personal standards are being met in their relationship […] many of the problematic behavioral interactions between spouses may evolve from the partners’ relatively stable cognitions about the relationship. Unless these cognitions are taken into account, successful intervention is likely to be compromised.” [The important point being that in a distressed relationship you can address: a) behaviours, b) how people in the relationship think about the behaviours, or c) both – and c seems at least theoretically to be superior to either of the other choices].
“CBCT teaches partners to monitor and test the appropriateness of their cognitions. It incorporates some standard cognitive restructuring strategies, such as (a) considering alternative attributions for a partner’s negative behavior; (b) asking for behavioral data to test a negative perception concerning a partner (e.g., that the partner never complies with requests); and (c) evaluating extreme standards by generating lists of the advantages and disadvantages of expectations to live up to this standard. […] Overall, we propose that some of the common elements in the effective approaches that we have reviewed include (a) broadening partners’ perspectives on sources of their difficulties as a couple, as well as on their strengths as a couple; (b) increasing the partners’ abilities to differentiate between the strengths and problems within their current relationship, versus characteristics that occurred in prior relationships; (c) motivating and directing the couple to reduce behavioral patterns that maintain or worsen relationship distress; and (d) increasing the range of constructive strategies that partners have available for influencing each other. […] Although the quality of the therapeutic alliance in explaining treatment effects has not been investigated empirically in couple therapy, the therapeutic alliance has received considerable attention in psychotherapy research more generally. A recent meta-analysis of psychotherapy concluded that the therapeutic alliance explains between 38% and 77% of the variance in treatment outcome, whereas specific techniques account for only 0% to 8% of the variance (Wampold, 2001).”
The last chapter is a sort of ‘bringing it all together’ chapter with some key points to take away from the book. I thought I’d include a few of these here even if I’ve talked about them before:
“The ratio of positive and negative behaviors during conflict interactions is also critical to relationships as viewed from a social exchange perspective […]. The study of conflict communication in married couples, however, has shown that negative behavior tends to have a stronger impact on relationship satisfaction than positive behavior. […] In discussing social exchange processes and emotion, Planalp, Fitness, and Fehr debunk the idea that social exchange processes are cold and calculating and argue that “the basic concepts and processes of social exchange theory can be viewed as deeply emotional.” For example, they note that rewards and costs are often experienced as positive and negative feelings. In addition, our reactions to inequity and inequality in our relationships are likely to be highly emotional, and indeed such social exchange concepts as comparison levels and comparison levels for alternatives are basically about positive and negative feelings toward the partner and toward potential alternatives. […] Although there is some controversy about the extent to which social exchange processes are relevant to committed relationships that are going well, it is clear that people want their relationships to be fair and equitable, and exchange processes tend to become the focus when relationships are not going well.”
“Fincham and Beach suggest that the evidence for an association between attributions and relationship satisfaction is one of the most robust findings in the area of close relationships […] understanding a person’s interpretation of partner behavior may be as important as observing that behavior […] [However] many cognitive variables, apart from attributions, are associated with relationship satisfaction. Their list includes discrepancies between the partner’s behavior and one’s ideal standards, social comparison processes such as seeing one’s relationships as superior to the norm, memory processes that lead to the recall of positive versus negative memories, and self-evaluation maintenance processes that serve to maintain self-esteem even when one compares poorly with the partner.”
“Commitment seems to be the strongest predictor of relational stability, and other factors include religious beliefs about the sanctity of marriage, viewing one’s identity in terms of the relationship, personal investments in the relationship, and children. Le and Agnew (2003) conducted a meta-analysis to test Rusbult’s (1980) investment model of commitment. They found that Rusbult’s three variables of satisfaction with, alternatives to, and investment in the relationship were significantly related to commitment to that relationship and together accounted for two-thirds of the variance in commitment.”
“cognitive distortions in a positive direction tend to be characteristic of happy couples. Those who idealize their partners and who tend to see their partners in a more positive light than their partners view themselves are likely to be happier than other couples. The attributions of these couples are likely to be affected, and they are likely to blame themselves for negative events and give their partners the credit for positive events […] there is a lot of evidence in this volume supporting the powerful role that cognitions can play in personal relationships. Whether our focus is on cognitions at the cultural level or at the interpersonal level, they seem to have powerful effects on relationship behavior and satisfaction. Also, the effects are likely to be reciprocal, with cognitions affecting relationship satisfaction and satisfaction affecting cognitions.”
Yesterday I gave some of the reasons I had for disliking the book; in this post I’ll provide some of the reasons why I kept reading. The book had a lot of interesting data. I know I’ve covered some of these topics and numbers before (e.g. here), but I don’t mind repeating myself every now and then; some things are worth saying more than once, and as for those that are not, I must admit I don’t care enough about avoiding repetition to spend time perusing the archives to make sure I don’t. Anyway, here are some numbers from the coverage:
“Twenty-two high-burden countries account for over 80 % of the world’s TB cases […] data referring to 2011 revealed 8.7 million new cases of TB [worldwide] (13 % coinfected with HIV) and 1.4 million deaths due to such disease […] Around 80 % of TB cases among people living with HIV were located in Africa. In 2011, in the WHO European Region, 6 % of TB patients were coinfected with HIV […] In 2011, the global prevalence of HIV accounted for 34 million people; 69 % of them lived in Sub-Saharan Africa. Around five million people are living with HIV in South, South-East and East Asia combined. Other high-prevalence regions include the Caribbean, Eastern Europe and Central Asia. Worldwide, HIV incidence is in downturn. In 2011, 2.5 million people acquired HIV infection; this number was 20 % lower than in 2001. […] Sub-Saharan Africa still accounts for 70 % of all AIDS-related deaths […] Worldwide, an estimated 499 million new cases of curable STIs (as gonorrhoea, chlamydia and syphilis) occurred in 2008; these findings suggested no improvement compared to the 448 million cases occurring in 2005. However, wide variations in the incidence of STIs are reported among different regions; the burden of STIs mainly occurs in low-income countries”.
“It is estimated that in 2010 alone, malaria caused 216 million clinical episodes and 655,000 deaths. An estimated 91 % of deaths in 2010 were in the African Region […]. A total of 3.3 billion people (half the world’s population) live in areas at risk of malaria transmission in 106 countries and territories”.
“Diarrhoeal diseases amount to an estimated 4.1 % of the total disability-adjusted life years (DALY) global burden of disease, and are responsible for 1.8 million deaths every year. An estimated 88 % of that burden is attributable to unsafe supply of water, sanitation and hygiene […] It is estimated that diarrhoeal diseases account for one in nine child deaths worldwide, making diarrhoea the second leading cause of death among children under the age of 5 after pneumonia”
“NCDs [Non-Communicable Diseases] are the leading global cause of death worldwide, being responsible for more deaths than all other causes combined. […] more than 60 % of all deaths worldwide currently stem from NCDs.
In 2008, the leading causes of all NCD deaths (36 million) were:
• CVD [cardiovascular disease] (17 million, or 48 % of NCD deaths) [nearly 30 % of all deaths];
• Cancer (7.6 million, or 21 % of NCD deaths) [about 13 % of all deaths];
• Respiratory diseases (4.2 million, or 12 % of NCD deaths) [7 % of all deaths];
• Diabetes (1.3 million, 4 % of NCD deaths).” [Elsewhere in the publication they report that: “In 2010, diabetes was responsible for 3.4 million deaths globally and 3.6 % of DALYs” – obviously there’s a lot of uncertainty here. How to avoid ‘double-counting’ is one of the major issues, because we have a pretty good idea what they die of: “CVD is by far the most frequent cause of death in both men and women with diabetes, accounting for about 60 % of all mortality”].
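The bracketed ‘share of all deaths’ figures above can be back-checked from the quoted numbers alone; a minimal sketch (note that the ~57 million total-death figure for 2008 is my inference from “36 million” NCD deaths being “more than 60 %” of all deaths – it is not stated in the source – and small rounding differences, e.g. 47 % vs. the quoted 48 % for CVD, are to be expected):

```python
# Back-of-the-envelope check of the death-share figures quoted above (2008).
# total_deaths is inferred: 36 million NCD deaths at ~63 % of all deaths
# implies roughly 57 million deaths overall.
ncd_deaths = 36.0    # million NCD deaths, 2008 (quoted)
total_deaths = 57.0  # million, inferred (assumption, not in the source)

causes = {"CVD": 17.0, "Cancer": 7.6, "Respiratory": 4.2, "Diabetes": 1.3}

for name, millions in causes.items():
    share_of_ncd = 100 * millions / ncd_deaths
    share_of_all = 100 * millions / total_deaths
    print(f"{name}: {share_of_ncd:.0f} % of NCD deaths, "
          f"{share_of_all:.0f} % of all deaths")
```

The computed shares (CVD ~47 % of NCD deaths and ~30 % of all deaths, cancer ~21 % and ~13 %, respiratory ~12 % and ~7 %) line up with the bracketed figures, which is reassuring given how rough the inputs are.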
“Behavioural risk factors such as physical inactivity, tobacco use and unhealthy diet explain nearly 80 % of the CVD burden”
“nearly 80 % of NCD deaths occur in low- and middle-income countries, up sharply from just under 40 % in 1990 […] Low- and lower-middle-income countries have the highest proportion of deaths from NCDs under 60 years. Premature deaths under 60 years for high-income countries were 13 % and 25 % for upper-middle-income countries. […] In low-income countries, the proportion of premature NCD deaths under 60 years is 41 %, three times the proportion in high-income countries. […] Overall, NCDs account for more than 50 % of DALYs [disability-adjusted life years] in most countries. This percentage rises to over 80 % in Australia, Japan and the richest countries of Western Europe and North America […] In Europe, CVD causes over four million deaths per year (52 % of deaths in women and 42 % of deaths in men), and they are the main cause of death in women in all European countries.”
“Overall, age-adjusted CVD death rates are higher in most low- and middle-income countries than in developed countries […]. CHD [coronary heart disease] and stroke together are the first and third leading causes of death in developed and developing countries, respectively. […] excluding deaths from cancer, these two conditions were responsible for more deaths in 2008 than all remaining causes among the ten leading causes of death combined (including chronic diseases of the lungs, accidents, diabetes, influenza, and pneumonia)”.
“The global prevalence of diabetes was estimated to be 10 % in adults aged 25+ years […] more than half of all nontraumatic lower limb amputations are due to diabetes [and] diabetes is one of the leading causes of visual impairment and blindness in developed countries.”
“Almost six million people die from tobacco each year […] Smoking is estimated to cause nearly 10 % of CVD […] Approximately 2.3 million die each year from the harmful use of alcohol. […] Alcohol abuse is responsible for 3.8 % of all deaths (half of which are due to CVD, cancer, and liver cirrhosis) and 4.5 % of the global burden of disease […] Heavy alcohol consumption (i.e. ≥ 4 drinks/day) is significantly associated with an about fivefold increased risk of oral and pharyngeal cancer and oesophageal squamous cell carcinoma (SqCC), 2.5-fold for laryngeal cancer, 50 % for colorectal and breast cancers and 30 % for pancreatic cancer. These estimates are based on a large number of epidemiological studies, and are generally consistent across strata of several covariates. […] The global burden of cancer attributable to alcohol drinking has been estimated at 3.6 and 3.5 % of cancer deaths, although this figure is higher in high-income countries (e.g. the figure of 6 % has been proposed for UK and 9 % in Central and Eastern Europe).”
“At least two million cancer cases per year (18 % of the global cancer burden) are attributable to chronic infections by human papillomavirus, hepatitis B virus, hepatitis C virus and Helicobacter pylori. These infections are largely preventable or treatable […] The estimate of the attributable fraction is higher in low- and middle-income countries than in high-income countries (22.9 % of total cancer vs. 7.4 %).”
“Information on the magnitude of CVD in high-income countries is available from three large longitudinal studies that collect multidisciplinary data from a representative sample of European and American individuals aged 50 and older […] according to the Health Retirement Survey (HRS) in the USA, almost one in three adults have one or more types of CVD [11, 12]. By contrast, the data of Survey of Health, Ageing and Retirement in Europe (SHARE), obtained from 11 European countries, and English Longitudinal Study of Aging (ELSA) show that disease rates (specifically heart disease, diabetes, and stroke) across these populations are lower (almost one in five)”
“In 1990, the major fraction of morbidity worldwide was due to communicable, maternal, neonatal, and nutritional disorders (47 %), while 43 % of disability adjusted life years (DALYs) lost were attributable to NCDs. Within two decades, these estimates had undergone a drastic change, shifting to 35 % and 54 %, respectively”
“Estimates of the direct health care and nonhealth care costs attributable to CVD in many countries, especially in low- and middle-income countries, are unclear and fragmentary. In high-income countries (e.g., USA and Europe), CVD is the most costly disease both in terms of economic costs and human costs. Over half (54 %) of the total cost is due to direct health care costs, while one fourth (24 %) is attributable to productivity losses and 22 % to the informal care of people with CVD. Overall, CVD is estimated to cost the EU economy, in terms of health care, almost €196 billion per year, i.e., 9 % of the total health care expenditure across the EU”
“In the WHO European Region, the Eastern Mediterranean Region, and the Region of the Americas, over 50 % of women are overweight. The highest prevalence of overweight among infants and young children is in upper-to-middle-income populations, while the fastest rise in overweight is in the lower-to-middle-income group. Globally, in 2008, 9.8 % of men and 13.8 % of women were obese compared to 4.8 % of men and 7.9 % of women in 1980.”
“In low-income countries, around 25 % of adults have raised total cholesterol, while in high-income countries, over 50 % of adults have raised total cholesterol […]. Overall, one third of CHD disease is attributable to high cholesterol levels” (These numbers seem very high to me, but I’m reporting them anyway).
“interventions based on tobacco taxation have a proportionally greater effect on smokers of lower SES and younger smokers, who might otherwise be difficult to influence. Several studies suggest that the application of a 10 % rise in price could lead to as much as a 2.5–10 % decline in smoking [20, 45, 50, 56].”
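A 10 % price rise producing a 2.5–10 % decline in smoking corresponds to a price elasticity of demand of roughly −0.25 to −1.0. A quick back-of-the-envelope sketch in Python (the function names and the 20 % scenario are my own illustrations, not from the book):

```python
# Implied price elasticity of cigarette demand, using the quoted range:
# a 10 % price rise leading to a 2.5-10 % decline in smoking.

def implied_elasticity(price_rise_pct, demand_drop_pct):
    # elasticity ~= (% change in quantity) / (% change in price)
    return -demand_drop_pct / price_rise_pct

def projected_decline(price_rise_pct, elasticity):
    # projected % decline in smoking, assuming constant elasticity
    return -elasticity * price_rise_pct

low = implied_elasticity(10.0, 2.5)    # -0.25
high = implied_elasticity(10.0, 10.0)  # -1.0

# A hypothetical 20 % tax-driven price rise under the quoted range:
print(projected_decline(20.0, low), projected_decline(20.0, high))  # 5.0 20.0
```

Note that assuming a constant elasticity over a 20 % rise is itself a simplification; the studies the book cites presumably estimate local effects around observed price levels.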
“The decision to allocate resources for implementing a particular health intervention depends not only on the strength of the evidence (effectiveness of intervention) but also on the cost of achieving the expected health gain. Cost-effectiveness analysis is the primary tool for evaluating health interventions on the basis of the magnitude of their incremental net benefits in comparison with others, which allows the economic attractiveness of one program over another to be determined [More about this kind of stuff here]. If an intervention is both more effective and less costly than the existing one, there are compelling reasons to implement it. However, the majority of health interventions do not meet these criteria, being either more effective but more costly, or less costly but less effective, than the existing interventions [see also this]. Therefore, in most cases, there is no “best” or absolute level of cost-effectiveness, and this level varies mainly on the basis of health care system expenditure and needs.”
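The decision logic in that paragraph (dominance versus trade-offs judged against a threshold) can be sketched in a few lines. All numbers and the willingness-to-pay threshold below are hypothetical:

```python
# Sketch of the cost-effectiveness decision logic from the quote.
# effect is in health units (e.g. QALYs); threshold is cost per unit gained.

def assess(cost_new, effect_new, cost_old, effect_old, threshold):
    d_cost = cost_new - cost_old
    d_effect = effect_new - effect_old
    if d_cost <= 0 and d_effect >= 0:
        return "dominant"      # cheaper and at least as effective: adopt
    if d_cost >= 0 and d_effect <= 0:
        return "dominated"     # costlier and no more effective: reject
    icer = d_cost / d_effect   # incremental cost-effectiveness ratio
    if d_effect > 0:           # more effective but more costly
        return "adopt" if icer <= threshold else "reject"
    # less costly but less effective: adopt only if the savings per unit
    # of health forgone exceed the threshold
    return "adopt" if icer >= threshold else "reject"

# Hypothetical: a new programme costing 2,000 more that gains 0.1 QALY,
# against a threshold of 30,000 per QALY (ICER = 20,000):
print(assess(12_000, 0.6, 10_000, 0.5, 30_000))  # adopt
```

As the quote notes, the threshold itself is not absolute; it varies with health care system expenditure and needs.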
“The number of new cases of cancer worldwide in 2008 has been estimated at about 12,700,000. Of these, 6,600,000 occurred in men and 6,000,000 in women. About 5,600,000 cases occurred in high-resource countries […] and 7,100,000 in low- and middle-income countries. Among men, lung, stomach, colorectal, prostate and liver cancers are the most common […], while breast, colorectal, cervical, lung and stomach are the most common neoplasms among women […]. The number of deaths from cancer was estimated at about 7,600,000 in 2008 […] No global estimates of survival from cancer are available: Data from selected cancer registries suggest wide disparities between high- and low-income countries for neoplasms with effective but expensive treatment, such as leukaemia, while the gap is narrow for neoplasms without an effective therapy, such as lung cancer […]. The overall 5-year survival of cases diagnosed during 1995–1999 in 23 European countries was 49.6 % […] Tobacco smoking is the main single cause of human cancer worldwide […] In high-income countries, tobacco smoking causes approximately 30 % of all human cancers.”
“Systematic reviews have concluded that nutritional factors may be responsible for about one fourth of human cancers in high-income countries, although, because of the limitations of the current understanding of the precise role of diet in human cancer, the proportion of cancers known to be avoidable in practicable ways is much smaller. The only justified dietary recommendation for cancer prevention is to reduce the total caloric intake, which would contribute to a decrease in overweight and obesity, an established risk factor for human cancer. […] The magnitude of the excess risk [associated with obesity] is not very high (for most cancers, the relative risk (RR) ranges between 1.5 and 2 for body weight higher than 35 % above the ideal weight). Estimates of the proportion of cancers attributable to overweight and obesity in Europe range from 2 % to 5 %. However, this figure is likely to be larger in North America, where the prevalence of overweight and obesity is higher.”
“Estimates of the global burden of cancer attributable to occupation in high-income countries result in the order of 1–5 % [9, 42]. In the past, almost 50 % of these were due to asbestos alone […] The available evidence suggests, in most populations, a small role of air, water and soil pollutants. Global estimates are in the order of 1 % or less of total cancers [9, 42]. This is in striking contrast with public perception, which often identifies pollution as a major cause of human cancer.”
“Avoidance of sun exposure, in particular during the middle of the day, is the primary preventive measure to reduce the incidence of skin cancer. There is no adequate evidence of a protective effect of sunscreens, possibly because use of sunscreens is associated with increased exposure to the sun. The possible benefit in reducing skin cancer risk by reduction of sun exposure, however, should be balanced against possible favourable effects of UV radiation in promoting vitamin D metabolism.”
In my review of the book on goodreads I did not have many nice things to say about it, but I do note that the book had some interesting data. I’ll save those for another post – in this post I’ll provide some of the reasons why the book got a one star rating. Given the format of the book I thought I should clarify a bit what I didn’t like about it, because both the title and the basic structure made the book seem quite promising; the chapters cover a lot of review articles and a lot of studies, so how could I possibly dislike a book like that? Well…
The main issue: If I thought the Psychology of Lifestyle book was bad in terms of implicit political assumptions etc., this book takes this to a whole different level. Outright bans and severe restrictions on behaviours harming health are repeatedly described as either cost-effective or ‘best buys’, and many chapters don’t even touch upon potential problems associated with such policies, making you start wondering along the way why policies such as national bans on alcohol and tobacco and special police forces armed with automatic weapons coming to your house during the night and throwing you in jail if you’re found smoking a cigarette aren’t already implemented worldwide, if the research looks that way. The political agenda here seems so apparent in many chapters that you start questioning the reporting because you figure these people would not be above lying to you to get the sort of policies they’d like. Faulty assumptions throughout the coverage don’t help – as a rule you don’t get significant health effects by simply providing information about healthy behaviours and behavioural risk factors to the population; we know this from a large number of studies – and I know this because I just read a book about this research – so the fact that some authors assume such interventions to be ‘cost-effective’, and that they can point to one very old example where there does seem to have been some measurable effects, does not convince me. Some of the authors point to interventions involving primary care physicians lecturing people about healthy lifestyle behaviours being cost effective, without at all going into the many issues related to even evaluating the long-run health effects of such interventions. That effects might not persist over time is not the impression you get from this kind of coverage:
“The evidence suggests that counseling by physicians to reduce intake of total fat, saturated fat intake, and daily salt, and to increase fruit and vegetable intake, is very cost-effective, leading to dietary changes, improved weight control, and increased physical activity [64–69].” (p. 55).
Compare with for example this quote from Thirlaway and Upton:
“Hundreds of interventions to combat the obesity epidemic are currently being introduced worldwide, but there are significant gaps in the evidence base for such interventions and few [have] been evaluated in a way that enables any definitive conclusions to be drawn about their effectiveness. Those that have shown an impact are limited to easily controlled settings and it remains unclear how promising small-scale initiatives would be scaled up for whole population impact”.
What people compare when doing the CEAs in the book is occasionally/often unclear, which tends to make that sort of reporting close to worthless. I had the impression in some parts of the coverage that what was driving cost-effectiveness in some of the studies was a combination of large health impacts of disease + assumed but unproven/speculative health impacts of the interventions; an impression probably partly a result of the intervention study coverage provided in Thirlaway & Upton.
‘Implicit assumptions’ and more or less overtly politicizing comments along the way spoiled the reading experience. Below I have added some examples of sentences I for various reasons did not like:
“Several countries have explored fiscal measures such as increased taxation on foods that should be consumed in lower quantities and decreased taxation, price subsidies or production incentives for foods that are encouraged.” (‘foods that should be consumed…’).
“Restriction of alcohol drinking to the limits indicated by the European Code Against Cancer (20 g/day for men and 10 g/day for women) would avoid about 90 % of alcohol-related cancers and cancer deaths in men and over 50 % of cancers in women, i.e. about 330/360,000 cancer cases and about 200/220,000 cancer deaths. Avoidance or moderation of alcohol consumption to 2 drinks/day in men and 1 drink/day in women is therefore a global public health priority” [The idea that men might not want to avoid 90% of alcohol-related cancers doesn’t seem to cross the minds of these authors – they want them to not get cancer, and they’re going to get their way one way or the other, dammit!]
“Nowadays, obesity is the most frequently encountered metabolic disease” [Disease? Disease???]
“T2D is the most common type of diabetes, representing 90 % of cases worldwide and it is named non-insulin-dependent diabetes mellitus (NIDDM)” [My comment in the margin: “No, it’s actually not. No longer. Because this is a terrible name. A majority of diabetics on insulin treatment are type 2 diabetics.” (see also my comments in the last paragraph here if you’re curious to know more about this topic)]
“The difficulty of communicating is, however, exactly the major obstacle in this communion of responsibility. In this regard, we shall analyze the dynamics of interpersonal communication based on the scheme proposed by Slama-Cazacu . According to this model the elements of a communicative act are: (1) the transmitter, who produces the message, (2) the message conveyed according to the rules provided by code; (3) the code according to which the message is produced; (4) the transmission channel; (5) the context in which the message is found and to which it refers; and (6) the receiver” [To be frank, the chapter from which this quote is taken – Some Ethical Reflections in Public Health – had almost nothing but problematic sentences, despite actually addressing a few issues I’d had with the coverage elsewhere in the publication. I thought the quote illustrated how rambling and besides-the-point that coverage was; recall that this is a chapter about ethics. The quote was used to provide context so that you’d understand e.g. that people sometimes don’t understand health messages. Incidentally you should not be fooled by the quote into assuming that the author actually covered any data about how sensitive people are to health data in this coverage (how information impacts behaviour). She of course did not.]
“The distal risk factors of ethnic groups thus explain why a certain proximal risk factor is unevenly distributed across ethnic groups. If, for example, a certain ethnic minority group has an increased prevalence of smoking, this may be due to the fact that the group is exposed to discrimination in the host country (relational), or to specific sociocultural values characteristic for that group (attributional).” [My comment in the margin: “Discrimination => smoking? Seriously? Stop being stupid.” I was close to losing my patience at this point…]
“metabolic control is poor among migrant groups with diabetes, and HbA1c in migrants is generally higher than in the local-born population [3, 32]. These findings suggest shortfalls in diabetes health care among migrant populations.” [“Or some of the immigrants are stupid and irresponsible.” As mentioned, I was losing patience fast… (In the margin the words ‘some of’ were of course not included, but I live in a wonderful country where omitting such qualifiers in texts like this one runs the risk of getting you thrown in jail for ‘racism’…)]
“For European health care contexts, empirical research on inequalities in healthcare outcomes is scarce. For some diseases or care contexts, ethnic inequalities in outcomes, attributable to deficient care, have been shown.” [Stuff like this was also part of the reason for the outburst above – I got really annoyed in this chapter, because the author repeatedly seemed to assume/implicitly assert that anything less than equal coverage for all individuals living in a country was a state that was really morally unjustifiable – later talk about ‘diversity-responsive care’ did not help. I don’t understand how anyone would consider it to be fair that a guy getting sick after paying taxes into the cost-sharing mechanism financing his care for 30 years does not get better health care coverage than some poor immigrant who just arrived yesterday and hasn’t paid anything into the scheme, but anyway this is politics and so I shouldn’t bother.]
“In developing countries, the prevalence of some form of depression among urban adults ranges from 12 to 51 %” [No, it probably doesn’t…]
“Of course, in a millennium in which next to the advancement of health technologies (digital, with the development of nanotechnology; social and cultural, with the emergence of new values that should be conjugated with the old; scientific and medical, through imaging and the study of genomics, proteomics, and metabolomics; etc.) there is a global crisis of the world economy, it is fundamental to strengthen and use the assets of individual and community resilience (most definitions of resilience refer to notions—derived from physics—of rebound, or bouncing back, from deformation or distress), also because action to improve community health requires the coordination and the cooperation of decision makers in many sectors responsible for shaping wider determinants, and also because the traditional management of policy may be ineffective to address the problems of the “future cities” and requires an institutional change, given the discrepancy that can exist between technological innovation, scientific evolution, and adaptive flexibility of governance systems.” [This was around the point where I decided that no matter what happened in the last couple of chapters, this book is going to get one star]
“The National Institute for Public Health and the Environment was committed to analyze opportunities to address health inequalities through the HiAP strategy. On the basis of data derived from the document analysis, 38 out of 153 policy resolutions were identified to have a potential impact on determinants of health inequalities. Resolutions often consisted of a combination of policy measures, projects, and programs and were mostly released by the Ministry of Housing, Communities, and Integration and by the Ministry of the Education, Culture, and Science. Fifteen resolutions were on the enhancement of socioeconomic position; 4 on striving participation of people with health problems; 19 on improving living and working environment and lifestyle; and 4 on accessibility and quality of care. Interestingly, only 11 were inter-sectoral collaboration between the Ministry of Health and other ministries. This aspect allows us to conclude that even though HiAP is officially recognized as a strategic approach to be followed in setting policies and programs, further efforts are needed at European and global levels in order to implement in a practical manner.” [I’m pretty sure if this stuff had not been located in the last chapter of the book, I’d never have finished the book.]
I haven’t really blogged this book in anywhere near the amount of detail it deserves even though my first post about the book actually had a few quotes illustrating how much different stuff is covered in the book.
This book is technical, and although I’m trying to make this post less technical by omitting the math, it may be a good idea to reread the first post about the book before reading on, to refresh your knowledge of these topics.
Quotes and comments below – most of the coverage here focuses on stuff covered in chapters 3 and 4 in the book.
“Tests of null hypotheses and information-theoretic approaches should not be used together; they are very different analysis paradigms. A very common mistake seen in the applied literature is to use AIC to rank the candidate models and then “test” to see whether the best model (the alternative hypothesis) is “significantly better” than the second-best model (the null hypothesis). This procedure is flawed, and we strongly recommend against it […] the primary emphasis should be on the size of the treatment effects and their precision; too often we find a statement regarding “significance,” while the treatment and control means are not even presented. Nearly all statisticians are calling for estimates of effect size and associated precision, rather than test statistics, P-values, and “significance.” [Borenstein & Hedges certainly did as well in their book (written much later), and this was not an issue I omitted to talk about in my coverage of their book…] […] Information-theoretic criteria such as AIC, AICc, and QAICc are not a “test” in any sense, and there are no associated concepts such as test power or P-values or α-levels. Statistical hypothesis testing represents a very different, and generally inferior, paradigm for the analysis of data in complex settings. It seems best to avoid use of the word “significant” in reporting research results under an information-theoretic paradigm. […] AIC allows a ranking of models and the identification of models that are nearly equally useful versus those that are clearly poor explanations for the data at hand […]. Hypothesis testing provides no general way to rank models, even for models that are nested. […] In general, we recommend strongly against the use of null hypothesis testing in model selection.”
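To make the ‘rank, don’t test’ point concrete, here’s a minimal sketch of AIC-based ranking of a small candidate set on simulated data (Gaussian linear models fit by least squares; the setup is mine, not the book’s):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 0.8 * x1 + rng.normal(size=n)  # x2 is irrelevant by construction

def aic(X, y):
    # AIC = 2k - 2 log L for a Gaussian linear model fit by least squares;
    # k counts the regression coefficients plus the error variance.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)  # ML estimate of the error variance
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * (X.shape[1] + 1) - 2 * loglik

ones = np.ones(n)
models = {
    "intercept only": np.column_stack([ones]),
    "x1":             np.column_stack([ones, x1]),
    "x1 + x2":        np.column_stack([ones, x1, x2]),
}
for name, score in sorted(((m, aic(X, y)) for m, X in models.items()),
                          key=lambda t: t[1]):
    print(f"{name:15s} AIC = {score:.1f}")
```

The output is a ranking of the candidate set, with no P-values or α-levels anywhere; for small samples one would use AICc rather than AIC.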
“The bootstrap is a type of Monte Carlo method used frequently in applied statistics. This computer-intensive approach is based on resampling of the observed data […] The fundamental idea of the model-based sampling theory approach to statistical inference is that the data arise as a sample from some conceptual probability distribution f. Uncertainties of our inferences can be measured if we can estimate f. The bootstrap method allows the computation of measures of our inference uncertainty by having a simple empirical estimate of f and sampling from this estimated distribution. In practical application, the empirical bootstrap means using some form of resampling with replacement from the actual data x to generate B (e.g., B = 1,000 or 10,000) bootstrap samples […] The set of B bootstrap samples is a proxy for a set of B independent real samples from f (in reality we have only one actual sample of data). Properties expected from replicate real samples are inferred from the bootstrap samples by analyzing each bootstrap sample exactly as we first analyzed the real data sample. From the set of results of sample size B we measure our inference uncertainties from sample to (conceptual) population […] For many applications it has been theoretically shown […] that the bootstrap can work well for large sample sizes (n), but it is not generally reliable for small n […], regardless of how many bootstrap samples B are used. […] Just as the analysis of a single data set can have many objectives, the bootstrap can be used to provide insight into a host of questions. For example, for each bootstrap sample one could compute and store the conditional variance–covariance matrix, goodness-of-fit values, the estimated variance inflation factor, the model selected, confidence interval width, and other quantities. Inference can be made concerning these quantities, based on summaries over the B bootstrap samples.”
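A minimal sketch of the empirical bootstrap described above: resample with replacement from the one actual sample, recompute the statistic each time, and read the inference uncertainty off the B bootstrap replicates (simulated data; the percentile interval is just one of several bootstrap CI constructions):

```python
import random
import statistics

random.seed(1)
data = [random.gauss(10, 2) for _ in range(200)]  # the one actual sample

def bootstrap_ci(sample, stat, B=5_000, alpha=0.05):
    # Resample with replacement B times; recompute the statistic on each
    # bootstrap sample exactly as it was computed on the real sample,
    # then read a percentile interval off the sorted B replicates.
    reps = sorted(stat(random.choices(sample, k=len(sample))) for _ in range(B))
    return reps[int(alpha / 2 * B)], reps[int((1 - alpha / 2) * B)]

lo, hi = bootstrap_ci(data, statistics.mean)
print(f"mean = {statistics.mean(data):.2f}, "
      f"95 % percentile CI = ({lo:.2f}, {hi:.2f})")
```

As the quote stresses, a large B is no rescue when n is small: B only controls the Monte Carlo noise, not the quality of the empirical estimate of f.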
“Information criteria attempt only to select the best model from the candidate models available; if a better model exists, but is not offered as a candidate, then the information-theoretic approach cannot be expected to identify this new model. Adjusted R2 […] are useful as a measure of the proportion of the variation “explained,” [but] are not useful in model selection […] adjusted R2 is poor in model selection; its usefulness should be restricted to description.”
“As we have struggled to understand the larger issues, it has become clear to us that inference based on only a single best model is often relatively poor for a wide variety of substantive reasons. Instead, we increasingly favor multimodel inference: procedures to allow formal statistical inference from all the models in the set. […] Such multimodel inference includes model averaging, incorporating model selection uncertainty into estimates of precision, confidence sets on models, and simple ways to assess the relative importance of variables.”
“If sample size is small, one must realize that relatively little information is probably contained in the data (unless the effect size is very substantial), and the data may provide few insights of much interest or use. Researchers routinely err by building models that are far too complex for the (often meager) data at hand. They do not realize how little structure can be reliably supported by small amounts of data that are typically “noisy.””
“Sometimes, the selected model [when applying an information criterion] contains a parameter that is constant over time, or areas, or age classes […]. This result should not imply that there is no variation in this parameter, rather that parsimony and its bias/variance tradeoff finds the actual variation in the parameter to be relatively small in relation to the information contained in the sample data. It “costs” too much in lost precision to add estimates of all of the individual θi. As the sample size increases, then at some point a model with estimates of the individual parameters would likely be favored. Just because a parsimonious model contains a parameter that is constant across strata does not mean that there is no variation in that process across the strata.”
“[In a significance testing context,] a significant test result does not relate directly to the issue of what approximating model is best to use for inference. One model selection strategy that has often been used in the past is to do likelihood ratio tests of each structural factor […] and then use a model with all the factors that were “significant” at, say, α = 0.05. However, there is no theory that would suggest that this strategy would lead to a model with good inferential properties (i.e., small bias, good precision, and achieved confidence interval coverage at the nominal level). […] The purpose of the analysis of empirical data is not to find the “true model”— not at all. Instead, we wish to find a best approximating model, based on the data, and then develop statistical inferences from this model. […] We search […] not for a “true model,” but rather for a parsimonious model giving an accurate approximation to the interpretable information in the data at hand. Data analysis involves the question, “What level of model complexity will the data support?” and both under- and overfitting are to be avoided. Larger data sets tend to support more complex models, and the selection of the size of the model represents a tradeoff between bias and variance.”
“The easy part of the information-theoretic approaches includes both the computational aspects and the clear understanding of these results […]. The hard part, and the one where training has been so poor, is the a priori thinking about the science of the matter before data analysis — even before data collection. It has been too easy to collect data on a large number of variables in the hope that a fast computer and sophisticated software will sort out the important things — the “significant” ones […]. Instead, a major effort should be mounted to understand the nature of the problem by critical examination of the literature, talking with others working on the general problem, and thinking deeply about alternative hypotheses. Rather than “test” dozens of trivial matters (is the correlation zero? is the effect of the lead treatment zero? are ravens pink?, Anderson et al. 2000), there must be a more concerted effort to provide evidence on meaningful questions that are important to a discipline. This is the critical point: the common failure to address important science questions in a fully competent fashion. […] “Let the computer find out” is a poor strategy for researchers who do not bother to think clearly about the problem of interest and its scientific setting. The sterile analysis of “just the numbers” will continue to be a poor strategy for progress in the sciences.
Researchers often resort to using a computer program that will examine all possible models and variables automatically. Here, the hope is that the computer will discover the important variables and relationships […] The primary mistake here is a common one: the failure to posit a small set of a priori models, each representing a plausible research hypothesis.”
“Model selection is most often thought of as a way to select just the best model, then inference is conditional on that model. However, information-theoretic approaches are more general than this simplistic concept of model selection. Given a set of models, specified independently of the sample data, we can make formal inferences based on the entire set of models. […] Part of multimodel inference includes ranking the fitted models from best to worst […] and then scaling to obtain the relative plausibility of each fitted model (gi) by a weight of evidence (wi) relative to the selected best model. Using the conditional sampling variance […] from each model and the Akaike weights […], unconditional inferences about precision can be made over the entire set of models. Model-averaged parameter estimates and estimates of unconditional sampling variances can be easily computed. Model selection uncertainty is a substantial subject in its own right, well beyond just the issue of determining the best model.”
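The Akaike weights mentioned here really are easy to compute from the AIC values alone: Δi = AICi − min AIC, and wi = exp(−Δi/2) divided by the sum of those terms over all models. A sketch with hypothetical AIC values and hypothetical per-model parameter estimates:

```python
import math

# Hypothetical AIC values for a three-model candidate set (illustrative only):
aic = {"g1": 204.2, "g2": 206.8, "g3": 211.5}

# Delta_i = AIC_i - AIC_min;  w_i = exp(-Delta_i/2) / sum_j exp(-Delta_j/2)
best = min(aic.values())
raw = {m: math.exp(-(a - best) / 2) for m, a in aic.items()}
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}

# Model-averaged estimate of a parameter theta estimated under each model
# (the per-model estimates are hypothetical):
theta = {"g1": 0.52, "g2": 0.47, "g3": 0.61}
theta_bar = sum(weights[m] * theta[m] for m in aic)

print({m: round(w, 3) for m, w in weights.items()}, round(theta_bar, 3))
```

Unconditional (model-averaged) variances follow the same pattern: roughly speaking, each model’s conditional variance is weighted in, together with a term for the spread of the per-model estimates around the averaged estimate.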
“There are three general approaches to assessing model selection uncertainty: (1) theoretical studies, mostly using Monte Carlo simulation methods; (2) the bootstrap applied to a given set of data; and (3) utilizing the set of AIC differences (i.e., ∆i) and model weights wi from the set of models fit to data.”
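Approach (2) is simple to sketch: resample the data with replacement, refit every candidate model to each resample, and record how often each model comes out AIC-best; the relative frequencies estimate the model selection probabilities. A toy Python version with two least-squares models, mean-only versus straight line (the data and all settings are invented for illustration):

```python
import math
import random

def aic_ls(rss, n, k):
    """AIC for least-squares fits with normal errors:
    n * ln(RSS/n) + 2K, where K counts regression parameters plus sigma^2."""
    return n * math.log(max(rss, 1e-12) / n) + 2 * k  # guard against RSS == 0

def rss_mean_only(xs, ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def rss_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def bootstrap_selection_freq(xs, ys, n_boot=200, seed=0):
    """Resample with replacement, refit both models, count how often
    each is AIC-best; the frequencies estimate the selection probabilities."""
    rng = random.Random(seed)
    n = len(xs)
    wins = [0, 0]  # [mean-only, straight line]
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        bx, by = [xs[i] for i in idx], [ys[i] for i in idx]
        if len(set(bx)) < 2:      # degenerate resample: slope undefined
            continue
        aics = [aic_ls(rss_mean_only(bx, by), n, 2),
                aic_ls(rss_line(bx, by), n, 3)]
        wins[aics.index(min(aics))] += 1
    total = sum(wins)
    return [c / total for c in wins]

# Invented data with a strong linear trend and a small wiggle:
xs = list(range(20))
ys = [2.0 * x + 0.3 * math.sin(x) for x in xs]
freqs = bootstrap_selection_freq(xs, ys)  # the line model should win nearly always
```

With a clear trend like this the selection frequencies are lopsided; with weaker signal the frequencies spread out, which is exactly the model selection uncertainty the quote is talking about.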
“Statistical science should emphasize estimation of parameters and associated measures of estimator uncertainty. Given a correct model […], an MLE is reliable, and we can compute a reliable estimate of its sampling variance and a reliable confidence interval […]. If the model is selected entirely independently of the data at hand, and is a good approximating model, and if n is large, then the estimated sampling variance is essentially unbiased, and any appropriate confidence interval will essentially achieve its nominal coverage. This would be the case if we used only one model, decided on a priori, and it was a good model, g, of the data generated under truth, f. However, even when we do objective, data-based model selection (which we are advocating here), the [model] selection process is expected to introduce an added component of sampling uncertainty into any estimated parameter; hence classical theoretical sampling variances are too small: They are conditional on the model and do not reflect model selection uncertainty. One result is that conditional confidence intervals can be expected to have less than nominal coverage.”
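One standard way to fold this extra component into a reported standard error is Burnham & Anderson's model-averaged ("unconditional") variance estimator, which inflates each model's conditional variance by the squared deviation of that model's estimate from the model-averaged estimate. A sketch with invented numbers:

```python
import math

def unconditional_se(weights, estimates, cond_vars):
    """Model-averaged (unconditional) standard error in the style of
    Burnham & Anderson: each model's conditional variance is inflated by
    the squared deviation of its estimate from the model-averaged value,
    so disagreement between models widens the interval."""
    theta_bar = sum(w * t for w, t in zip(weights, estimates))
    return sum(w * math.sqrt(v + (t - theta_bar) ** 2)
               for w, t, v in zip(weights, estimates, cond_vars))

# Invented numbers: two models, each with conditional SE 0.2, but with
# disagreeing estimates; the unconditional SE comes out larger than 0.2.
se = unconditional_se([0.6, 0.4], [1.0, 2.0], [0.04, 0.04])
```

When the models agree exactly, the extra term vanishes and the unconditional SE reduces to the weighted average of the conditional SEs; the more the models disagree, the more the conditional intervals understate the true uncertainty.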
“Data analysis is sometimes focused on the variables to include versus exclude in the selected model (e.g., important vs. unimportant). Variable selection is often the focus of model selection for linear or logistic regression models. Often, an investigator uses stepwise analysis to arrive at a final model, and from this a conclusion is drawn that the variables in this model are important, whereas the other variables are not important. While common, this is poor practice and, among other issues, fails to fully consider model selection uncertainty. […] Estimates of the relative importance of predictor variables xj can best be made by summing the Akaike weights across all the models in the set where variable j occurs. Thus, the relative importance of variable j is reflected in the sum w+(j). The larger the w+(j) the more important variable j is, relative to the other variables. Using the w+(j), all the variables can be ranked in their importance. […] This idea extends to subsets of variables. For example, we can judge the importance of a pair of variables, as a pair, by the sum of the Akaike weights of all models that include the pair of variables. […] To summarize, in many contexts the AIC selected best model will include some variables and exclude others. Yet this inclusion or exclusion by itself does not distinguish differential evidence for the importance of a variable in the model. The model weights […] summed over all models that include a given variable provide a better weight of evidence for the importance of that variable in the context of the set of models considered.” [The reason why I’m not telling you how to calculate Akaike weights is that I don’t want to bother with math formulas in wordpress – but I guess all you need to know is that these are not hard to calculate. It should perhaps be added that one can also use bootstrapping methods to obtain relevant model weights to apply in a multimodel inference context.]
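Given the weights, the summed-importance idea is a one-liner per variable: add up the Akaike weights of every model containing that variable. A small Python sketch (the model set and weights are invented for illustration):

```python
# Relative variable importance: sum the Akaike weights of every model
# in which a given predictor appears.

def variable_importance(models, weights):
    imp = {}
    for included, w in zip(models, weights):
        for v in included:
            imp[v] = imp.get(v, 0.0) + w
    return imp

# Invented 4-model set over predictors x1 and x2 (weights sum to 1):
models = [{"x1"}, {"x2"}, {"x1", "x2"}, set()]
weights = [0.50, 0.10, 0.30, 0.10]

imp = variable_importance(models, weights)
# w+(x1) = 0.50 + 0.30 = 0.80; w+(x2) = 0.10 + 0.30 = 0.40

# The same idea for a pair of variables, judged as a pair: sum the
# weights of the models that include both.
pair_weight = sum(w for m, w in zip(models, weights) if {"x1", "x2"} <= m)
```

Note that in this toy set the AIC-best single model (weight 0.50) contains only x1, yet x2 still carries nonneglible summed weight – which is exactly the distinction between in-or-out of the best model and weight of evidence that the quote draws.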
“If data analysis relies on model selection, then inferences should acknowledge model selection uncertainty. If the goal is to get the best estimates of a set of parameters in common to all models (this includes prediction), model averaging is recommended. If the models have definite, and differing, interpretations as regards understanding relationships among variables, and it is such understanding that is sought, then one wants to identify the best model and make inferences based on that model. […] The bootstrap provides direct, robust estimates of model selection probabilities πi , but we have no reason now to think that use of bootstrap estimates of model selection probabilities rather than use of the Akaike weights will lead to superior unconditional sampling variances or model-averaged parameter estimators. […] Be mindful of possible model redundancy. A carefully thought-out set of a priori models should eliminate model redundancy problems and is a central part of a sound strategy for obtaining reliable inferences. […] Results are sensitive to having demonstrably poor models in the set of models considered; thus it is very important to exclude models that are a priori poor. […] The importance of a small number (R) of candidate models, defined prior to detailed analysis of the data, cannot be overstated. […] One should have R much smaller than n. MMI [Multi-Model Inference] approaches become increasingly important in cases where there are many models to consider.”
“In general there is a substantial amount of model selection uncertainty in many practical problems […]. Such uncertainty about what model structure (and associated parameter values) is the K-L [Kullback–Leibler] best approximating model applies whether one uses hypothesis testing, information-theoretic criteria, dimension-consistent criteria, cross-validation, or various Bayesian methods. Often, there is a nonnegligible variance component for estimated parameters (this includes prediction) due to uncertainty about what model to use, and this component should be included in estimates of precision. […] we recommend assessing model selection uncertainty rather than ignoring the matter. […] It is […] not a sound idea to pick a single model and unquestioningly base extrapolated predictions on it when there is model uncertainty.”