i. “The curse which lies upon marriage is that too often the individuals are joined in their weakness rather than in their strength, each asking from the other instead of finding pleasure in giving.” (Simone de Beauvoir)
ii. “Revolt against a tyrant is legitimate; it can succeed. Revolt against human nature is doomed to failure.” (André Maurois)
iii. “It is easy to be admired when one remains inaccessible.” (-ll-)
iv. “The life of a couple is lived on the mental level of the more mediocre of the two beings who compose it.” (-ll-)
v. “Marriage is not something that can be accomplished all at once; it has to be constantly reaccomplished. A couple must never indulge in idle tranquility with the remark: “The game is won; let’s relax.” The game is never won. […] A successful marriage is an edifice that must be rebuilt every day.” (-ll-)
vi. “Almost all men improve on acquaintance.” (-ll-)
vii. “There is no absurdity or contradiction to which passion may not lead a man. When love or hate takes control, reason must submit and then discover justifications for their folly.” (-ll-)
viii. “When I hear somebody sigh that “Life is hard,” I am always tempted to ask, “Compared to what?”” (Sydney J. Harris)
ix. “The most worthwhile form of education is the kind that puts the educator inside you, as it were, so that the appetite for learning persists long after the external pressure for grades and degrees has vanished. Otherwise you are not educated; you are merely trained.” (-ll-)
x. “As we grow older, we should learn that these are two quite different things. Character is something you forge for yourself; temperament is something you are born with and can only slightly modify. Some people have easy temperaments and weak characters; others have difficult temperaments and strong characters. We are all prone to confuse the two in assessing people we associate with. Those with easy temperaments and weak characters are more likable than admirable; those with difficult temperaments and strong characters are more admirable than likable.” (-ll-)
xi. “There seems to be a kind of order in the universe, in the movement of the stars and the turning of the earth and the changing of the seasons, and even in the cycle of human life. But human life itself is almost pure chaos. Everyone takes his stance, asserts his own rights and feelings, mistaking the motives of others, and his own.” (Katherine Anne Porter)
xii. “The historian’s one task is to tell the thing as it happened.” (Lucian of Samosata)
xiii. “Innocence most often is a good fortune and not a virtue.” (Anatole France) (L’innocence, le plus souvent, est un bonheur et non pas une vertu.)
xiv. “It is almost impossible systematically to constitute a natural moral law. Nature has no principles. She furnishes us with no reason to believe that human life is to be respected. Nature, in her indifference, makes no distinction between good and evil.” (-ll-) (Il est à peu près impossible de constituer systématiquement une morale naturelle. La nature n’a pas de principes. Elle ne nous fournit aucune raison de croire que la vie humaine est respectable. La nature, indifférente, ne fait nulle distinction du bien et du mal.)
xv. “When a thing has been said and well said, have no scruple: take it and copy it.” (-ll-) (Quand une chose a été dite et bien dite, n’ayez aucun scrupule, prenez-la, copiez.)
xvi. “The whole art of teaching is only the art of awakening the natural curiosity of young minds for the purpose of satisfying it afterwards.” (-ll-) (L’art d’enseigner n’est que l’art d’éveiller la curiosité des jeunes âmes pour la satisfaire ensuite.) (Two related links)
xvii. “All changes, even the most longed for, have their melancholy; for what we leave behind us is a part of ourselves; we must die to one life before we can enter another.” (-ll-) (Tous les changements, même les plus souhaités ont leur mélancolie, car ce que nous quittons, c’est une partie de nous-mêmes; il faut mourir à une vie pour entrer dans une autre.)
xviii. “We need some imaginative stimulus, some not impossible ideal such as may shape vague hope, and transform it into effective desire, to carry us year after year, without disgust, through the routine-work which is so large a part of life.” (Walter Pater)
xix. “When we lose one we love, our bitterest tears are called forth by the memory of hours when we loved not enough.” (Maurice Maeterlinck) (Quand nous perdons un être aimé, ce qui nous fait pleurer les larmes qui ne soulagent point, c’est le souvenir des moments où nous ne l’avons pas assez aimé.)
xx. “All our knowledge merely helps us to die a more painful death than the animals that know nothing.” (-ll-)
This will be my last post about the book. Below I have added some more observations from some of the remaining chapters of the book – I noticed after I’d drafted this post that a few similar observations were also included in the first post, but in the end I decided against removing those arguably superfluous observations from this post (‘if the authors are allowed to repeat themselves, so am I…’).
“General avoidance often weaves its way through the fabric of depressed persons’ lives. One of my (Pettit’s) first clinical supervisors proposed that avoidance captures the true essence of depression. He argued that depression, at its root, is simply the opposite of participation. Although this is surely an overly broad and simplistic conceptualization of depression, it highlights the idea that depressed people are often passive recipients of life, rather than active participants in life. […] depressed persons often have less developed social networks […] depressed persons often exhibit generalized problems with social avoidance. What’s more, evidence suggests that depression is frequently preceded by anxiety of some form […]. Anxiety, of course, is characterized by avoidance of some feared object, cognition, or event. Although anxiety and general social avoidance certainly play a role in depression, a more specific form of social avoidance appears more pertinent to the propagation of depression: Avoidance of interpersonal conflict. We argue that depression is characterized, and even propagated, by a pattern of interpersonal conflict avoidance.”
“It is well-known that depression is associated with a number of factors related to interpersonal avoidance. Four such factors (i.e., low assertiveness, social withdrawal, general avoidance, and shyness) have specifically been identified as interpersonal characteristics of a large number of depressed individuals. […] Two characteristics of assertive behaviors may be of great difficulty to depressed persons. First, asserting oneself requires active engagement with others, which forces the depressed person to overcome feelings of general social anxiety, in addition to overcoming the lethargy and indifference that retard activity in general among such individuals. Second, and more important, assertive behaviors entail making explicit requests of others. Social conventions dictate that requests naturally merit a response, be it positive or negative, and it is at this point that the interpersonal stage for potential disharmony is set. To reach this point, one must conquer general social anxiety and place himself or herself in a position that allows for the possibility of negative, rejecting responses from others. This latter possibility appears to be the sticking point for many depressed persons. That is, depressed persons overcome social inhibition and inertia but are often unwilling to knowingly make themselves vulnerable to interpersonal rejection (although they often engage in behaviors that unknowingly place themselves at greater risk of rejection, such as excessive reassurance-seeking). […] assertiveness is often quite difficult for depressed people. Assertiveness is a necessary component of successful conflict negotiation. […] Avoiding assertive behaviors […] allows the individual to escape the discomfort of receiving negative reactions from others. It also, however, lessens the individual’s chances of obtaining desired outcomes”.
“Price, Sloman, Gardner, Gilbert, and Rohde (1994) argued that depression-related states and behaviors represent evolved forms of a primordial “involuntary subordinate strategy.” Price et al. contended that the involuntary subordinate strategy arose primarily as a means to cope with social competition and conflict, particularly losses therein. Out of this framework, the primary function of depression is to resolve interpersonal conflicts by presenting a “no threat” signal to others. Recent animal research has provided a degree of support for this proposition. In work with cynomolgus monkeys, Shively, Laber-Laird, and Anton (1997) manipulated the social status of a group of females, such that previously dominant monkeys became subordinate to formerly subordinate monkeys. This reduction in social status produced behavioral and hormonal reactions corresponding to depressive reactions among humans. Behaviorally, these monkeys exhibited fearful scanning of the environment, and more important, decreased social affiliation. These behavioral changes suggest that the newly subordinate monkeys were engaging in interpersonal avoidance. Similarly, the monkeys’ hormonal activity transformed, and they began hypersecreting cortisol. Research has demonstrated that hypersecretion of cortisol occurs more frequently among depressed humans […]. Other studies of animal social hierarchies and avoidance provide results consistent with those of Shively et al.”
“It is interesting to note that [a] pattern of being overly reserved or acquiescing with strangers, but extremely negative and antagonistic toward close relatives or romantic partners, is quite common among people who are depressed.”
“[R]esearch on a phenomenon called self-handicapping indicates that depressed people may gain self-protective and other rewards for depressive cognition and behavior. These rewards serve to maintain depressive cognition and behavior, and thereby perpetuate depression. Self-handicapping, a concept with origins in the field of social psychology, refers to placing obstacles in the way of one’s performance on tasks so as to furnish oneself with an external attribution when future outcomes are uncertain […]. That is, in the anticipation of a possible failure or a poor performance of some sort, people may either claim to have some limitation or actually produce a limitation that provides an explanation in the event that they perform poorly. Self-handicapping is a frequently occurring phenomenon and is not limited to depressed people. […] A series of studies by Baumgardner (1991) found self-handicapping, in general, appears to occur in one of two situations. First, it may occur when people have experienced a failure privately and hold concerns about that failure becoming public. […] People may also self-handicap when they have experienced a success publicly yet doubt their ability to maintain that success.”
“[T]wo forms of self-handicapping exist: claimed and acquired (these have also been referred to as self-reported and behavioral, respectively). Claimed handicaps are likely more common than acquired handicaps. […] claimed handicaps occur when people believe that the handicap will explain their poor performance. Acquired handicaps, however, occur when people (a) believe the handicap will explain their poor performance and (b) believe that the handicap will lower others’ future expectations. Both forms of self-handicapping are likely relevant to depression […] evidence suggests that self-handicapping, at least behavioral self-handicapping, occurs more frequently among men than women […]. Self-reported, or claimed, handicaps occur at similar rates among men and women. […] empirical evidence confirms that depressed people are more likely to self-handicap than others.”
“[S]elf-handicapping operates on both intrapsychic and interpersonal levels. That is, the handicap provides the individual a cognitive explanation of the failed performance and also provides an explanation to others who may be privy to the failure. […] In addition to providing an external attribution for failure, barriers to performance increase the likelihood of internal credit for success (i.e., self-enhancement). Consequently, individuals who self-handicap apparently benefit regardless of whether they succeed or fail. […] people self-handicap in the social arena with the goal of promoting a more positive image. Ironically, self-handicapping tends to have the reverse effect [“the perception that people [are] making excuses for their performance” can be damaging, regardless of actual performance] […] lower social expectations presage lower social opportunities, which by itself is a bad prognostic sign for depression. […] Although further empirical research on the interpersonal sequelae of these behaviors is needed before firm conclusions are drawn, they likely reduce opportunities for positive social engagement, increase antagonistic behaviors from others, and reconfirm depressed persons’ views that they are socially inept. The end result, tragically, is continued and exacerbated depression. […] When taken to an extreme, chronic behavioral self-handicapping may significantly hinder interpersonal relations and lead to the development of maladaptive behaviors such as alcoholism and substance dependence.”
“Sacco [argued in a publication] that depressed people’s relationship partners develop mental representations of them that become relatively autonomous and that bias subsequent perceptions of their depressed partners. […] students [in the study] were more likely to attribute the depressed person’s failures to internal, controllable causes, whereas the nondepressed person’s failures were judged to result more from external, uncontrollable factors. Students also considered the causes for failures to be more stable and have a wider impact on the depressed person’s life, as compared with the nondepressed person. The reverse pattern was seen for successful events. That is, successes were judged to result from external, uncontrollable factors among the depressed person but internal, controllable factors for the nondepressed person. Both of these processes are important — depressed persons not only are blamed more for negative events but also are given less credit when positive events occur […] [That] [f]ailures [are] attributed to enduring traits of depressed persons [by others] [and are] viewed as under the control of depressed persons (i.e., the depressed person could have avoided the failure if they had really tried) […] is strikingly similar to depressed persons’ characteristic self-blaming attitude. They attribute their own failures to internal, stable, and global factors […] [In short,] most depressed persons’ attributions are clearly saturated with self-directed blame [and] others tend to adopt the same negative, blaming attributions and behaviors toward depressed persons. […] Once formed, the mental representations [of others] remain stable, regardless of whether the depression has remitted. […] representations of negative behaviors, once solidified, are more difficult to alter than representations of positive behaviors”.
“Others not only fail to recognize positive attributes but also alter their interaction styles with the depressed person. There is evidence that others’ negative views subserve the communications they emit to the negatively represented person. For example, the literature on attributions and relationship functioning has documented a connection between negative attributions and blaming communications […] In short, negative attributions lead to blaming communications and decreased relationship satisfaction. […] Blaming communications, in turn, represent a specific instance of the array of interpersonal indicators shown to predict depression chronicity. […] blame maintenance essentially assures that others will continue to hold negative views of depressed persons, regardless of the depressed persons’ presentations. […] In the context of specifically targeting blame maintenance (or any of the other processes), simultaneous attention to the other processes is warranted, lest resolution of one exacerbate another […]. As an example, blame may serve as a source of self-verification for depressed people. If blame is reduced, negative self-verification strivings may be thwarted, which, in turn, may lead to attempts to restore blame or to meet self-verification needs in other ways (e.g., by reducing performance in a previously adaptive domain) or in other relationships (e.g., with friends).”
“The tendency to avoid potentially uncomfortable social interactions is a stable trait, with shyness demonstrating remarkable consistency from early childhood until late adulthood. […] Shyness has been implicated as a risk factor for depression. […] Social skills represent another potential stable vulnerability to depression. […] people who display poor social skills are less likely to obtain positive outcomes and avoid negative outcomes in interpersonal relationships. As a result, they are more likely to become and stay depressed […] an impressive amount of evidence suggests that people who are depressed also display poor social skills. […] When maladaptive interpersonal behaviors compromise relationships, shy people are more likely to experience loneliness [and] less likely to have good social support […], and are therefore more likely to become even more depressed.”
i. “Habit is habit and not to be flung out of the window by any man, but coaxed downstairs a step at a time.” (Mark Twain)
ii. “We can do without any article of luxury we have never had; but when once obtained, it is not in human natur’ to surrender it voluntarily.” (Thomas Chandler Haliburton)
iii. “An aphorism can contain only as much wisdom as overstatement will permit.” (Clifton Fadiman)
iv. “When you re-read a classic you do not see in the book more than you did before. You see more in you than there was before.” (-ll-)
v. “Statistician: A man who believes figures don’t lie, but admits that under analysis some of them won’t stand up either.” (Esar’s Comic Dictionary, by Evan Esar)
vi. “Many a man who falls in love with a dimple makes the mistake of marrying the whole girl.” (-ll-)
vii. “The wise man will live as long as he ought, not as long as he can.” (Seneca the Younger)
viii. “No one is bound to be clever, but every one is under an obligation to be good.” (Jean-Louis Guez de Balzac) (Il n’y a personne qui soit tenu d’être habile; mais il n’y en a point qui ne soit obligé d’être bon.)
ix. “Solitude is certainly a fine thing; but there is pleasure in having someone who can answer, from time to time, that it is a fine thing.” (-ll-) (La solitude est certainement une belle chose, mais il y a plaisir d’avoir quelqu’un qui sache répondre, à qui on puisse dire de temps en temps, que c’est une belle chose.)
x. “Zealous men are ever displaying to you the strength of their belief, while judicious men are shewing you the grounds of it.” (William Shenstone)
xi. “A man has generally the good or ill qualities which he attributes to mankind.” (-ll-)
xii. “Any knowledge that doesn’t lead to new questions quickly dies out: it fails to maintain the temperature required for sustaining life.” (Wisława Szymborska)
xiii. “We are not indeed obliged always to speak what we think, but we must always think what we speak.” (Anne-Thérèse de Marguenat de Courcelles, marquise de Lambert)
xiv. “The most necessary disposition to relish pleasures is to know how to be without them.” (-ll-)
xv. “The pleasures of the world are deceitful; they promise more than they give. They trouble us in seeking them, they do not satisfy us when possessing them and they make us despair in losing them.” (-ll-)
xvi. “Would you be esteemed? live with persons that are estimable.” (-ll-)
xvii. “Nothing is more dangerous than an idea, when it’s the only one we have.” (Émile Chartier)
xviii. “Politeness is for people toward whom we feel indifferent, and moods, both good and bad, are for those we love.” (-ll-)
xix. “Happiness is a reward that comes to those that have not looked for it.” (-ll-)
xx. “It is very true that we ought to think of the happiness of others; but it is not often enough said that the best thing we can do for those who love us is to be happy ourselves.” (-ll-)
“This book develops a new explanatory framework for chronic depression […]. The framework rests on the premise that depression appears to include self-sustaining processes, that these processes may be, at least in part, interpersonal, and that understanding of these processes from an interpersonal standpoint may be useful in applied settings.”
I read this book a couple of days ago – here’s my short goodreads review. As mentioned in the review, the book includes quite a bit of rather old research and a lot of theorizing, which would probably be my two main points of criticism. I gave the book two stars on goodreads, but I should emphasize that this two-star rating doesn’t really mean I think it’s a bad book; sometimes two-star books are ‘borderline’, but this one isn’t, and I found some of the coverage quite interesting. Given that the book explores how depression relates to interpersonal factors, I’ve of course read at least some stuff about many of the topics touched upon in the book before, e.g. here, here, here, here, and here. There was quite a bit of new stuff as well, though – in particular, previous works on depression which I’ve read have not had much to say about how depression affects people’s behaviours towards others and others’ behaviours towards the person who is depressed; the ‘interpersonal factors’ part of the coverage of previous works I’ve read on these topics has usually been limited to observations to the effect that lonely people are more likely to be depressed and the like, though the effects of social anxiety, often observed in depressed individuals (“the co-occurrence of depression and anxiety is very common. As many as 50% of people with major depression also experience an anxiety disorder” – a quote from the book), have admittedly also been pointed out to me before. This book, however, goes into more detail about these things and covers effects I do not recall having seen mentioned before.
Below I have added some observations from the book.
“There is good reason to emphasize the chronic nature of depression. First, some forms of depression, such as dysthymia, are chronic by definition (at least 2 years’ persistence in the case of dysthymia). Second, depression appears to be persistent within an episode, and, once it finally lets up, it tends to come back. Depression is both persistent within episodes and recurrent across episodes. […] Recurrence is defined as the reestablishment of clinical depression following a diagnosis-free period. […] Relapse is the resumption of symptoms in the vulnerable timeframe just following remission of a depressive episode. […] Several theorists have argued that depressive symptoms have a way of sustaining themselves. In this view, it is as if depression “feeds off itself,” “maintains its own momentum,” and “self-amplifies.” A key argument of this book is that these self-sustaining processes in depression may be interpersonal in nature. […] interpersonal factors are among the strongest predictors of depression chronicity. […] People with more interpersonal problems experience longer duration of depressive episodes”.
“Dysregulations of serotonin neurotransmitter systems, as well as of the hypothalamic-pituitary-adrenal axis (which regulates cortisol levels), have […] been proposed as a stable depression risk. There is little question that serotonin and cortisol levels are altered during depressive episodes (and related phenomena such as suicide). It is interesting to note that animals defeated in social skirmishes display behavioral and neurochemical similarities to depressed people […]. However, there is little persuasive evidence that dysregulation of these systems provides a full account of depression’s causes. [You can find some recent discussion of these topics, which also touches upon the observations included below, here] […] Regarding psychological explanations […], these theories can be grouped into those emphasizing cognitive vulnerability factors (e.g., pessimism), those emphasizing interpersonal vulnerability factors (e.g., excessive dependency), and those emphasizing personality-based vulnerability factors (e.g., high neuroticism, low extroversion). As with genetic—neurobiological explanations, psychological approaches have made some progress but cannot claim to provide a complete account of depression’s causes. […] shyness represents a stable vulnerability for depression. […] Although the notion that stable vulnerabilities like shyness only lead to depression in the context of some stressor is a reasonable view, it does not account for the finding that some shy (or otherwise vulnerable) people experience depression independent of negative life events. Partly in response to this quandary, a main purpose of this book is to explore interpersonal mechanisms whereby depression prolongs itself, even in the absence of external causes like negative life events.”
“there is some reason to suspect that depression’s properties may differ in certain subsets of people. Regarding late-life populations — for example, depressions that first occur in later life, as compared with those that first occur in early adulthood — occurrences may be about equally common in men and women (whereas “early” depressions are more common in women […]). In addition, late-life first occurrences are less associated with first-degree relatives’ depression risk […]; more related to neurological or medical disease […]; less severe […]; less associated with suicidal and anxious symptoms […]; and less related to personality problems, such as excessive dependency and avoidance […] Definitional problems plague depression research.”
“The incidence of clinical depression and depressive symptoms is two to three times higher among women than men. […] Although the gender differential in depression is consistent and well documented, little is known about the processes that underlie these differences.”
“In studies with follow-ups of 10 years or more, Coryell and Winokur (1992) found that 70% of people with one depressive episode subsequently experienced at least one more.”
“The link between life stress and depression is well-known — in general, the occurrence of life stressors appears to contribute to the development of depression […] Hammen (1991) theorized that depressed people are particularly stress-prone, in the sense that they actively generate negative life events. (It is important to distinguish active from intentional here — we do not believe that depressed people intentionally create problems for themselves, but that some of their behaviors have the unintended consequence of making life more stressful). If so, a self-sustaining process would be implicated in which formerly depressed people actively generate future life events that, in turn, sow the seeds of future depression. This would explain, at least in part, why depression persists and recurs. In a series of empirical studies, Hammen, Davila, Brown, Ellicott, and Gitlin (1992) have documented the phenomenon of “stress generation.” […] In a 1-year study of women with depression, bipolar disorder, medical illness, or no disorder, Hammen (1991) showed that depressed women experienced more interpersonal stress to which the women themselves contributed (e.g., disputes with teachers or bosses; conflicts with children or partners), even compared with the women with bipolar disorder and medical illness. The finding was specific to interpersonal events — depressed women did not differ from others with regard to “fateful” events (i.e., those that really are randomly foisted on people). This result highlights the importance of interpersonal events, as well as the idea that nonrandom, self-produced negative events are characteristic of depressed people. Notably, the depressed women in Hammen’s (1991) study all experienced chronic forms of depression. This finding has been replicated in samples of men and women […] marital couples […] adolescent women […] children […] as well as by other research groups […] This line of research implicates the important possibility that, although depression may occur in the wake of stress, so may stress occur in depression’s wake.”
“Evidence suggests that people who are depressed demonstrate average problem-solving skills in impersonal settings (e.g., solving a puzzle), yet display specific problem-solving deficits in interpersonal settings […]. Examples of lapses in interpersonal problem solving may include the misperception of an offhand, trivial comment as an insulting attack; persistently avoiding someone who represents a key source of social support because of a minor misunderstanding; and angry, accusing confrontation of someone who was sincerely trying to help.”
“Many stress generation studies examine the links between self-reported depression and self-reported negative life stress. Because self-report is the source for assessment of both depression and stress, it is possible that increases in reported negative life events may merely reflect an increasingly hopeless outlook, rather than actual stress increases. A depressed person may thus perceive and report stress, even when stress is not actually present. […] the possible influence of increasing hopelessness in stress generation deserves attention […], especially insofar as hopelessness is a hallmark of major descriptive and etiological accounts of depression […] hopelessness, because of its embittering and stultifying effects on other people, may be particularly likely to disaffect others (i.e., to generate the stress of interpersonal rejection).”
“depression chronicity itself may be involved in stress generation. As depression persists, those who experience it may become more and more hopeless, and their significant others may become more and more burdened and disaffected. […] A second possibility is that hopelessness, because it has embittering and stultifying effects, may lead to cognitive representations of depressed people in the minds of significant others that are negative and change resistant. Sacco (1999) argued that once these representations are developed, they selectively guide attention and expectancies to confirm the representation. These social-cognitive processes may occur spontaneously and outside of awareness […] Once crystallized, cognitive representations of negative behaviors are more change resistant than representations of positive behaviors […]. Moreover, such representations gain momentum with use, in that they come to disproportionately influence social cognition relative to actual subsequent behaviors of the represented person […] With regard to others’ perceptions, the hopeless and potentially depressed person may face a very difficult problem: Continued hopelessness may only serve to maintain others’ negative views and thus generate stress in the form of criticism; positive changes, because they do not match others’ schemata, may be unnoticed or misattributed, leaving others’ negative representations unchanged.”
“there is accumulating evidence that depressed people actively generate their own stress, especially interpersonal stress […] For example, negative feedback-seeking (defined as the tendency to directly or indirectly invite criticism from other people and viewed as motivated by self-verification strivings […]) represents a specific mechanism by which depression-prone people contribute to such stressors as relationship dissatisfaction and dissolution. Similarly, excessive reassurance-seeking (defined as the tendency to repeatedly and persistently demand assurance from others as to one’s lovability and worth, even after such is provided […]) also directly contributes to interpersonal stress. Interpersonal conflict avoidance ([…] defined as the anxious avoidance of self-assertion situations) also sows the seeds of stress generation. […] Research on self-handicapping and inoculation indicates that depressed people may gain self-protective and other rewards for depressive cognition and behavior. These rewards serve to maintain depressive cognition and behavior, and thereby [also] increase depression chronicity”.
[S]elf-verification theory […] proposes that people strive to attain and preserve predictable, certain, and familiar self-concepts. Further, the theory indicates that people accomplish this by actively seeking self-confirming interpersonal responses from those in their social environment. A key and perhaps counterintuitive implication of the theory is that there is no difference in the self-verification needs between people with positive self-concepts and people with negative self-concepts. […] In Study 1 [of Katz and Joiner (2002)], people in stable dating relationships were most intimate with and somewhat more committed to partners when they perceived that partners evaluated them as they evaluated themselves (even if negative). In Study 2, men reported the greatest esteem for same-sex roommates who evaluated them in a self-verifying manner (even if negative). Results from Study 2 were replicated and extended to both male and female roommate dyads in Study 3. […] In related research, it has been demonstrated that feedback that matches one’s self-concept is more “attention-grabbing,” more memorable, more rewarding, and more believable […]. In addition, a growing body of research suggests that people are more satisfied with and intimate in self-verifying relationships”.
“neither we nor self-verification theory imply that people enjoy the pain of abusive relationships and therefore seek them out. That is, self-verification theory is not talking about masochism. Rather, the theory points out the intractable dilemma of people with low self-esteem. If they choose (by whatever means, conscious or not) affirming relationships, relationship dysfunction, including abuse, may be in the cards. If they choose healthier relationships, they may have to grapple with the feeling that these relationships, despite their healthy qualities, do not provide them with self-confirmation. This represents a very difficult problem that has obvious effects on people’s well-being. […] there is growing evidence that people with depressive symptoms actively seek self-verification (i.e., negative feedback), often receive it, and may become depressed as a result. […] Depression may [also] perpetuate itself as a function of encouraging negative feedback-seeking. That is, people with depressive symptoms may solicit negative appraisals (and get them), and the receipt of negative feedback may serve to maintain or amplify their depression.”
“Coyne’s interpersonal theory of depression (1976b) proposed that in response to doubts as to their own worth or as to whether others truly care about them, initially nondepressed individuals may seek reassurance from others. Others may provide reassurance, but with little effect, because potentially depressed people doubt the reassurance, attributing it instead to others’ sense of pity or obligation. Potentially depressed people thus face a very difficult problem: They both need and doubt others’ reassurance. The need is emotionally powerful and thus may win out (at least temporarily), compelling the potentially depressed individuals to again “go back to the well” for reassurance from others; even if received, however, the reassurance is again doubted, and the pattern is repeated. Because the pattern is repetitive and resistant to change, the increasingly depressed persons’ significant others become confused, frustrated, and irritated and thus increasingly likely to reject the depressed persons and to become depressed themselves. […] We suggest that there is a considerable difference between the routine and adaptive solicitation of social support across distinct situations, and the repeated and persistent seeking of reassurance within the same situation, even when reassurance has already been provided.”
“Joiner and Katz (1999) reviewed the literature on contagious depression, and concluded that 40 findings from 36 separate studies provided substantial overall support for the proposition that depressed mood, and particularly, depressive symptoms, are contagious. […] excessive reassurance-seeking may explain, in part, when depression will be interpersonally transmitted. Taken together with research on interpersonal rejection, the work on contagious depression suggests that the joint operation of depressive symptoms and excessive reassurance-seeking disaffects significant others, by distancing them actually (e.g., ending the relationship) or functionally (e.g., emotional unavailability due to frustration or to contagious depression). […] In the interpersonal arena, excessive reassurance-seeking and negative feedback-seeking compound one another by creating a particularly confusing and frustrating experience for relationship partners of depressed people. […] It is interesting to note that this process has been confirmed empirically: Joiner and Metalsky (1995) found that relationship partners of depressed people are particularly likely to evaluate them negatively if they engaged in both excessive reassurance-seeking and negative feedback-seeking.”
i. Motte-and-bailey castle (‘good article’).
“A motte-and-bailey castle is a fortification with a wooden or stone keep situated on a raised earthwork called a motte, accompanied by an enclosed courtyard, or bailey, surrounded by a protective ditch and palisade. Relatively easy to build with unskilled, often forced labour, but still militarily formidable, these castles were built across northern Europe from the 10th century onwards, spreading from Normandy and Anjou in France, into the Holy Roman Empire in the 11th century. The Normans introduced the design into England and Wales following their invasion in 1066. Motte-and-bailey castles were adopted in Scotland, Ireland, the Low Countries and Denmark in the 12th and 13th centuries. By the end of the 13th century, the design was largely superseded by alternative forms of fortification, but the earthworks remain a prominent feature in many countries. […]
Various methods were used to build mottes. Where a natural hill could be used, scarping could produce a motte without the need to create an artificial mound, but more commonly much of the motte would have to be constructed by hand. Four methods existed for building a mound and a tower: the mound could either be built first, and a tower placed on top of it; the tower could alternatively be built on the original ground surface and then buried within the mound; the tower could potentially be built on the original ground surface and then partially buried within the mound, the buried part forming a cellar beneath; or the tower could be built first, and the mound added later.
Regardless of the sequencing, artificial mottes had to be built by piling up earth; this work was undertaken by hand, using wooden shovels and hand-barrows, possibly with picks as well in the later periods. Larger mottes took disproportionately more effort to build than their smaller equivalents, because of the volumes of earth involved. The largest mottes in England, such as Thetford, are estimated to have required up to 24,000 man-days of work; smaller ones required perhaps as little as 1,000. […] Taking into account estimates of the likely available manpower during the period, historians estimate that the larger mottes might have taken between four and nine months to build. This contrasted favourably with stone keeps of the period, which typically took up to ten years to build. Very little skilled labour was required to build motte and bailey castles, which made them very attractive propositions if forced peasant labour was available, as was the case after the Norman invasion of England. […]
The type of soil would make a difference to the design of the motte, as clay soils could support a steeper motte, whilst sandier soils meant that a motte would need a more gentle incline. Where available, layers of different sorts of earth, such as clay, gravel and chalk, would be used alternately to build strength into the design. Layers of turf could also be added to stabilise the motte as it was built up, or a core of stones placed as the heart of the structure to provide strength. Similar issues applied to the defensive ditches, where designers found that the wider the ditch was dug, the deeper and steeper the sides of the scarp could be, making it more defensive. […]
Although motte-and-bailey castles are the best known castle design, they were not always the most numerous in any given area. A popular alternative was the ringwork castle, involving a palisade being built on top of a raised earth rampart, protected by a ditch. The choice of motte and bailey or ringwork was partially driven by terrain, as mottes were typically built on low ground, and on deeper clay and alluvial soils. Another factor may have been speed, as ringworks were faster to build than mottes. Some ringwork castles were later converted into motte-and-bailey designs, by filling in the centre of the ringwork to produce a flat-topped motte. […]
“William invaded England from Normandy in 1066, resulting in three phases of castle building in England, around 80% of which were in the motte-and-bailey pattern. […] around 741 motte-and-bailey castles [were built] in England and Wales alone. […] Many motte-and-bailey castles were occupied relatively briefly and in England many were being abandoned by the 12th century, and others neglected and allowed to lapse into disrepair. In the Low Countries and Germany, a similar transition occurred in the 13th and 14th centuries. […] One factor was the introduction of stone into castle building. The earliest stone castles had emerged in the 10th century […] Although wood was a more powerful defensive material than was once thought, stone became increasingly popular for military and symbolic reasons.”
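As a rough sanity check of the man-day figures quoted above, one can back out construction times. The crew sizes below are my own illustrative assumptions; the article only gives man-day totals and the resulting four-to-nine-month range.

```python
# Back-of-the-envelope check of the quoted construction estimates.
# Crew sizes are assumptions for illustration only.
def months_to_build(man_days, workers, days_per_month=30):
    """Calendar months needed if `workers` labour every day."""
    return man_days / workers / days_per_month

# A large motte such as Thetford (~24,000 man-days) with a 100-man crew:
large = months_to_build(24_000, workers=100)   # 8.0 months
# A small motte (~1,000 man-days) with a 10-man crew:
small = months_to_build(1_000, workers=10)     # ~3.3 months
print(f"large: {large:.1f} months, small: {small:.1f} months")
```

With a hundred labourers the Thetford-sized mound indeed lands inside the historians' four-to-nine-month window, which is what makes the comparison with decade-long stone keeps so striking.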
ii. Battle of Midway (featured). Lots of good stuff in there. One aspect I had not been aware of beforehand was that Allied codebreakers played a key role here as well (I was quite familiar with the work of Turing and others at Bletchley Park):
“Admiral Nimitz had one priceless advantage: cryptanalysts had partially broken the Japanese Navy’s JN-25b code. Since the early spring of 1942, the US had been decoding messages stating that there would soon be an operation at objective “AF”. It was not known where “AF” was, but Commander Joseph J. Rochefort and his team at Station HYPO were able to confirm that it was Midway; Captain Wilfred Holmes devised a ruse of telling the base at Midway (by secure undersea cable) to broadcast an uncoded radio message stating that Midway’s water purification system had broken down. Within 24 hours, the code breakers picked up a Japanese message that “AF was short on water.” HYPO was also able to determine the date of the attack as either 4 or 5 June, and to provide Nimitz with a complete IJN order of battle. Japan had a new codebook, but its introduction had been delayed, enabling HYPO to read messages for several crucial days; the new code, which had not yet been cracked, came into use shortly before the attack began, but the important breaks had already been made.
As a result, the Americans entered the battle with a very good picture of where, when, and in what strength the Japanese would appear. Nimitz knew that the Japanese had negated their numerical advantage by dividing their ships into four separate task groups, all too widely separated to be able to support each other. […] The Japanese, by contrast, remained almost totally unaware of their opponent’s true strength and dispositions even after the battle began. […] Four Japanese aircraft carriers — Akagi, Kaga, Soryu and Hiryu, all part of the six-carrier force that had attacked Pearl Harbor six months earlier — and a heavy cruiser were sunk at a cost of the carrier Yorktown and a destroyer. After Midway and the exhausting attrition of the Solomon Islands campaign, Japan’s capacity to replace its losses in materiel (particularly aircraft carriers) and men (especially well-trained pilots) rapidly became insufficient to cope with mounting casualties, while the United States’ massive industrial capabilities made American losses far easier to bear. […] The Battle of Midway has often been called “the turning point of the Pacific”. However, the Japanese continued to try to secure more strategic territory in the South Pacific, and the U.S. did not move from a state of naval parity to one of increasing supremacy until after several more months of hard combat. Thus, although Midway was the Allies’ first major victory against the Japanese, it did not radically change the course of the war. Rather, it was the cumulative effects of the battles of Coral Sea and Midway that reduced Japan’s ability to undertake major offensives.”
One thing which really strikes you (well, struck me) when reading this stuff is how incredibly capital-intensive the war at sea really was; this was one of the most important sea battles of the Second World War, yet the total Japanese death toll at Midway was just 3,057. To put that number into perspective: it is significantly smaller than the average number of people killed each day at Stalingrad (according to one estimate, the Soviets alone suffered 478,741 killed or missing during those roughly five months (~150 days), which comes out at roughly 3,200 per day).
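The per-day comparison is simple arithmetic on the figures quoted:

```python
# Soviet losses at Stalingrad (one estimate, killed or missing only)
# spread over the rough length of the battle, set against the total
# Japanese death toll at Midway.
soviet_killed_or_missing = 478_741
campaign_days = 150                 # roughly five months
per_day = soviet_killed_or_missing / campaign_days   # ~3192
midway_japanese_dead = 3_057

print(f"~{per_day:.0f} Soviet dead or missing per day at Stalingrad")
print(f"vs {midway_japanese_dead} Japanese dead in the whole Battle of Midway")
```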
iii. History of time-keeping devices (featured). ‘Exactly what it says on the tin’, as they’d say on TV Tropes.
It took a long time to get from where we were to where we are today; the horologists of the past faced a lot of problems you’ve most likely never even thought about. What do you do, for example, if your ingenious water clock has trouble keeping time because variations in water temperature cause issues? Well, you use mercury instead of water, of course! (“Since Yi Xing’s clock was a water clock, it was affected by temperature variations. That problem was solved in 976 by Zhang Sixun by replacing the water with mercury, which remains liquid down to −39 °C (−38 °F).”).
iv. Microbial metabolism.
“Microbial metabolism is the means by which a microbe obtains the energy and nutrients (e.g. carbon) it needs to live and reproduce. Microbes use many different types of metabolic strategies and species can often be differentiated from each other based on metabolic characteristics. The specific metabolic properties of a microbe are the major factors in determining that microbe’s ecological niche, and often allow for that microbe to be useful in industrial processes or responsible for biogeochemical cycles. […]
All microbial metabolisms can be arranged according to three principles:
1. How the organism obtains carbon for synthesising cell mass:
- autotrophic – carbon is obtained from carbon dioxide (CO2)
- heterotrophic – carbon is obtained from organic compounds
- mixotrophic – carbon is obtained from both organic compounds and by fixing carbon dioxide
2. How the organism obtains reducing equivalents used either in energy conservation or in biosynthetic reactions:
- lithotrophic – reducing equivalents are obtained from inorganic compounds
- organotrophic – reducing equivalents are obtained from organic compounds
3. How the organism obtains energy for living and growing:
- chemotrophic – energy is obtained from external chemical compounds
- phototrophic – energy is obtained from light
In practice, these terms are almost freely combined. […] Most microbes are heterotrophic (more precisely chemoorganoheterotrophic), using organic compounds as both carbon and energy sources. […] Heterotrophic microbes are extremely abundant in nature and are responsible for the breakdown of large organic polymers such as cellulose, chitin or lignin which are generally indigestible to larger animals. Generally, the breakdown of large polymers to carbon dioxide (mineralization) requires several different organisms, with one breaking down the polymer into its constituent monomers, one able to use the monomers and excreting simpler waste compounds as by-products, and one able to use the excreted wastes. There are many variations on this theme, as different organisms are able to degrade different polymers and secrete different waste products. […]
Biochemically, prokaryotic heterotrophic metabolism is much more versatile than that of eukaryotic organisms, although many prokaryotes share the most basic metabolic models with eukaryotes, e.g. using glycolysis (also called the EMP pathway) for sugar metabolism and the citric acid cycle to degrade acetate, producing energy in the form of ATP and reducing power in the form of NADH or quinols. These basic pathways are well conserved because they are also involved in biosynthesis of many conserved building blocks needed for cell growth (sometimes in reverse direction). However, many bacteria and archaea utilize alternative metabolic pathways other than glycolysis and the citric acid cycle. […] The metabolic diversity and ability of prokaryotes to use a large variety of organic compounds arises from the much deeper evolutionary history and diversity of prokaryotes, as compared to eukaryotes. […]
Many microbes (phototrophs) are capable of using light as a source of energy to produce ATP and organic compounds such as carbohydrates, lipids, and proteins. Of these, algae are particularly significant because they are oxygenic, using water as an electron donor for electron transfer during photosynthesis. Phototrophic bacteria are found in the phyla Cyanobacteria, Chlorobi, Proteobacteria, Chloroflexi, and Firmicutes. Along with plants these microbes are responsible for all biological generation of oxygen gas on Earth. […] As befits the large diversity of photosynthetic bacteria, there are many different mechanisms by which light is converted into energy for metabolism. All photosynthetic organisms locate their photosynthetic reaction centers within a membrane, which may be invaginations of the cytoplasmic membrane (Proteobacteria), thylakoid membranes (Cyanobacteria), specialized antenna structures called chlorosomes (Green sulfur and non-sulfur bacteria), or the cytoplasmic membrane itself (heliobacteria). Different photosynthetic bacteria also contain different photosynthetic pigments, such as chlorophylls and carotenoids, allowing them to take advantage of different portions of the electromagnetic spectrum and thereby inhabit different niches. Some groups of organisms contain more specialized light-harvesting structures (e.g. phycobilisomes in Cyanobacteria and chlorosomes in Green sulfur and non-sulfur bacteria), allowing for increased efficiency in light utilization. […]
Most photosynthetic microbes are autotrophic, fixing carbon dioxide via the Calvin cycle. Some photosynthetic bacteria (e.g. Chloroflexus) are photoheterotrophs, meaning that they use organic carbon compounds as a carbon source for growth. Some photosynthetic organisms also fix nitrogen […] Nitrogen is an element required for growth by all biological systems. While extremely common (80% by volume) in the atmosphere, dinitrogen gas (N2) is generally biologically inaccessible due to its high activation energy. Throughout all of nature, only specialized bacteria and Archaea are capable of nitrogen fixation, converting dinitrogen gas into ammonia (NH3), which is easily assimilated by all organisms. These prokaryotes, therefore, are very important ecologically and are often essential for the survival of entire ecosystems. This is especially true in the ocean, where nitrogen-fixing cyanobacteria are often the only sources of fixed nitrogen, and in soils, where specialized symbioses exist between legumes and their nitrogen-fixing partners to provide the nitrogen needed by these plants for growth.
Nitrogen fixation can be found distributed throughout nearly all bacterial lineages and physiological classes but is not a universal property. Because the enzyme nitrogenase, responsible for nitrogen fixation, is very sensitive to oxygen which will inhibit it irreversibly, all nitrogen-fixing organisms must possess some mechanism to keep the concentration of oxygen low. […] The production and activity of nitrogenases is very highly regulated, both because nitrogen fixation is an extremely energetically expensive process (16–24 ATP are used per N2 fixed) and due to the extreme sensitivity of the nitrogenase to oxygen.” (A lot of the stuff above was of course for me either review or closely related to stuff I’ve already read in the coverage provided in Beer et al., a book I’ve talked about before here on the blog).
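The three classification axes quoted above combine mechanically into the compound terms microbiologists actually use. A toy sketch (the helper function and its argument names are my own, purely illustrative; the prefix tables just restate the article's lists):

```python
# Composing trophic terms: energy source + source of reducing
# equivalents + carbon source + "troph".
ENERGY = {"chemical": "chemo", "light": "photo"}
ELECTRONS = {"inorganic": "litho", "organic": "organo"}
CARBON = {"CO2": "auto", "organic": "hetero", "both": "mixo"}

def trophic_term(energy, electrons, carbon):
    """Build the conventional compound term from the three axes."""
    return ENERGY[energy] + ELECTRONS[electrons] + CARBON[carbon] + "troph"

# Most microbes, per the text:
print(trophic_term("chemical", "organic", "organic"))  # chemoorganoheterotroph
# An oxygenic photosynthesiser such as a cyanobacterium:
print(trophic_term("light", "inorganic", "CO2"))       # photolithoautotroph
```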
v. Uranium (featured). It’s hard to know what to include here as the article has a lot of stuff, but I found this part in particular, well, interesting:
“During the Cold War between the Soviet Union and the United States, huge stockpiles of uranium were amassed and tens of thousands of nuclear weapons were created using enriched uranium and plutonium made from uranium. Since the break-up of the Soviet Union in 1991, an estimated 600 short tons (540 metric tons) of highly enriched weapons grade uranium (enough to make 40,000 nuclear warheads) have been stored in often inadequately guarded facilities in the Russian Federation and several other former Soviet states. Police in Asia, Europe, and South America on at least 16 occasions from 1993 to 2005 have intercepted shipments of smuggled bomb-grade uranium or plutonium, most of which was from ex-Soviet sources. From 1993 to 2005 the Material Protection, Control, and Accounting Program, operated by the federal government of the United States, spent approximately US $550 million to help safeguard uranium and plutonium stockpiles in Russia. This money was used for improvements and security enhancements at research and storage facilities. Scientific American reported in February 2006 that in some of the facilities security consisted of chain link fences which were in severe states of disrepair. According to an interview from the article, one facility had been storing samples of enriched (weapons grade) uranium in a broom closet before the improvement project; another had been keeping track of its stock of nuclear warheads using index cards kept in a shoe box.”
Some other observations from the article below:
“Uranium is a naturally occurring element that can be found in low levels within all rock, soil, and water. Uranium is the 51st element in order of abundance in the Earth’s crust. Uranium is also the highest-numbered element to be found naturally in significant quantities on Earth and is almost always found combined with other elements. Along with all elements having atomic weights higher than that of iron, it is only naturally formed in supernovae. The decay of uranium, thorium, and potassium-40 in the Earth’s mantle is thought to be the main source of heat that keeps the outer core liquid and drives mantle convection, which in turn drives plate tectonics. […]
“Natural uranium consists of three major isotopes: uranium-238 (99.28% natural abundance), uranium-235 (0.71%), and uranium-234 (0.0054%). […] Uranium-238 is the most stable isotope of uranium, with a half-life of about 4.468×10⁹ years, roughly the age of the Earth. Uranium-235 has a half-life of about 7.13×10⁸ years, and uranium-234 has a half-life of about 2.48×10⁵ years. For natural uranium, about 49% of its alpha rays are emitted by 238U, another 49% by 234U (since the latter is formed from the former), and about 2.0% by 235U. When the Earth was young, probably about one-fifth of its uranium was uranium-235, but the percentage of 234U was probably much lower than this. […]
Worldwide production of U3O8 (yellowcake) in 2013 amounted to 70,015 tonnes, of which 22,451 t (32%) was mined in Kazakhstan. Other important uranium mining countries are Canada (9,331 t), Australia (6,350 t), Niger (4,518 t), Namibia (4,323 t) and Russia (3,135 t). […] Australia has 31% of the world’s known uranium ore reserves and the world’s largest single uranium deposit, located at the Olympic Dam Mine in South Australia. There is a significant reserve of uranium in Bakouma, a sub-prefecture in the prefecture of Mbomou in the Central African Republic. […] Uranium deposits seem to be log-normally distributed. There is a 300-fold increase in the amount of uranium recoverable for each tenfold decrease in ore grade. In other words, there is little high-grade ore and proportionately much more low-grade ore available.”
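The "about one-fifth was uranium-235" claim can be checked by back-extrapolating today's abundances with the half-lives quoted above (Earth's age is taken as the usual round 4.5 billion years, my assumption):

```python
import math

# Relative amount of each isotope t years ago, from today's fraction
# and its half-life: each half-life back in time doubles the amount.
def abundance_then(frac_now, half_life, t=4.5e9):
    return frac_now * 2 ** (t / half_life)

u238 = abundance_then(0.9928, 4.468e9)   # barely one half-life: roughly doubles
u235 = abundance_then(0.0071, 7.13e8)    # ~6.3 half-lives: ~80-fold increase
frac_235 = u235 / (u235 + u238)
print(f"235U fraction ~4.5 Gyr ago: {frac_235:.0%}")  # 22%, i.e. about a fifth
```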
vi. Radiocarbon dating (featured).
“Radiocarbon dating (also referred to as carbon dating or carbon-14 dating) is a method of determining the age of an object containing organic material by using the properties of radiocarbon (14C), a radioactive isotope of carbon. The method was invented by Willard Libby in the late 1940s and soon became a standard tool for archaeologists. Libby received the Nobel Prize for his work in 1960. The radiocarbon dating method is based on the fact that radiocarbon is constantly being created in the atmosphere by the interaction of cosmic rays with atmospheric nitrogen. The resulting radiocarbon combines with atmospheric oxygen to form radioactive carbon dioxide, which is incorporated into plants by photosynthesis; animals then acquire 14C by eating the plants. When the animal or plant dies, it stops exchanging carbon with its environment, and from that point onwards the amount of 14C it contains begins to reduce as the 14C undergoes radioactive decay. Measuring the amount of 14C in a sample from a dead plant or animal such as a piece of wood or a fragment of bone provides information that can be used to calculate when the animal or plant died. The older a sample is, the less 14C there is to be detected, and because the half-life of 14C (the period of time after which half of a given sample will have decayed) is about 5,730 years, the oldest dates that can be reliably measured by radiocarbon dating are around 50,000 years ago, although special preparation methods occasionally permit dating of older samples.
The idea behind radiocarbon dating is straightforward, but years of work were required to develop the technique to the point where accurate dates could be obtained. […]
The development of radiocarbon dating has had a profound impact on archaeology. In addition to permitting more accurate dating within archaeological sites than did previous methods, it allows comparison of dates of events across great distances. Histories of archaeology often refer to its impact as the “radiocarbon revolution”.”
I’ve read about these topics before in a textbook setting (e.g. here), but I should note that the article provides quite detailed coverage, and I think most people will encounter some new information by having a look at it even if they’re already superficially familiar with the topic. The article has a lot of stuff about e.g. ‘what you need to correct for’, which some of you might find interesting.
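The arithmetic behind the method is just exponential decay; a minimal sketch using the 5,730-year half-life quoted above:

```python
import math

HALF_LIFE = 5730  # years, for 14C

def radiocarbon_age(fraction_remaining):
    """Years since death, given the surviving fraction of 14C."""
    return -HALF_LIFE / math.log(2) * math.log(fraction_remaining)

print(f"{radiocarbon_age(0.5):.0f} years")    # one half-life: 5730
print(f"{radiocarbon_age(0.25):.0f} years")   # two half-lives: 11460
# At the ~50,000-year practical limit, under 0.3% of the 14C remains,
# which is why older samples are so hard to date:
print(f"{0.5 ** (50_000 / HALF_LIFE):.4f}")   # 0.0024
```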
vii. Raccoon (featured). One interesting observation from the article:
“One aspect of raccoon behavior is so well known that it gives the animal part of its scientific name, Procyon lotor; “lotor” is neo-Latin for “washer”. In the wild, raccoons often dabble for underwater food near the shore-line. They then often pick up the food item with their front paws to examine it and rub the item, sometimes to remove unwanted parts. This gives the appearance of the raccoon “washing” the food. The tactile sensitivity of raccoons’ paws is increased if this rubbing action is performed underwater, since the water softens the hard layer covering the paws. However, the behavior observed in captive raccoons in which they carry their food to water to “wash” or douse it before eating has not been observed in the wild. Naturalist Georges-Louis Leclerc, Comte de Buffon, believed that raccoons do not have adequate saliva production to moisten food thereby necessitating dousing, but this hypothesis is now considered to be incorrect. Captive raccoons douse their food more frequently when a watering hole with a layout similar to a stream is not farther away than 3 m (10 ft). The widely accepted theory is that dousing in captive raccoons is a fixed action pattern from the dabbling behavior performed when foraging at shores for aquatic foods. This is supported by the observation that aquatic foods are doused more frequently. Cleaning dirty food does not seem to be a reason for “washing”. Experts have cast doubt on the veracity of observations of wild raccoons dousing food.”
And here’s another interesting set of observations:
“In Germany—where the raccoon is called the Waschbär (literally, “wash-bear” or “washing bear”) due to its habit of “dousing” food in water—two pairs of pet raccoons were released into the German countryside at the Edersee reservoir in the north of Hesse in April 1934 by a forester upon request of their owner, a poultry farmer. He released them two weeks before receiving permission from the Prussian hunting office to “enrich the fauna.” Several prior attempts to introduce raccoons in Germany were not successful. A second population was established in eastern Germany in 1945 when 25 raccoons escaped from a fur farm at Wolfshagen, east of Berlin, after an air strike. The two populations are parasitologically distinguishable: 70% of the raccoons of the Hessian population are infected with the roundworm Baylisascaris procyonis, but none of the Brandenburgian population has the parasite. The estimated number of raccoons was 285 animals in the Hessian region in 1956, over 20,000 animals in the Hessian region in 1970 and between 200,000 and 400,000 animals in the whole of Germany in 2008. By 2012 it was estimated that Germany now had more than a million raccoons.”
This one is not quite new, but I have never seen it or blogged it before. The sound is not completely optimal and as is so often the case for lectures like these it’s at times slightly annoying that you can’t tell what she’s pointing at when she’s talking about the slides, but these issues are relatively minor and should not keep you from watching the lecture.
This is a really nice introduction to some main ideas in the Nimzo Indian defence.
I’m currently reading this book. Below some observations from part 1.
“The term autotroph is usually associated with the photosynthesising plants (including algae and cyanobacteria) and heterotroph with animals and some other groups of organisms that need to be provided high-energy containing organic foods (e.g. the fungi and many bacteria). However, many exceptions exist: Some plants are parasitic and may be devoid of chlorophyll and, thus, lack photosynthesis altogether, and some animals contain chloroplasts or photosynthesising algae or cyanobacteria and may function, in part, autotrophically; some corals rely on the photosynthetic algae within their bodies to the extent that they don’t have to eat at all […] If some plants are heterotrophic and some animals autotrophic, what then differentiates plants from animals? It is usually said that what differs the two groups is the absence (animals) or presence (plants) of a cell wall. The cell wall is deposited outside the cell membrane in plants, and forms a type of exo-skeleton made of polysaccharides (e.g. cellulose or agar in some red algae, or silica in the case of diatoms) that renders rigidity to plant cells and to the whole plant.”
“For the autotrophs, […] there was an advantage if they could live close to the shores where inorganic nutrient concentrations were higher (because of mineral-rich runoffs from land) than in the upper water layer of off-shore locations. However, living closer to shore also meant greater effects of wave action, which would alter, e.g. the light availability […]. Under such conditions, there would be an advantage to be able to stay put in the seawater, and under those conditions it is thought that filamentous photosynthetic organisms were formed from autotrophic cells (ca. 650 million years ago), which eventually resulted in macroalgae (some 450 million years ago) featuring holdfast tissues that could adhere them to rocky substrates. […] Very briefly now, the green macroalgae were the ancestors of terrestrial plants, which started to invade land ca. 400 million years ago (followed by the animals).”
“Marine ‘plants’ (= all photoautotrophic organisms of the seas) can be divided into phytoplankton (‘drifters’, mostly unicellular) and phytobenthos (connected to the bottom, mostly multicellular/macroscopic).
The phytoplankton can be divided into cyanobacteria (prokaryotic) and microalgae (eukaryotic) […]. The phytobenthos can be divided into macroalgae and seagrasses (marine angiosperms, which invaded the shallow seas some 90 million years ago). The micro- and macro-algae are divided into larger groups as based largely on their pigment composition [e.g. ‘red algae‘, ‘brown algae‘, …]
There are some 150 currently recognised species of marine cyanobacteria, ∼20 000 species of eukaryotic microalgae, several thousand species of macroalgae and 50(!) species of seagrasses. Altogether these marine plants are accountable for approximately half of Earth’s photosynthetic (or primary) production.
The abiotic factors that are conducive to photosynthesis and plant growth in the marine environment differ from those of terrestrial environments mainly with regard to light and inorganic carbon (Ci) sources. Light is strongly attenuated in the marine environment by absorption and scatter […] While terrestrial plants rely on atmospheric CO2 for their photosynthesis, marine plants largely utilise the >100 times higher concentration of HCO3− as the main Ci source for their photosynthetic needs. Nutrients other than CO2 that may limit plant growth in the marine environment include nitrogen (N), phosphorus (P), iron (Fe) and, for the diatoms, silica (Si).”
“The conversion of the plentiful atmospheric N2 gas (∼78% in air) into bio-available N-rich cellular constituents is a fundamental process that sustains life on Earth. For unknown reasons this process is restricted to selected representatives among the prokaryotes: archaea and bacteria. N2 fixing organisms, also termed diazotrophs (dia = two; azo = nitrogen), are globally wide-spread in terrestrial and aquatic environments, from polar regions to hot deserts, although their abundance varies widely. [Why is nitrogen important, I hear you ask? Well, when you hear the word ‘nitrogen’ in biology texts, think ‘protein’ – “Because nitrogen is relatively easy to measure and protein is not, protein content is often estimated by assaying organic nitrogen, which comprises from 15 to 18% of plant proteins” (Herrera et al. – see this post).] […] Cyanobacteria dominate marine diazotrophs and occupy large segments of marine open waters […] sustained N2 fixation […] is a highly energy-demanding process. […] in all diazotrophs, the nitrogenase enzyme complex […] of marine cyanobacteria requires high Fe levels […] Another key nutrient is phosphorus […] which has a great impact on growth and N2 fixation in marine cyanobacteria. […] Recent model-based estimates of N2 fixation suggest that unicellular cyanobacteria contribute significantly to global ocean N budgets.”
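As an aside, the 15–18% figure quoted above is exactly what lies behind the standard nitrogen-to-protein conversion factors used in practice (the classic Kjeldahl factor of 6.25 corresponds to assuming protein is 16% nitrogen). A minimal sketch of the arithmetic; the function name and defaults are mine, only the percentages come from the quote:

```python
# Estimate protein content from measured organic nitrogen.
# Plant proteins are roughly 15-18% nitrogen by mass (per the quote above),
# so protein ~= N / fraction; a fraction of 0.16 gives the classic factor 6.25.

def protein_from_nitrogen(n_grams, n_fraction=0.16):
    """Convert grams of organic nitrogen to an estimated grams of protein."""
    return n_grams / n_fraction

# 1 g of organic N corresponds to roughly 5.6-6.7 g of protein,
# depending on which end of the 15-18% range you assume:
low = protein_from_nitrogen(1.0, n_fraction=0.18)
high = protein_from_nitrogen(1.0, n_fraction=0.15)
print(round(low, 2), round(high, 2))
```

So the uncertainty in the nitrogen fraction alone translates into a roughly ±10% uncertainty around the conventional 6.25 estimate.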
“For convenience, we often divide the phytoplankton into different size classes: the picophytoplankton (0.2–2 μm effective cell diameter, ECD); the nanophytoplankton (2–20 μm ECD) and the microphytoplankton (20–200 μm ECD). […] most of the major marine microalgal groups are found in all three size classes […] a 2010 paper estimates that these plants utilise 46 Gt carbon yearly, which can be divided into 15 Gt for the microphytoplankton, 20 Gt for the nanophytoplankton and 11 Gt for the picophytoplankton. Thus, the very small (nano- + pico-) forms of phytoplankton (including cyanobacterial forms) contribute 2/3 of the overall planktonic production (which, again, constitutes about half of the global production).”
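The ‘2/3’ figure follows directly from the numbers quoted; a quick sanity check (the group labels and Gt values are simply those from the quoted 2010 estimate):

```python
# Annual carbon fixation by phytoplankton size class, in Gt C per year,
# per the 2010 estimate quoted above.
production = {"micro": 15, "nano": 20, "pico": 11}

total = sum(production.values())                 # 46 Gt in all
small = production["nano"] + production["pico"]  # the small forms: 31 Gt
share = small / total                            # ~0.67, i.e. about 2/3
print(total, small, round(share, 3))
```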
“Many primarily non-photosynthetic organisms have developed symbioses with microalgae and cyanobacteria; these photosynthetic intruders are here referred to as photosymbionts. […] Most photosymbionts are endosymbiotic (living within the host) […] In almost all cases, these micro-algae are in symbiosis with invertebrates. Here the alga provides the animal with organic products of photosynthesis, while the invertebrate host can supply CO2 and other inorganic nutrients including nitrogen and phosphorus to the alga […]. In cases where cyanobacteria form the photosymbiont, their ‘caloric’ nutritional value is more questionable, and they may instead produce toxins that deter other animals from eating the host […] Many reef-building […] corals contain symbiotic zooxanthellae within the digestive cavity of their polyps, and in general corals that have symbiotic algae grow much faster than those without them. […] The loss of zooxanthellae from the host is known as coral bleaching […] Certain sea slugs contain functional chloroplasts that were ingested (but not digested) as part of larger algae […]. After digesting the rest of the alga, these chloroplasts are imbedded within the slugs’ digestive tract in a process called kleptoplasty (the ‘stealing’ of plastids). Even though this is not a true symbiosis (the chloroplasts are not organisms and do not gain anything from the association), the photosynthetic activity aids in the nutrition of the slugs for up to several months, thus either complementing their nutrition or carrying them through periods when food is scarce or absent.”
“90–100 million years ago, when there was a rise in seawater levels, some of the grasses that grew close to the seashores found themselves submerged in seawater. One piece of evidence that supports [the] terrestrial origin [of marine angiosperms] can be seen in the fact that residues of stomata can be found at the base of the leaves. In terrestrial plants, the stomata restrict water loss from the leaves, but since seagrasses are principally submerged in a liquid medium, the stomata became absent in the bulk parts of the leaves. These marine angiosperms, or seagrasses, thus evolved from those coastal grasses that successfully managed to adapt to being submerged in saline waters. Another theory has it that the ancestors of seagrasses were freshwater plants that, therefore, only had to adapt to water of a higher salinity. In both cases, the seagrasses exemplify a successful readaptation to marine life […] While there may exist some 20 000 or more species of macroalgae […], there are only some 50 species of seagrasses, most of which are found in tropical seas. […] the ability to extract nutrients from the sediment renders the seagrasses at an advantage over (the root-less) macroalgae in nutrient-poor waters. […] one of the basic differences in habitat utilisation between macroalgae and seagrasses is that the former usually grow on rocky substrates where they are held in place by their holdfasts, while seagrasses inhabit softer sediments where they are held in place by their root systems. Unlike macroalgae, where the whole plant surface is photosynthetically active, large proportions of seagrass plants are comprised of the non-photosynthetic roots and rhizomes. […] This means […] that seagrasses need more light in order to survive than do many algae […] marine plants usually contain less structural tissues than their terrestrial counterparts”.
“if we define ‘visible light’ as the electromagnetic wave upon which those energy-containing particles called quanta ‘ride’ that cause vision in higher animals (those quanta are also called photons) and compare it with light that causes photosynthesis, we find, interestingly, that the two processes use approximately the same wavelengths: While mammals largely use the 380–750 nm (nm = 10⁻⁹ m) wavelength band for vision, plants use the 400–700 nm band for photosynthesis; the latter is therefore also termed photosynthetically active radiation (PAR) […] If a student
asks “but how come that animals and plants use almost identical wavelengths of radiation for so very different purposes?”, my answer is “sorry, but we don’t have the time to discuss that now”, meaning that while I think it has to do with too high and too low quantum energies below and above those wavelengths, I really don’t know.”
“energy (E) of a photon is inversely proportional to its wavelength […] a blue photon of 400 nm wavelength contains almost double the energy of a red one of 700 nm, while the photons of PAR between those two extremes carry decreasing energies as wavelengths increase. Accordingly, low-energy photons (i.e. of high wavelengths, e.g. those of reddish light) are absorbed to a greater extent by water molecules along a depth gradient than are photons of higher energy (i.e. lower wavelengths, e.g. bluish light), and so the latter penetrate deeper down in clear oceanic waters […] In water, the spectral distribution of PAR reaching a plant is different from that on land. This is because water not only attenuates the light intensity (or, more correctly, the photon flux, or irradiance […]), but, as mentioned above and detailed below, the attenuation with depth is wavelength dependent; therefore, plants living in the oceans will receive different spectra of light dependent on depth […] The two main characteristics of seawater that determine the quantity and quality of the irradiance penetrating to a certain depth are absorption and scatter. […] Light absorption in the oceans is a property of the water molecules, which absorb photons according to their energy […] Thus, red photons of low energy are more readily absorbed than, e.g. blue ones; only <1% of the incident red photons (calculated for 650 nm) penetrate to 20 m depth in clear waters while some 60% of the blue photons (450 nm) remain at that depth. […] Scatter […] is mainly caused by particles suspended in the water column (rather than by the water molecules themselves, although they too scatter light a little). Unlike absorption, scatter affects short-wavelength photons more than long-wavelength ones […] in turbid waters, photons of decreasing wavelengths are increasingly scattered. 
Since water molecules are naturally also present, they absorb the higher wavelengths, and the colours penetrating deepest in turbid waters are those between the highly scattered blue and highly absorbed red, e.g. green. The greenish colour of many coastal waters is therefore often due not only to the presence of chlorophyll-containing phytoplankton, but because, again, reddish photons are absorbed, bluish photons are scattered, and the midspectrum (i.e. green) fills the bulk part of the water column.”
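The two quantitative claims in the passage above – that a 400 nm photon carries ‘almost double’ the energy of a 700 nm one, and that the fraction of photons surviving to a given depth falls off with depth at a wavelength-dependent rate – can be checked with the Planck relation E = hc/λ and the Beer–Lambert law. A sketch; note that the attenuation coefficients below are back-calculated from the quoted 20 m figures rather than measured values, and the function names are mine:

```python
import math

h = 6.626e-34  # Planck constant, J*s
c = 2.998e8    # speed of light, m/s

def photon_energy(wavelength_m):
    """Planck relation: E = h*c / wavelength."""
    return h * c / wavelength_m

# Blue (400 nm) vs red (700 nm): the energy ratio is 700/400 = 1.75,
# i.e. "almost double", exactly as the quote says.
ratio = photon_energy(400e-9) / photon_energy(700e-9)

# Beer-Lambert: the fraction of photons surviving to depth z is exp(-K*z).
# Solving for K from the quoted figures (<1% red, ~60% blue left at 20 m):
K_red = -math.log(0.01) / 20   # ~0.23 per metre (650 nm)
K_blue = -math.log(0.60) / 20  # ~0.026 per metre (450 nm)

def fraction_at_depth(K, z_m):
    """Fraction of incident photons remaining at depth z_m metres."""
    return math.exp(-K * z_m)

print(round(ratio, 2))
print(round(fraction_at_depth(K_blue, 40), 2))  # blue light left at 40 m
```

The roughly tenfold difference between the two K values is the whole story of why clear oceans run blue with depth: by 40 m about a third of the blue photons are still around, while the red ones are long gone.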
“the open ocean, several kilometres or miles from the shore, almost always appears as blue. The reason for this is that in unpolluted, particle-free, waters, the preferential absorption of long-wavelength (low-energy) photons is what mainly determines the spectral distribution of light attenuation. Thus, short-wavelength (high-energy) bluish photons penetrate deepest and ‘fill up’ the bulk of the water column with their colour. Since water molecules also scatter a small proportion of those photons […], it follows that these largely water-penetrating photons are eventually also reflected back to our eyes. Or, in other words, out of the very low scattering in clear oceanic waters, the photons available to be scattered and, thus, reflected to our eyes, are mainly the bluish ones, and that is why the clear deep oceans look blue. (It is often said that the oceans are blue because the blue sky is reflected by the water surface. However, sailors will testify to the truism that the oceans are also deep blue in heavily overcast weathers, and so that explanation of the general blueness of the oceans is not valid.)”
“Although marine plants can be found in a wide range of temperature regimes, from the tropics to polar regions, the large bodies of water that are the environment for most marine plants have relatively constant temperatures, at least on a day-to-day basis. […] For marine plants that are found in intertidal regions, however, temperature variation during a single day can be very high as the plants find themselves alternately exposed to air […] Marine plants from tropical and temperate regions tend to have distinct temperature ranges for growth […] and growth optima. […] among most temperate species of microalgae, temperature optima for growth are in the range 18–25 ◦C, while some Antarctic diatoms show optima at 4–6 ◦C with no growth above a critical temperature of 7–12 ◦C. By contrast, some tropical diatoms will not grow below 15–17 ◦C. Similar responses are found in macroalgae and seagrasses. However, although some marine plants have a restricted temperature range for growth (so-called stenothermal species; steno = narrow and thermal relates to temperature), most show some growth over a broad range of temperatures and can be considered eurythermal (eury = wide).”
“The language denotes the man. A coarse or refined character finds its expression naturally in a coarse or refined phraseology.” (Christian Nestell Bovee)
Doff, pabulum, astringent, enervate, mountebank, argot, sluice, sequin, indite, vitiate, simper, tarry, casuistry, saturnine, sidle, meretricious, fugacious, esurient, scabrous, disquisition, winsome, sedulous, badinage, abeyance, effrontery, minatory, synecdoche, lubricious, adjure, asperse, encumbrance, careen, desuetude, syllepsis, limn, bathetic, surcease, taut, tribulation, chrysalis, farrier, vane, virago, rictus, gewgaw, vituperate, curdle, ichthyology, abrogate, stultify, approbatory, intrepid, nugatory, contumacious, append, vociferate, tenebrous, arrogate, vermilion, descry, sententious, repine, procrustean, undulate, abstemious, palter, iniquitous, endue, lugubrious, obloquy, obdurate, importunate, apotheosis, obviate, peregrinate, sacrum, …
In a way it makes absolutely no sense for someone like me to spend as much time on this stuff as I have over the last year or two; I almost never engage in conversations with other people as I rarely interact with other people at all (and also tend to avoid conversations when I do, because conversations are usually unpleasant), and when I do both interact and converse with other people I only rarely do so in English, as my first language, and the first language of most of the people with whom I interact regularly, is Danish. If the aim were to improve my vocabulary in order to hide my stupidity (‘make me look smarter’), I’d do a lot better by learning some more fancy-sounding Danish words. As it is, I can’t even remember the last time I looked up a word in a Danish dictionary, but it’s been at least a few years (if not much more than that). Of course, on the other hand, I do read a lot of books, and I only read books in English. So it’s probably not a complete waste of time. But I’ve been thinking lately that I might derive a lot more benefit from these sorts of activities, in the sense that more words would ‘stick’, if I actually had to interact with other people in English on a daily basis. It seems likely that my language production capabilities are not improved as much by these activities as my language consumption capabilities are. What I mean by this is that I frequently encounter new words I’ve worked on in the books I read, but at the same time I’m very rarely forced to actually use any of them in conversations with other people, so I don’t. I don’t know enough about linguistics to tell whether this distinction between production and consumption matters, but it seems to me that it might.
On a related note, I’ve recently had the idea that my activities in these areas might implicitly be lowering my opportunity costs of book-reading compared to personal interactions with others, because these activities make it easier for me to read books but do not at the same time much improve my social skills (e.g. conversational skills; though I’m open to the suggestion that conversational skills and vocabulary size are, in the contexts relevant to this discussion, perhaps best conceived of as orthogonal variables (which doesn’t help at all…)) – which is hardly what I would conceive of as a desirable outcome. Oh well.
As you should have been able to infer from the screencap above and/or the post title, I’ve by now reached another major milestone (here’s the first one) on the vocabulary.com site as I have now ‘mastered’ more than 10.000 words on the site – I figured it made sense to make a post about this and related matters, and this is the post in question. In the time that has passed since I wrote the post to which I link above the site has undergone a few minor changes, but actually most of it works pretty much the same way it did last year; if you’re curious about how the vocabulary.com site works and you have not heard about it before, go have a look at that post before reading on. As I have noted before I don’t fully trust the vocabulary.com dictionary; or at least I like Webster’s online dictionary better, which is why the links above are all to Webster entries. I’ll often ‘check out’ particular words which I’m curious about after having encountered them on vocabulary.com, because sometimes specific interpretations of the words in question are simply wrong, or at least so I would argue; if the site is trying to tell me that a specific word means X, but I ‘know’ that it doesn’t and the Webster entry also provides zero support for this specific usage/interpretation – or actively ‘disagrees’ – then I go with Webster and I’ll get annoyed at the people behind vocabulary.com (again). 
One thing to note when making comparisons here is that in general I believe the vocabulary.com dictionary covers a greater ‘range’ of meanings, which also means that if you look up the entries to which I link above you might fail to appreciate how many different types of questions might be required for someone to ‘master’ the words on vocabulary.com; if a word has some rare meaning in a very specific context, you can expect vocabulary.com to ask you about that before you master the word (and you can expect a subset of those questions to be poorly worded, making you angry at the programmers behind the site). This also means that even if you think you know a word, the site may still cause you some challenges along the way.
I’ve used the site pretty much every week during the last year, though in some periods I used the site very little; the relative inactivity meant that I dropped out of the top 100 list for a while, but over the last weeks I’ve done some more work on the site, and I’m now back in the top 100. So I seem to focus more on improving my vocabulary than do most users on the site, which I actually find somewhat curious given that this tool has apparently been introduced to thousands of children throughout the US. On the other hand I’ve put in a lot of hours when you add them all together (the site actually logging the hours you put in is incidentally a new feature which was not present when I posted my first couple of posts about the site a year ago; I actually didn’t like this feature to start with, in part because I realized how much time I’d spent on this stuff).
The site is in my opinion very bad at explaining how to properly use it to learn new words in the semi-long run, so I should probably explain why I recently came to ‘rediscover’ my joy of using the site. The main factor rekindling my interest was that I discovered how to use ‘lists’ to focus on new words. If you play the challenge without any bells and whistles and never add lists or anything, you’ll at some point get to a situation where you may well be given 500 questions without ‘mastering’ more than one or two new words; the site will recycle and recycle, asking you hundreds of questions about words you’ve already mastered and occasionally asking you about a new word which you’ll never get enough questions about to actually ever master – this is incredibly frustrating, to the point where I last year decided to send the vocabulary.com staff an email suggesting they make changes to the algorithms, because this just seemed insane and probably killed the motivation of a lot of users. You’d put in 20 hours almost without being allowed to achieve mastery of any of the new words, then suddenly you’d ‘master’ more than a thousand words one after the other, because now the site could finally be bothered to allow you to show that you’ve mastered those words it last asked you about last April – or whenever. Or not – I have a suspicion that a lot of users have given up before this point was reached and just said ‘screw this’ before getting to the mastery questions at the end of the line, and that stuff like this may be part of the reason why I’m in the top 100 list now.
If this is true it’s sort of sad, because it seems like such a big missed opportunity; what you’d ideally want is not a site useful for learning a few thousand words, after which the way the site is coded will contribute strongly to making many people sick of it, but rather a site which mixes new words and old in an optimal manner that encourages users to keep using the site in the long run. People may argue about what’s an optimal mix, but I don’t think you can argue with a straight face that the current configuration is anywhere near this point – and if the perceived optimal mix is different for different people, why not allow users to have an influence on this variable in the first place? In a way the site implicitly does, in an admittedly roundabout manner, give people some influence on these sorts of variables via the lists, but I remained unaware of this for a very long time, so a lot of users presumably don’t know about it. Either way I certainly think I’m justified in assuming that far more care has been taken to optimize the user experience early on than to make sure the site remains useful to people who’ve already mastered a lot of words; I’d argue that the site has an excessive focus on review questions compared to questions about new words, and from personal experience this problem seems to get bigger the more words you learn.
Adding to the problems mentioned above, it also does not help that some of the review questions – not many of them, but some – are so poorly thought out that you can’t really tell what the right answer is supposed to be despite knowing very well what the word means. This means you risk getting stuck in loops where a substantial proportion of the questions you’re asked are about words you already know, at least in part because the questions are bad: if you answer a tricky review question like that incorrectly, you’ll be given quite a few more questions in the future about a word you don’t care about and don’t want to answer questions about anymore, because an incorrect answer to a review question is always taken by the site as an indication that you don’t understand the word as well as you should – and never as an indication that someone should seriously have a closer look at some of those shitty questions. (Again, there aren’t that many of them, but they’re very annoying to someone like me.)
So in short, if you’re contemplating using the site or already do, don’t do what I did – instead of just playing the basic challenge, at some point it becomes necessary to start exploring the lists. If you add a list to learn, the site will mostly (though not exclusively) focus on the words on the list you’re currently learning, avoiding the outcome outlined above. You can add more than one list simultaneously. I’ll put it bluntly – if you don’t use lists, this site will eventually kill pretty much all desire to use it, because you’ll eventually get to a point where you feel you’re not making any progress while also having the distinct impression that the site actively refuses to give you any opportunities to make progress. I can’t be the only person who until recently did not use lists, and frankly without lists this site is a disaster waiting to happen. If you use lists well, however, it is a very useful tool.
The site does not help you with grammar – if you know about a site that does, I’d be curious to hear about it in the comments below. On a related note, I thought I should end this post with this quite amusing quote from Jerome K. Jerome’s book Three Men on the Bummel, published in 1900:
“In the course of the century, I am inclined to think that Germany will solve her difficulty in this respect by speaking English. Every boy and girl in Germany, above the peasant class, speaks English. Were English pronunciation less arbitrary, there is not the slightest doubt but that in the course of a very few years, comparatively speaking, it would become the language of the world. All foreigners agree that, grammatically, it is the easiest language of any to learn. A German, comparing it with his own language, where every word in every sentence is governed by at least four distinct and separate rules, tells you that English has no grammar. A good many English people would seem to have come to the same conclusion; but they are wrong. As a matter of fact, there is an English grammar, and one of these days our schools will recognise the fact, and it will be taught to our children, penetrating maybe even into literary and journalistic circles. But at present we appear to agree with the foreigner that it is a quantity neglectable. English pronunciation is the stumbling-block to our progress. English spelling would seem to have been designed chiefly as a disguise to pronunciation. It is a clever idea, calculated to check presumption on the part of the foreigner; but for that he would learn it in a year.”
i. “Without feeling abashed by my ignorance, I confess that I am absolutely unable to say. In the absence of an appearance of learning, my answer has at least one merit, that of perfect sincerity.” (Jean Henri Fabre)
ii. “It is right to be content with what we have, but never with what we are.” (James Mackintosh)
iii. “God is a hypothesis constructed by man to help him understand what existence is all about.” (Julian Huxley)
iv. “Sooner or later, false thinking brings wrong conduct.” (-ll-)
v. “A man has no reason to be ashamed of having an ape for his grandfather.” (Thomas Henry Huxley)
vi. “The improver of natural knowledge absolutely refuses to acknowledge authority, as such. For him, scepticism is the highest of duties; blind faith the one unpardonable sin.” (-ll-)
vii. “Perhaps the most valuable result of all education is the ability to make yourself do the thing you have to do, when it ought to be done, whether you like it or not; it is the first lesson that ought to be learned; and, however early a man’s training begins, it is probably the last lesson that he learns thoroughly.” (-ll-)
viii. “The doctrine that all men are, in any sense, or have been, at any time, free and equal, is an utterly baseless fiction.” (-ll-)
ix. “Try to learn something about everything and everything about something.” (-ll-)
x. “The known is finite, the unknown infinite; intellectually we stand on an islet in the midst of an illimitable ocean of inexplicability. Our business in every generation is to reclaim a little more land, to add something to the extent and the solidity of our possessions.” (-ll-)
xi. “There will always be people who are ahead of the curve, and people who are behind the curve. But knowledge moves the curve.” (Bill James)
xii. “No field of knowledge is so transparently simple as another’s.” (Michael Flynn)
xiii. “If we would have new knowledge, we must get us a whole world of new questions.” (Susanne Langer)
xiv. “Any ethics that needs religion is bad ethics […] If you view religion as necessary for ethics, you’ve reduced us to the ethical level of 4 year olds. “If you follow these commandments you’ll go to heaven, if you don’t you’ll burn in hell” is just a spectacular version of the carrots and sticks with which you raise your children.” (Susan Neiman)
xv. “Truths are not relative. What is relative are opinions about truth.” (Nicolás Gómez Dávila)
xvi. “To tolerate does not mean to forget that what we tolerate does not deserve anything more.” (-ll-)
xvii. “At all times pseudoprofound aphorisms have been more popular than rigorous arguments.” (Mario Bunge)
xviii. “Instead […] of saying that Man is the creature of Circumstance, it would be nearer the mark to say that Man is the architect of Circumstance. It is Character which builds an existence out of Circumstance.” (George Lewes)
xix. “A man must be himself convinced if he is to convince others. The prophet must be his own disciple, or he will make none. Enthusiasm is contagious: belief creates belief.” (-ll-)
xx. “No deeply-rooted tendency was ever extirpated by adverse argument. Not having originally been founded on argument, it cannot be destroyed by logic.” (-ll-)
Sorry for the infrequent updates. I realized blogging Wodehouse books takes more time than I’d imagined, so posting this sort of stuff is probably a better idea.
“On the first day of the evacuation, only 7,669 men were evacuated, but by the end of the eighth day, a total of 338,226 soldiers had been rescued by a hastily assembled fleet of over 800 boats. Many of the troops were able to embark from the harbour’s protective mole onto 39 British destroyers and other large ships, while others had to wade out from the beaches, waiting for hours in the shoulder-deep water. Some were ferried from the beaches to the larger ships by the famous little ships of Dunkirk, a flotilla of hundreds of merchant marine boats, fishing boats, pleasure craft, and lifeboats called into service for the emergency. The BEF lost 68,000 soldiers during the French campaign and had to abandon nearly all of their tanks, vehicles, and other equipment.”
One way to make sense of the scale of the operations here is to compare them with the naval activities on D-day four years later. The British evacuated more people from France during three consecutive days in 1940 (30th and 31st of May, and 1st of June) than the Allies (Americans and British combined) landed on D-day four years later, and the British evacuated roughly as many people on the 31st of May (68,014) as they landed by sea on D-day (75,215). Here’s a part of the story I did not know:
“Three British divisions and a host of logistic and labour troops were cut off to the south of the Somme by the German “race to the sea”. At the end of May, a further two divisions began moving to France with the hope of establishing a Second BEF. The majority of the 51st (Highland) Division was forced to surrender on 12 June, but almost 192,000 Allied personnel, 144,000 of them British, were evacuated through various French ports from 15–25 June under the codename Operation Ariel. […] More than 100,000 evacuated French troops were quickly and efficiently shuttled to camps in various parts of southwestern England, where they were temporarily lodged before being repatriated. British ships ferried French troops to Brest, Cherbourg, and other ports in Normandy and Brittany, although only about half of the repatriated troops were deployed against the Germans before the surrender of France. For many French soldiers, the Dunkirk evacuation represented only a few weeks’ delay before being killed or captured by the German army after their return to France.”
ii. A pretty awesome display by the current world chess champion:
If you feel the same way I do about Maurice Ashley, you’ll probably want to skip the first few minutes of this video. Don’t miss the games, though – this is great stuff. Do keep in mind when watching this video that the clock is a really important part of this event; other players in the past have played a lot more people at the same time while blindfolded than Carlsen does here – “Although not a full-time chess professional [Najdorf] was one of the world’s leading chess players in the 1950s and 1960s and he excelled in playing blindfold chess: he broke the world record twice, by playing blindfold 40 games in Rosario, 1943, and 45 in São Paulo, 1947, becoming the world blindfold chess champion” (link) – but a game clock changes things a lot. A few comments and discussion here.
In very slightly related news, I recently got in my first win against a grandmaster in a bullet game on the ICC.
iii. Gastric-brooding frog.
“The genus was unique because it contained the only two known frog species that incubated the prejuvenile stages of their offspring in the stomach of the mother. […] What makes these frogs unique among all frog species is their form of parental care. Following external fertilization by the male, the female would take the eggs or embryos into her mouth and swallow them. […] Eggs found in females measured up to 5.1 mm in diameter and had large yolk supplies. These large supplies are common among species that live entirely off yolk during their development. Most female frogs had around 40 ripe eggs, almost double that of the number of juveniles ever found in the stomach (21–26). This means one of two things, that the female fails to swallow all the eggs or the first few eggs to be swallowed are digested. […] During the period that the offspring were present in the stomach the frog would not eat. […] The birth process was widely spaced and may have occurred over a period of as long as a week. However, if disturbed the female may regurgitate all the young frogs in a single act of propulsive vomiting.”
Fascinating creatures. Unfortunately they’re no longer around (both species are classified as extinct).
iv. I’m sort of conflicted about what to think about this:
“Epidemiological studies show that patients with type-2-diabetes (T2DM) and individuals with a diabetes-independent elevation in blood glucose have an increased risk for developing dementia, specifically dementia due to Alzheimer’s disease (AD). These observations suggest that abnormal glucose metabolism likely plays a role in some aspects of AD pathogenesis, leading us to investigate the link between aberrant glucose metabolism, T2DM, and AD in murine models. […] Recent epidemiological studies demonstrate that individuals with type-2 diabetes (T2DM) are 2–4 times more likely to develop AD (3–5), individuals with elevated blood glucose levels are at an increased risk to develop dementia (5), and those with elevated blood glucose levels have a more rapid conversion from mild cognitive impairment (MCI) to AD (6), suggesting that disrupted glucose homeostasis could play a […] causal role in AD pathogenesis. Although several prominent features of T2DM, including increased insulin resistance and decreased insulin production, are at the forefront of AD research (7–10), questions regarding the effects of elevated blood glucose independent of insulin resistance on AD pathology remain largely unexplored. In order to investigate the potential role of glucose metabolism in AD, we combined glucose clamps and in vivo microdialysis as a method to measure changes in brain metabolites in awake, freely moving mice during a hyperglycemic challenge. Our findings suggest that acute hyperglycemia raises interstitial fluid (ISF) Aβ levels by altering neuronal activity, which increases Aβ production. […] Since extracellular Aβ, and subsequently tau, aggregate in a concentration-dependent manner during the preclinical period of AD while individuals are cognitively normal (27), our findings suggest that repeated episodes of transient hyperglycemia, such as those found in T2DM, could both initiate and accelerate plaque accumulation. 
Thus, the correlation between hyperglycemia and increased ISF Aβ provides one potential explanation for the increased risk of AD and dementia in T2DM patients or individuals with elevated blood glucose levels. In addition, our work suggests that KATP channels within the hippocampus act as metabolic sensors and couple alterations in glucose concentrations with changes in electrical activity and extracellular Aβ levels. Not only does this offer one mechanistic explanation for the epidemiological link between T2DM and AD, but it also provides a potential therapeutic target for AD. Given that FDA-approved drugs already exist for the modulation of KATP channels and previous work demonstrates the benefits of sulfonylureas for treating animal models of AD (26), the identification of these channels as a link between hyperglycemia and AD pathology creates an avenue for translational research in AD.”
Why am I conflicted? Well, on the one hand it’s nice to know that they’re making progress in terms of figuring out why people get Alzheimer’s and potential therapeutic targets are being identified. On the other hand this – “our findings suggest that repeated episodes of transient hyperglycemia […] could both initiate and accelerate plaque accumulation” – is bad news if you’re a type 1 diabetic (I’d much rather have them identify risk factors to which I’m not exposed).
v. I recently noticed that Khan Academy has put up some videos about diabetes. From the few I’ve had a look at, they don’t seem to contain much that I don’t already know, so I’m not sure I’ll explore this playlist in more detail, but I figured I might as well share a few of the videos here; the first one is about the pathophysiology of type 1 diabetes and the second one’s about diabetic nephropathy (kidney disease):
vi. On Being the Right Size, by J. B. S. Haldane. A neat little text. A few quotes:
“To the mouse and any smaller animal [gravity] presents practically no dangers. You can drop a mouse down a thousand-yard mine shaft; and, on arriving at the bottom, it gets a slight shock and walks away, provided that the ground is fairly soft. A rat is killed, a man is broken, a horse splashes. For the resistance presented to movement by the air is proportional to the surface of the moving object. Divide an animal’s length, breadth, and height each by ten; its weight is reduced to a thousandth, but its surface only to a hundredth. So the resistance to falling in the case of the small animal is relatively ten times greater than the driving force.
An insect, therefore, is not afraid of gravity; it can fall without danger, and can cling to the ceiling with remarkably little trouble. It can go in for elegant and fantastic forms of support like that of the daddy-longlegs. But there is a force which is as formidable to an insect as gravitation to a mammal. This is surface tension. A man coming out of a bath carries with him a film of water of about one-fiftieth of an inch in thickness. This weighs roughly a pound. A wet mouse has to carry about its own weight of water. A wet fly has to lift many times its own weight and, as everyone knows, a fly once wetted by water or any other liquid is in a very serious position indeed. An insect going for a drink is in as great danger as a man leaning out over a precipice in search of food. If it once falls into the grip of the surface tension of the water—that is to say, gets wet—it is likely to remain so until it drowns. A few insects, such as water-beetles, contrive to be unwettable; the majority keep well away from their drink by means of a long proboscis. […]
It is an elementary principle of aeronautics that the minimum speed needed to keep an aeroplane of a given shape in the air varies as the square root of its length. If its linear dimensions are increased four times, it must fly twice as fast. Now the power needed for the minimum speed increases more rapidly than the weight of the machine. So the larger aeroplane, which weighs sixty-four times as much as the smaller, needs one hundred and twenty-eight times its horsepower to keep up. Applying the same principle to the birds, we find that the limit to their size is soon reached. An angel whose muscles developed no more power weight for weight than those of an eagle or a pigeon would require a breast projecting for about four feet to house the muscles engaged in working its wings, while to economize in weight, its legs would have to be reduced to mere stilts. Actually a large bird such as an eagle or kite does not keep in the air mainly by moving its wings. It is generally to be seen soaring, that is to say balanced on a rising column of air. And even soaring becomes more and more difficult with increasing size. Were this not the case eagles might be as large as tigers and as formidable to man as hostile aeroplanes.
But it is time that we pass to some of the advantages of size. One of the most obvious is that it enables one to keep warm. All warmblooded animals at rest lose the same amount of heat from a unit area of skin, for which purpose they need a food-supply proportional to their surface and not to their weight. Five thousand mice weigh as much as a man. Their combined surface and food or oxygen consumption are about seventeen times a man’s. In fact a mouse eats about one quarter its own weight of food every day, which is mainly used in keeping it warm. For the same reason small animals cannot live in cold countries. In the arctic regions there are no reptiles or amphibians, and no small mammals. The smallest mammal in Spitzbergen is the fox. The small birds fly away in winter, while the insects die, though their eggs can survive six months or more of frost. The most successful mammals are bears, seals, and walruses.” [I think he’s a bit too categorical in his statements here and this topic is more contested today than it probably was when he wrote his text – see wikipedia’s coverage of Bergmann’s rule].
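The scaling arithmetic running through these passages is easy to check. Below is a short sketch of the surface-versus-volume relations Haldane relies on – my own illustration, not code from the essay, though the figures (divide each dimension by ten; five thousand mice; “about seventeen times”) are taken from the quote:

```python
# Haldane's falling-animal argument: shrink linear dimensions by 10x.
scale = 1 / 10  # divide length, breadth and height each by ten

weight_factor = scale ** 3   # weight ~ volume ~ length cubed
surface_factor = scale ** 2  # surface ~ length squared

print(round(weight_factor, 3))  # 0.001 -> weight reduced to a thousandth
print(round(surface_factor, 2))  # 0.01  -> surface only to a hundredth

# Air resistance is proportional to surface, the driving force (weight) to
# volume, so the relative resistance to falling grows tenfold:
print(round(surface_factor / weight_factor))  # 10

# Heat loss scales with surface too. Five thousand mice weigh as much as a
# man; each mouse has 1/5000 of the mass, hence (1/5000)**(2/3) of the
# surface, so their combined surface (and food/oxygen consumption) is:
print(round(5000 * (1 / 5000) ** (2 / 3), 1))  # 17.1 -> "about seventeen times"
```

The same surface-to-volume ratio thus drives both of Haldane’s conclusions: small animals fall safely but lose heat quickly.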
Providing practical support for people with autism spectrum disorder – supported living in the community
“The last few chapters managed to almost push me all the way towards giving the book one star. You can’t just claim in a book like this that very expensive and comprehensive support systems which you’re dreaming about are cost-effective without citing a single study, especially not in a context where you’ve just claimed that activities which usually end up costing a lot of money will end up saving money. If you envision a much more comprehensive support system, you can’t not address obvious cost drivers.
Some interesting stuff and important observations are included in the book, but the level of coverage is not high and you should not take my two star (‘ok’) rating to indicate that I am in agreement with the author. The main reason why I ended up finishing it was that it was easy to read, not that it was a good book.”
There are no inline citations, and the examples of things people with ASD might need help with, and of ways to help them, seem to be derived from anecdotes rather than systematic research. The author repeatedly emphasizes that aid should be individualized and focused on the specific needs of the person with ASD, and although this makes a lot of sense it also makes her recommendations very difficult to evaluate. It’s a bit like other areas of psychological research where therapists will often ‘mix methods’ when dealing with specific individuals: it becomes impossible to figure out which components of the treatment regime are actually helpful and which are not, because even if people were to try to find out, power issues would make it impossible to estimate the relevant interaction effects even in theory. It should be made clear, though, that the author makes no attempt at that kind of evaluation.
However, I did find some of the observations and specific points raised in the book interesting, and I’ll mention some of these in the coverage below.
“Professional support needs to be developed and executed in partnership with people and families. For support to be successful, all concerned need to be aware of its objectives and agree with the plan and strategies involved.”
I decided to start out the coverage with this quote because the book is full of postulates like these. Specific cases are often used to illustrate such points, but don’t expect any references to actual research on these topics – it’s not that kind of book. The approach employed makes the book incredibly hard for me to evaluate; some of the ideas are presumably sound, but it’s difficult to tell which, because the supporting research is never presented. In theory it’s sometimes easy to see how a given approach might lead to, or solve, specific problems, but you’ll often get the idea that there are tradeoffs at play which the advice does not take into account, so that in specific cases an alternative solution or piece of advice might lead to better outcomes. For instance, you might ideally prefer that the parents of an adult child living outside the parental home not have too much influence on the support strategies employed, even though they might traditionally have had a significant role to play in support provision, because the family’s approach to problem solving might be counterproductive; a support plan not supported by the parents might then still in some cases be preferable to one which would be. The emphasis on individualized care throughout the book is, it must be said, helpful in terms of thinking about such potential problems, but you are still left with the impression that a lot of the suggestions in the book are not based on anywhere near a sufficient amount of data or research; and although they’re often ‘common sense suggestions’, it’s by now quite clear from a lot of different areas of psychological research that common sense can sometimes deceive us.
A general problem I have with the book is that the author seems too confident about which support approaches/strategies might, or might not, work – and perhaps a key reason she seems overconfident is that she does not provide the research results one would in my opinion need in order to draw conclusions like the ones she draws, regardless of whether such research actually exists. A related problem is that quite a few of the concluding statements in the book are at least partly normative statements (which I generally dislike encountering in non-fiction), not descriptive statements (which I do like to encounter). She repeatedly makes claims about what people with ASD are like without referring to research on these topics, so you’re left wondering how she knows these things, and whether the claims are actually true in general or only true of the small subset of people with ASD she has encountered or read about. Many of the observations seemed familiar to me (I’ve encountered them in other textbooks, or have personal experience with the issues mentioned), so I’d be inclined to grant that many of them are valid, but you are sometimes wondering how she knows what she claims to know. Another big problem is the way the material is organized: the chapters are structured in a way that makes it relatively hard for a reader to know which parts of a given chapter might actually be useful for a specific individual curious about these things. An alternative would have been to split the coverage into chapters about support provision for people with low support requirements and other chapters about support provision for people with high support requirements. It’s made clear in the book that needs differ across individuals, but you’re often left wondering which passages are most relevant to which groups of people with ASD.
One might argue that ‘people ought to be able to tell this on their own’, but then we run into the problem that people with ASD tend to be bad at asking for support, perhaps not realizing that they need it, and the problem that people without ASD who do not know much about ASD may have a difficult time figuring out which types of help might be useful in a specific setting. This stuff is difficult as it is, and I don’t think the way the coverage is structured in this book helps at all with solving these sorts of issues.
Oh well, let’s move on…:
“The ultimate aim of support should be to improve skills and develop strategies to enable the person with ASD to feel in control and better able to cope independently.”
“The fact is that extremely able people with ASD frequently struggle with day-to-day life skills. Very intelligent students cannot organize themselves to launder their clothes, and may get up to find they are all dirty or still wet in the machine from several days ago. This is one of those superficially trivial things that can be a major problem to the person it repeatedly happens to. On a practical domestic front, what may be a massive difficulty for a person with ASD, may be an easily solved problem for someone without it. […] People with ASD like to have regular routines. The ability to adhere to routine is an advantage in many situations, and this skill can be used productively. Structure and organization can be brought to running the household. As a plan is constructed, problems can be considered and systems put in place to deal with them. A planning session when the individual collaborates with support to work out a weekly menu and the necessary shopping plan, gives the person more autonomy, than having someone turn up to go shopping or cook with them. Having someone alongside is sometimes necessary, but has the disadvantage of creating dependence. The individual is empowered instead by being facilitated to complete tasks independently. […] The best support methods promote independence. […] The aspects of forward planning can be incredibly challenging for a person with ASD, regardless of their intellectual level. […] As people with ASD have great difficulty seeing consequences or planning ahead, they may find it hard to become motivated if the gratification is not instant. Things have to be broken down and explained in a practical way.”
“Most people instigate minor changes easily. It may be more convenient to vary a normal routine on a particular day, even pleasurable. I might decide that as it is a sunny day I will go out, and do the housework in the evening. As a supporter for someone with ASD it is vital to remember, that he will not have the flexibility of thought that people generally have and so may need routines to be more stringently adhered to. Such a simple adjustment may not be easy, and it may be preferable to stay with the usual unless there is a strong argument for change. The world becomes easier to interpret if as much as possible is held constant. […] Change is easier to manage if we know it is coming. The better prepared someone is for a change, generally the easier it is to cope. For people with ASD, it helps if the preparation can be as concrete as possible.”
“The paradoxical nature of ASD is demonstrated again in attention span. The person will be absolutely absorbed, blocking out the rest of the world, when he is engrossed in something of particular interest; but at other times his attention span can be low. Most people will recognize the experience of being called away to answer a phone call, or speaking to a visitor and completely forgetting that they were in the middle of doing something. This distractibility is a common experience for those with ASD. […] I often think that ASD is the source of the stereotype of the ‘absent-minded professor’.”
A personal remark on these topics is perhaps in order here. I add it because it is my impression that mass media portrayals of individuals with these sorts of traits are generally, if anything, favourably inclined, in the sense that distractibility, forgetfulness and similar traits are in those contexts generally traits you smile about and find mildly funny. My impression is that the first word that springs to mind in such contexts is ‘amusing’, or something along those lines, not ‘annoying’. The downsides are usually to some extent neglected. However, I know from Real Life experience that things like forgetfulness and distractibility can be really annoying. Forgetting your key and locking yourself out of your flat (multiple times); forgetting to bring home your laptop from the university and having to go back and get it while worrying about whether it has been stolen in the meantime (it fortunately wasn’t); getting caught up in an interesting exchange on the internet, causing you to forget that you turned on the stove an hour ago (or was it two hours ago? Time flies when you’re engaged in stuff that interests you…), so that you now have to spend another hour trying to clean the pot and separate the charred chunks of vegetables from the metal; getting a burn while taking something out of the oven because you were thinking about something else and didn’t pay sufficient attention to the task at hand – these things range from the annoying to the dangerous, as also noted in the book: “Depending on what we were doing, finding that we have left something in the middle of it can be anything from mildly annoying (left the kitchen half cleaned) to very distressing (left the pan on the hob and burnt the house down).” Similar observations might be made in the context of ‘clumsiness’ (not a diagnostic trait, but apparently often observed) and combinations of these traits.
The sorts of things people often find amusing when they happen to, say, cartoon characters are a lot less funny when they happen to you personally, especially if you are having difficulties finding ways to address the issues and other people are impacted by them as well. Problems like these may cause amusement among others, but I know from both personal experience and the experiences of a good friend of mine that they may also cause profound exasperation among the people around you.
“Difficulty with communication is a core problem for those with autism spectrum disorder (ASD). Some people have little or no speech, some have an extensive vocabulary, some make grammatical mistakes, some have a wide use of language – but all people with ASD have problems with communication. These problems are extremely complex, leading to much misunderstanding, confusion and stress. The more sophisticated the person’s language is the greater the problem may be. Ros Blackburn, a highly intelligent British woman with ASD who gives many talks on the subject, highlights that a person’s ability can also be their greatest disability. As a verbal, intellectually able woman, she finds that people do not appreciate the support that she needs in everyday and social situations. The power to have a seemingly normal conversation can cause many troubles for a person with ASD by giving a false impression of their comprehension. […] Care should be taken not to give too much information at one time. People with ASD generally process language slowly and have difficulty handling a lot of verbal input. […] People with ASD work through matters slowly, and speed of discussion is problematic. […] So time needs to be offered to assimilate information before a response is expected. […] For most people with ASD, it is easier to talk if there are fewer people in the group. In a large meeting there is too much to take in, and few silences in which to process what has been said. […] They almost always prefer one to one conversation to group discussion, and small intimate gatherings to parties.”
“We all make blunders in relationships. We misjudge what is acceptable in a situation, mistake another person’s intention or misinterpret someone’s meaning. We then feel upset, isolated and embarrassed. People with ASD are more prone to doing this sort of thing than most – and they do experience the same unpleasant aftermath. […] Coping well is a double-edged sword; the better a person manages, the more likely he is to be judged harshly when he does make a mistake. […] Some people with ASD are able to think their way through social situations. They teach themselves or have been taught to interpret non-verbal signals. They can use cognition to remember that the other person may feel differently to them, and to compute what their perception and emotions may be. This is a slow, cumbersome method compared to the automatic, rapid assimilation that those without ASD make. Even those who compensate well appear slow, stilted, awkward, and are liable to make significant mistakes.”
“Neurotypical people (NTs) are as lacking in empathy towards people with ASD as vice versa.” This is in my opinion a bold claim and I’m not sure it’s true, but I think she does have a point here. I think it’s likely that NTs often judge people with ASD based on the standards of NTs; standards which may well be impossible for the person with ASD to ever meet, regardless of the amount of effort the individual puts into meeting those standards. She however argues later on in the coverage that: “Most people are not unkind, but are unthinking or, because of lack of knowledge about disability, make incorrect assumptions.” This seems plausible.
“The rigidity of AS thinking and the tendency to obsess means that a worry can escalate and dominate a person’s life. […] As a basic rule of thumb, regular, familiar routines are better stress busters than a novel idea. A holiday, for example, is more likely to add to stress than relieve it.” (This sounds very familiar, and I’ll keep this quote in mind…)
“Many people with ASD remain more susceptible to parental influence than the majority of their peers. […] All people with ASD, including the highly intelligent, are susceptible to being led by others and it is very easy for the person offering support, either knowingly or unwittingly, to lead the person down a route, which is not the course he wants to follow.”
“Social inabilities create problems for people with autism spectrum disorder (ASD) in establishing peer relationships and so naturally accessing the support that evolves between members of groups, such as work colleagues, fellow students or regulars in the pub. Asking for assistance appropriately will be challenging for people with ASD. […] adults often only appear on the services ‘radar’ when they reach crisis point. Forty-nine per cent of adults with ASD are still living with their parents. […] Only 6 per cent of adults with ASD are in full-time employment [no sources provided, US]”
“It is not always possible to tell from meeting a person or even from having regular contact with him that he has autism spectrum disorder (ASD). Individuals therefore face the decision as to whether or not to disclose that they have the disorder. […] Generally disclosure is on a sliding scale. Most people tell close family; whilst it would probably be inappropriate to tell a casual stranger. Some will disclose to professionals, but prefer to keep the information from social contacts. […] There are no easy answers as to who and when to tell. Disclosure to professionals in formal situations appears advisable so that all are aware of the condition and any differences are accepted and planned for. Informal social situations are more fluid and difficult to read.”
“NAS statistics show that only six per cent of people with autism spectrum disorder (ASD) (12% of those with Asperger Syndrome (AS)) in the UK are in full-time employment. This compares with 49 per cent of people with general disabilities who are employed. […] Given the talents which many with ASD have, this is a great loss to the workforce. […] Traits common to ASD, such as conscientiousness, attention to detail, perseverance and loyalty, are great assets to an employer. […] People with ASD tend to be loyal, to stick to routines and dislike change. […] The characteristics of the disorder mean that the individual may not make a good impression at interview. Social skills will not be a forte. […] The employer needs to be aware of any ASD traits the person displays, such as lack of eye contact. Questions may be prepared with support so that they elicit the information needed, but are specific, factual and clear. Broad questions, such as, ‘Tell me about yourself’, will leave the interviewee floundering. […] Interviews are not always the most appropriate way of assessing candidates, especially not those with ASD.”
The author does not address in the book the specific problems and tradeoffs related to the question of whether or not it’s optimal to disclose an autism spectrum disorder to a potential employer, but rather seems to take it for granted that the interviewee should always disclose, preferably beforehand. I’ve given this a lot of thought, and I’m really not convinced this is always the right approach.
i. A lecture on mathematical proofs:
ii. “In the fall of 1944, only seven percent of all bombs dropped by the Eighth Air Force hit within 1,000 feet of their aim point.”
From wikipedia’s article on Strategic bombing during WW2. The article has a lot of stuff. The ‘RAF estimates of destruction of “built up areas” of major German cities’ numbers in the article made my head spin – they didn’t bomb the Germans back to the stone age, but they sure tried. Here’s another observation from the article:
“After the war, the U.S. Strategic Bombing Survey reviewed the available casualty records in Germany, and concluded that official German statistics of casualties from air attack had been too low. The survey estimated that at a minimum 305,000 were killed in German cities due to bombing and estimated a minimum of 780,000 wounded. Roughly 7,500,000 German civilians were also rendered homeless.” (The German population at the time was roughly 70 million).
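To make the parenthetical concrete, the survey’s minimum figures can be set against the rough population number – my own back-of-the-envelope arithmetic, not from the article:

```python
# Casualty figures (minimums from the U.S. Strategic Bombing Survey, as
# quoted above) relative to a rough German population of 70 million.
killed, wounded, homeless = 305_000, 780_000, 7_500_000
population = 70_000_000

print(f"killed:   {killed / population:.2%}")    # 0.44% of the population
print(f"wounded:  {wounded / population:.2%}")   # 1.11%
print(f"homeless: {homeless / population:.2%}")  # 10.71%
```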
iii. Also war-related: Eddie Slovik:
“Edward Donald “Eddie” Slovik (February 18, 1920 – January 31, 1945) was a United States Army soldier during World War II and the only American soldier to be court-martialled and executed for desertion since the American Civil War.
Although over 21,000 American soldiers were given varying sentences for desertion during World War II, including 49 death sentences, Slovik’s was the only death sentence that was actually carried out.
During World War II, 1.7 million courts-martial were held, representing one third of all criminal cases tried in the United States during the same period. Most of the cases were minor, as were the sentences. Nevertheless, a clemency board, appointed by the Secretary of War in the summer of 1945, reviewed all general courts-martial where the accused was still in confinement. That Board remitted or reduced the sentence in 85 percent of the 27,000 serious cases reviewed. The death penalty was rarely imposed, and those cases typically were for rapes or murders. […] In France during World War I from 1917 to 1918, the United States Army executed 35 of its own soldiers, but all were convicted of rape and/or unprovoked murder of civilians and not for military offenses. During World War II in all theaters of the war, the United States military executed 102 of its own soldiers for rape and/or unprovoked murder of civilians, but only Slovik was executed for the military offense of desertion. […] of the 2,864 army personnel tried for desertion for the period January 1942 through June 1948, 49 were convicted and sentenced to death, and 48 of those sentences were voided by higher authority.”
What motivated me to read the article was mostly curiosity about how many people were actually executed for deserting during the war, a question I’d never encountered any answers to previously. The US number turned out to be, well, let’s just say it’s lower than I’d expected. American soldiers who chose to desert during the war seem to have had much, much better chances of surviving the war than had soldiers who did not. Slovik was not a lucky man. On a related note, given numbers like these I’m really surprised desertion rates were not much higher than they were; presumably community norms (‘desertion = disgrace’, a disgrace which would probably rub off on other family members) played a key role here.
iv. Chess and infinity. I haven’t posted this link before even though the thread is a few months old, and I figured that given that I just had a conversation on related matters in the comment section of SCC (here’s a link) I might as well repost some of this stuff here. Some key points from the thread (I had to make slight formatting changes to the quotes because wordpress had trouble displaying some of the numbers, but the content is unchanged):
“Shannon has estimated the number of possible legal positions to be about 10^43. The number of legal games is quite a bit higher, estimated by Littlewood and Hardy to be around 10^(10^5) (commonly cited as 10^(10^50), perhaps due to a misprint). This number is so large that it can’t really be compared with anything that is not combinatorial in nature. It is far larger than the number of subatomic particles in the observable universe, let alone stars in the Milky Way galaxy.
As for your bonus question, a typical chess game today lasts about 40 to 60 moves (let’s say 50). Let us say that there are 4 reasonable candidate moves in any given position. I suspect this is probably an underestimate if anything, but let’s roll with it. That gives us about 4^(2×50) ≈ 10^60 games that might reasonably be played by good human players. If there are 6 candidate moves, we get around 10^77, which is in the neighbourhood of the number of particles in the observable universe.”
“To put 10^(10^5) into perspective:
There are 10^80 protons in the Universe. Now imagine inside each proton, we had a whole entire Universe. Now imagine again that inside each proton inside each Universe inside each proton, you had another Universe. If you count up all the protons, you get (10^80)^3 = 10^240, which is nowhere near the number we’re looking for.
You have to have Universes inside protons all the way down to 1250 steps to get the number of legal chess games that are estimated to exist. […]
Imagine that every single subatomic particle in the entire observable universe was a supercomputer that analysed a possible game in a single Planck unit of time (10^-43 seconds, the time it takes light in a vacuum to travel 10^-20 times the width of a proton), and that every single subatomic particle computer was running from the beginning of time up until the heat death of the Universe, 10^1000 years ≈ 10^11 × 10^1000 seconds from now.
Even in these ridiculously favorable conditions, we’d only be able to calculate
10^80 × 10^43 × 10^11 × 10^1000 = 10^1134
possible games. Again, this doesn’t even come close to 10^(10^5) = 10^100000.
Basically, if we ever solve the game of chess, it definitely won’t be through brute force.”
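Since everything above is an order-of-magnitude estimate, the arithmetic is easy to sanity-check. The following Python sketch just re-derives the exponents quoted in the thread; the 4-and-6-candidate-move counts and the “every particle is a Planck-time computer” bound are of course the thread’s rough assumptions, not exact figures:

```python
import math

# Sanity check of the order-of-magnitude estimates quoted above.

# ~4 candidate moves per ply over a ~50-move game (100 plies):
log10_reasonable = 100 * math.log10(4)   # log10 of 4^(2*50)
assert round(log10_reasonable) == 60     # ≈ 10^60 plausible games

# With 6 candidate moves per ply the count grows to roughly 10^78,
# the neighbourhood of the particle count of the observable universe:
log10_six = 100 * math.log10(6)
assert 77 <= log10_six <= 78

# Brute-force bound: 10^80 particles × 10^43 games per second each,
# running for 10^11 × 10^1000 seconds. In log10 terms the exponents add:
log10_bound = 80 + 43 + 11 + 1000
assert log10_bound == 1134

# Still absurdly short of the ~10^(10^5) legal games (exponent 100,000):
assert log10_bound < 10 ** 5
```

Working in log10 space is the only practical option here; the raw numbers overflow any fixed-width representation long before 10^1134.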
v. An interesting resource which a friend of mine recently shared with me and which I thought I should share here as well: Nature Reviews – Disease Primers.
vi. Here are some words I’ve recently encountered on vocabulary.com: augury, spangle, imprimatur, apperception, contrition, ensconce, impuissance, acquisitive, emendation, tintinnabulation, abalone, dissemble, pellucid, traduce, objurgation, lummox, exegesis, probity, recondite, impugn, viscid, truculence, appurtenance, declivity, adumbrate, euphony, educe, titivate, cerulean, ardour, vulpine.
i. “Calumny can injure you only if you reflect yourself in others and not in your conscience.” (Fausto Cercignani).
ii. “Emulation can be positive, if you succeed in avoiding imitation.” (-ll-).
iii. “Your identity is like your shadow: not always visible and yet always present.” (-ll-).
iv. “Sometimes moderation is a bad counselor.” (-ll-).
v. “It is error only, and not truth, that shrinks from inquiry.” (Thomas Paine)
vi. “A long habit of not thinking a thing wrong, gives it a superficial appearance of being right, and raises at first a formidable outcry in defense of custom.” (-ll-)
vii. “A body of men, holding themselves accountable to nobody, ought not to be trusted by any body.” (-ll-)
viii. “All national institutions of churches, whether Jewish, Christian, or Turkish, appear to me no other than human inventions set up to terrify and enslave mankind, and monopolize power and profit.” (-ll-)
ix. “Example has more followers than reason.” (Christian Nestell Bovee)
xii. “Education is an ornament for the prosperous, a refuge for the unfortunate.” (Democritus)
xiii. “There is no such thing as a Scientific Mind. Scientists are people of very dissimilar temperaments doing different things in very different ways. Among scientists are collectors, classifiers and compulsive tidiers-up; many are detectives by temperament and many are explorers; some are artists and others artisans. There are poet-scientists and philosopher-scientists and even a few mystics. What sort of mind or temperament can all these people be supposed to have in common? Obligative scientists must be very rare, and most people who are in fact scientists could easily have been something else instead.” (Peter Medawar)
xiv. “The purpose of scientific enquiry is not to compile an inventory of factual information, nor to build up a totalitarian world picture of natural Laws in which every event that is not compulsory is forbidden. We should think of it rather as a logically articulated structure of justifiable beliefs about nature.” (-ll-)
xv. “the spread of secondary and latterly tertiary education has created a large population of people, often with well-developed literary and scholarly tastes, who have been educated far beyond their capacity to undertake analytical thought.” (-ll-)
xvi. “If a person a) is poorly, b) receives treatment intended to make him better, and c) gets better, no power of reasoning known to medical science can convince him that it may not have been the treatment that restored his health.” (-ll-)
xvii. “I once spoke to a human geneticist who declared that the notion of intelligence was quite meaningless, so I tried calling him unintelligent. He was annoyed, and it did not appease him when I went on to ask how he came to attach such a clear meaning to the notion of lack of intelligence. We never spoke again.” (-ll-)
xviii. “There is no feeling so simple that it is not immediately complicated and distorted by introspection.” (André Gide)
xix. “Men need history; it helps them to have an idea of who they are.” (V. S. Naipaul)
xx. “There is a great deal of difference between the eager man who wants to read a book, and the tired man who wants a book to read.” (G. K. Chesterton)
Here’s a link to the first post in this series. The quotes below are from the book Full Moon, which is one of the books in Wodehouse’s Blandings Castle series. I have not read a book in that series which I did not enjoy.
“I really am feeling astoundingly well. It’s what I’ve always said – alcohol’s a tonic. Where most fellows go wrong is that they don’t take enough of it. […] He never drank tea, having always had a prejudice against the stuff since his friend Buffy Struggles back in the nineties had taken to it as a substitute for alcohol and had perished miserably as a result. (Actually what had led to the late Mr Struggles’s turning in his dinner pail had been a collision in Piccadilly with a hansom cab, but Gally had always felt that this could have been avoided if the poor dear old chap had not undermined his constitution by swilling a beverage whose dangers are recognized by every competent medical authority.)”
“Some little while later Veronica, starting the conversational ball rolling once more, said that she had been bitten on the nose that afternoon by a gnat. Tipton, shuddering at this, said that he had never liked gnats. Veronica said that she, too, did not like gnats, but that they were better than bats. Yes, assented Tipton, oh, sure, yes a good deal better than bats. Of cats Veronica said she was fond, and Tipton agreed that cats as a class were swell. On the subject of rats they were also as one, both holding strong views regarding their lack of charm.
The ice thus broken, the talk flowed pretty easily until Veronica said that perhaps they had better be going in now. Tipton said, “Oh, shoot!” and Veronica said, “I think we’d better,” and Tipton said, “Well, okay, if we must.” His heart was racing and bounding as he accompanied her to the drawing-room. If there had ever been any doubt in his mind that this girl and he were twin souls, it no longer existed. It seemed to him absolutely amazing that two people should think so alike on everything – on gnats, bats, cats, rats, in fact absolutely everything.”
“Tipton removed his gaze from the cow. As a matter of fact, he had seen about as much of it as he wanted to see. A fine animal, but, as is so often the case with cows, not much happening.”
“‘Look here, Guv’nor, will you do something for me?’
‘What?’ asked Lord Emsworth, cautiously.
‘What were you thinking of buying Vee?’
‘I had in mind some little inexpensive trinket, such as girls like to wear. A wrist watch was your aunt’s suggestion.’
‘Good. That fits my plans like the paper on the wall. Go to Aspinall’s in Bond Street. They have wrist watches of all descriptions. And when you get there, tell them that you are empowered to act for F. Threepwood. I left Aggie’s necklace with them to be cleaned, and at the same time ordered a pendant for Vee. Tell them to send the necklace to … Are you following me, Guv’nor?’
‘No,’ said Lord Emsworth.
‘It’s quite simple. On the one hand, the necklace; on the other, the pendant. Tell them to send the necklace to Aggie at the Ritz Hotel, Paris—‘
‘Who’, asked Lord Emsworth, mildly interested, ‘is Aggie?’
‘Come, come, Guv’nor. This is not the old form. My wife.’
‘I thought your wife’s name was Frances.’
‘Well, it isn’t. It’s Niagara.’
‘What a peculiar name.’
‘Her parents spent their honeymoon at the Niagara Falls hotel.’
‘Niagara is a town in America, is it not?’
‘Not so much a town as a rather heavy downpour.’
‘A town, I always understood.’
‘You were misled by your advisers, Guv’nor. But do you mind if we get back to the res. Time presses. Tell these Aspinall birds to mail the necklace to Aggie at the Ritz Hotel, Paris, and bring back the pendant with you. Have no fear that you will be left holding the baby—‘
Again Lord Emsworth was interested. This was the first he’d heard of this.
‘Have you a baby? Is it a boy? How old is he? What do you call him? Is he at all like you?’ he asked, with a sudden pang of pity for the unfortunate suckling.
‘I was speaking figuratively, Guv’nor,’ said Freddie patiently. ‘When I said, “Have no fear that you will be left holding the baby,” I meant, “Entertain no alarm lest they may shove the bill off on you.” The score is all paid up. Have you got it straight?’
‘Let me hear the story in your own words.’
‘There is a necklace and a pendant—‘
‘Don’t go getting them mixed.’
‘I never get anything mixed. You wish me to have the pendant sent to your wife and to bring back—‘
‘No, no, the other way round.’
‘Or, rather, as I was just about to say, the other way round. It is all perfectly clear. Tell me,’ said Lord Emsworth, returning to the subject which really interested him, ‘why is Frances nicknamed Niagara?’
‘Her name isn’t Frances, and she isn’t.’
‘You told me she was. Has she taken the baby to Paris with her?’
Freddie produced a light blue handkerchief from his sleeve and passed it over his forehead.
‘Look here, Guv’nor, do you mind if we call the whole thing off? Not the necklace and pendant sequence, but all this stuff about Frances and babies—‘
‘I like the name Frances.’
‘Me, too. Music to the ears. But shall we just let it go, just forget all about it? We shall both feel easier and happier.’
Lord Emsworth uttered a pleased exclamation.
‘Not Niagara. Chicago. This is the town I was thinking of. There is a town in America called Chicago.'”
“‘I’ve got it,’ he said, returning. ‘The solution came to me in a flash. We will put the pig in Veronica’s room.’
A rather anxious expression stole across Freddie’s face. Of the broad general principle of putting pigs in girls’ rooms he of course approved, but he did not like that word ‘we’. […]
‘What’s the good of putting pigs in Vee’s room?’
‘My dear fellow, have you no imagination? What happens when a girl finds a pig in her room?’
‘I should think she’d yell her head off.’
‘Precisely. I confidently expect Veronica to raise the roof. Whereupon, up dashes young Plimsoll to her rescue. If you can think of a better way to bring two young people together, I should be interested to hear it.'”
“‘Is he wanted by the police?’
‘No, he is not wanted by the police.’
‘How I sympathize with the police,’ said Lady Hermione. ‘I know just how they feel.'”
I’ve been reading Wodehouse lately. I read some of his books on my Kindle as well (8, according to my updates on goodreads – it’s hard to keep track), but it’s harder to take pictures of those – for a complete list, go here or here.
Wodehouse’s novels are nice because you can pretty much read one each day even if you have other stuff going on as well, at least if you have a few hours each day you don’t know what to do with. According to one estimate from Statistics Denmark which I’ve blogged before, the average Dane spends something like 3 hours and 20 minutes per day watching TV; if they spent that time reading books like these instead, there’d be a lot more Danes reading more than 100 books per year than there are.
Over the last year or two I’ve in general kept my blogging of fiction to a minimum, and I’ve also dedicated a lot of effort to making this blog as mind-numbingly boring and irrelevant as possible. So of course it feels terrible to have to take this step now; to suddenly start blogging books which have a strong tendency to make their readers laugh and enjoy themselves. But there’s no way around it – this is stuff that’s easy to blog, and the most plausible alternative seems to be ‘no blogging’. I hope that by blogging books like these I’ll be able to sustain a relatively regular blogging schedule in the period to come. There’s no longer any work involved in reading the books, which should be very helpful; I have already read them, and I now have more than 20 to choose from in terms of what to cover. Wodehouse’s books are really funny, and my impression is that they’ll be easy for me to blog, in the sense that there’s a lot of funny stuff in them and you can get away with quoting from the books without spoiling much. On the other hand, as the picture illustrates, these are mostly paper books, which are not as easy to blog as e-books are; I may find that these posts take so much time and effort that not much work is saved by switching (at least partially) to this sort of coverage. We’ll see how it goes.
I should mention that although I only discovered Wodehouse earlier this year, he’s already on my top five list of fiction authors (Terry Pratchett and Agatha Christie also belong on such a list, as do probably George R. R. Martin and Jasper Fforde – but it’s hard; there are a lot of good authors…).
The first book I’ll cover is Big Money, which I gave 4 stars on goodreads. Below I have added some quotes from the book to illustrate how Wodehouse writes and what he writes about.
“‘I wish I could find some way of making a bit of money,’ he said, resuming his remarks. ‘I don’t seem able to do it, racing. And I don’t seem able to do it at Bridge. But there must be some method. Look at all the wealthy blighters you see running around. They’ve managed to find it. I read a book the other day where a bloke goes up to another bloke in the street – and whispers in his ear – the first bloke does – “A word with you, sir!” Addressing the second bloke, you understand. “A word with you, sir. I know your secret!” Upon which, the second bloke turns ashy white and supports him in luxury for the rest of his life. I thought there might be something in it.’
‘About seven years, I should think.'”
“A low moan escaped Mr Frisby. His face, which was rather like that of a horse, twisted in pain. Of the broad principle of his sister going to Japan he approved, Japan being further away than New York. What rived his very soul was that she should be squandering her cash to tell him so [over the telephone]. A picture postcard from Tokyo, with a cross and a ‘This is my room’ against one of the windows of a hotel, would have met the case. […]
‘Do you know what she did last week?’
Mr Frisby gave a lifelike imitation of a man who has just discovered that he is sitting on an ant’s nest. ‘How the devil should I know what she did last week? Do you think I’m a clairvoyant?'”
“Lord Hoddesdon gasped.
‘You don’t imagine I would be fool enough to go touching Frisby?’
‘Wasn’t that your idea?’
‘Of course not. Certainly not. I was thinking – er – I was wondering – well, to tell you the truth, it crossed my mind that you might possibly be willing to part with a trifle.’
‘It did, eh?’
‘I don’t see why you shouldn’t,’ said Lord Hoddesdon plaintively. ‘You must have plenty. There’s a lot of money in this chaperoning business. When you took on that Argentine girl three years ago you got a couple of thousand pounds.’
‘I got fifteen hundred,’ corrected his sister. ‘In a moment of weakness – I can’t imagine what I was thinking of – I lent you the rest.’
‘Er – well, yes,’ said Lord Hoddesdon, not unembarrassed. ‘That is, in a measure, true. It comes back to me now.’
‘It didn’t come back to me – ever,’ said Lady Vera”.
“Ever since she had read in her paper that morning the plain, blunt statement that she was engaged to be married, she had been feeling oddly pensive. […] A sudden thirst for information seized her. She leaned towards her host.
‘Tell me about Godfrey,’ she said abruptly.
‘Eh?’ said Lord Hoddesdon, blinking. […] ‘What about him?’
It was a question which Ann found difficult to answer. ‘What sort of man is he?’ she would have liked to say. But when you have agreed to marry a man, it seems silly to ask what sort of man he is.
‘Well, what was he like as a little boy?’ she said, feeling that that was safe. […]
‘Boyish and vivacious,’ […] ‘Full of spirits. But always,’ he said impressively, ‘good.’
‘Good?’ said Ann with a slight shiver.
‘Always the soul of honour,’ said Lord Hoddesdon solemnly. Ann shivered again. Clarence Dumphry had been the soul of honour. She had often caught him at it.”
“A man who has so recently become engaged to be married as Lord Biskerton has, of course, no right to stare appreciatively at strange girls. But this is what Biscuit found himself doing. The fact that Ann Moon had accepted his hand had done nothing to impair his eyesight”.
“There are two schools of thought concerning the correct method of dealing with small boys who throw stones at their elders and betters in the public street. Some say they should be kicked, others that they should be smacked on the head. Lord Hoddesdon, no bigot, did both.”
“‘Biscuit,’ said Berry, ‘the most extraordinary thing has happened. There’s a girl …’
‘A girl, eh?’ said the Biscuit, interested. He began to see daylight. ‘Who is she?’
‘What?’ asked Berry, whose attention had wandered.
‘I said, who is she?’
‘I don’t know.’
‘What’s her name?’
‘I don’t know.’
‘Where does she live?’
‘I don’t know.’
‘You aren’t an Encyclopedia, old boy, are you?’ said the Biscuit. […]
‘Either a man clicks or he does not click,’ said the Biscuit firmly. ‘There are no half measures. You did?’
‘I think she was – pleased to see me.’
‘Ah! Well, then, of course you proceeded to ask her name?’
‘I hadn’t time.’
‘Did you ask her where she lived?’
‘Did she ask you your name?’
‘Did she ask you where you lived?’
‘What the dickens did you talk about?’ asked the Biscuit, curiously. ‘The situation in Russia?'”
“Mr Robbins, of Robbins, Robbins, Robbins, and Robbins, solicitors and Commissioners for Oaths, was just the sort of man you would have expected him to be after hearing his voice on the telephone.”
“‘I can’t stand Paris. I hate the place. Full of people talking French.’”
“Few things in life are more embarrassing than the necessity of having to inform an old friend that you have just got engaged to his fiancée.”
“‘We’re engaged,’ he said.
‘Fine!’ said the Biscuit. ‘So you’re engaged? Well, well!’
‘Just to this one girl, I suppose?’
‘What do you mean?’
‘You always were a prudent, level-headed fellow who knew where to stop,’ said the Biscuit enviously. ‘I’m engaged to two girls.’
The Biscuit sighed.
‘Yes, two. And I’m hoping that you may have a word of advice to offer on the subject. Otherwise, I see a slightly tangled future ahead of me.’
‘Two?’ said Berry, dazed.
‘Two,’ said the Biscuit. ‘I’ve counted them over and over again, but that’s what the sum keeps working out at. I started, if you remember, with one. So far, so good. A steady, conservative policy. But complications have now arisen.”
Before I move on to the book coverage, I thought I should mention that people reading along here should expect few updates in the next month or two. I have considered simply taking a break from blogging for a month because I really need to focus on my work, but this seems a bit too radical an approach; I think what I’ll do instead is occasionally blog one of the Wodehouse novels I’ve been reading during the spring. This shouldn’t take too much time or effort, and ‘lazy blogging’ like that may well be all I can justify doing. Maybe I’ll talk about a textbook or two, but don’t expect much ‘serious’ blogging in the near future.
Okay, let’s move on to the book. I’ve read 25 of the 30 chapters, and the coverage will pick up where I left off in my second post.
“Few scholars today claim that there is a direct relationship between environmental scarcity and violent conflict. Accordingly, empirical research increasingly discusses and attempts to identify plausible intervening variables, notably social, political, demographic, or economic mechanisms that together with environmental scarcity may increase the risk of violent conflict. […] frequently suggested intervening variables include food security and migration […]. For instance, in sub-Saharan Africa, where inter- and intra-annual rainfall variation is extensive, almost 90 percent of total food production comes from rain-fed agriculture […], implying high social and economic vulnerability to volatile resource supplies.”
“Taken together, this broad literature [on environmental change and armed conflict] offers mixed evidence for a causal relationship. The majority of studies of civil wars and major armed conflict conclude that resource scarcity, population pressure, and weather patterns exhibit weak influences on conflict risk, compared to structural economic and institutional features. Moreover, those that report a significant correlation disagree on the direction and magnitude of the effect.”
“Children recruited into armed groups in one conflict often end up fighting in other regional conflicts as ‘floating warriors’ capitalising on porous borders to travel wherever there was a market for their newly learned trade. In certain regions of recurrent conflict, large pools of ex-combatants as well as children exist as potential recruits for armed groups lured by the opportunity to share in the spoils of war. […] Such dynamics underline the problem of regional zones of instability or ‘conflict complexes’. War economies spread beyond borders and networks of mercenaries, illegal trading and organised crime spread instability.”
“Scholars of civil war often mistake the causes of the onset of armed conflict with the factors which explain the continuation of war. Many studies seem to implicitly argue that when understanding its causes, we understand the continuation of war. […] War may [however] break out for one set of issues but might continue for a completely different and changing set of reasons. As a result of interaction between the belligerents new reasons and stimuli for conflict develop. […] These developments can significantly complicate the picture that civil war presents and do not necessarily make it easier to work towards resolution. […] Two important causal mechanisms can be distinguished that hold explanatory power for the continuation of conflict. […] For the continuation of violence one observed causal mechanism is the provocation trap. An important theory developed by insurgents since the nineteenth century aims to play on the calculations of the political decision-makers by provoking violence from the state, which generally acts as a forceful recruiting mechanism for insurgent groups […] The second mechanism can be called the counter-measure imperative. The counter-measure imperative is the commonly observable chain of events after an attack against unarmed and unwitting targets. A public outcry occurs and political decision-makers feel forced to respond. Doing nothing is often not an option in terms of political capital and electoral consequences, at least in most democratic societies. James Fearon has called this “audience costs” in the context of international crises […] Weakness in times of crisis can be political – or electoral – suicide. Therefore, there is a strong tendency to institute one stringent measure after another. Repression, the use of force and police action are just a few of the instruments that can be used […] These mechanisms trigger state violence both from a push and pull perspective and are very powerful to propel a struggle forward. 
Discontinuing civil war by not buying into the provocation trap and counter-measure imperative is extremely difficult, given the primary demands made of the state to uphold its monopoly of force and to protect its population.”
“Most studies looking into the dynamics or continuation of conflict see the increase or decrease in capabilities as an important explanatory factor for the continuation or discontinuation of civil war. […] The termination of civil war has in several studies been strongly linked to cutting off the capabilities and supplies of belligerents. Paul Staniland concludes that “the best offense is a fence” (Staniland 2006; see also Record 2007). When capabilities are compromised by cutting off the replenishment of men and material that are necessary to continue the struggle, wars wither down.”
“For those gathering conflict data, obtaining accurate numbers of fatalities is one of the most complicated and difficult tasks due to a plethora of problems, including misuse of the terms “casualties” and “fatalities,” political reasons for either the under-reporting or exaggeration of fatalities, and either a lack of information or the presence of conflicting information in the available sources […] In addressing the sources of bias in fatality statistics Gohdes and Price (2012: 9) note that the higher the visibility of the act of violence, the more likely it (and its fatalities) will be reported. Visibility can be reflected in the magnitude of armed conflict, wars are more visible than minor disputes; but visibility can also be related to the types of participants or fatalities, with deaths of those in uniform, whose job it is to fight being more visible than deaths of civilians. Visibility leads to a greater likelihood that fatalities will be reported, thus making them more reliable. As Lacina and Gleditsch note (2012: 3) the tallies provided by military agencies of personnel killed in action are very credible data. It was considerations such as these that led COW to make different choices than UCDP, in ways in which it codifies and gathers data about armed conflict: focusing primarily on higher fatality levels (war), using the war as the primary unit of analysis, and counting deaths only among combatants (rather than combatants and civilians).”
“Utilizing the COW datasets on war, one gains a perspective on the trends in warfare that varies significantly from those that utilize UCDP/PRIO data […] A fundamental difference is merely the timeframe covered, with COW examining wars after 1815 and UCDP/PRIO focusing upon the post-World War II era. An analysis of trends in all COW wars types for the period 1816 to 2007 […] concluded that there is a relative constancy over time in war behavior. […] Intra-state wars are the most numerous of the four major COW categories, constituting 52 percent of all of the COW wars [and there has been a] significant increase in intra-state wars since the end of World War II. […] Of the 192 years in the 1816–2007 period, there is an average number of 1.6 civil war onsets per year, and only 52 years (27 percent) experienced no civil war onsets. […] If one looks at the number of civil wars experienced by the various regions of the world […], the numbers look fairly comparable […] All in all, this analysis does not promote optimism about the trends in civil war for the remainder of the twenty-first century. The Human Security Report’s (2011) emphasis on the decline in civil war since the end of the Cold War ignores the fact that civil war onsets (even after the highpoints of 1989 and 1991) are at historically high levels with an average of 2.8 civil war onsets per year from 1992 to 2007 (compared to the yearly average of 1.6 onsets from 1816 to 2007). These figures hardly portend the end of civil war.”
“There are multiple ways to distinguish types of civil wars: whether they are ethnically motivated […], whether they are driven by attempts at secession or control of the central government […], or whether they involve lootable resources […]. Another way to distinguish different types of civil wars is to examine the military tactics used by each side in the conflict. […] Kalyvas and Balcells (2010) […] identify three technologies of rebellion that are used in civil war: irregular warfare, conventional warfare, and symmetric non-conventional warfare. Irregular war, or insurgency, occurs when the state’s military capabilities exceed those of the rebels. Conventional civil war occurs when both the state and the rebels are militarily matched at a high level, and symmetric non-conventional war happens when both the state and the rebels are militarily matched, but at a lower level. […] [They] show that irregular wars comprise just over half of the civil wars between 1944 and 2004 [and that] the end of the Cold War resulted in a decrease in the percentage of conflicts that were irregular […] from about two-thirds during the Cold War to about one-quarter after 1991 […] irregular wars last longer and are more likely to be won by the incumbent as compared to both conventional wars and symmetric non-conventional wars.”
“Findley and Young (2012) […] find that a majority of terrorist acts occur in the context of civil war, which suggests that this is an important tactic in the context of the larger struggle between state and non-state actors. […] Lake (2002), among others, has argued that terrorism is often used in conflicts to provoke a disproportionate response from the state. […] Kydd and Walter (2006) argue that terrorism can be used to spoil potential peace among moderate factions in a civil war and empirical evidence supports this claim (Findley and Young, 2013).”
“While sexual violence against civilians in conflict is pervasive, it is not ubiquitous. There are conflicts where systematic sexual violence is completely absent, showing that, contrary to popular belief, sexual violence is not an inherent component of conflict […] Sexual violence in conflict creates disorder in communities by violating social norms and dissolving social bonds through humiliation, shame, and terror […]. The breakdown of the rule of law and social norms has an impact upon the whole community, not just the victims of the violence. Formal and informal social controls are diminished during civil war and communities in conflict lack a functional formal system to maintain order. […] Whether sexual violence is primarily a consequence of the strategy or tactic of leaders or the lack of control of militaries is an ongoing debate.”
“forced migration is not simply a function of conflict and insecurity. Rather, security concerns interact with economic “push” factors in sending regions and “pull” factors in receiving areas. […] it is difficult to disentangle security motives from economic ones […] When governments deliberately target political or ethnic opponents, people are more likely to cross borders as compared with general turmoil in civil wars and dissident violence. […] In addition, better economic conditions and political stability in neighboring states make it more likely that individuals will cross an international border, demonstrating the importance of pull factors in receiving countries. […] Proximity to the conflict country exerts a very large effect on destination choice as does the presence of a large diaspora population. […] Bohra-Mishra and Massey (2011) find that low levels of violence actually discourage migration, perhaps because unsafe travel conditions make it more likely that people will hunker down and stay at home to protect their assets. Only at a high threshold of violence are people willing to leave. Engel and Ibáñez (2007) find that owning land interacts with violence. People with more land are less willing to move since they would lose a fixed asset, but at the same time, large landowners are more likely to be threatened with violence. Confronted with low levels of violence, landowners are more likely to stay put, but become increasingly likely to flee as violence gets worse. […] Greenhill (2010) examines the strategic use of forced migration as a negotiating tactic in interstate relations. In many cases, sending states have “engineered” refugee flows in such a way so as to extract concessions from migrant-receiving states. […] several themes have emerged in the literature on the causes of forced migration. 
First, refugees are not choice-less, but are strategic actors who weigh the various options available to them, even if choice is in the context of extreme violence. Second, forced migration and economic migration are not mutually exclusive categories; rather, security, economics, and social networks all shape migration decisions to a greater or lesser degree. Finally, perpetrators of violence understand the effects of forced migration and displacement, and use refugee flows and “cleansing” as a way to further their political aims.”
“Refugee communities can also foster conflict in host countries, either through mobilization into militant factions, or by the mere presence of ethnically different “foreigners” and economic competitors. Salehyan and Gleditsch (2006) start with the observation that civil wars often cluster in space – when one country experiences civil war its neighbors are significantly more likely to fall into conflict themselves. They then argue that refugee migration facilitates the transnational spread of militant networks as well as presents negative externalities for receiving areas – such as ethnic competition or economic burdens – increasing the risk of conflict in refugee hosts. Through statistical testing they demonstrate that hosting a large number of refugees does indeed raise the risk of conflict. […] scholars have [also] noted a link between civil war and international conflict: countries that are faced with domestic unrest are more likely to become involved in disputes with their neighbors […] Refugee flows are one potential source of friction between states and can become a cause of international armed conflict. […] Statistically, Salehyan (2008) confirms a general pattern that refugee flows between two countries are associated with militarized interstate disputes (MIDs). Controlling for an array of factors known to be associated with international conflict, hosting 100,000 refugees from another country raises the probability that the host will initiate a conflict against the sender by 96 percent. On the flip side, the sending state is over 90 percent more likely to launch an MID against the host. Therefore, while international relations scholars have focused on variables such as the power balance, democracy, and territorial issues, a significant share of interstate conflict stems from the external effects of domestic unrest and refugee flows.”
“[One] typological approach to understanding violence against civilians is to focus on the military capacity of the actors, usually with a dyadic approach which identifies the relative strength of actors. A general finding is that relatively weak actors are more likely to target civilians. […] Regarding government violence, Valentino et al. (2004) show that governments who face strong rebel groups with a strong civilian base are more likely to engage in mass killings. […] While selective violence is useful for controlling a population (in areas where such control is feasible to uphold through violence), indiscriminate violence seems to follow a logic of weakening the adversary in their strongholds. […] A few studies have examined to what extent violence against civilians occurs as a response to violence against civilians by the adversary. […] Taken together, these studies suggest that there is some evidence for a cycle-of-violence dynamic. […] Hultman (2007) shows that when rebels lose on the battlefield, they tend to shift strategy towards more targeting of civilians and less targeting of government forces. Wood et al. (2012) also focus on shifts in relative power, showing that exogenously imposed power shifts through armed interventions into civil wars increase the level of violence against civilians by the actor that is disadvantaged by the intervention. Hence, rather than concluding that weak actors are more likely to target civilians, these findings show that actors are more likely to target civilians in response to being weakened as a consequence of the war.”
“Numbers from the Uppsala Conflict Data Program (UCDP) Conflict Termination Project1 (Kreutz 2010) reveal the average length of civil wars episodes from 1946 to 2009 is approximately 1647 days […] Fearon (2004) links war type to civil war duration. Coups and revolutions seek quick outright victories. Failing this, coup organizers will likely face imprisonment, death, or exile. The strategy in territorial wars – which are usually fought on the periphery – is to continue the fight to win more concessions at the bargaining table. Peripheral wars do not necessarily need outright military victory to realize important goals. Rebels in these wars have more time. […] There were approximately 30 military coups between 1946 and 2009 identified in the Uppsala Conflict Termination data […] Peripheral/territorial wars tend to endure and are unlikely to end conclusively with peace agreements or military victories. According to the Uppsala Conflict Termination data, the mean duration of territorial wars from 1946 to 2010 is 1826.7 days […] Wars over control of government, on the other hand, typically do not last as long […] we find that the mean duration of these wars is 1456.7 days between 1946 and 2010.”
“Whereas several scholars [have noted] that ethnic/secessionist wars are more intractable than wars over government, there is not a wealth of empirical evidence directly linking war type to recurrence. […] The duration of peace after a civil war has been shown to have a negative impact on recurrence […]. In other words if peace has lasted 20 years after a war has ended, the probability of war in a future year is quite low. […] Civil war duration tends to increase when credible commitment is lacking, the war is ethnic/ peripheral, there are lootable natural resources the rebels can exploit, there are spoilers and a good number of veto players, and when there is third-party intervention. “Reversing” these factors makes for shorter wars. The factors are cumulative in that an ethnic war in the presence of lootable resources, low credible commitment, and spoilers will be expected to last a very long time. Wars over government with no spoilers or lootables will be expected to be shorter. Civil wars are more likely to recur if the war is ethnic/peripheral, credible commitment is lacking, the outcome is one of negotiated settlement, the war did not see an exceptionally high death rate, there are factors conducive to rebel recruitment such as low democracy and a weak economy at war’s end, there are valuable natural resources present in the rebel territory, the war is not mediated, and there is no effective peacekeeping operation.”
i. “A drawback to success in life is that failure, when it does come, acquires an exaggerated importance.” (P. G. Wodehouse).
ii. “Truth is the cry of all, but the game of the few.” (George Berkeley).
iii. “It is always the best policy to speak the truth, unless, of course, you are an exceptionally good liar.” (Jerome K. Jerome).
iv. “I don’t believe any man ever existed without vanity, and if he did he would be an extremely uncomfortable person to have anything to do with. He would, of course, be a very good man, and we should respect him very much. He would be a very admirable man—a man to be put under a glass case and shown round as a specimen—a man to be stuck upon a pedestal and copied, like a school exercise—a man to be reverenced, but not a man to be loved, not a human brother whose hand we should care to grip. Angels may be very excellent sort of folk in their way, but we, poor mortals, in our present state, would probably find them precious slow company. Even mere good people are rather depressing. It is in our faults and failings, not in our virtues, that we touch one another and find sympathy. We differ widely enough in our nobler qualities. It is in our follies that we are at one.” (-ll-).
v. “A shy man’s lot is not a happy one. The men dislike him, the women despise him, and he dislikes and despises himself. […] A shy man means a lonely man—a man cut off from all companionship, all sociability. He moves about the world, but does not mix with it. Between him and his fellow-men there runs ever an impassable barrier—a strong, invisible wall that, trying in vain to scale, he but bruises himself against. He sees the pleasant faces and hears the pleasant voices on the other side, but he cannot stretch his hand across to grasp another hand. He stands watching the merry groups, and he longs to speak and to claim kindred with them. But they pass him by, chatting gayly to one another, and he cannot stay them. He tries to reach them, but his prison walls move with him and hem him in on every side. In the busy street, in the crowded room, in the grind of work, in the whirl of pleasure, amid the many or amid the few—wherever men congregate together, wherever the music of human speech is heard and human thought is flashed from human eyes, there, shunned and solitary, the shy man, like a leper, stands apart. His soul is full of love and longing, but the world knows it not. The iron mask of shyness is riveted before his face, and the man beneath is never seen.” (-ll-).
vi. “We cannot tell the precise moment when friendship is formed. As in filling a vessel drop by drop, there is at last a drop which makes it run over; so in a series of kindnesses there is at last one which makes the heart run over.” (James Boswell).
vii. “Men might as well project a voyage to the Moon as attempt to employ steam navigation against the stormy North Atlantic Ocean.” (Dr. Dionysius Lardner (1793-1859). Many more quotes of a similar nature here).
viii. “We pity in others only those evils which we have ourselves experienced.” (Jean-Jacques Rousseau).
ix. “All that time is lost which might be better employed.” (-ll-).
x. “Virtue is a state of war, and to live in it means one always has some battle to wage against oneself.” (-ll-).
xi. “Remorse sleeps during a prosperous period but wakes up in adversity.” (-ll-).
xii. “Hatred, as well as love, renders its votaries credulous.” (-ll-).
xiii. “He that is choice of his time will be choice of his company, and choice of his actions.” (Jeremy Taylor).
xiv. “To say that a man is vain means merely that he is pleased with the effect he produces on other people. A conceited man is satisfied with the effect he produces on himself.” (Max Beerbohm).
xv. “Moderation is the silken string running through the pearl chain of all virtues.” (Joseph Hall).
xvi. “If you make people think they’re thinking, they’ll love you; but if you really make them think, they’ll hate you.” (Donald Marquis).
xvii. “Some luck lies in not getting what you thought you wanted but getting what you have, which once you have got it you may be smart enough to see is what you would have wanted had you known.” (Garrison Keillor)
xviii. “Once I believed that sooner or later I would come across a really wise person; today I couldn’t even say what wisdom is.” (Fausto Cercignani).
xix. “If you are living in the past or in the future, you will never find a meaning in the present.” (-ll-)
xx. “A secret remains a secret until you make someone promise never to reveal it.” (-ll-)
Update: According to the category count, this is the 150th post of quotes here on this blog (the category cloud seems to be slow to update the number, but I assume it’ll do it eventually).
It’s probably worth pointing out to new readers in particular that if you like this post and perhaps have liked a few of the previous posts in the series, you can access a collection of all the other posts in the series simply by clicking the blue category link, ‘quotes’, at the bottom of this post, or by clicking the ‘quotes’ link provided in the category cloud in the sidebar to the right.
[Warning: Long post].
I’ve blogged related data here on the blog before, but only in Danish. Part of my motivation for providing some coverage in English this time (a slightly awkward and time-consuming thing to do, as all the source material is in Danish) is that this is the sort of data you’ll probably never hear about if you don’t understand Danish, and some of it seems worth knowing about even for people who do not live in Denmark. Another reason for posting in English is of course that I dislike writing a blog post which I know beforehand some of my regular readers will not understand. I should perhaps note that some of the data is at least peripherally related to my current academic work.
The report which I’m covering in this post (here’s a link to it) deals primarily with various metrics collected in order to evaluate whether treatment goals set centrally are being met by the Danish regions, one of whose primary political responsibilities is health care service delivery. To take an example from the report, a goal has been set that at least 95% of patients with known diabetes in the Danish regions should have their Hba1c (an important variable in the treatment context) measured at least once per year. The report of course doesn’t just contain a list of goals etc. – it also presents a lot of data which has been collected throughout the country in order to figure out to what extent the various goals have been met at the local levels. Hba1c is just an example; there are also goals relating to hypertension, regular eye screenings, regular kidney function tests, regular foot examinations, and regular tests for hyperlipidemia, among others.
Testing is just one aspect of what’s being measured; other goals relate to treatment delivery. There’s for example a goal that the proportion of (known) type 2 diabetics with an Hba1c above 7.0% who are not receiving anti-diabetic treatment should be at most 5% within each region. A thought that occurred to me while reading the report was that some interesting incentive problems might pop up here if these numbers were more important in the decision-making context than I assume they are. Adding this specific variable without also adding a goal for finding diabetics who do not know they are sick – and no such goal is included in the report, as far as I’ve been able to ascertain – might lead to problems. In theory, a region that did well at identifying undiagnosed type 2 patients, of which there are many, might get punished for this: the larger patient population in treatment resulting from better identification might lead to binding capacity constraints at various treatment levels, constraints which would not affect regions that are worse at identifying (non-)patients at risk, because there is a tradeoff between resources devoted to search/identification and resources devoted to treatment. Without a goal for identifying undiagnosed type 2 diabetics, it seems to me that, to the extent that such a tradeoff exists, the current structure of evaluation – to the extent that it informs decision-making at the regional level – favours treatment over identification, which may or may not be problematic from a cost-benefit point of view.
I find it somewhat puzzling that no goals relate to case-finding/diagnostics, because a lot of the goals only really make sense if the people who are sick actually get diagnosed so that they can receive treatment in the first place; that, say, 95% of diabetics with a diagnosis receive treatment option X is much less impressive if, say, a third of all people with the disease do not have a diagnosis. Considering the relatively low amount of variation in some of the metrics included, you’d expect a variable of this sort to be included here – at least I did.
The report has an appendix with some interesting information about the sex ratios, age distributions, how long people have had diabetes, whether they smoke, what their BMIs and blood pressures are like, how well they’re regulated (in terms of Hba1c), what they’re treated with (insulin, antihypertensive drugs, etc.), their cholesterol levels and triglyceride levels, etc. I’ll talk about these numbers towards the end of the post – if you want to get straight to this coverage and don’t care about the ‘main coverage’, you can just scroll down until you reach the ‘…’ point below.
The report has 182 pages with a lot of data, so I’m not going to talk about all of it. It is based on very large data sets which include more than 37,000 Danish diabetes patients from specialized diabetes units (diabetesambulatorier) (these are usually located in hospitals and provide ambulatory care only) as well as 34,000 diabetics treated by their local GPs – the aim is to eventually include all Danish diabetics in the database, and more are added each year, but even as it is, a very big proportion of all patients are ‘accounted for’ in the data. Other sources also provide additional details; for example, there’s a separate database on children and young diabetics. Most of the diabetics who are not included here are patients treated by their local GPs, and there’s still a substantial amount of uncertainty related to this group; approximately 90% of all patients connected to the diabetes units are assumed at this point to be included in the database, but the report also notes that approximately 80% of diabetics are assumed to be treated in general practice. Coverage of this patient population is currently improving rapidly, and it seems likely that most diabetics in Denmark will be included in the database within the next few years. They speculate in the report that the inclusion of more patients treated in general practice may be part of the explanation why goal achievement seems to have decreased slightly over time; this seems to me like a likely explanation considering the data they present, as the diabetes units are in general better at achieving the goals set than are the GPs. The data is up to date – as some of you might have inferred from the presumably partly unintelligible words in the parenthesis in the title, the report deals with data from the time period 2013-2014. I decided early on not to copy tables into this post directly, as it’s highly annoying to have to translate terms in such tables; instead I’ve tried to give you the highlights.
I may or may not have succeeded in doing that, but you should be aware, especially if you understand Danish, that the report has a lot of details, e.g. in terms of intraregional variation, which are excluded from this coverage. Although I cover far from all the data, I do cover most of the main topics dealt with in the publication in at least a little bit of detail.
The report concludes in the introduction that for most treatment indicators no clinically significant differences in the quality of the treatment provided to diabetics are apparent when you compare the different Danish regions – so if you’re looking at the big picture, as a Danish diabetic it doesn’t matter all that much whether you live in Jutland or in Copenhagen. However, some significant intra-regional differences do exist. In the following I’ll talk in a bit more detail about some of the data included in the report.
When looking at the Hba1c goal (95% should be tested at least once per year), they evaluate the groups treated in the diabetes units and the groups treated in general practice separately; so you have one metric for patients treated in diabetes units living in the north of Jutland (North Denmark Region) and another for patients treated in general practice living in the north of Jutland. This breakdown of the data makes it possible not only to compare people across regions but also to investigate whether there are important differences between the care provided by diabetes units and the care provided by general practitioners. For patients receiving ambulatory care from the diabetes units, all regions meet the goal, but in Copenhagen (Capital Region of Denmark, CRD) only 94% of patients treated in general practice had their Hba1c measured within the last year – this was the only region which did not meet the goal for the patient population treated in general practice. I would have thought beforehand that all diabetes units would have 100% coverage here, but that’s actually only the case in the region in which I live (Central Denmark Region); on the other hand, in most other regions, aside from Copenhagen again, the number is 99%, which seems reasonable, as I’m assuming a substantial proportion of the remainder is explained by patient noncompliance, which is difficult to avoid completely.
I speculate that patient compliance differences between the populations treated at diabetes units and those treated by their GP might also be part of the explanation for the lower goal achievement of the general practice population; as far as I’m aware, diabetes units can deny care in the case of non-compliance whereas GPs cannot, so you’d sort of expect the most ‘difficult’ patients to end up in general practice. This is speculation to some extent and I’m not sure it’s a big effect, but it’s worth keeping in mind when analyzing this data that not all differences you observe necessarily relate to service delivery inputs (whether or not a doctor reminds a patient it’s time to get his eyes checked, for example); the two main groups analyzed are likely to also differ in patient population composition. Differences in patient population composition may of course also drive some of the intraregional variation observed. They mention in their discussion of the results for the Hba1c variable that they’re planning on changing the standard here to one which relates to the distribution of Hba1c results, not just whether the test was done, which seems like a good idea. As it is, the great majority of Danish diabetics have their Hba1c measured at least annually, which is good news because of the importance of this variable in the treatment context.
In the context of hypertension, there’s a goal that at least 95% of diabetics should have their blood pressure measured at least once per year. For patients treated in the diabetes units, all regions achieve the goal and the national average for this patient population is 97% (once again the region in which I live is the only one that achieved 100% coverage), but for patients treated in general practice only one region (North Denmark Region) managed to get to 95%, and the national average is 90%. In most regions, one in ten diabetics treated in general practice does not have their blood pressure measured once per year, and again Copenhagen (CRD) is doing worst, with a coverage of only 87%. As mentioned in the general comments above, some of the intraregional variation is actually quite substantial, and this may be a good example because not all hospitals are doing great on this variable. Sygehus Sønderjylland, Aabenraa (in southern Jutland), one of the diabetes units, had a coverage of only 67%, and the percentage at Hillerød Hospital in Copenhagen (CRD), another diabetes unit, was likewise quite low, with 83% of patients having had their blood pressure measured within the last year. These hospitals are however the exceptions to the rule. Evaluating whether patients have been tested for hypertension is different from evaluating whether hypertension is actually treated after it has been discovered, and here the numbers are less impressive; for the type 1 patients treated in the diabetes units, roughly one third (31%) of patients with a blood pressure higher than 140/90 are not receiving treatment for hypertension (the goal was at most 20%). The picture was much better for type 2 patients (11% at the national level) and patients treated in general practice (13%).
They note that the picture has not improved over the last years for the type 1 patients, and that this is not, in their opinion, a satisfactory state of affairs. A note of caution is that the variable only includes patients who have had a blood pressure higher than 140/90 measured within the last year, so you can’t use this variable as an indication of how many patients with high blood pressure are not being treated; some patients who are in treatment for high blood pressure have blood pressures lower than 140/90 (achieving this would in many cases be the point of treatment…). Such an estimate will however be added to later versions of the report. In terms of the public health consequences of undertreatment, the two patient populations are of course far from equally important. As noted later in the coverage, the proportion of type 2 patients on antihypertensive agents is much higher than the proportion of type 1 diabetics receiving such treatment, and despite this difference the blood pressure distributions of the two patient populations are reasonably similar (more on this below).
Screening for albuminuria: The goal here is that at least 95% of adult diabetics are screened within a two-year period (there are slightly different goals for children and young adults, but I won’t go into those). For patients treated in the diabetes units, the North Denmark Region and Copenhagen/RH failed to achieve the goal with a coverage slightly below 95% – the other regions achieved the goal, although not by much more than that; the national average for this patient population is 96%. For patients treated in general practice, none of the regions achieve the goal, and the national average for this patient population is 88%. Region Zealand was doing worst with 84%, whereas the region in which I live, Region Midtjylland, was doing best with a 92% coverage. Of the diabetes units, Rigshospitalet, “one of the largest hospitals in Denmark and the most highly specialised hospital in Copenhagen”, seems to also be the worst performing hospital in Denmark in this respect, with only 84% of patients being screened – which to me seems exceptionally bad considering that, for example, not a single hospital in the region in which I live is below 95%. Nationally, roughly 20% of patients with micro- or macroalbuminuria are not on ACE inhibitors/angiotensin II receptor antagonists.
Eye examination: The main process goal here is at least one eye examination every second year for at least 90% of the patients, plus a requirement that the treating physician knows the result of the eye examination. This latter requirement is important for the interpretation of the results (see below). For patients treated in diabetes units, four out of five regions achieved the goal, but there were also what to me seemed like large differences across regions. In Southern Denmark, the goal was not met and only 88% had had an eye examination within the last two years, whereas the number was 98% in Region Zealand. Region Zealand was a clear outlier here, and the national average for this patient population was 91%. For patients treated in general practice no regions achieved the goal, and this variable provides a completely different picture from the previous variables in terms of the differences between patients treated in diabetes units and patients treated in general practice: In most regions, the coverage for patients in general practice is in the single digits, and the national average for this patient population is just 5%. They note in the report that this number has decreased over the years through which this variable has been analyzed, and they don’t know why (but they’re investigating it). It seems to be a big problem that doctors are not told about the results of these examinations, which presumably makes coordination of care difficult.
The report also has numbers on how many patients have had their eyes checked within the last four years, rather than within the last two, and this variable makes it clear that more infrequent screening explains nothing in terms of the differences between the patient populations; for patients treated in general practice the numbers are still in the single digits. They mention that data security requirements imposed on health care providers are likely the reason why the numbers are low in general practice, as it seems common that the GP is not informed of the results of screenings taking place, so that the only people who get to know about the results are the ophthalmologists doing them. A new variable recently included in the report is whether newly-diagnosed type 2 diabetics are screened for eye damage within 12 months of receiving their diagnosis – here they have received the numbers directly from the ophthalmologists, so uncertainty about information sharing doesn’t enter the picture (well, it does, but the variable doesn’t care; it just measures whether an eye screen has been performed or not). Although the standard set is 95% (at most one in twenty should not have their eyes checked within a year of diagnosis), at the national level only half of patients actually do get an eye screen within the first year (95% CI: 46-53%). Uncertainty about the date of diagnosis makes it slightly difficult to interpret some of the specific results, but the chosen standard is not achieved anywhere, and this once again underlines how diabetic eye care is one of the areas where things are not going as well as the people setting the goals would like them to.
The rationale for screening people within the first year of diagnosis is of course that many type 2 patients have complications at diagnosis – “30–50 per cent of patients with newly diagnosed T2DM will already have tissue complications at diagnosis due to the prolonged period of antecedent moderate and asymptomatic hyperglycaemia.” (link).
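As an aside on the 95% confidence intervals quoted throughout (e.g. the 46-53% interval for first-year eye screens): these are standard binomial intervals for a proportion. Here’s a minimal sketch of the normal-approximation version, with purely hypothetical numbers since I don’t know the exact sample size behind the report’s estimate:

```python
import math

def proportion_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    # Clamp to [0, 1] since a proportion cannot fall outside that range
    return (max(0.0, p - half_width), min(1.0, p + half_width))

# Hypothetical example: 400 of 800 newly diagnosed patients screened within a year
low, high = proportion_ci(400, 800)
print(f"{low:.2f}-{high:.2f}")  # roughly 0.47-0.53
```

The width of such an interval shrinks with the square root of the sample size, which is why the national estimates in the report are much tighter than the hospital-level ones.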
The report does include estimates of the number of diabetics who receive eye screenings regardless of whether the treating physician knows the results or not; at the national level, according to this estimate, 65% of patients have their eyes screened at least once every second year, leaving more than a third of patients not screened as often as is desirable. They mention that they have had difficulties with the transfer of data and that many of the specific estimates are uncertain, including two of the regional estimates, but the general level – 65% or something like that – is based on close to 10,000 patients and is assumed to be representative. Approximately 1% of Danish diabetics are blind, according to the report.
Foot examinations: Just like most of the other variables: at least 95% of patients, at least once every second year. For diabetics treated in diabetes units, the national average here is 96%, and the goal was not achieved in Copenhagen (CRD) (94%) and northern Jutland (91%). There are again remarkable differences within regions; at Helsingør Hospital only 77% were screened (95% CI: 73-82%) (a drop from 94% the year before), and at Hillerød Hospital the number was even lower, 73% (95% CI: 70-75%), again a drop from the previous year, where the coverage was 87%. Both these numbers are worse than the regional averages for all patients treated in general practice, even though none of the regions meet the goal. Actually I thought the year-to-year changes at these two hospitals were almost as interesting as the intraregional differences, because I have a hard time explaining them; how do you even set up a screening programme such that a coverage drop of more than 10% from one year to the next is possible? To those who don’t know, diabetic feet are very expensive and do not seem to get the research attention one might, from a cost-benefit perspective, assume they would (link, point iii). Going back to the patients in general practice, on average 81% of these patients have a foot examination at least once every second year. The regions here vary from 79% to 84%. The worst covered patients are those treated in general practice in the Vordingborg sygehus catchment area in the Zealand Region, where only roughly two out of three (69%, 95% CI: 62-75%) patients have regular foot examinations.
Aside from all the specific indicators they’ve collected and reported on, the authors have also constructed a combined ‘all-or-none’ indicator, which measures the proportion of patients who have not failed to get their Hba1c measured, their feet checked, their blood pressure measured, their kidney function tested, etc. They do not include the eye screening variable in this metric because of the problems associated with it, but this is the only process variable not included, so the variable is an indicator of how many of the patients are actually getting all of the care that they’re supposed to get. As patients treated in general practice are generally less well covered than patients treated in the diabetes units at the hospitals, I was interested to know how much these differences ‘added up to’ in the end. For the diabetes units, 11% of patients failed on at least one metric (i.e. did not have their feet checked/Hba1c measured/blood pressure measured/etc.), whereas this was the case for two-thirds of patients in general practice (67%). Summed up like that, it seems to me that if you’re a Danish diabetes patient and you want to avoid having some variable neglected in your care, it matters whether you’re treated by your local GP or by the local diabetes unit, and that you’re probably going to be better off receiving care from the diabetes unit.
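The ‘all-or-none’ construction itself is simple. A sketch of the idea – the field names and toy records below are made up for illustration, not taken from the report’s databases:

```python
# Toy records: True = the process measure was performed within the last
# two years. Field names and values are invented for illustration.
patients = [
    {"hba1c": True,  "feet": True,  "blood_pressure": True,  "kidney": True},
    {"hba1c": True,  "feet": False, "blood_pressure": True,  "kidney": True},
    {"hba1c": True,  "feet": True,  "blood_pressure": True,  "kidney": False},
    {"hba1c": True,  "feet": True,  "blood_pressure": True,  "kidney": True},
]

# 'All-or-none': a patient only counts if *every* included measure was done
fully_covered = sum(all(p.values()) for p in patients) / len(patients)
print(f"{fully_covered:.0%} received all measures")  # 2 of 4 -> 50%
```

One property worth noticing is that the all-or-none proportion can be much lower than any single indicator: with several measures each covered at, say, 85-95%, a patient only needs to miss one of them to fail the combined metric.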
Some descriptive statistics from the appendix (p. 95 ->):
Sex ratio: For this variable they have multiple reports based on data derived from different databases. In the first database, including 16.442 people, 56% are male and 44% female. In the next database (n=20.635), including only type 2 diabetics, the sex ratio is more skewed; 60% are males and 40% females. In a database including only patients in general practice (n=34.359), 56% of the diabetics are males and 44% females, as in the first database. For the patient population of children and young adults included (n=2.624), the sex ratio is almost equal (51% males and 49% females). The last database, Diabase, based on evaluation of eye screening and including only adults (n=32.842), has 55% males and 45% females. It seems to me based on these results that the sex ratio is slightly skewed in most patient populations, with slightly more males than females having diabetes – and it seems not improbable that this is due to a higher male prevalence of type 2 diabetes (the children/young adult database and the type 2 database both seem to point in this direction – the children/young adult group mainly consists of type 1 patients, as 98% of this sample is type 1). The fact that the prevalence of autoimmune disorders is in general higher in females than in males also seems to support this interpretation; to the extent that the sex ratio is skewed in favour of males, you’d expect lifestyle factors to be behind it.
Next, age distribution. In the first database (n=16.442), the average and the median age are both 50, the standard deviation is 16, the youngest individual is 16 and the oldest is 95. It is worth remembering here that the oldest individual in the sample is not a good estimate of ‘how long a diabetic can expect to live’ – for all we know the 95-year-old in the database got diagnosed at the age of 80. You need diabetes duration before you can begin to speculate about that variable. Anyway, in the next database, of type 2 patients (n=20.635), the average age is 64 (median=65), the standard deviation is 12 and the oldest individual is 98. In both of the databases mentioned so far some regions do better than others in terms of the oldest individual, but it seems to me that this may just be a function of sample size and ‘random stuff’ (95+-year-olds are rare events); Northern Jutland doesn’t have a lot of patients, so the oldest patient in that group is not as old as the oldest patient from Copenhagen – this is probably just what you’d expect. In the general practice database (n=34.359), the average age is 68 (median=69) and the standard deviation is 11; the oldest individual there is 102. In the Diabase database (n=32.842), the average age is 62 (median=64), the standard deviation is 15 and the oldest individual is 98. It’s clear from these databases that most diabetics in Denmark are type 2 diabetics (this is no surprise) and that a substantial proportion of them are at or close to retirement age.
The appendix has a bit of data on diabetes type, but I think the main thing to take away from the tables that break this variable down is that type 1 is overrepresented in the databases compared to the true prevalence – in the Diabase database for example almost half of patients are type 1 (46%), despite the fact that type 1 diabetics are estimated to make up only 10% of the total in Denmark (see e.g. this (Danish source)). I’m sure this is to a significant extent due to lack of coverage of type 2 diabetics treated in general practice.
Diabetes duration: In the first data-set, including 16.442 individuals, the patients have a median diabetes duration of 21,2 years. The 10% cutoff is 5,4 years, the 25% cutoff is 11,3 years, the 75% cutoff is 33,5 years, and the 90% cutoff is 44,2 years. High diabetes durations are more likely to be observed in type 1 patients, as they’re in general diagnosed earlier; in the next database, involving only type 2 patients (n=20.635), the median duration is 12,9 years and the corresponding cutoffs are 3,8 years (10%), 7,4 years (25%), 18,6 years (75%), and 24,7 years (90%). In the database involving patients treated in general practice, the median duration is 6,8 years and the cutoffs reported for the various percentiles are 2,5 years (10%), 4,0 (25%), 11,2 (75%) and 15,6 (90%). One note not directly related to the data, but which I thought might be worth adding here, is that if one were to try to use these data to estimate the risk of complications as a function of diabetes duration, it would be important to keep in mind that there’s probably often a substantial amount of uncertainty associated with the diabetes duration variable, because many type 2 diabetics are diagnosed only after a substantial amount of time with sub-optimal glycemic control; i.e. although diabetes duration is lower in type 2 populations than in type 1 populations, I’d assume that the type 2 duration estimates are still biased downwards compared to the type 1 estimates, causing some potential issues in terms of how to interpret associations found here.
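Cutoffs like these are just empirical percentiles, which are easy to compute yourself if you have the raw data. A minimal sketch with Python’s statistics module, using made-up durations (note that the report writes decimals with Danish commas, so e.g. 21,2 = 21.2 years):

```python
from statistics import median, quantiles

# Made-up diabetes durations in years, purely for illustration
durations = [2.5, 4.0, 5.4, 6.8, 7.4, 11.2, 12.9, 15.6, 18.6, 24.7, 33.5]

med = median(durations)                  # the 50% cutoff
p25, _, p75 = quantiles(durations, n=4)  # quartiles: 25% and 75% cutoffs
deciles = quantiles(durations, n=10)     # 10%, 20%, ..., 90% cutoffs
p10, p90 = deciles[0], deciles[-1]
print(med, p25, p75)
```

`quantiles` interpolates between observations (the default ‘exclusive’ method), so with real patient-level data the cutoffs rarely coincide exactly with observed values.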
Next, smoking. In the first database (n=16.442), 22% of diabetics smoke daily and another 22% are ex-smokers who have not smoked within the last 6 months. According to the resource to which you’re directed when looking for data on that kind of thing at Statistics Denmark, the percentage of daily smokers in the general population was 17% in 2013 (based on n=158.870 – this is a direct link to the data), which seems to indicate that the trend (this is a graph of the percentage of Danes smoking daily as a function of time, going back to the 70s) I commented upon (Danish link) a few years back has not reversed or slowed down much. If we go back to the appendix and look at the next source, dealing with type 2 diabetics, 19% of them smoke daily and 35% are ex-smokers (again, 6 months). In the general practice database (n=34.359), 17% of patients smoke daily and 37% are ex-smokers.
BMI. Here’s one variable where type 1 and type 2 look very different. The first source deals with type 1 diabetics (n=15.967), and here the median BMI is 25,0, which is comparable to the population median (if anything it’s probably lower) – see e.g. page 63 here. Relevant percentile cutoffs are 20,8 (10%), 22,7 (25%), 28,1 (75%), and 31,3 (90%). The numbers are quite similar across regions. For the type 2 data, the first source (n=20.035) has a median BMI of 30,7 (almost equal to the 1-in-10 cutoff for type 1 diabetics), with relevant cutoffs of 24,4 (10%), 27,2 (25%), 34,9 (75%), and 39,4 (90%). According to this source, one in four type 2 diabetics in Denmark is ‘severely obese‘, and more diabetics are obese than are not. It’s worth remembering that using these numbers to implicitly estimate the risk of type 2 diabetes associated with overweight is problematic, as especially some of the people in the lower end of the distribution are quite likely to have experienced weight loss post-diagnosis. For type 2 patients treated in general practice (n=15.736), the median BMI is 29,3 and the cutoffs are 23,7 (10%), 26,1 (25%), 33,1 (75%), and 37,4 (90%).
Distribution of Hba1c. The descriptive statistics also include data on the distribution of Hba1c values among some of the patients who have had this variable measured. I won’t go into the details here, except to note that the differences between type 1 and type 2 patients in terms of the Hba1c values achieved are smaller than I’d perhaps have expected; the median Hba1c among type 1s was estimated at 62, based on 16.442 individuals, whereas the corresponding number for type 2s was 59, based on 20.635 individuals. Curiously, a second data source finds a median Hba1c of only 48 for type 2 patients treated in general practice. The difference between this one and the type 1 median is definitely big enough to matter in terms of the risk of complications (it’s more questionable how big the effect of a jump from 59 to 62 is, especially considering measurement error and the fact that the type 1 distribution seems denser than the type 2 distribution, so that there aren’t that many more exceptionally high values in the type 1 dataset), but I wonder if this actually quite impressive level of metabolic control in general practice may not be due to biased reporting, with GPs doing well in terms of diabetes management also being more likely to report to the databases; it’s worth remembering that most patients treated in general practice are still not accounted for in these data-sets.
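For readers more used to the old percent scale: values like 62, 59 and 48 strongly suggest the report uses IFCC units (mmol/mol), which – assuming that is the case – can be converted to NGSP/DCCT percent with the standard master equation:

```python
def ifcc_to_ngsp(mmol_mol):
    """Hba1c: IFCC units (mmol/mol) -> NGSP/DCCT percent,
    via the standard master equation NGSP% = 0.0915 * IFCC + 2.15."""
    return 0.0915 * mmol_mol + 2.15

print(f"{ifcc_to_ngsp(62):.1f}%")  # type 1 median, 62 mmol/mol -> 7.8%
print(f"{ifcc_to_ngsp(59):.1f}%")  # type 2 median, 59 mmol/mol -> 7.5%
print(f"{ifcc_to_ngsp(48):.1f}%")  # general practice type 2 median -> 6.5%
```

On the percent scale, the gap between the general practice median (~6.5%) and the two other medians (~7.5-7.8%) looks more striking than in mmol/mol, which puts the reporting-bias worry above in perspective.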
Oral antidiabetics and insulin. In one sample of 20.635 type 2 patients, 69% took oral antidiabetics, and in another sample of 34.359 type 2 patients treated in general practice the number was 75%. 3% of type 1 diabetics in a sample of 16.442 individuals also took oral antidiabetics, which surprised me. In the first-mentioned sample of type 2 patients, 69% also took insulin (coincidentally the same percentage, but not the same individuals – this was not a reporting error), so there seems to be a substantial number of patients on both treatments. In the general practice sample the number of patients on insulin was much lower, as only 14% of type 2 patients were on insulin – again, concerns about reporting bias may play a role here, but even taking this number at face value and extrapolating out of sample, you reach the conclusion that the majority of patients on insulin are probably type 2 diabetics, as only roughly one patient in ten is type 1.
Antihypertensive treatment and treatment for hyperlipidemia: Although, as mentioned above, there seems to be less focus on hypertension in type 1 patients than in type 2 patients, it’s still the case that roughly half (48%) of all patients in the type 1 sample (n=16.442) were on antihypertensive treatment. In the first type 2 sample (n=20.635), 82% of patients were receiving treatment for hypertension, and this number was similar in the general practice sample (81%). The proportions of patients in treatment for hyperlipidemia are roughly similar (46% of type 1 patients, and 79% and 73% in the two type 2 samples, respectively).
Blood pressure. The median systolic blood pressure among type 1 diabetics (n=16.442) was 130, with the 75% cutoff intersecting the hypertension level (140) and 10% of patients having a systolic blood pressure above 151. These numbers are almost identical to those in the sample of type 2 patients treated in general practice; however, as mentioned earlier, this blood pressure level is achieved with a lower proportion of patients in treatment for hypertension. In the second sample of type 2 patients (n=20.635), the numbers were slightly higher (median: 133, 75% cutoff: 144, 90% cutoff: 158). The median diastolic blood pressure was 77 in the type 1 sample, with 75% and 90% cutoffs of 82 and 89; the data in the type 2 samples are almost identical.
Here’s my first post about the book. In this post I’ll continue my coverage where I left off. A few of the chapters covered below I did not think very highly of, but other parts of the coverage are about as good as you could expect (given problems such as limited data etc.), and some of the stuff I found quite interesting. As people will note in the coverage below, the book does address the religious dimension to some extent, though in my opinion far from to the extent that the variable deserves. An annoying aspect of the chapter on religion was, to me, that although the author of the chapter includes data which to me cannot but lead to some very obvious conclusions, he seems very careful to avoid drawing those conclusions explicitly. It’s understandable, but still annoying. For related reasons I also got annoyed at him for, presumably deliberately, completely disregarding what seems, in the context of his own coverage, to be a very important component of Huntington’s thesis: that conflict at the micro level very often seems to be between muslims and ‘the rest’. Here’s a relevant quote from Clash…, p. 255:
“ethnic conflicts and fault line wars have not been evenly distributed among the world’s civilizations. Major fault line fighting has occurred between Serbs and Croats in the former Yugoslavia and between Buddhists and Hindus in Sri Lanka, while less violent conflicts took place between non-Muslim groups in a few other places. The overwhelming majority of fault line conflicts, however, have taken place along the boundary looping across Eurasia and Africa that separates Muslims from non-Muslims. While at the macro or global level of world politics the primary clash of civilizations is between the West and the rest, at the micro or local level it is between Islam and the others.”
This point – that conflict at the local level, which seems to be the type of conflict you’re particularly interested in if you’re researching civil wars (as also argued in previous chapters in the coverage), according to Huntington seems to be very islam-centric – is completely overlooked (ignored?) in the handbook chapter. If you haven’t read Huntington and your only exposure to him is through the chapter in question, you’ll probably conclude that Huntington was wrong, because that seems to be the conclusion the author draws, arguing that other models are more convincing (I should add here that these other models do seem useful, at least in terms of providing (superficial) explanations; the point is just that I feel the author is misrepresenting Huntington, and I dislike this). Although there are parts of the coverage in that chapter where I feel it’s obvious the author and I do not agree, I should note that the fact that he talks about the data and the empirical research makes up for a lot of other stuff.
Anyway, on to the coverage – it’s perhaps worth noting, in light of the introductory remarks above, that the post has stuff on a lot of things besides religion, e.g. the role of natural resources, regime types, migration, and demographics.
“Elites seeking to end conflict must: (1) lead followers to endorse and support peaceful solutions; (2) contain spoilers and extremists and prevent them from derailing the process of peacemaking; and (3) forge coalitions with more moderate members of the rival ethnic group(s) […]. An important part of the two-level nature of the ethnic conflict is that each of the elites supporting the peace process be able to present themselves, and the resulting terms of the peace, as a “win” for their ethnic community. […] A strategy that a state may pursue to resolve ethnic conflict is to co-opt elites from the ethnic communities demanding change […]. By satisfying elites, it reduces the ability of the aggrieved ethnic community to mobilize. Such a process of co-option can also be used to strengthen ethnic moderates in order to undermine ethnic extremists. […] the co-opted elites need to be careful to be seen as still supporting ethnic demands or they may lose all credibility in their respective ethnic community. If this occurs, the likely outcome is that more extreme ethnic elites will be able to capture the ethnic community, possibly leading to greater violence.
It is important to note that “spoilers,” be they an individual or a small sub-group within an ethnic community, can potentially derail any peace process, even if the leaders and masses support peace (Stedman, 2001).”
“Three separate categories of international factors typically play into identity and ethnic conflict. The first is the presence of an ethnic community across state boundaries. Thus, a single community exists in more than one state and its demands become international. […] This division of an ethnic community can occur when a line is drawn geographically through a community […], when a line is drawn and a group moves into the new state […], or when a diaspora moves a large population from one state to another […] or when sub-groups of an ethnic community immigrate to the developed world […] When ethnic communities cross state boundaries, the potential for one state to support an ethnic community in the other state exists. […] There is also the potential for ethnic communities to send support to a conflict […] or to lobby their government to intervene […]. Ethnic groups may also form extra-state militias and cross international borders. Sometimes these rebel groups can be directly or indirectly sponsored by state governments, leading to a very complex situation […] A second set of possible international factors is non-ethnic international intervention. A powerful state may decide to intervene in an ethnic conflict for a variety of reasons, ranging from humanitarian support, to peacekeeping, to outright invasion […] The third and last factor is the commitment of non-governmental organizations (NGOs) or third-party mediators to a conflict. […] The record of international interventions in ethnic civil wars is quite mixed. There are many difficulties associated with international action [and] international groups cannot actually change the underlying root of the ethnic conflict (Lake and Rothchild, 1998; Kaufman, 1996).”
“A relatively simple way to think of conflict onset is to think that for a rebellion to occur two conditions need to be satisfactorily fulfilled: There must be a motivation and there must be an opportunity to rebel.3 First, the rebels need a motive. This can be negative – a grievance against the existing state of affairs – or positive – a desire to capture resource rents. Second, potential rebels need to be able to achieve their goal: The realization of their desires may be blocked by the lack of financial means. […] Work by Collier and Hoeffler (1998, 2004) was crucial in highlighting the economic motivation behind civil conflicts. […] Few conflicts, if any, can be characterized purely as “resource conflicts.” […] It is likely that few groups are solely motivated by resource looting, at least in the lower rank level. What is important is that valuable natural resources create opportunities for conflicts. To feed, clothe, and arm its members, a rebel group needs money. Unless the rebel leaders are able to raise sufficient funds, a conflict is unlikely to start no matter how severe the grievances […] As a consequence, feasibility of conflict – that is, valuable natural resources providing opportunity to engage in violent conflict – has emerged as a key to understanding the relation between valuable resources and conflict.”
“It is likely that some natural resources are more associated with conflict than others. Early studies on armed civil conflict used resource measures that aggregated different types of resources together. […] With regard to financing conflict start-up and warfare the most salient aspect is probably the ease with which a resource can be looted. Lootable resources can be extracted with simple methods by individuals or small groups, are easy to transport, and can be smuggled across borders with limited risks. Examples of this type of resources are alluvial gemstones and gold. By contrast, deep-shaft minerals, oil, and natural gas are less lootable and thus less likely sources of financing. […] Using comprehensive datasets on all armed civil conflicts in the world, natural resource production, and other relevant aspects such as political regime, economic performance, and ethnic composition, researchers have established that at least some high-value natural resources are related to higher risk of conflict onset. Especially salient in this respect seem to be oil and secondary diamonds […] The results regarding timber […] and cultivation of narcotics […] are inconclusive. […] [An] important conclusion is that natural resources should be considered individually and not lumped together. Diamonds provide an illustrative example: the geological form of the diamond deposit is related to its effect on conflict. Secondary diamonds – the more lootable form of two deposit types – makes conflict more likely, longer, and more severe. Primary diamonds on the other hand are generally not related to conflict.”
“Analysis on conflict duration and severity confirm that location is a salient factor: resources matter for duration and severity only when located in the region where the conflict is taking place […] That the location of natural resources matters has a clear and important implication for empirical conflict research: relying on country-level aggregates can lead to wrong conclusions about the role of natural resources in armed civil conflict. As a consequence of this, there has been effort to collect location-specific data on oil, gas, drug cultivation, and gemstones”.
“a number of prominent studies of ethnic conflict have suggested that when ethnic groups grow at different rates, this may lead to fears of an altered political balance, which in turn might cause political instability and violent conflict […]. There is ample anecdotal evidence for such a relationship [but unfortunately little quantitative research…]. The civil war in Lebanon, for example, has largely been attributed to a shift in the delicate ethnic balance in that state […]. Further, in the early 1990s, radical Serb leaders were agitating for the secession of “Serbian” areas in Bosnia-Herzegovina by instigating popular fears that Serbs would soon be outnumbered by a growing Muslim population heading for the establishment of a Shari’a state”.
“[One] part of the demography-conflict literature has explored the role of population movements. Most of this literature […] treats migration and refugee flows as a consequence of conflict rather than a potential cause. Some scholars, however, have noted that migration, and refugee migration in particular, can spur the spread of conflict both between and within states […]. Existing work suggests that environmentally induced migration can lead to conflict in receiving areas due to competition for scarce resources and economic opportunities, ethnic tensions when migrants are from different ethnic groups, and exacerbation of socioeconomic “fault lines” […] Salehyan and Gleditsch (2006) point to spill-over effects, in the sense that mass refugee migration might spur tensions in neighboring or receiving states by imposing an economic burden and causing political stability [sic]. […] Based on a statistical analysis of refugees from neighboring countries and civil war onset during the period 1951–2001, they find that countries that experience an influx of refugees from neighboring states are significantly more likely to experience wars themselves. […] While the youth bulge hypothesis [large groups of young males => higher risk of violence/war/etc.] in general is supported by empirical evidence, indicating that countries and areas with large youth cohorts are generally at a greater risk of low-intensity conflict, the causal pathways relating youth bulges to increased conflict propensity remain largely unexplored quantitatively. When it comes to the demographic factors which have so far received less attention in terms of systematic testing – skewed sex ratios, differential ethnic growth, migration, and urbanization – the evidence is somewhat mixed […] a clear challenge with regard to the study of demography and conflict pertains to data availability and reliability. 
[…] Countries that are undergoing armed conflict are precisely those for which we need data, but also those in which census-taking is hampered by violence.”
“Most research on the duration of civil war find that civil wars in democracies tend to be longer than other civil wars […] Research on conflict severity finds some evidence that democracies tend to see fewer battledeaths and are less likely to target civilians, suggesting that democratic institutions may induce some important forms of restraints in armed conflict […] Many researchers have found that democratization often precedes an increase in the risk of the onset of armed conflict. Hegre et al. (2001), for example, find that the risk of civil war onset is almost twice as high a year after a regime change as before, controlling for the initial level of democracy […] Many argue that democratic reforms come about when actors are unable to rule unilaterally and are forced to make concessions to an opposition […] The actual reforms to the political system we observe as democratization often do not suffice to reestablish an equilibrium between actors and the institutions that regulate their interactions; and in its absence, a violent power struggle can follow. Initial democratic reforms are often only partial, and may fail to satisfy the full demands of civil society and not suffice to reduce the relevant actors’ motivation to resort to violence […] However, there is clear evidence that the sequence matters and that the effect [the increased risk of civil war after democratization, US] is limited to the first election. […] civil wars […] tend to be settled more easily in states with prior experience of democracy […] By our count, […] 75 percent of all annual observations of countries with minor or major armed conflicts occur in non-democracies […] Democracies have an incidence of major armed conflict of only 1 percent, whereas nondemocracies have a frequency of 5.6 percent.”
“Since the Iranian revolution in the late 1970s, religious conflicts and the rise of international terror organizations have made it difficult to ignore the facts that religious factors can contribute to conflict and that religious actors can cause or participate in domestic conflicts. Despite this, comprehensive studies of religion and domestic conflict remain relatively rare. While the reasons for this rarity are complex there are two that stand out. First, for much of the twentieth century the dominant theory in the field was secularization theory, which predicted that religion would become irrelevant and perhaps extinct in modern times. While not everyone agreed with this extreme viewpoint, there was a consensus that religious influences on politics and conflict were a waning concern. […] This theory was dominant in sociology for much of the twentieth century and effectively dominated political science, under the title of modernization theory, for the same period. […] Today supporters of secularization theory are clearly in the minority. However, one of their legacies has been that research on religion and conflict is a relatively new field. […] Second, as recently as 2006, Brian Grim and Roger Finke lamented that “religion receives little attention in international quantitative studies. Including religion in cross-national studies requires data, and high-quality data are in short supply” […] availability of the necessary data to engage in quantitative research on religion and civil wars is a relatively recent development.”
“[Some] studies [have] found that conflicts involving actors making religious demands – such as demanding a religious state or a significant increase in religious legislation – were less likely to be resolved with negotiated settlements; a negotiated settlement is possible if the settlement focused on the non-religious aspects of the conflict […] One study of terrorism found that terror groups which espouse religious ideologies tend to be more violent (Henne, 2012). […] The clear majority of quantitative studies of religious conflict focus solely on inter-religious conflicts. Most of them find religious identity to influence the extent of conflict […] but there are some studies which dissent from this finding”.
“Terror is most often selected by groups that (1) have failed to achieve their goals through peaceful means, (2) are willing to use violence to achieve their goals, and (3) do not have the means for higher levels of violence.”
“the PITF dataset provides an accounting of the number of domestic conflicts that occurred in any given year between 1960 and 2009. […] Between 1960 and 2009 the modified dataset includes 817 years of ethnic war, 266 years of genocides/politicides, and 477 years of revolutionary wars. […] Cases were identified as religious or not religious based on the following categorization:
1 Not Religious.
2 Religious Identity Conflict: The two groups involved in the conflict belong to different religions or different denominations of the same religion.
3 Religious Wars: The two sides of the conflict belong to the same religion but the description of the conflict provided by the PITF project identifies religion as being an issue in the conflict. This typically includes challenges by religious fundamentalists to more secular states. […]
The results show that both numerically and as a proportion of all conflict, religious state failures (which include both religious identity conflicts and religious wars) began increasing in the mid-1970s. […] As a proportion of all conflict, religious state failures continued to increase and became a majority of all state failures in 2002. From 2002 onward, religious state failures were between 55 percent and 62 percent of all state failures in any given year.”
“Between 2002 and 2009, eight of 12 new state failures were religious. All but one of the new religious state failures were ongoing as of 2009. These include:
• 2002: A rebellion in the Muslim north of the Ivory Coast (ended in 2007)
• 2003: The beginning of the Sunni–Shia violent conflict in Iraq (ongoing)
• 2003: The resumption of the ethnic war in the Sudan [97% muslims, US] (ongoing)
• 2004: Muslim militants challenged Pakistan’s government in South and North Waziristan. This has been followed by many similar attacks (ongoing)
• 2004: Outbreak of violence by Muslims in southern Thailand (ongoing)
• 2004: In Yemen [99% muslims, US], followers of dissident cleric Husain Badr al-Din al-Huthi create a stronghold in Saada. Al-Huthi was killed in September 2004, but serious fighting begins again in early 2005 (ongoing)
• 2007: Ethiopia’s invasion of southern Somalia causes a backlash in the Muslim (ethnic-Somali) Ogaden region (ongoing)
• 2008: Islamist militants in the eastern Trans-Caucasus region of Russia bordering on Georgia (Chechnya, Dagestan, and Ingushetia) reignited their violent conflict against Russia (ongoing)” [my bold]
“There are few additional studies which engage in this type of longitudinal analysis. Perhaps the most comprehensive of such studies is presented in Toft et al.’s (2011) book God’s Century based on data collected by Toft. They found that religious conflicts – defined as conflicts with a religious content – rose from 19 percent of all civil wars in the 1940s to about half of civil wars during the first decade of the twenty-first century. Of these religious conflicts, 82 percent involved Muslims. This analysis includes only 135 civil wars during this period. The lower number is due to a more restrictive definition of civil war which includes at least 1,000 battle deaths. This demonstrates that the findings presented above also hold when looking at the most violent of civil wars.” [my bold]