Natural Conflict Resolution (I)

“During the past two decades there has been a sharp increase in interest in cooperation, peace, and conflict resolution in disparate disciplines, such as anthropology, social and developmental psychology, ethology, political sciences, and legal studies. We have closely followed this development in animal behavior and directly participated in it with our work on nonhuman primates. In the past few years, we have had an increasing number of exchanges with colleagues from different disciplines and realized the common bases underlying these heterogeneous research efforts. This volume aims to bring together the various approaches to the study of conflict management and to emphasize the similarities among them. […] we combine in one volume 36 original contributions based on the efforts of 52 authors and coauthors. […] Each contribution is a review of a particular aspect of the vast topic of conflict management: some contributions summarize years of research, whereas others present recent developments. Each contribution is written to stand on its own, but it is also a part of the whole. […] The result is an interdisciplinary volume that provides an overview of progress on many aspects of natural conflict resolution.”

As already mentioned, I’ve been reading this book. It’s full of good stuff, and I really like it so far. There’s also a lot in it about the behaviours of apes and monkeys (there are e.g. chapters with titles like ‘Covariation of Conflict Management Patterns across Macaque Species’ and ‘The Peacefulness of Cooperatively Breeding Primates’), but below I’ve mostly limited myself to studies on humans, as the first chapters of the book mostly deal with these kinds of things. I’ll probably talk about some of the animal study results later on. When reading the last of the chapters from which I quote below, I found myself thinking that ‘the stuff in this chapter is the kind of stuff all parents should really know about and be aware of…’ I’m sure almost none of them do; they just do what they do, and many of them are probably doing perfectly alright anyway, or at least ‘good enough’.

“Whether the units are people, animals, groups, or nations, as soon as several units together try to accomplish something, there is a need to overcome competition and set aside differences. The problem of a harmonization of goals and reduction of competition for the sake of larger objectives is universal, and the processes that serve to accomplish this may be universal too. These dynamics are present to different degrees among the employees within a corporation, the members of a small band of hunter-gatherers, or the individuals in a lion pride. In all cases, mechanisms for the regulation of conflict should be in place. […] the basic dilemma facing competitors is that they sometimes cannot win a fight without losing a friend and supporter. The same principle underlying all Darwinian theory, that individuals pursue their own reproductive interests, thus automatically leads one to assume that animals that depend on cooperation should either avoid open conflict or evolve ways to control the social damage caused by open conflict […] Conflict resolution, like conflict and cooperation, appears to be a natural phenomenon. We should then find similarities in its expression and procedures across cultures and species.”

And we do – an example:

“even though primate species vary greatly in conciliatory tendency, in all species reconciliation is most common after fights between partners with close social ties even if we control for the increased level of interaction among these individuals […] most human conflict concerns familiar individuals. […] not all psychologists and social scientists try to divorce aggressive conflict from other aspects of social life or view it as necessarily destructive and antisocial […] The conflict resolution perspective […] shifts attention from aggression as the expression of an internal state to aggression as the product of a conflict of interest. It regards aggressive behavior as the product of social decision making: it is one of several ways in which conflicts between individuals or groups can be resolved. This framework will be referred to here as the Relational Model […], because it concerns the way aggressive behavior functions within social relationships. […] Once aggressive conflict is viewed as an instrument to negotiate the terms of relationships—an instrument made possible by powerful constraints on the expression of aggression as well as the possibility of social repair—the definition of what is a close or distant relationship changes dramatically. Instead of classifying relationships simply in terms of rates of affiliative and aggressive behavior, the dynamic between the two becomes the critical factor. Relationships marked by high aggression rates may actually be quite close and cooperative. […] Paradoxically, the better developed mechanisms of conflict resolution are, the less reluctant individuals will be to engage in open conflict [as the] ability to maintain working relationships despite conflict, and to undo damage to relationships, makes room for aggression as an instrument of negotiation.”

“Successful social development reconciles individuation with social integration and requires the acquisition of conflict management skills that afford both. The interplay between individual and social motives is already apparent in conflict between toddlers. For instance, Hay & Ross (1982) found that 21-month-old winners of toys would often abandon the toy they had just taken from a peer in order to engage in a new dispute over another toy held by the former opponent. Such tendencies were common even when the toy held by the other was an exact copy of the toy originally won. Earlier, Eckerman et al. (1979) showed that to one-year-old children the attractiveness of a toy increased after another person touched it. Thus toddler conflict may serve to test the “social waters” and may be instrumental in the acquisition of knowledge about social relations as well as ownership.”

“Developmental research commonly focuses on distinct episodes that are separated in time and that can be broken down conceptually into three sequential phases: (1) instigation; (2) termination; and (3) immediate outcome (or resolution). The emphasis in this chapter is on conflict termination and immediate outcome, and for the purpose of our review we combine these under the heading of conflict management. Conflict management can be unilateral or bilateral. Unilateral conflict management is characterized by opportunism and lack of consideration for the opponent’s perspectives and wishes, as well as by subordination. Conversely, bilateral conflict management is characterized by mutual perspective taking and often by dovetailing of opposing goals and expectations […] Immediate outcome is commonly classified in two main categories: distributive and variable conflict outcomes. A distributive outcome includes situations during which one child’s gain is the other child’s loss. A variable outcome refers to situations in which both win or benefit from the resolution […] A variant of this latter category is the integrative outcome […] In this situation a shared interest in social interaction provides the basis for a mutually beneficial resolution.”

“Research on parent-child conflict during the first decade of life most often has focused on emotional outbursts, such as temper tantrums […] and coercive behavior of children toward other family members as evidence of conflict. The frequency of such behavior begins to decline during early childhood and continues to do so during middle childhood […] The frequency of episodes during which parents discipline their children also decreases between the ages of three and nine […] research on conflict management in this period has focused on the relative effectiveness of various parental strategies for gaining compliance and managing negative behaviors. As a result there is little descriptive information about the characteristics of conflict between parents and children and the role of each in conflict management. With young children, parents typically employ distraction and physical assertion for preventing harm and gaining compliance. In middle childhood, however, parents report less frequent physical punishment and increasing use of techniques such as deprivation of privileges, appeals to children’s self-esteem or sense of humor, arousal of children’s sense of guilt, and reminders that children are responsible for what happens to them […] Compared with preschool children, six- to twelve-year-olds are more likely to sulk, become depressed, avoid parents, or engage in passive noncooperation with their parents […] children are increasingly likely to attribute conflict with parents to the inadequacy of parental helping behaviors and disappointment in the frequency of parent-child interactions. […] Naturalistic observations, experimental analogues, and self-reports with both peers […] and family members […] show clearly that adolescents’ conflicts are most commonly terminated through power assertion and disengagement, rather than through negotiation. 
Later in adolescence preference for power-assertive techniques declines, thus making more complex bilateral techniques, such as negotiation, relatively more common. […] The importance of satisfying resolutions […] is indicated by repeated findings that bilateral engagement in terminating conflict, rather than the occurrence of conflict, is a marker of adaptive, well-functioning relationships […] Children’s concepts of the basis for parental authority also change with age. Whereas preschoolers view parental authority as resting on the power to punish or reward, children in early middle childhood increasingly believe parental authority derives from all the things that parents do for them. After about age eight, parents’ expert knowledge and skill are also seen as reasons to submit to their authority”

“Conflict is universally embedded in sibling relationships. Since most children have siblings, this means that sibling conflict is widely experienced. These conflicts, however, are both unilateral and bilateral in terms of the management tactics used. The use of unilateral tactics such as coercion has been found to be negatively correlated with cooperation between siblings. Unilateral tactics also negatively correlate with helping, sharing, and sympathy expressed by older siblings toward younger ones […] When mothers or fathers favor one sibling (or children perceive them that way), greater conflict and more coercive relations ensue between the children. Moreover, psychosocial adjustment is poorer among children who perceive themselves to be less positively treated by their parents than their siblings […] Overall, cross-relational continuities in conflict management suggest that families constitute social systems rather than separate dyads.”

“Laursen et al. (1998b) used a meta-analytic approach to take a general look at developmental differences in peer conflict management […] The meta-analysis showed that, overall, peers managed conflict more often with negotiation than with coercion or disengagement. Significant developmental contrasts emerged, however. Children (age 2–10) commonly employed coercion, whereas adolescents (age 11–18) frequently employed negotiation as well as coercion. Conversely, young adults (age 19–25) more often resorted to either negotiation or disengagement. […] The meta-analysis […] did not consider the aftermath of peer conflict. Recent cross-cultural findings showed that young children transformed a significant percentage of distributive outcomes into integrative resolutions after a “cooling-off” period of a few minutes […] When such post-conflict reconciliation […] is considered, young children appear considerably more constructive in their approach to peer conflict than one would infer from the aggregated findings of Laursen et al.”

“Interestingly, young children tend to have inflated views of the extent to which they are accepted by their peers […], and they commonly overestimate their own rank—and the rank of liked peers—in the dominance hierarchy […]. Several studies have established a link between conflict management and sociometric status. […] conciliatory strategies [have been] associated with popularity and coercive strategies with rejection by peers […] Commonly, when groups of children first meet (e.g., early in the new school year), conflicts, and assertive interactions not resulting in conflict, occur relatively frequently and contribute to the eventual establishment of a dominance hierarchy […] Once dominance relations are established, rates of conflict and aggression decrease […] first impressions seem to matter in peer groups. Ladd et al. (1988), for instance, found that preschoolers who frequently argued with their peers early in the year were likely to be rejected throughout the entire year. In fact, children who argued early in the year but changed their ways during the year were still rejected later in the year. In a similar vein Denham & Holt (1993) found that peer reputation established early in the year was a significantly stronger predictor of being liked later in the year than actual social behavior.”

“friendships emerge on the basis of shared interests and attitudes as well as the shared understanding that continued interaction between them is in their mutual interest. Observational studies show that, first, agreements must occur over time within a context of shared interests in order for acquaintances to become friends and, second, certain conflicts and certain modes of conflict management actually facilitate friendship formation. For example, the use of “soft” modes of conflict management (e.g., “weak demands” followed by agreement) are associated with “hitting it off” […] friends are more active in their search for solutions, are more task-oriented, and make more active use of conflict to obtain solutions than nonfriends. Overall, some two dozen published investigations contain data comparing friends and nonfriends in terms of conflict management. […] Meta-analyses based on the entire literature with children ranging from preschool age through preadolescence […] confirm the pattern we describe: conflict frequency does not generally differ between friends and nonfriends, but modes of conflict management do. […] Successive agreement/disagreement episodes are instrumental in friendship formation among peers.”

“In families, third parties often contribute to both the incidence and resolution of conflict. Some of these effects are indirect, in that the two principal parties to the conflict behave differently in the presence of the third party than they would under other conditions. For example, mother-son dyads have been found to manifest greater engagement, security, and consistency when the father was present than when mother and son were alone (Gjerde 1986). These differences suggest that in intact families fathers’ presence may indirectly facilitate integrative resolutions to conflict. […] We still know relatively little about the effects of peer intervention on resolution. Most of the available evidence associates peer intervention with distributive resolutions […] even children with a variety of disabilities are often able to manage their conflicts without adult intervention and […] adult mediation strategies should be aimed at helping children manage their conflicts rather than taking over conflict management from them (cf. Perlman & Ross 1997).”


May 30, 2014 Posted by | Anthropology, Biology, Books, Evolutionary biology, Psychology

The Origin and Evolution of Cultures (III)

I have read almost three-fourths of the book by now. In this post I have quoted extensively from chapter 14, because this chapter is somewhat different from most of the other chapters in the book: it has no math, but it has a lot of observations which relate to the work covered in previous chapters, and it’s much easier to blog than most of the stuff in this book.

I don’t always agree with the authors about the details and about the conclusions they draw, but this book is consistently interesting and provides high-quality coverage of the topic in question. Unless things go seriously downhill during the last part of the book, I’ll give it five stars on goodreads.

I wrote some comments and personal observations along the way while writing this post, many of which are not closely related to the book coverage. I have posted them below the quotes from the book, in the second half of the post proper. I had actually decided earlier on not to include that material in this post at all because I didn’t like what I’d written, but after making a few revisions I changed my mind. I may change it again. Either way, writing about these things, rather than just reading about them, is a great way to force yourself to think more carefully about them.

“Evolutionary explanations are recursive. Individual behavior results from an interaction of inherited attributes and environmental contingencies. In most species, genes are the main inherited attributes, but inherited cultural information is also important for humans. Individuals with different inherited attributes may develop different behaviors in the same environment. Every generation, evolutionary processes — natural selection is the prototype — impose environmental effects on individuals as they live their lives. Cumulated over the whole population, these effects change the pool of inherited information, so that the inherited attributes of individuals in the next generation differ, usually subtly, from the attributes in the previous generation. Over evolutionary time, a lineage cycles through the recursive pattern of causal processes once per generation […] Note that in a recursive model, we explain individual behavior and population-level processes in the same model. Individual behavior depends, in any given generation, on the gene pool from which inherited attributes are sampled. The pool of inherited attributes depends in turn upon what happens to a population of individuals as they express those attributes. Evolutionary biologists have a long list of processes that change the gene frequencies, including natural selection, mutation, and genetic drift. However, no organism experiences natural selection. Organisms either live or die, or reproduce or fail to reproduce, for concrete reasons particular to the local environment and the organism’s own particular attributes. If, in a particular environment, some types of individuals do better than others, and if this variation has a heritable basis, then we label as “natural selection” the resulting changes in gene frequencies of populations. 
We use abstract categories like selection to describe such concrete events because we wish to build up — concrete case by concrete case — some useful generalizations about evolutionary process. Few would argue that evolutionary biology is the poorer for investing effort in this generalizing project. Although some of the processes that lead to cultural change are very different than those that lead to genetic change, the logic of the two evolutionary problems is very similar.”

“Evolutionary theory is always multi-level […] evolutionary theories are systemic, integrating every part of biology. In principle, everything that goes into causing change through time plays its proper part in the theory. […] In theorizing about human evolution, we must include processes affecting culture in our list of evolutionary processes alongside those that affect genes. Culture is a system of inheritance. We acquire behavior by imitating other individuals much as we get our genes from our parents. A fancy capacity for high-fidelity imitation is one of the most important derived characters distinguishing us from our primate relatives […] We are also an unusually docile animal (Simon 1990) and unusually sensitive to expressions of approval and disapproval by parents and others (Baum 1994). Thus parents, teachers, and peers can rapidly, easily, and accurately shape our behavior compared to training other animals using more expensive material rewards and punishments. […] once children acquire language, parents and others can communicate new ideas quite economically. Our own contribution to the study of human behavior is a series of mathematical models in the Darwinian style of what we take to be the fundamental processes of cultural evolution”

“We make [the] claim that a dual gene-culture theory of some kind will be necessary to account for the evolution of human cooperative institutions. Understanding the evolution of contemporary human cooperation requires attention to two different time scales: First, a long period of evolution in the Pleistocene shaped the innate “social instincts” that underpin modern human behavior. During this period, much genetic change occurred as a result of humans living in groups with social institutions heavily influenced by culture, including cultural group selection […] On this timescale genes and culture coevolve, and cultural evolution is plausibly a leading rather than lagging partner in this process. We sometimes refer to the process as “culture-gene coevolution.” Then, only about 10,000 years ago, the origins of agricultural subsistence systems laid the economic basis for revolutionary changes in the scale of social systems. The evidence suggests that genetic changes in the social instincts over the last 10,000 years are insignificant. […] Our hypothesis is premised on the idea that selection between groups plays a much more important role in shaping culturally transmitted variation than it does in shaping genetic variation. As a result, humans have lived in social environments characterized by high levels of cooperation for as long as culture has played an important role in human development. […] We believe that the human capacity to live in larger scale forms of tribal social organization evolved through a coevolutionary ratchet generated by the interaction of genes and culture. Rudimentary cooperative institutions favored genotypes that were better able to live in more cooperative groups. Those individuals best able to avoid punishment and acquire the locally-relevant norms were more likely to survive. At first, such populations would have been only slightly more cooperative than typical nonhuman primates.
However, genetic changes, leading to moral emotions like shame, and a capacity to learn and internalize local practices, would allow the cultural evolution of more sophisticated institutions that in turn enlarged the scale of cooperation. These successive rounds of coevolutionary change continued until eventually people were equipped with capacities for cooperation with distantly related people, emotional attachments to symbolically marked groups, and a willingness to punish others for transgression of group rules.”

“Upper Paleolithic societies were the culmination of a long period of coevolutionary increases in a tendency toward tribal social life. We suppose that the resulting “tribal instincts” are something like principles in the Chomskian linguists’ “principles and parameters” view of language […] The innate principles furnish people with basic predispositions, emotional capacities, and social dispositions that are implemented in practice through highly variable cultural institutions, the parameters. People are innately prepared to act as members of tribes, but culture tells us how to recognize who belongs to our tribes, what schedules of aid, praise, and punishment are due to tribal fellows, and how the tribe is to deal with other tribes — allies, enemies, and clients. […] Contemporary human societies differ drastically from the societies in which our social instincts evolved. Pleistocene hunter-gatherer societies were likely comparatively small, egalitarian, and lacking in powerful institutionalized leadership. […] To evolve large-scale, complex social systems, cultural evolutionary processes, driven by cultural group selection, take advantage of whatever support these instincts offer. […] cultural evolution must cope with a psychology evolved for life in quite different sorts of societies. Appropriate larger scale institutions must regulate the constant pressure from smaller groups (coalitions, cabals, cliques) to subvert the large-group favoring rules. To do this cultural evolution often makes use of “work arounds” — mobilizing tribal instincts for new purposes. For example, large national and international (e.g. great religions) institutions develop ideologies of symbolically marked inclusion that often fairly successfully engage the tribal instincts on a much larger scale.
Military and religious organizations (e.g., Catholic Church), for example, dress recruits in identical clothing (and haircuts) loaded with symbolic markings, and then subdivide them into small groups with whom they eat and engage in long-term repeated interaction. Such work-arounds are often awkward compromises […] In military and religious organizations, for example, excessive within-group loyalty often subverts higher-level goals […] Complex societies are, in effect, grand natural social-psychological experiments that stringently test the limits of our innate dispositions to cooperate.”

“Elements of coercive dominance are no doubt necessary to make complex societies work. Tribally legitimated self-help violence is a limited and expensive means of altruistic coercion. Complex human societies have to supplement the moralistic solidarity of tribal societies with formal police institutions. […] A common method of deepening and strengthening the hierarchy of command and control in complex societies is to construct a nested hierarchy of offices, using various mixtures of ascription and achievement principles to staff the offices. Each level of the hierarchy replicates the structure of a hunting and gathering band. A leader at any level interacts mainly with a few near-equals at the next level down in the system […] The hierarchical nesting of social units in complex societies gives rise to appreciable inefficiencies […] Leaders in complex societies must convey orders downward, not just seek consensus among their comrades. Devolving substantial leadership responsibility to sub-leaders far down the chain of command is necessary to create small-scale leaders with face-to-face legitimacy. However, it potentially generates great friction if lower-level leaders either come to have different objectives than the upper leadership or are seen by followers as equally helpless pawns of remote leaders. Stratification often creates rigid boundaries so that natural leaders are denied promotion above a certain level, resulting in inefficient use of human resources and a fertile source of resentment to fuel social discontent. On the other hand, failure to properly articulate tribal scale units with more inclusive institutions is often highly pathological. Tribal societies often must live with chronic insecurity due to intertribal conflicts.”

“The high population density, division of labor, and improved communication made possible by the innovations of complex societies increased the scope for elaborating symbolic systems. The development of monumental architecture to serve mass ritual performances is one of the oldest archaeological markers of emerging complexity. Usually an established church or less formal ideological umbrella supports a complex society’s institutions. At the same time, complex societies extensively exploit the symbolic ingroup instinct to delimit a quite diverse array of culturally defined subgroups, within which a good deal of cooperation is routinely achieved. […] Many problems and conflicts revolve around symbolically marked groups in complex societies. Official dogmas often stultify desirable innovations and lead to bitter conflicts with heretics. Marked subgroups often have enough tribal cohesion to organize at the expense of the larger social system. […] Wherever groups of people interact routinely, they are liable to develop a tribal ethos. In stratified societies, powerful groups readily evolve self-justifying ideologies that buttress treatment of subordinate groups ranging from neglectful to atrocious.”

“Many individuals in modern societies feel themselves part of culturally labeled tribal-scale groups, such as local political party organizations, that have influence on the remotest leaders. In older complex societies, village councils, local notables, tribal chieftains, or religious leaders often hold courts open to humble petitioners. These local leaders in turn represent their communities to higher authorities. To obtain low-cost compliance with management decisions, ruling elites have to convince citizens that these decisions are in the interests of the larger community. As long as most individuals trust that existing institutions are reasonably legitimate and that any felt needs for reform are achievable by means of ordinary political activities, there is considerable scope for large scale collective social action. However, legitimate institutions, and trust of them, are the result of an evolutionary history and are neither easy to manage nor engineer. […] Without trust in institutions, conflict replaces cooperation along fault lines where trust breaks down. Empirically, the limits of the trusting community define the universe of easy cooperation […] At worst, trust does not extend outside family […] and potential for cooperation on a larger scale is almost entirely foregone.”

If I were the kind of person who was interested in political stuff, I might have decided to talk a bit about how the above remarks may relate to how to set up optimal policies aimed at maintaining cooperation and trust (perhaps subject to a few relevant constraints). Some ideas spring to mind, perhaps in relation to immigration policy in particular. But I’m not that kind of person, so I won’t talk about that here.

I figured it might be a good idea to cover some ‘related’ topics here, as I can’t be sure how much the people reading along here have read about this kind of stuff or what kind of background they have. Many of the remarks below are only tangentially related to the coverage above, but they’re arguably important if you want ‘a bigger picture’.

One thing to note is that in the context of this part:

“only about 10,000 years ago, the origins of agricultural subsistence systems laid the economic basis for revolutionary changes in the scale of social systems. The evidence suggests that genetic changes in the social instincts over the last 10,000 years are insignificant.”

…there are at least two important points to mention. One is that the 10,000-year figure is ‘just a number’, and that there is no ‘one true number’ here – the number depends on geography and a lot of other things. The origins of agriculture are still somewhat murky, though we do know a lot. Archaeologists face many problems when analyzing these sorts of things; for instance, the date of the first observed/established case of agricultural adoption in a given locality may not correlate well with the date of first actual adoption, because evidence that has already evaded attention for thousands of years tends to keep being overlooked. Another problem is that the switch was often gradual, took a lot of time, and involved some trial and error. A related point is that switches in food procurement strategies likely happened at local levels in the far past – in some areas of the world it seems likely that a strategy of mostly relying on a few select crops (‘agriculture’) in ‘good periods’ (perhaps lasting hundreds of years) and then relying on a more diversified set of crops as well as other complementary food sources (‘hunter-gathering’) in ‘bad periods’ may have been superior to a strategy of relying exclusively on one or the other, especially around the ‘border areas’ where climatic factors made it almost impossible to make agriculture work at all. It’s incidentally worth noting that “no single plant can provide the mix of amino acids that primates need for growth, so primates must either eat a variety of different plants to achieve an adequate amino-acid balance, or have a regular supplement of animal foods in their diet”, so the ‘rely-on-only-one-plant agricultural model and nothing else’ is not workable in practice and never was (quote from Sponheimer et al., p. 361.
Less extreme versions of dependence on a single crop are feasible if you can get the other stuff elsewhere, but they are highly risky – ask e.g. the Irish. Despite how far we’ve come in other areas, we humans incidentally rely on quite few crops to supply a substantial part of the calories we need, making us somewhat vulnerable; for example, more than one-fifth of all calories consumed by humans are derived from rice). Yet another problem is that ‘agriculture’ isn’t just ‘agriculture’ – people got better at this stuff over time, and things like intensification and yield improvements were important, yet they are often difficult or frankly impossible to estimate, especially at the intensive margin. This means that ‘we think agriculture started here in 8900 BC’ may in some contexts not mean quite what you could be tempted to think it means.

But the above, and many related, issues aside, the main problem with a statement including words like ‘about 10,000 years ago’ is of course that the variation in when people living in different places ‘adopted agriculture’ (whatever that may mean) is astonishingly huge. Here are two illustrative passages from Scarre et al. – exhibit 1: “The site of Ohalo II in northern Israel, dated around 20,000 BC, provides a remarkable snapshot of lifeways in the Levant during the Last Glacial Maximum […] At Ohalo II […] we have evidence for the exploitation of a broad spectrum of plants and animals, the extensive use of storable plant foods, and the year-round occupation of a settlement. The starch traces found on the surfaces of grinding stones confirm that they were indeed used in the preparation of hard-seeded plant foods.” Ohalo II is a hunter-gatherer site, but a sedentary one, inhabited by people who were doing many, though not all, of the things we usually associate only with traditional farmers. This illustrates how these sorts of categorizations sometimes get complicated if you’re not very careful when you define your terms (and sometimes even if you are) – and perhaps that it makes sense to be cautious about which mental models of our hunter-gatherer forebears we apply.
Either way, the more ‘proper’ farming communities which started to pop up during the early Neolithic were themselves likely at least in part ‘the result’ of gradual changes that the humans who came before them had had on their surrounding environments (especially local flora and fauna – in terms of the latter, probably especially our impact on local megafauna). The processes which eventually led us to agriculture probably took a lot of time, though just how far into the past you need to look to get the full picture is an open question, and will probably remain so, as the evidence available to us is sparse (what impact did human activities taking place during the late Pleistocene have on the range and distribution of potential domesticables at the beginning of the Holocene? Such questions do not seem easy to answer, and they’re part of the story). Although agriculture in some areas of the world by now has a ‘shelf life’ of 10,000 years or more, in other areas of the world that ‘shelf life’ is much, much shorter – exhibit 2: “no agricultural colonization of Australia, the last completely hunter-gatherer continent to survive until European contact, ever occurred.”

It is true that agriculture provided the economic foundation for the scale of social complexity which humans have achieved, but an important caveat is that the evolution of ‘(relatively) advanced cultural and societal complexity’ in prehistoric times was not always contingent upon agriculture. Agriculture often did lead to societal complexity, but humans could also rise in societal complexity and experience significant cultural evolution without it – there were sedentary populations of some size and organizational complexity living in communities without what we usually conceptualize as agriculture (viz. farming or pastoralism), e.g. in areas well-endowed with natural resources such as those near major lakes or coasts full of fish. To take one example (again from Scarre et al.), “agriculture was not a necessary prerequisite for the emergence of chiefdoms in the Southeast [North America]” – another example would be the “longstanding “Maritime Hypothesis” […] which proposes […] that maritime resources sustained population growth and the rise of sedentary earthwork-building communities” along the Pacific coast of South America during prehistoric times. There were mound builders in pre-agricultural North America as well, see e.g. this and this.

When thinking about human societies which existed during transitional phases – which may mean many different time periods, depending on which part of the world you’re looking at – where people were starting to use agriculture but perhaps hadn’t really gotten the hang of it yet, it’s worth remembering that hunter-gatherer groups occasionally simply outcompeted farmers at the local level, because some places just plain aren’t very good places to engage in agriculture. The ‘cultural victory’ of agriculturalists was by no means universal or a given at the local level, even if it’s very easy to convince yourself otherwise if you don’t know much about these aspects of human development. Sometimes new (‘cultural’) inventions, like irrigation systems, could turn the tide in situations and geographic localities where agricultural food procurement strategies were at a disadvantage, but occasionally even that wasn’t enough.

Food production practices are/were key to societal complexity: in order to get complexity, you need to produce enough ‘excess food’ for some people to be free to engage in non-food-procuring activities. A related point is that how to actually categorize and delineate various prehistoric food production practices is not always obvious. Food production undertaken by humans can take multiple forms, and sometimes an ‘agriculture’ vs. ‘hunter-gatherers’ dichotomous conceptualization may make you overlook important details due to ‘misclassification’ or similar problems. To take a couple of examples: some prehistoric sedentary societies based on fishing were, as mentioned, more or less stable food-producing societies, and on a different note, the cultural practices of (mobile) pastoralist societies often shared social dimensions with hunter-gatherer societies that e.g. sedentary rice-farming societies did not. Also worth keeping in mind in this context is that present-day hunter-gatherer societies still in existence often do not reflect well the cultural aspects of hunter-gatherer societies which existed in the far past, meaning that you need to be very careful about which inferences you make and what you base them on.

An aspect really important to keep in mind when thinking about the Holocene ‘post-agricultural period’ of human development is that the cultural development which took place in agricultural societies did not take place in a vacuum. Agriculturalists interacted with hunter-gatherers, farmers interacted with pastoralists, and various geographic (mountains, seas) and biological constraints (malaria, horses) shaped human development in all kinds of ways. Boyd and Richerson do talk about this in the book, but I figured I should as well in this post. One thing to note is that in some areas agricultural practices spread much faster than in others for reasons having nothing to do with ‘the type’ of people who were doing these things – for example because of physical geography or other environmental constraints, or the lack of such – and both the speed and manner of adoption likely had important (and varied) cultural ramifications. These things had genetic ramifications as well; areas where agricultural spread was particularly easy saw population growth that other areas did not. Climate and climatic variation post-adoption naturally had important cultural ramifications too – for example, looking over the (pre)history of pre-colonial South America, it’s obvious that climate was a key parameter with a huge impact on ‘the rise and fall of civilizations’.

There were multiple ways for agriculture to spread, from pure displacement to pure local adoption, as well as any combination in between, and how it proceeded varied with geography and probably a lot of other stuff as well. In some places the optimal type of agriculture varied over time, which didn’t just mean that it made sense for farming societies to diversify and rely on more than one crop with different responses to e.g. drought, but also that climate change sometimes caused people to switch away from farming and towards pastoralism in bad periods – a good example of the latter is Peru during the Late Intermediate Period, where it is clear that “intensification of pastoralism was an important response to drought” (see Moseley, p. 246). Aspects such as climate have certainly had various important cultural as well as genetic impacts around the globe, e.g. on cultural transmission patterns at the regional level even during the ‘post-agricultural’ period. I mentioned interaction patterns – themselves a result of cultural dynamics, but also a driver of them – between sedentary farming societies and more mobile hunter-gatherers or pastoralists above, and perhaps I should say a little more about this kind of stuff, because people may not be clear on precisely what I’m getting at there. It seems clear that in some areas division-of-labour dynamics played an important role in explaining and shaping cultural evolution; for a great account of these aspects of cultural dynamics and evolution in mountainous terrains and their surrounding areas, I again refer to Moseley’s account. Inhabitants of sedentary farming societies didn’t move around very much, so for things located far away from them they’d often be willing to trade with more mobile human groupings.
From one point of view you have a type of (modified) core-periphery model, where the people from the core produced ‘excess’ food – and/or things which the people living in the core area who did not have to work on food procurement could come up with – which they then traded with people living on the periphery for other stuff, e.g. various natural resources located elsewhere (metals and wood are classic examples). People looking at these things today without knowing what such interaction patterns looked like may, I think, have a tendency to think of mobile hunter-gatherer groups as the morons who were left behind in this story and of the pastoralists as more ‘primitive’ than the farmers, but I don’t think that’s necessarily how it was – sometimes quite neat systems of exchange benefited both groups and were arguably by themselves important drivers of ‘cultural progress’, in the sense that they enabled and facilitated increased social complexity in the societies engaged in such systems. Of course peaceful interaction patterns were not the only ones which were explored.

May 28, 2014 Posted by | Anthropology, Archaeology, Books, culture, Evolutionary biology, Religion | Leave a comment


For some time I’ve wanted to improve upon the ‘category’ system I’ve made use of here on the blog, and yesterday I found out a way to do it as I became aware of the existence of wordpress’ ‘category cloud’-gadget after messing around a bit with the blog settings. So I added the cloud gadget and now you can see a category cloud in the sidebar.

When I implemented the cloud I realized that my categories were really off in terms of what this blog is about – there were terms like ‘comics’ and ‘fun’ in that cloud of most-used categories, and although I do consider myself to be a funny guy when it suits me, well, there aren’t actually that many posts of that kind here… Some of the most-used categories were basically categories dealing with stuff I hadn’t written about in years. Mostly this was because I hadn’t been very focused on using the categories optimally, as they weren’t really relevant to anything – with the implementation of the sidebar cloud, their relevance certainly increased.

So as a result of this change, I have made changes to many posts on this blog, re-categorizing them in order for the cloud to better reflect what’s going on on this site and make it more useful as a blog navigation tool. Before I added the cloud I had a category list, but people basically never used it to navigate the blog, so I’ve removed that tool from the sidebar and replaced it with the cloud; I hope this change will make it easier to navigate the site and allow people to better find the types of posts they’re most interested in. I also hope it may make it easier for newcomers to figure out quickly and painlessly what this blog is about, without them having to spend a great deal of time and effort exploring it.

The re-categorization stuff I’ve done means that instead of having wikipedia posts in the archive dealing with historical topics, physics, math and psychology-related topics filed only under ‘wikipedia’, such posts will now have multiple categories: ‘wikipedia’, ‘history’, ‘physics’, ‘mathematics’, and ‘psychology’. A medical textbook will now be categorized under both ‘books’ and ‘medicine’, instead of just being filed under ‘books’. Some new categories have been introduced to the mix, some have been retired, and quite a few have become much more common than they used to be. The changes I’ve made probably mean that some people using various types of feeds to keep track of the activities on this blog may have had a lot of old posts pop up again – I recall being told that something similar happened last time I made major adjustments to the blog. I think I’ve made most of the changes I’m going to make at this point, though I may be tempted to go through the archives over the next few days and make some more adjustments. If you find another bombardment of old posts from this site to your feed annoying, you might consider turning the feed off for a little while.

As a little sidenote, I have myself been somewhat happy about my decision to make this change, as it’s also made it easier for me to figure out which types of topics I’ve actually blogged about on this site – or at least it’s given me some idea. With much more than 1000 posts spread out over some years now, I didn’t really have much of an overview when I started the re-categorization process yesterday. You can always argue about the categories being applied and whether or not they’re ‘accurate’, and I’m not sure I’ve given this topic sufficient thought; what I am sure of, however, is that I consider the current state of affairs much preferable to the system it has replaced. I hope the readers will share this sentiment.

Lastly I want to thank gwern – who I hope is reading along here – for linking to me on Hacker News yesterday; quite a few people visited my blog as a consequence of that link, and I could tell (from the stats information) that some of them found the topic covered in that post interesting.

May 27, 2014 Posted by | meta | 2 Comments

Open Thread

This is where you share interesting stuff you’ve come across since the last time I posted one of these.

I figured I should post a bit of content as well, so here we go:


(Chichen Itza is not located in ‘Southern America’, but aside from that I don’t have a lot of stuff to complain about in relation to that lecture. As I’ve mentioned before I generally like Crawford’s lectures.)

ii. I haven’t read this (yet? Maybe I won’t – I hate when articles are gated; even though I can usually get around that, I take this sort of approach as a strong signal that the authors don’t really want me to read their article in the first place – if they did, why would they make it so difficult for me to do so?), but as it sort of conceptually relates to some of the work Boyd & Richerson talk about in their book, which I read some chapters of yesterday, I figured I should link to it anyway: Third-party punishment increases cooperation in children through (misaligned) expectations and conditional cooperation. Here’s the abstract:

“The human ability to establish cooperation, even in large groups of genetically unrelated strangers, depends upon the enforcement of cooperation norms. Third-party punishment is one important factor to explain high levels of cooperation among humans, although it is still somewhat disputed whether other animal species also use this mechanism for promoting cooperation. We study the effectiveness of third-party punishment to increase children’s cooperative behavior in a large-scale cooperation game. Based on an experiment with 1,120 children, aged 7 to 11 y, we find that the threat of third-party punishment more than doubles cooperation rates, despite the fact that children are rarely willing to execute costly punishment. We can show that the higher cooperation levels with third-party punishment are driven by two components. First, cooperation is a rational (expected payoff-maximizing) response to incorrect beliefs about the punishment behavior of third parties. Second, cooperation is a conditionally cooperative reaction to correct beliefs that third party punishment will increase a partner’s level of cooperation.”
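The abstract’s first mechanism – cooperation as a payoff-maximizing response to (incorrect) beliefs about punishment – is easy to illustrate with a toy calculation. The game, payoff numbers, and probabilities below are entirely made up by me for illustration; this is not the authors’ actual experimental design or model:

```python
# Toy illustration (my own, not the paper's model): a player chooses to
# cooperate or defect, and believes a third party will fine defection
# with some probability. All payoff numbers are invented.

def expected_payoff(cooperate, believed_punish_prob,
                    coop_gain=4, defect_gain=6, fine=5):
    """Expected payoff given the player's belief about third-party punishment."""
    if cooperate:
        return coop_gain  # cooperation is never punished in this toy setup
    return defect_gain - believed_punish_prob * fine

# A child who overestimates the chance of being punished (say 80%, when
# third parties in fact rarely punish) maximizes expected payoff by
# cooperating:
print(expected_payoff(True, 0.8))   # cooperate: 4
print(expected_payoff(False, 0.8))  # defect: 6 - 0.8*5 = 2.0
# With accurate beliefs (say a 10% chance), defection would pay more:
print(expected_payoff(False, 0.1))  # defect: 6 - 0.1*5 = 5.5
```

So on this sketch, the mere threat of punishment can sustain cooperation even when costly punishment is rarely executed, as long as beliefs about it are sufficiently pessimistic – which seems to be roughly what the first of the two components in the abstract describes.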

I should note that yesterday I also started reading a book on conflict resolution which covers the behavioural patterns of social animals in some detail, and which actually also ‘sort of relates, a bit’ to this type of stuff. A lot of what people do, they do for different reasons than the ones they themselves usually invoke to explain their behaviours (if they even bother to do that at all…), but scientists in many different areas of research are making progress in terms of finding out ‘what’s really going on’, and there are probably a lot more potentially useful approaches to these types of problems than most people imagine. Many smart people seem at this point to be familiar with some of the results of the heuristics-and-biases literature/approach to human behaviour, because that stuff’s been popularized a lot over the last decade or two, and they probably have a tendency to interpret human behaviour using that sort of contextual framework, perhaps combined with the usual genes/environment-type conceptual approaches. Perhaps they combine that stuff with the approaches that are most common among people with their educational backgrounds (people with a medical degree may be prone to using biological models, an economist might apply game theory, and an evolutionary biologist might ask what a chimpanzee would have done). This isn’t a problem as such, but many people might do well to keep in mind every now and then that there are a lot of other theoretical frameworks one might apply in order to make sense of what humans do besides the one(s) they usually apply themselves, and that some of these may actually add a lot of information even if they’re much less well-known. Some of the methodological differences relate to levels of analysis (are we trying to understand one individual or a group of individuals?), but that’s far from the whole story.
To take a different kind of example, it has turned out that animal models are actually really nice tools if you want to understand some of the details involved in addictive behaviours, and they seem to be useful if you want to deal with conflict resolution stuff as well, at least judging from what I’ve read in that new book so far (one could of course consider animal models a subset of the genetic modeling framework, but in an applied context it makes a lot of sense to keep them separate and consider them distinct subfields…). I have a nagging suspicion that animal models may also be very useful when it comes to explaining various forms of what people usually refer to as ‘emotional behaviours’, and that despite the fact that a lot of people tend to consider that kind of stuff ‘unanalyzable’, it probably isn’t if you use the right tools and ask the right questions. You don’t need to be a doctor or a biologist to see why hard-to-observe purely ‘biological effects’ with behavioural consequences may be important, but are these sorts of dynamics taken sufficiently into account when people interact with each other? I’m not sure. Mathematical modeling approaches like the one above are other ways to proceed (of course various approaches can be combined, making this stuff even more complicated…), and they seem to me to be potentially useful as well, certainly when they generate testable predictions – not necessarily only because we learn whether the predictions are correct or not, but also because mathematical thinking in general allows/requires you to think more carefully about stuff and identify relevant variables and pathways (but I’ve talked about this before).

I should point out that I wrote the passage above in part because very occasionally I encounter a Fan of The Hard Sciences (on the internet) who seems to think that rejecting all kinds of human behavioural theory/research (‘Social Science’) on account of it not being Hard Enough to generate trustworthy inferences is a good way to go – I actually had a brief encounter with one of those not too long ago, which was part of what motivated me to write the stuff above (and the stuff below). That guy expressed the opinion that you’d learn more about human nature by reading a Dostoyevsky novel than you would by reading e.g. Leary & Hoyle’s textbook. I’m perhaps now being rather more blunt than I usually am, but I thought I should make it clear here, so that there are no misunderstandings, that I tend to consider people with that kind of approach clueless fools who don’t have any idea what they’re talking about. Perhaps I should also mention that I have in fact read both, so I feel qualified to judge on the matter – though that’s arguably beside the point; the disagreement goes much deeper than the truth content of the specific statement in question, as the much bigger problem is the methodological divide. Some skepticism is often required in the behavioural sciences, among other things because establishing causal inference is really hard in many areas, but if you want your skepticism to make sense and be taken seriously, you need to know enough about the topic and its potential problems to actually formulate a relevant and cogent criticism. In that context I emphasize that ‘unbundling’ is really important – if you’re speaking to someone who’s familiar with at least some part of ‘the field of social science’, criticizing ‘The Social Sciences’ in general terms will probably just make you look stupid unless you add a lot of caveats. That’s because it’s not one field.
Do the same sorts of problems arise when people evaluate genetic models of human behavioural variance and ‘sociological approaches’? Applied microeconomics? Attachment theory? Evolutionary biology? All of these areas, and many others, play some role and provide part of the picture as to why people behave the way they do. Quantum physics and cell biology are arguably more closely connected than are some of the various subfields which might be categorized as belonging to ‘the field’ of ‘social science’. Disregarding this heterogeneity seems to be more common than I’d wish it was, as is ‘indiscriminatory skepticism’ (‘all SS is BS’). A problem with indiscriminatory skepticism of this sort is incidentally that it’s sort of self-perpetuating; that approach pretty much precludes you from ever learning anything about the topic, because anyone who has anything to teach you will think you’re a fool whom it’s not worth spending time talking to (certainly if they’re in a bad mood on account of having slept badly last night…). This dynamic may not seem problematic at all to people who think all SS is BS, but it might be worth pointing out to those kinds of people that by maintaining that sort of approach they’re probably also cutting themselves off from learning about research taking place in areas they hadn’t even considered to belong to the field of social science in the first place. Symptom analyses of medical problems are usually not considered research belonging to the social sciences, but that’s mostly just the result of a categorization convention; medical problems, or the absence of them, impact our social behaviours in all kinds of ways we’re often not aware of. Is it medical science when a doctor performs the analysis, but social science when a psychologist analyzes the same data? Is what that guy is doing social science or statistics? Sometimes the lines seem to get really blurry to me.
Discriminatory skepticism is better (and probably justified, given methodological differences across areas), but it contains its own host of problems. Often discriminatory skepticism seems to imply that you disregard certain levels of analysis completely – instead of ‘all SS is BS’, it becomes ‘all SS belonging to this level of analysis is BS’. Maybe that’s better than the most sensible alternative (‘perhaps it’s not all BS’) if the science is really bad, but even in those situations you’ll have contagion effects which may cause problems (‘culture? That’s the sort of crap cultural anthropologists deal with, isn’t it? Those people are full of crap. I’m not going to spend time on that stuff.’ So you disregard those aspects of behaviour completely, even if perhaps they do matter and can be subjected to scientific analysis of a different type than the one the Assigned Bad Guys (‘Cultural Anthropologists’) usually apply).

I don’t think we’ll ever get to the point where we have a Big All-Encompassing Theory of How Humans Work, because there are too many variables, but that does not mean that the analysis of specific behaviours and specific variables is without merit. Understanding that I may feel argumentative right now because I’ve misjudged my insulin requirements (or didn’t sleep enough, or haven’t had enough to eat, or had a fight with my mother yesterday, or…) is important knowledge to take into account, and you can add a lot of other similarly useful observations to your toolbox if you spend some time on this type of stuff. A big problem with not doing the research is that not doing the research does not protect you from adopting faulty models – rather, it seems to me that it almost guarantees that you do. Humans need explanations for why things happen, and ‘things that happen’ include social behaviours; they/we need causal models to make sense of the world, and having no good information will not stop people from coming up with theories about why other people behave the way they do (social scientists realized that a while back…). And as a result of this, people might end up using a novel written 150 years ago to obtain insights into why humans behave the way they do, instead of relying on a textbook written last year containing the combined insights of hundreds of researchers who looked at these things in a lot of detail. The researchers might be wrong, sure, but even so this approach still seems … stupid. ‘I don’t trust the social scientists, so instead I’ll rely on the life lessons and social rules taught to me by my illiterate grandmother when I was a child.’ Or whatever. You can easily end up doing stuff like this without ever even suspecting, much less realizing, that that’s what you’re doing.

Comments on the topics covered above are welcome, but I must admit that I didn’t really write this stuff to start a discussion about these things – it was more of a ‘this is where I’m coming from, and these are some thoughts on this topic which I’ve had, and now you know’-post.

iii. Enough lecturing. Let’s have a chess video. International Master Christof Sielecki recently played a tournament in Mallorca, and he’s made some excellent videos talking about his games. Here’s one of those videos:

I incidentally think I have learned quite a bit from watching his material on youtube. I may have talked about his youtube channel here on the blog before, but even if I have I don’t mind repeating myself as you should know about it if you’re interested in chess. He is one of the strongest players online providing this sort of content, and he provides a lot of content. If you’re a beginner some of his material may be beyond you, but not all of it; I don’t think his opening videos for example are particularly difficult to understand or follow, even if you’re not a very strong player. And if you’re a ‘strong club player’ I think this is the best chess channel on youtube.

May 25, 2014 Posted by | Astronomy, Chess, Lectures, Open Thread, Physics | 7 Comments

Impact of Sleep and Sleep Disturbances on Obesity and Cancer (2)

Warning: Long post.*

Okay, I’ve finished the book. I gave it five stars on goodreads – it’s come to my attention that I may be judging scientific publications like this one way too harshly when comparing them with most other books. But then again I’d probably have given it four or five stars anyway; this book is an excellent source of information about the stuff it covers, and it covers a lot of stuff. In a way it’s hard to evaluate a book like this, because on the one hand you have a pretty good idea whether it’s enjoyable to read or not, but on the other hand there are small chunks of it (or huge portions of it, or entire chapters, in the case of some readers…) which you are really not at all qualified to evaluate in the first place, because you’re not actually sure precisely what they’re talking about**. Oh well.

As mentioned this book has a lot of stuff, and I can’t cover it all here. I’m annoyed about this, because it’s a great book. Some of this stuff is quite technical and there were parts of a few of the chapters I will not pretend to have really understood, but most of the stuff is okay in terms of the difficulty level – the book isn’t any harder to deal with than are most of Springer’s medical textbooks – and it’s interesting. In the first post I talked a little about sleeping patterns and a bit about cancer. The book has a lot of other stuff, and it has a lot of additional stuff about those things as well. Writing posts where I go into the details of books like these takes a lot of time and it’s not always something I have a great desire to do because it’s really hard to know where to stop. Let’s say for example that I were to decide to cover this book in great detail, and that I were to start out in chapter two, dealing with ‘Effects of Sleep Deficiency on Hormones, Cytokines, and Metabolism’. In that case I might decide to start out with these observations:

“Laboratory studies of both chronic and acute partial sleep restriction indicate that insufficient sleep can lead to increased hunger and caloric intake.”

“Many studies […] report that sleep independently relates to diabetes risk, even after controlling for the confounding effects of obesity and overweight. […] Cappuccio et al. [29] analyzed ten prospective studies with a pool of over 100,000 adults to ascertain the association of type 2 diabetes with sleep duration and quality. After controlling for BMI, age, and other confounding factors, they found [that] sleeping less than 6 h per night conferred an RR of 1.28 in predicting the incidence of type 2 diabetes, and prolonged duration (>8–9 h) had a higher RR of 1.48. As for sleep quality, Cappuccio et al. found that difficulty falling and staying asleep were highly correlated with type 2 diabetes risk with RRs of 1.48 and 1.84, respectively. […] a 3-year prospective study show[ed] that of workers with prediabetic indices, such as elevated fasting glucose, night-shift workers [were] at fivefold risk for developing overt diabetes compared to day workers [79].”
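To put relative risks like these into perspective: an RR multiplies the baseline risk of the comparison group, so its absolute meaning depends on that baseline. A quick sketch of the arithmetic, using the RRs quoted above but with a baseline incidence I’ve made up purely for illustration:

```python
# Converting relative risks (RRs) to absolute risks. The RRs are those
# quoted from Cappuccio et al. above; the 5% baseline incidence is an
# invented number, used only to illustrate the arithmetic.

baseline_risk = 0.05  # hypothetical incidence in the reference group

for label, rr in [("<6 h sleep/night", 1.28),
                  (">8-9 h sleep/night", 1.48),
                  ("difficulty falling asleep", 1.48),
                  ("difficulty staying asleep", 1.84)]:
    absolute = baseline_risk * rr
    print(f"{label}: RR {rr} -> {absolute:.1%} vs {baseline_risk:.1%} baseline")
```

Note that this is only the crude arithmetic; the quoted RRs were already adjusted for BMI, age, and other confounders, which this sketch of course ignores.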

And I’d move on from there. So already here we’ve established that sleep problems may lead to changes in appetite, which may lead to weight gain; that sleep problems and type 2 diabetes may be related, and perhaps not only because of the weight gain; that different aspects of sleep may play different roles (difficulty falling asleep doesn’t seem to have the same effect as difficulty staying asleep); and that the time course from prediabetes to overt diabetes may be drastically accelerated in people who work night shifts. This is a lot of information, and we’re still only scratching the surface of that chapter (there are 11 chapters in the book). If I were to go into detail about the diabetes angle I might be tempted to talk about how, in another chapter, they describe a study in which three out of eight completely healthy young men were basically (temporarily) converted into prediabetics just by messing a bit with their circadian clocks in order to get them out of sync with their sleep-wake cycles (a common phenomenon in people suffering from jet lag, and, it seems, also a common problem in blind people, as they’re generally not capable of using light to adjust melatonin release patterns and keep the circadian clock ‘up to date’, so to speak). But I really wouldn’t need to look to other chapters to talk more about that kind of stuff, as chapter two also covers studies on hormonal pathways such as those involving leptin [a ‘satiety hormone’] and ghrelin [a ‘hunger hormone’]. The role of cortisol is also discussed in the chapter (and elaborated upon in a later chapter).
I might decide to go into a bit more detail about these things and explain that the leptin-ghrelin connection isn’t perfectly clear here, as some studies find that sleep deprivation reduces leptin production and stimulates ghrelin release whereas other studies do not; but perhaps I’d also feel tempted to add that, this being said, most studies do seem to find the effects you’d expect in light of the results from the weight gain studies I talked about in the first post (sleep deprivation -> less leptin, more ghrelin). But maybe then I’d feel the need to also talk about how these effects may depend on gender and may change over time (/with age). And I’d add that most of the lab studies are quite small studies with limited power, so it’s all a bit uncertain what all this ‘really means’. Perhaps I’d add the observation from the last chapter, where they talk more about this stuff, that the literature on these two hormones is not equally convincing: “Conflicting results have been presented for leptin […], although increases in ghrelin, an appetite-stimulating hormone, may be more uniformly observed.” Perhaps when discussing these things I’d opt for including a few remarks about the role of other hormones and circulating peptides as well, for example the “hypothalamic factors (e.g., neuropeptide Y and agouti-related peptide), gut hormones ([such as] glucagon-like peptide-1 [GLP-1], peptide YY [PYY], and cholecystokinin), and adiposity signals (e.g., leptin and adiponectin)”, all of which are briefly covered in chapter 11 and all of which “have been demonstrated to play a role in the regulation of hunger, appetite, satiety, and food intake.”

As for the increased hunger and caloric intake observation, I might decide to talk about how there’s an ‘if you’re awake, you have more time to eat’ effect that may play a role (aside perhaps from the rare somnambulist, few people eat while they’re sleeping – and I’m not sure about the somnambulists either…) – but on the other hand staying awake requires more calories than sleeping does (“Contrary to the common belief that insufficient sleep reduces energy expenditure, sleep loss increases total daily energy expenditure by approximately ~5 % (~111 kcal/day).”). Those are sort of behavioural approaches to the problem, but of course there are many others, and multiple mechanisms have been explored in order to better understand what happens when people are deprived of sleep – hormonal pathways are one way to go; I’ve talked a little about them already, and of course they’re revisited later in the chapter when dealing with type 2 diabetes. As an aside, in terms of hormonal pathways there’s an entire chapter on melatonin and the various roles it may play, as well as some stuff on insulin sensitivity and related matters; that’s not chapter 2, the one we were talking about, but if I were to cover chapter 2 in detail I’d probably feel tempted to add a few remarks about that as well. But of course chapter 2 doesn’t limit its coverage to behavioural stuff and the exploration of hormonal pathways, as it seems that sleep deprivation also has potentially important neurological effects, in that it affects how the brain responds to food – and so in the chapter they talk about a couple of fMRI studies which have suggested this and perhaps indicated how those things might work, and they talk about a related study the results of which suggest that sleep deprivation may also induce impairments in self-control.
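Incidentally, the ~5 %/~111 kcal/day figures quoted above lend themselves to a quick back-of-the-envelope check. A minimal sketch (the two quoted figures are from the book; the implied baseline expenditure and the ~7,700 kcal-per-kg-of-fat rule of thumb are my own illustrative assumptions, not the book’s):

```python
# Back-of-the-envelope check of the energy-expenditure figures quoted above.
# The ~5 % / ~111 kcal/day values come from the quote; everything else is
# my own illustrative arithmetic.

extra_kcal_per_day = 111      # quoted increase in daily energy expenditure
fraction_increase = 0.05      # quoted ~5 % increase

# Implied baseline total daily energy expenditure:
baseline_tdee = extra_kcal_per_day / fraction_increase
print(f"implied baseline TDEE ~ {baseline_tdee:.0f} kcal/day")  # ~2220

# For comparison: using a rough 7700 kcal per kg of fat tissue (a common
# rule of thumb, not from the book), an uncompensated 111 kcal/day surplus
# expenditure would correspond to roughly this much fat per month:
kcal_per_kg_fat = 7700
kg_per_month = extra_kcal_per_day * 30 / kcal_per_kg_fat
print(f"~{kg_per_month:.2f} kg/month if intake did not change")  # ~0.43
```

So the quoted numbers imply a baseline expenditure of roughly 2,200 kcal/day, and an extra expenditure small enough to be easily wiped out by a modest increase in intake – which fits with the weight-gain findings discussed in the first post.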

If I were to talk about the weight gain stuff in the chapter, I might as well also talk a bit about how sleep patterns may affect people when they’re trying to lose weight, as they cover a little of that as well. Those results are interesting – for example one study on weight loss that followed individuals for two weeks found that the individuals who were assigned to the sleep-deprivation condition (5.5 hours, vs 8.5 hours in the control group) had higher respiratory quotients than the well-rested controls. The higher respiratory quotient, the authors of the study argued, was an indicator that the sleep-deprived individuals relied more on carbohydrates and less on fat than the well-rested controls did, which is important if you’re dealing with weight loss regimes; however the authors in the book do not seem convinced that this was a plausible inference… Before going any further I would probably also interpose that how sleep affects breathing – and how breathing affects sleep – is really important for many other reasons besides the weight loss stuff, so it makes a lot of sense to look at these things; topics like intermittent hypoxia during the sleeping state, sleep-disordered breathing, and sleep apnea are important enough to have their own chapters in the book. Perhaps I’d feel tempted to mention in this context that there’s some evidence that people with sleep apnea who get cancer have a poorer prognosis than people without such sleep problems, and that we have some idea why this is the case. I actually decided to quote a bit from that part of the book below… But anyway, back to the weight loss study: an important observation from that study I might decide to include in my coverage is that “shorter sleep duration reduced weight loss by 55 % in sleep-restricted subjects”.
This is not good news, at least not for people who don’t get enough sleep and are trying to lose weight; certainly not when combined with the observation that sleep-deprived individuals in that study disproportionately lost muscle tissue, whereas individuals in the well-rested group were far more likely to lose fat. One tentative conclusion to draw is that if you’re sleep deprived while dieting your diet may be less likely to work, and if it does work the weight loss you achieve may not be nearly as healthy as you perhaps would be tempted to think it is. Another conclusion is that researchers looking at these things may miss important metabolic effects if they limit their analyses to body mass measures without taking into account e.g. tissue composition responses as well.

Actually if I were to talk about the stuff covered in chapter 2, I wouldn’t really be finished talking about type 2 diabetes and sleep problems even though I touched on that above, so I’d probably feel tempted to say a bit more about it. Knowing that sleep disorders may lead to a higher type 2 diabetes risk doesn’t tell you much if you don’t know why. So you could perhaps talk a bit about whether this excess risk relates only to insulin sensitivity. Or is beta cell function implicated as well? We probably shouldn’t limit the analysis to insulin either – cortisol is important in glucose homeostasis, so perhaps that one plays a role too? Yep, they’ve looked at that stuff as well. And so on and so forth … for example, what roles do the sympathetic nervous system and the catecholamines play in the diabetes-sleep link? The ones you’d expect, or at least what you’d expect if you knew a bit about these things. A few conclusions from the chapter:

“Overall, studies suggest a strong relationship between insufficient sleep and impaired glucose homeostasis and cortisol regulation. These proximal outcomes may explain observed associations between sleep and the diabetes epidemic.” […] “The relationship suggested between sleep loss and sympathetic nervous system dysfunction [‘increased catecholamine levels’, US] proposes another likely mediator of several of the negative metabolic effects of sleep loss and sleep disorders, including insulin resistance, decreased glucose tolerance, and reduced leptin signaling”.

I’d still leave out a bit of stuff from chapter two if I were to cover it in the amount of detail ‘outlined’ above, but I hope you sort of get the picture. There are a lot of connections to be made here all over the place, a lot of observations which you can sort of try to add together to get something resembling a full picture of what’s going on, and it gets really hard to limit your coverage to ‘the salient points’ of a specific topic without excluding many important links to other parts of the picture and overlooking a lot of crucial details. There’s way too much stuff in books like these for me to really provide a detailed coverage of all of it – most of the time I don’t even try, though I sort of did in this post, in a way. I encourage you to ask questions if there’s something specific you’d like to know about these things which might be covered in the book; if you do, I’ll try to answer. Of course it’s rather easy for me to say that you can just ask questions about stuff like this which you’d like to know more about, as part of the reason why people read books like these in the first place is so that they can get at least some idea which questions it makes sense to ask. On the other hand people who don’t know very much about science occasionally manage to ask some rather interesting questions with interesting answers on the askscience-subreddit, so…

I’ve added some additional observations from the book below, as well as some further observations and comments.

“Over the past few decades, the drastic increase in the prevalence of obesity has been reflected by substantial decreases in the amount of sleep being obtained. For example, whereas in 1960 modal sleep duration was observed to be 8–8.9 h/night, by 2004 more than 30 % of adults aged 30–64 years reported sleeping […]”

Regardless of the extent to which you think these two variables are related (and how they’re related), this development is interesting to me. I’m pretty sure some of the authors of the book consider the (causal part of the?) link to be stronger than I do. I had no idea things had changed that much. Okay, let’s move on…

“For many years, it has been known that the timing of onset of severe adverse cardiovascular events, such as myocardial infarction, sudden cardiac death, cardiac arrest, angina, stroke, and arrhythmias, exhibits a diurnal rhythm with peak levels occurring between 6 am and noon […] It is clear that many variables and parameters within the cardiovascular system are under substantial regulation by the circadian clock, highlighting the relevance of circadian organization for cardiovascular disease. Shift work has consistently been associated with increased cardiovascular disease risk [68–71].”

“Molecular oxygen (O2) is essential for the survival of mammalian cells because of its critical role in generating ATP via oxidative phosphorylation [the link is to a featured article on the topic, US]. Hypoxia, i.e., low levels of O2, is a hallmark phenotype of tumors. As early as 1955, it was reported that tumors exhibit regions of severe hypoxia [16]. Oxygen diffuses to a distance of 100–150 μm from blood vessels. Cancer cells located more than 150 μm exhibit necrosis. The uncontrolled cell proliferation causes tumors to outgrow their blood supply, limiting O2 diffusion resulting in chronic hypoxia. In addition, structural abnormalities in tumor blood vessels result in changes in blood flow leading to cyclic hypoxia [17,18]. Measurement of blood flow fluctuations in murine [rats and mice, US – a lot of our knowledge about some of these things comes from animal studies, and they’re covered in some detail in some of the chapters in the book] and human tumors by different methods have shown that the fluctuations in oxygen levels in tumors vary from several minutes to more than 1 h in duration. Hypoxia in tumors was shown to be associated with increased metastasis and poor survival in patients suffering from squamous tumors of head and neck, cervical, or breast cancers [19,20]. Tumor hypoxia is associated with resistance to radiation therapy and chemotherapy and poor outcome regardless of treatment modality. Cancer cells have adapted a variety of signaling pathways that regulate proliferation, angiogenesis, and death allowing tumors to grow under hypoxic conditions. Cancer cells shift their metabolism from aerobic to anaerobic glycolysis under hypoxia [21] and produce growth factors that induce angiogenesis [22,23]. […] It is increasingly recognized that hypoxia in cancer cells initiates a transcription program that promotes aggressive tumor phenotype. Hypoxia-inducible factor-1 (HIF-1) is a major activator of transcriptional responses to hypoxia [24].
[…] It is now well recognized that HIF-1 activation is a key element in tumor growth and progression.”

“the existing epidemiologic evidence linking OSA [Obstructive Sleep Apnea] and cancer progression fits some of the key classic causality criteria [40]: the association is biologically plausible (in view of the existing pathophysiologic knowledge and in vitro evidence); the existing longitudinal evidence supports the existence of temporality in the cause-effect association; the effects are strong; there is evidence of a dose-response relationship; and it is consistent with animal experimental models and other evidence. Lacking is evidence regarding another important criterion: that treatment of OSA will result in a decrease in cancer mortality. Future studies in this area are critical.
If verified in future studies, the implications of the evidence presented here are profound. OSA might be one of the mechanisms by which obesity is a detrimental factor in cancer etiology and natural history. From a clinical standpoint, assessing the presence of OSA (particularly in overweight or obese patients) and treating it if present might have to become a routine part of the clinical management of cancer patients.”

It’s perhaps worth mentioning here that this is but one of presumably several areas of oncology where sleep research has shown promise in terms of potential treatment protocol optimization. It’s observed in the book that the effectiveness and side-effect profile of chemotherapies may depend upon the time of day (/night?) at which treatment is given, which also seems like something oncologists may want to look into (unfortunately it does not seem as though a lot of progress has been made over the years):

“Arguably, a field in which little progress has been made in linking circadian rhythms to pathology, disease pathogenesis, and/or clinical medicine at the molecular and genetic levels is cancer. This is unfortunate given that a diurnal rhythm in efficacy and sensitivity to chemotherapeutic agents was reported in mice over 40 years ago [92]. More recently, screening studies in rodents have demonstrated clear circadian rhythmicity in the antitumor activity and side effect profile of many anticancer agents, although at present, it is not possible to predict a priori at which time of day a given drug will be maximally effective (i.e., although rhythms are clearly present, little is known of their mechanistic underpinnings) [93]. Results such as these have given rise to the concept of “chronotherapeutics,” in which the time of drug administration is taken into consideration in the treatment plan in order to maximize efficacy and minimize toxicity […] Although some progress has been made, by and large, this approach has not made significant inroads into clinical oncology”

The stuff above is probably closely related to discoveries made by other contributors, described elsewhere in the book:

“Our laboratory used actigraphy to measure circadian activity rhythms, fatigue, and sleep/wake patterns in breast cancer patients. We found that circadian rhythms were robust at baseline, but became desynchronized during chemotherapy […] desynchronization was correlated with fatigue, low daytime light exposure, and decreased quality of life [21,32].”

Here’s some more stuff on related matters:

“A diagnosis of cancer and the subsequent cancer treatments are often associated with sleep disturbances. […] Prevalence rates for sleep disturbance among oncology patients range from 30% to 55% [in another chapter it’s 30% to 75% – either way these numbers are high, US] […]  These sleep disturbances can last for years after the end of the cancer treatment. In cancer patients and survivors, sleep disturbances are associated with anxiety, depression, cognitive impairment, increased sensitivity to physical pain, impaired immune system functioning, lowered quality of life, and increased mortality. Given these associations and the high prevalence of sleep disturbance in cancer patients, it is paramount that clinicians assess sleep disturbances and treat sleep disorders in cancer patients and survivors. […] The effects of chemotherapy and anxiety on sleep quality in [cancer] patients have been well studied, and interventions to improve sleep quality and/or duration among cancer patients have shown widespread improvements in cancer mortality and outcomes, as well as mental health, and overall quality of life” [for more on quality of life aspects related to cancer, see incidentally Goerling et al.]

“We have previously demonstrated an inverse association of self-reported typical hours of sleep per night with likelihood of incident colorectal adenomas in a prospective screening colonoscopy-based study of colorectal adenomas [10]. Compared to individuals reporting at least 7 h of sleep per night, those individuals reporting fewer than 6 h of sleep per night had an estimated 50 % increase risk in colorectal adenomas […] A recent study as part of the Women’s Health Initiative (WHI) has shown similar results with regard to risk of colorectal cancer [48].”

Remember here that colorectal cancer is one of the most common types of cancer in industrialized countries – “[t]he lifetime risk of being diagnosed with cancer of the colon or rectum is about 5% for both men and women in the US” – some more neat numbers here. The more people are affected by a disease, the ‘bigger’ these ‘50 % increases’ get in absolute terms.
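To make the absolute-numbers point concrete, here’s a toy calculation. The ~5 % lifetime risk is the figure quoted above; note that the ~50 % increase in the quoted study concerns adenomas, not cancer, so applying it directly to lifetime cancer risk here is purely illustrative, and the population size is arbitrary:

```python
# Toy illustration: the same relative increase means many more extra cases
# when the baseline risk is high. All assumptions labeled above.

baseline_lifetime_risk = 0.05   # ~5 % lifetime colorectal cancer risk (quoted)
relative_increase = 1.50        # ~50 % increased risk (quoted, for adenomas)
population = 100_000

risk_short_sleepers = baseline_lifetime_risk * relative_increase
extra_cases = (risk_short_sleepers - baseline_lifetime_risk) * population

print(f"risk among short sleepers: {risk_short_sleepers:.1%}")      # 7.5%
print(f"extra cases per {population:,} people: {extra_cases:.0f}")  # 2500
```

For a rare cancer with a 0.1 % baseline risk, the same 50 % relative increase would mean only 50 extra cases per 100,000 people – which is the sense in which common cancers make these relative risks ‘bigger’.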

“Probably, the cancer for which sleep duration has been studied most with regard to risk is breast cancer. There are also a number of epidemiological studies that have investigated the association of sleep duration and risk of breast cancer. In these studies, the association of short sleep duration and incidence of breast cancer has been mixed […] In a large, prospective cohort of over 20,000 men, Kakizaki et al. found that sleeping 6 or fewer hours was associated with an approximately 38 % increased risk of prostate cancer, compared with those reporting 7–8 h of sleep […] New evidence is also emerging on the role of sleep duration in cancer phenotype […] Breast cancer patients who reported less than 6 h of sleep per night prior to diagnosis were about twice as likely to fall into the “high-risk” recurrence category compared to women who reported at least 7 h of sleep per night before diagnosis. This suggests that short sleep may lead to a more aggressive breast cancer phenotype.”

“Pain in cancer patients is most often treated with opioids, and sedation is a common side effect of opioids. However, the relationship between opioid use and sleep has not been well studied. Limited PSG data show that opioids decrease REM sleep and slow-wave sleep [31], suggesting that rather than improving sleep by being sedated, opioids may actually contribute to the sleep disturbances in cancer patients with chronic pain. In addition, the most serious adverse effect of opioids is respiratory depression which may exacerbate the hypoxemia in those individuals with SDB [Sleep Disordered Breathing] and thus lead to more interrupted sleep […it may also promote tumor growth and/or lead to poorer treatment outcomes – see above. On the other hand not treating pain in cancer patients is also … problematic (yet probably still widespread, at least judging from the data in Clark & Treisman’s book)]. […] Although pharmacotherapy is the most prescribed therapy for cancer patients with sleep disturbances [10,35], there is a paucity of studies related to pharmacologic interventions in cancer patients. A recent review concluded that evidence is not sufficient to recommend specific pharmacologic interventions for sleep disturbances in cancer patients [6]. […] As several studies have now confirmed the beneficial effects of cognitive behavioral therapy for insomnia (CBT-I) in cancer patients (mostly breast cancer) and survivors, CBT-I needs to be considered as the first-line treatment. Hypnotics are commonly prescribed to cancer patients. Despite this common use, little to nothing is known about the safety of these drugs in cancer patients. Given the possible interaction effects of the hypnotic/sedatives with cancer treatment agents, the side effects, and potential tolerance and addiction issues, the common use of these drugs in cancer patients is concerning.”

The book is not only about sleep, and this part I found interesting:

“Emerging evidence supports the hypothesis […] that shared mechanisms exist for the co-occurrence of common [cancer] symptoms […] an increased understanding of the mechanisms that underlie the co-occurrence of multiple symptoms may prove crucial to the development of successful interventions […] The study of multiple co-occurring symptoms in cancer patients has led to the emergence of “symptom cluster” research. […] Although awareness of the co-occurrence of symptoms has existed for over two decades […], the study of symptom clusters is considerably more recent [118]. An enduring challenge in the study of symptom clusters remains the lack of consistency in the methods used to cluster symptoms [119]. Currently, the analytic methods used to cluster co-occurring symptoms include correlation, regression modeling [120,121], factor analysis [122], principal component analysis [121,123], cluster analysis [104,111], and latent variable modeling [109]. […] the decisions that dictate the use of a specific approach are beyond the scope of this chapter […] Symptom cluster research can be grouped into two categories: de novo identification of symptom clusters (i.e., clustering symptoms) and the identification of subgroups of patients based on a specific symptom cluster (i.e., clustering patients ) […] De novo identification of symptom clusters is the most common type of symptom cluster research that occurs with oncology patients.”
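For readers unfamiliar with what ‘de novo identification of symptom clusters’ might look like in practice, here is a minimal correlation-based sketch in Python. The severity ratings are entirely made up, and the simple correlation-threshold grouping is my own simplification – the chapter lists far more sophisticated methods (factor analysis, principal component analysis, latent variable modeling):

```python
from itertools import combinations

# Synthetic severity ratings (one list per symptom, one entry per patient).
# Entirely made-up data for illustration only.
patients = {
    "fatigue":    [7, 6, 8, 2, 3, 7, 8, 2],
    "insomnia":   [6, 7, 8, 1, 2, 6, 7, 3],
    "depression": [5, 6, 7, 2, 2, 6, 7, 2],
    "nausea":     [1, 2, 1, 6, 7, 2, 1, 7],
    "appetite":   [2, 1, 2, 7, 6, 1, 2, 6],
}

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Merge symptoms whose pairwise correlation exceeds an arbitrary threshold.
THRESHOLD = 0.7
clusters = {s: {s} for s in patients}
for a, b in combinations(patients, 2):
    if pearson(patients[a], patients[b]) > THRESHOLD:
        merged = clusters[a] | clusters[b]
        for s in merged:
            clusters[s] = merged

unique = {frozenset(c) for c in clusters.values()}
for c in sorted(unique, key=min):
    print(sorted(c))
```

With these made-up ratings the sketch groups the five symptoms into two clusters (fatigue/insomnia/depression and nausea/appetite); real symptom cluster research of course uses validated instruments and the statistical methods the chapter names.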

A lot of stuff didn’t make it into this post, but I’ll stop here. Or should I also mention that aside from what you eat, it may also matter a lot when you eat (“a study in mice showing that animals fed a high-fat diet during their inactive phase gained more weight than mice fed during their habitual active phase”)? Or should I mention that “individuals with later sleep schedules tended” … in one study … “to have higher energy intakes throughout the day than those whose midpoint of sleep was earlier?” No, probably not. I wouldn’t know where to stop…

[This is a big part of the reason why I often limit my coverage of books to mostly just quotes. Posts like these have a tendency to blow up in my face, and if they don’t I often still find myself having spent a lot of time on them.]

*Or maybe it isn’t actually all that long, perhaps it’s just slightly longer than average? Anyway, now that you’ve scrolled down from the top of the post to the bottom in order to figure out what that asterisk meant (if you didn’t scroll down and are only reading this after you’ve read the entire post above, that’s your fault, not mine…), you’ll know whether you think it’s long. The warning seemed to carry more weight this way. That a warning like this should carry some weight seems quite important to me, considering that I’m blogging a book about obesity. A book about obesity which covers dietary aspects in some detail, yet is occasionally itself a bit hard to digest. [Permission to groan: Granted.]

**An example:

“Prolyl hydroxylase (PHD) is a tetrameric enzyme containing two hydroxylase units and two protein disulphide isomerase subunits, which requires O2, ferrous iron, and 2-oxoglutarate for PHD enzyme activity. In the presence of O2, PHD covalently modifies the HIFα subunit to a hydroxylated form, which by interacting with Von Hippel-Lindau (VHL) protein, a tumor suppressor, is subjected to ubiquitylation and targeted to proteasome, where it gets degraded [25]. Hypoxia inhibits PHD activity resulting in accumulation of HIF-1α subunit, which dimerizes with HIF-1β subunit.”

Yeah, that sounds about right to me…

There isn’t much of this kind of stuff in the book; if there had been I would not have given it five stars, because in that case I would not have found it at all interesting/enjoyable to read.

May 23, 2014 Posted by | Books, Cancer/oncology, Diabetes, Epidemiology, Medicine | 2 Comments

Impact of Sleep and Sleep Disturbances on Obesity and Cancer (1)

“Sleep has recently been recognized as a critical determinant of energy balance, regulating restoration and repair of many of the physiological and psychological processes involved in modulating energy intake and utilization. Emerging data indicate that sleep can now be added to caloric intake and physical activity as major determinants of energy balance with quantitative and qualitative imbalances leading to under- or overnutrition and associated comorbidities. Considerable research is now focused on disorders of sleep and circadian rhythm and their contribution to the worldwide obesity pandemic and the associated comorbidities of diabetes, cardiovascular disease, and cancer. In addition to having an impact on obesity, sleep and circadian rhythm abnormalities have been shown to have significant effects on obesity-associated comorbidities, including metabolic syndrome, premalignant lesions, and cancer. In addition to the observation that sleep disturbances are associated with increased risk for developing cancer, it has now become apparent that sleep disturbances may be associated with worse cancer prognosis and increased mortality. […] circadian misalignment, such as that experienced by “shift workers,” has been shown to be associated with an increased incidence of several malignancies, including breast, colorectal, and prostate cancer, consistent with the increasing recognition of the role of clock genes in metabolic processes […] This volume […] review[s] current state-of-the-art studies on sleep, obesity, and cancer, with chapters focusing on molecular and physiologic mechanisms by which sleep disruption contributes to normal and abnormal physiology, related clinical consequences, and future research needs for laboratory, clinical, and translational investigation.”

I’m currently reading this book. I probably shouldn’t be reading it; I realized a couple of weeks ago that if I continue at the present rate I’ll get through something like 100 books this year, and even though some of these books are rather short and/or fiction, I don’t think this is a healthy amount of reading. It’s probably worth noting in this context that although the number of ‘books read’ is now much higher than it used to be, I’m incidentally far from sure whether I actually read more now than I did in the past; it may just be that these things have become easier to keep track of, as I now read a lot more books and a lot less ‘unstructured online stuff’. It’s not a new problem, but it’s getting rather obvious.

But anyway I’m reading the book, and although it may not be a good way for me to spend my time I am at least learning some stuff I did not know. The book is a standard Springer publication, with 11 chapters each of which deals with a specific topic of interest (a few examples: ‘Effects of Sleep Deficiency on Hormones, Cytokines, and Metabolism’, ‘Biomedical Effects of Circadian Rhythm Disturbances’, and ‘Shift Work, Obesity, and Cancer’). I’ve added some observations from the book below as well as some comments – I’ll probably post another post about the book later on once I’ve finished reading it. The very short version is that insufficient sleep may be quite bad for you.

“Insomnia, identified by complaints of problems initiating and/or maintaining sleep, is common, especially among women. Insomnia is often associated with a state of hyperarousal and has been linked to increased risk of depression, myocardial infarction, and cardiovascular mortality [15]. Relative risks for cardiovascular disease for insomnia have been estimated to vary from 1.5 to 3.9; a dose-dependent association between frequency of insomnia symptoms and acute myocardial infarction has been demonstrated [16]. Insomnia may be particularly problematic at certain times in the lifespan, especially in the perimenopause period and in association with acute life stresses, such as loss of a loved one. The occurrence of insomnia during critical periods, such as menopause, may contribute to increased cardiometabolic risk factors at those times. Short sleep duration may occur secondary to a primary sleep disorder or secondary to behavioral/social issues. Regardless of etiology, short sleep duration has been associated with increased risk of obesity, weight gain, diabetes, cardiovascular disease, and premature mortality [17,18].”

“Sleep is characterized not only by its presence or absence (and timing) but by its quality. Sleep is composed of distinct neurophysiological stages […] associated with differences in arousal threshold, autonomic and metabolic activity, chemosensitivity, and hormone secretion [2] […] Each sleep stage is characterized by specific patterns of EEG activity, described by EEG amplitude (partly reflecting the synchronization of electrical activity across the brain) and EEG frequency. Lighter sleep (stages N1, N2) displays relatively low-amplitude and high-frequency EEG activity, while deeper sleep (slow-wave sleep, N3) is of higher amplitude and lower frequency. Stages N1, N2, and N3 comprise non-rapid eye movement (NREM) sleep. In contrast, rapid eye movement (REM) sleep is a variable frequency, low-amplitude stage, in which rapid eye movements occur and muscle tone is low. […] In adults, over the course of the night, NREM and REM sleep cycles recur approximately every 90 min, although their composition differs across the night: early cycles typically have large amounts of N3, while later cycles have large amounts of REM. The absolute and percentage times in given sleep stages, as well as the pattern and timing of progression from one stage to another, provide information on overall sleep architecture and are used to quantify the degree of sleep fragmentation. Sleep characterized by frequent awakenings, arousals, and little N3 is considered to be lighter or non-restorative and contributes to daytime sleepiness and impaired daytime function. Higher levels of N3 are thought to be “restorative.””

“The circadian rhythm changes with age and one important change is a general shift to early sleep times (advanced sleep phase) with advancing age. While teenagers and college students have a tendency due to both intrinsic rhythm and external pressures to have later bedtimes, this starts to wane in young adulthood. This phase advance to an earlier sleep time has been referred to as “an end to adolescence” and happens at a younger age for women than for men [60]. […] During the transition from adolescence to adult, several changes occur to the sleep architecture. Most notably is the significant reduction in stage N3 sleep by approximately 40 % as the child progresses through the teenage years […] This means that other stages of NREM (N1 and N2) take up more of the sleep time. Functionally this translates to the child having lighter sleep during the night and therefore is easier to arouse and awaken. […] The sleep architecture of young adults is […] in a 90-min cycle with all sleep stages represented. The amount of stage N3 sleep continues to reduce at this time, at a rate of approximately 2 % per decade up to age 60 years. There is also a smaller reduction in REM sleep during early and mid-adulthood. Once through puberty and into the 20s, most adults sleep approximately 7–8 h per night. This remains relatively constant through mid-adulthood. Young adults may still sleep a bit longer, 8–9 h for a few years. The need for sleep does not change as people progress to mid-adulthood, but the ability to maintain sleep may be affected by medical conditions and environmental influences. […] although average sleep duration does not change over adulthood, there is a large degree of inter- and intraindividual variability in sleep duration. 
Individuals who are consistently short sleepers (e.g., <6 h per night) and long sleepers (>9 h per night) and who demonstrate high between-day variability in sleep duration are at increased risk for weight gain, diabetes, and other metabolic dysfunction and chronic disease.”

“Nine retrospective studies have indicated that shift work might be associated with a higher risk of breast cancer, including three studies in Denmark, three studies in Norway, two studies in France, and one study in the United States. […] Three of four prospective studies have provided evidence in favor of an association between shift work and breast cancer. […] evidence for a relation between shift work and prostate cancer is very limited, both by the small number of studies and by major limitations involved in those studies that have been conducted”
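A quick aside on interpreting odds ratios like the ones reported in these studies: converting them to absolute risks makes the magnitudes easier to grasp. A minimal sketch of that conversion (illustrative arithmetic only, assuming – which is a simplification – that the odds ratio can be applied directly to a baseline lifetime risk of roughly 1 in 8):

```python
def risk_from_odds_ratio(baseline_risk, odds_ratio):
    # risk -> odds, scale the odds by the odds ratio, convert back to risk
    baseline_odds = baseline_risk / (1 - baseline_risk)
    new_odds = baseline_odds * odds_ratio
    return new_odds / (1 + new_odds)

baseline = 1 / 8   # rough lifetime breast cancer risk at baseline
print(risk_from_odds_ratio(baseline, 2.0))   # ~0.22 - lifetime risk nearly doubles
print(risk_from_odds_ratio(baseline, 1.6))   # ~0.19
```

At a 1-in-8 baseline, risks and odds aren’t that far apart, so odds ratios of this size translate into substantial absolute risk increases.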

The increased risk of breast cancer may well be quite significant not only in the statistical sense of the word, but also in the normal, non-statistical sense; for example, the estimated breast cancer odds ratio of Norwegian nurses who’d worked 30+ years of nightwork, compared to those who hadn’t done any nightwork, was 2.21 (1.10–4.45) – and that study involved more than 40,000 nurses. Another study dealing with the same cohort found that nurses who’d worked more than five years on schedules involving more than 5 consecutive night shifts also had an elevated risk of breast cancer (odds ratio: 1.6 (1.0–2.4)). It’s noteworthy that, according to the authors, many of the studies on this topic suffer from identification problems which, if anything, are likely to bias the estimates towards zero. As you should be able to tell from the reported CIs above, the numbers are somewhat uncertain, but that doesn’t exactly make them irrelevant or useless; roughly 1 in 8 women at baseline can expect to get breast cancer during their lifetime (link), so an odds ratio of, say, 2 is actually a really big deal – and even if we don’t know precisely what the correct number is, the risk certainly seems to be high enough to warrant some attention. One mechanism proposed in the shift work chapter is that the altered sleep patterns of shift workers lead to weight gain, and that weight gain is then part of the explanation for the increased cancer risk. I’ve read and written about the obesity–cancer link before, so this is stuff I know a bit about, and that idea seems far from far-fetched to me. And it actually turns out that the link between shift work and weight gain seems significantly stronger than the link between shift work and cancer – which is precisely what you’d expect if it’s not the altered sleep patterns per se which increase cancer risk, but rather the excess adipose tissue which so often follows in their wake:

“Numerous epidemiologic studies have examined the association between shift work and obesity in various different countries. Most of these studies have utilized existing data from employment records in particular companies, which provide convenient but typically limited information on shift work and health-related variables because this information was not originally collected for research purposes. As a result, many of these studies have methodological issues that potentially limit the interpretation of their results. Still, 22 of 23 currently published studies found some evidence that obesity is significantly more common among individuals with shift work experience compared to those without such experience [36–57]; only one study did not identify a possible link [58]. […] many analyses of shift work and obesity lack adjustment for potentially important confounding variables (e.g., other health and lifestyle factors), and therefore prospective studies with more extensive information on these variables have provided critical insight. Four such prospective studies have been conducted, all of which indicate that individuals who perform shift work tend to experience significant weight gain over time — including two studies in Japan, one study in Australia, and one study in the United States. […] in the largest and most detailed analysis to date, each 5-year increase in rotating shift work experience was associated with a gain of 0.17 kg/m2 in body mass index (95 % CI = 0.14–0.19) or 0.45 kg in weight (95 % CI = 0.38–0.53), among 107,663 women who were followed over 18 years in the US Nurses’ Health Study 2 [57]. Statistical models were adjusted extensively for age, baseline body mass index, alcohol intake, smoking, physical activity, and other health and lifestyle indicators.”
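For scale, it’s worth converting the headline NHS2 estimate into more familiar units (plain arithmetic on the figures quoted above):

```python
KG_PER_LB = 0.45359237
per_5y_kg = 0.45                 # weight gain per 5 years of rotating shift work (NHS2 estimate)
per_5y_lb = per_5y_kg / KG_PER_LB
print(per_5y_lb)                 # ~0.99, i.e. about one pound per five years

# even 30 years of rotating shift work would cumulate, on average, to only:
print(6 * per_5y_kg)             # 2.7 kg
print(6 * 0.17)                  # ~1.0 BMI unit (0.17 kg/m^2 per 5 years)
```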

A major problem with the ‘shift work -> obesity -> cancer’ story, however, is that the identified weight-gain effect sizes seem really small (one pound over five years is not very much, and however dangerous excess adipose tissue may be, weight differences of that magnitude certainly aren’t big enough to explain e.g. the breast cancer odds ratio of 1.6 mentioned above) – the authors don’t spell this out explicitly, but it’s obvious from the data. It may be slightly misleading to consider only the average effects, as some women may be more sensitive than others to these effects and outliers may be important, but not that misleading; I don’t think it’s plausible to argue that this is all about body mass. In the few studies which have actually looked at obesity as a potential effect modifier, the results have not been convincing:

“Although it is possible that obesity predicts both shift work and cancer risk — as would be required for obesity to be a potential confounding factor of this relation — it is probably more likely that shift work predicts obesity, in addition to obesity being a risk factor for many types of cancer. This scenario is suggested by the prospective studies of shift work and obesity described above; that is, obesity is a stronger candidate for effect modification than confounding of the association between shift work and cancer, as shift work appears to influence the risk of obesity over time. Yet, only three prior studies have conducted stratified analyses based on obesity status to evaluate the possibility of effect modification. Two of these studies focused on shift work and breast cancer, but they found no evidence of effect modification by obesity [24,26]; a third study of shift work and endometrial cancer did identify obesity as an effect modifier [32]. […] Clearly, additional studies need to carefully consider the role of body mass index—a possible confounding factor, but more likely effect modifying factor—in the association between shift work and obesity.”

I should make clear that although it makes sense to assume that obesity is a potentially major variable in the sleep–cancer risk relation, there are a lot of other variables that likely play a role as well – and the book actually talks about these things too, even though I haven’t covered them here:

“Although the exact mechanisms by which various sleep disorders may affect the initiation and progression of cancer are largely unknown, disruption of circadian rhythm, pervasive in individuals with sleep disorders, is thought to be the underlying denominator linking sleep disorders, as well as shift work and sleep deprivation, to cancer. The circadian system synchronizes the host’s daily cyclical physiology from gene expression to behavior [55]. Disruption of circadian rhythm may influence tumorigenesis through a number of mechanisms, including disturbed homeostasis and metabolism (details provided in Chap. 2), suppression of melatonin secretion (details provided in Chap. 3), intermittent hypoxia and oxidative stress (details provided in Chap. 5), reduced capacity in DNA repair, and energy imbalance.”

The obesity link relates to a few of these, but there’s a lot of other stuff going on as well. I may talk about some of those things later – I thought chapter 7 was quite interesting, so I’ve ended up talking quite a bit about that chapter in this post and neglected some of the earlier material in the book.

May 21, 2014 Posted by | Books, Cancer/oncology, Diabetes, Epidemiology, Medicine

Lupus – The Essential Clinician’s Guide

I read this book for different reasons than the ones that usually apply when I find myself reading medical textbooks; I wish those reasons did not exist.

The book is nice; there’s a lot of information in there despite the low page count, and it’s slightly less technical than some related books (e.g. this one, which I’ve also briefly had a look at). The book has a lot of data, but as a result of the nature of the publication a lot of information related to this data is missing from the coverage; for example we’re told that “[o]ral ulcers are present in 20% of patients”, but we’re not told how many patients this estimate is based upon, or how confident we should be that that number is correct. You have to take some stuff on faith, and I’ve read the book assuming that, given the background of the author, his reported estimates are at least in the right ballpark. The author might argue that providing such additional ‘meta-data’ to the readers could easily have doubled the page count without adding much information relevant to the people reading the publication, and if he did make that argument I’m not sure I’d disagree with him; it’s a good book in terms of what it sets out to do, and I can’t really blame the author for not writing a different book. Part of why I chose to give the book a high rating is that I liked it much better than some other superficially similar short books I’ve read, e.g. this one.

I’ve added some observations from the book below, along with some hopefully helpful links which may make the post a bit easier to read.

“Based on the definitions of lupus given in Chapter 2, 50% of all cases of lupus are systemic lupus erythematosus, evenly divided into organ-threatening (25%) and non-organ-threatening (25%) categories. The remainder of cases are cutaneous lupus (40%), mixed connective tissue disease and/or overlap syndromes (10%), and drug-induced lupus (<1%). There are seven undifferentiated connective tissue disease patients for every lupus patient [a different way to say this: “For every patient with SLE [Systemic Lupus Erythematosus], there are six or seven who display lupus-like symptoms without meeting SLE criteria”]. The true incidence and prevalence of systemic lupus in the United States are difficult to ascertain. Surveys have shown that only one in three patients told by a physician that he or she has SLE meets the American College of Rheumatology criteria.[1] […] Once thought to be a benign process, MCTD [mixed connective tissue disease] has a 20-year mortality rate of over 50%. […] Approximately one white male in 10,000, one white female in 1,000, and one African American female in 250 in the United States have SLE.[4] People of color are diagnosed with lupus more frequently than are Caucasians, but this statistic can get complicated. For example, Filipinos and Chinese are diagnosed with lupus much more frequently than are Japanese or Malays. […] Nearly 90% of persons with SLE are female, as opposed to 80% with chronic cutaneous lupus and 50% with drug-induced lupus. Most develop the disorder during their reproductive years. The female-to-male ratio is 2:1 before puberty, as high as 8:1 during years of active menstruation, and 2.3:1 for patients over the age of 60”

“Pathogenesis undergoes an often-gradual process consisting of several phases: predisposition, benign autoimmunity, prodrome, and clinical systemic lupus erythematosus. Only one person in 10 who possess lupus susceptibility genes ever develops full-blown lupus. Many individuals have “subclinical autoimmunity,” or undifferentiated connective tissue disorders where the process is attenuated. […] At least 30 susceptibility genes for SLE have been identified, and their presence varies widely depending on race, ethnicity, and geography.[1] […] Most of the lupus-associated genes have odds ratios (relative risks) of less than 2.5 (1 would indicate no predisposition), and they are only of minimal clinical value […] Lupus patients are normally fertile. However, only 67% of pregnancies in SLE patients are successful, compared to 85% in the general population.[4] […] A woman with lupus has a 2% risk of her son and a 10% risk of her daughter having lupus […]
As only 28% of monozygotic twins both have SLE, environmental factors clearly play a role. […] The development of autoantibodies precedes the first symptoms of SLE by two to nine years.”

“Immune complexes and apoptotic cells circulate in the bloodstream and need to be disposed of so they do not settle in tissues (which causes inflammation) or release chemicals (e.g., cytokines, chemokines) which also promote inflammation. In SLE, this clearance fails due to a variety of mechanisms: defective phagocytosis, altered transport by complement receptors, defective regulation of T helper cells by regulatory T cells, inadequate production or function of regulatory cells that kill or suppress autoreactive B cells, low production of interleukin 2 by T cells, and defects in apoptosis that permit the survival of effector T and autoreactive B cells […] Tissue damage is produced by the deposition of circulating immune complexes into tissue, which in turn activates endothelial cells, cytokines, and chemokines. In the kidneys, this produces inflammation, followed by proliferation and ultimately fibrosis (scarring). Complement activation, overloading of the Complement Receptor 1 (CR1) transport system, antibodies to complement components […], and congenital or acquired deficiency in complement components also lead to tissue inflammation and damage. Lupus is also characterized by accelerated atherosclerosis.”

“Half of persons with systemic lupus erythematosus present with organ-threatening disease. The remaining individuals do not present with cardiopulmonary, hepatic, or renal symptoms; central nervous system (CNS) vasculitis; hemolytic anemia; or thrombocytopenia on initial evaluation. Organ involvement is relatively easy to diagnose […] On the other hand, it can often take one to two years and several physician consultations before the presence of organ-sparing lupus is ascertained. Rashes can shorten the length to time of diagnosis, but young, healthy-appearing women with non-specific symptoms of fatigue and aching are often thought to have other processes or are given psychosocial explanations. […] Lupus patients complain of a sense of malaise and fatigue. Over 90% with the disease report this to their physician, and its lack of specificity can be problematic. […] Arthralgias are present in at least 80% of SLE patients, whereas observable inflammatory arthritis involving two or more joints is found in 50% at some point in the course of the disease. […] Two-thirds of lupus patients self-report sensitivity to sunlight.[2] […] Present in a little more than one-third of lupus patients, the butterfly rash is one of the disease’s most recognizable features.”

“Oral ulcers are present in 20% of patients with SLE […] Patients with lupus may complain of pain on taking a deep breath, shortness of breath, windedness, wheezing, or chest pains. The most common problem relates to pleurisy. At autopsy, most lupus patients show evidence of pleural scarring or prior inflammation. Manifested by pain or a catching sensation on taking a deep breath, pleural symptoms are present in 60% of lupus patients, and frank effusions noted in 25%, during a lifetime. […] Seen in 1% to 9% of persons with SLE, patients with acute lupus pneumonitis (ALP) present acutely with shortness of breath and fever. Often treated telephonically as a pulmonary infection with antibiotics, the process progresses rapidly if high doses of corticosteroids are not prescribed […] ALP has an 80% mortality rate if not diagnosed within two weeks of onset; recovery is usually rapid with appropriate treatment. […] Pericardial involvement is found in 60% of lupus patients at autopsy and incidentally, and asymptomatically in 25% on 2-D echocardiogram. Frank effusions are seen in 25% during the course of disease but in fewer than 5% of patients at any given point in time […] Myocardial dysfunction (often from subclinical inflammation) is found on stress echocardiography in 40% with SLE, but only 5% to 10% ever experience frank myocarditis. […] Coronary artery disease, hypertension, insulin resistance, metabolic syndrome, hyperhomocystinemia, and hyperlipidemia are more common in lupus patients […] Most lupus patients complain of at least some intermittent cognitive impairment […] Autoimmune thyroiditis, type 1 diabetes, autoimmune adrenalitis, premenstrual disease flares, and elevated prolactin levels have increased prevalence in SLE. […] Thirty percent of individuals with SLE have some form of renal involvement; in half of these cases, it mandates organ-specific therapy […] Lupus nephritis is associated with high morbidity and mortality.”

“Lupus costs the American public approximately $20 billion a year in lost wages, disability, hospitalizations, medical visits, and medication […] Direct costs account for one-third, and indirect costs two-thirds, of this amount.[1,3] The overwhelming majority of lupus patients with non-organ-threatening disease are employed full time, while 50% with organ involvement are disabled[4] […] Part-time employment is possible for many lupus patients. Total permanent disability is not to be taken lightly. Disabled patients tend to be less independent, less socially interactive, and more depressed, and to have less self-esteem.[2] […] Currently, most patients with systemic lupus erythematosus survive at least 20 years,[1] although their quality of life is not always optimal. Historically, 40% of deaths in lupus patients with serious disease were from inflammation and occurred within two years of diagnosis. Approximately 10% of deaths took place over the following 10 years. The remaining 40% of lupus patients died 12 to 25 years after diagnosis, mostly from infections and complications of chronic steroid therapy and immunosuppression. This “bimodal” curve has been altered in the past 10 years.[2] Death due to lupus during the first two years is becoming less common; individuals with serious SLE still have a 10- to 30-year shortened life expectancy due to complications of therapy.[3] […] Patients with drug-induced lupus, chronic cutaneous lupus, and non-organ-threatening SLE without antiphospholipid antibodies have a normal survival rate. […] Approximately 5% of persons with SLE experience spontaneous remission without treatment.”

May 19, 2014 Posted by | Books, Epidemiology, Immunology, Medicine

The Psychology of Personnel Selection (II)

Here’s my first post about the book.

The second half of the book was about psychological constructs used for personnel selection. The constructs included in the coverage are: IQ/general mental ability, personality traits, creativity, leadership, and talent. Some of these chapters were better than others, and I actually talked elsewhere about some of the parts of the book which I had no intention of covering here – go have a look at the comments I made in that thread if you want to know more about what kind of stuff is included in the book. They note in the introduction to chapter 10 that: “it remains unclear what talent actually is, whether it needs special nurturing to last and what it predicts” – and as far as I’m concerned, they could have stopped right then and there and this would have been perfectly fine. They didn’t, though.

I was far more interested in the stuff covered in the first couple of chapters in the second half than I was in the other stuff, and in this post I’ll restrict the coverage to the IQ/mental ability chapter and the personality trait chapter – if you’re curious to know more about what kind of stuff is covered in the last few chapters of the book, I again refer to the comments I made in the MR thread to which I link above. Despite the fact that I only spend time here on the first two chapters of the second half, the authors spend a combined 80 pages on these two topics, whereas the last three topics get a combined total of 60 pages – so I’m actually covering a big chunk of the remaining material even though I don’t talk about leadership, talent or creativity here.

I liked the book, but some parts of it were much better than others and the three star rating I gave it on goodreads is sort of a compromise rating; in this post I’ve mostly covered stuff I liked and/or found interesting. If you’re interested in this kind of stuff it’s not a bad book, but if you aren’t you probably don’t need to read it. The book never did get around to talking all that much about the interaction effects I talked about in the first post (i.e. ‘method interactions’), but as you might be able to tell from the coverage below there was significantly more focus on these aspects during the second half of the book than there was during the first half, which was nice.

Some observations from the book below and a few comments.

“To say that GMA [‘General Mental Ability’] predicts occupational outcomes, such as job or training performance, is as much a truism as an understatement, and is really beyond debate […] Indeed, there is so much evidence for the validity of GMA in the prediction of job and training performance that an entire book could be written simply describing these findings. There are several great and relatively compact sources of reference […] The predictive power of GMA at work is rivalled by no other psychological trait […] That said, GMA should not be used as single predictor of job performance as some traits, notably Conscientiousness and Integrity […], have incremental validity over and above GMA, explaining additional variance in occupational outcomes of interest […] The validity of GMA at work has been documented quite systematically since the end of World War I […] average GMA levels tend to increase with occupational level, that is, with the prestige of the job […] Furthermore, higher-level jobs tend to have substantially higher levels of inbound GMA, indicating that it is far more unlikely to find low-IQ scorers in high-level professions than it is to find high-IQ scorers in low-level professions. […] the standard deviations tend to decrease as people move up to higher-level professions, showing that these jobs tend to have not only people with higher but also more homogeneous GMA […] In a colossal quantitative review and meta-analysis of 425 studies on GMA and job performance across different levels of complexity (Hunter, 1980; Hunter & Hunter, 1984), typically referred to as ‘validity studies’, GMA was found to correlate significantly with performance at all levels of job complexity, though it is clear that the more complex the job, the more important GMA is; hence the common assertion that the relationship between GMA and job performance is moderated by job complexity. 
Indeed, it has been noted that job complexity is one of the few moderators of the effects of GMA on job performance […] Subsequent meta-analyses in the US were by and large congruent with Hunter’s findings […] the UK studies on GMA and job performance and training mirror the findings from the US. This is in line with the reported overlap in choices of test for measuring GMA. […] GMA is especially important for explaining individual differences in learning, which, in low-complexity professions, may not be that important after training, but, in high-complexity professions, may be needed especially in the job. However, it is clear (as much in the UK as in the US data) that GMA matters in every job and for both training and performance […] Studies in the European Community (EC) echo the pattern of results from US and UK studies.”

“Whilst powerful, the above reviewed studies provide no longitudinal evidence for the predictive power of GMA, making interpretation of causational paths largely speculative. However, there is equally impressive evidence for the longitudinal validity of GMA in the prediction of job performance. […] the longitudinal associations tend to hold even when socioeconomic status (SES) is taken into account. Thus, within the same family (where SES levels are the same for every member) family members with higher GMA tend to have significantly better jobs, and earn more, than their lower GMA relatives (Murray, 1998). In fact, at 1993 figures, and controlling for SES, people with average levels of GMA (IQ = 100) earned almost $20,000 less than siblings who were 20 IQ points brighter, and almost $10,000 more than siblings who scored 20 IQ points lower. […] Deary and colleagues found that childhood GMA accounted for 23.2 per cent and parental social class for 17.6 per cent of the total variance in social status attainment in mid life (Deary et al., 2005). The most compelling evidence for the longitudinal validity of GMA in the prediction of occupational level and income was provided by a study spanning back almost four decades (Judge et al., 1999). The authors reported correlations between GMA at age 12 and occupational level (r = .51) and income (r = .53) almost forty years later. Moreover, a reanalysis of these data (which also included the Big Five personality traits) estimated that the predictive power of GMA was almost 60 per cent higher than that of Conscientiousness (the trait that came second) (Schmidt & Hunter, 2004).”

“several robust studies, particularly in the US, report that whites tend to score higher than Hispanics, who tend to score higher than blacks on IQ tests. Estimates of white–black differences in IQ tend to give whites an average advantage of .85 to 1.00 standard deviation (that is, almost 15 IQ points), which is certainly ‘not trivial’ (Hough & Oswald, 2000, p. 636). Although group differences in job performance are somewhat less pronounced (Hattrup, Rock & Scalia, 1997; Waldman & Avolio, 1991), the mainstream view in intelligence research is that these differences are not caused by any test biases […] Despite this divisive picture of intellectual potential and job opportunities, there is little evidence for the benefits of ignoring GMA when it comes to selecting employees. In fact, most studies report just the opposite, namely detrimental effects of banning IQ-based personnel selection […] GMA-based selection is not necessarily a disadvantage for any group of society, as individuals would be rated on the basis of their own capability rather than their group membership […] people with an IQ < 80 (about 10 per cent of whites and 30 per cent of blacks in the US) are currently considered unsuitable for the US army by federal law and there are few civilian employers who would hire under this GMA threshold”
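Those ‘10 per cent’/‘30 per cent’ figures are roughly what a normal model implies given the quoted gap. A quick check, assuming (my assumptions, not the book’s) normal IQ distributions with SD 15, a mean of 100 for whites, and a mean 0.85 SD lower:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

SD = 15
p_below_80_white = phi((80 - 100) / SD)                # mean 100 assumed
p_below_80_gap = phi((80 - (100 - 0.85 * SD)) / SD)    # mean 0.85 SD lower
print(p_below_80_white)  # ~0.09
print(p_below_80_gap)    # ~0.31
```

The numbers land in the right ballpark, which suggests the quoted percentages are just the tails of the two distributions below the cutoff.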

I’ve seen people on the internet frame the army as a (good?) option for poor young black people in the US with limited options/who can’t afford to go to college; ‘a military career may be much better than a minimum-wage job, all things considered’. Considering that the 30 % number above is the proportion of all blacks who’d get rejected, the proportion of young poor blacks with limited options who wouldn’t be able to get in may be quite high. Although I’ve read about cutoffs like these before, I got a bit of a shock when I realized how many people don’t even have options like these available to them. Okay, back to the text – why does GMA matter?

“The main reason why GMA predicts job performance, and related outcomes, is that it causes faster, better, more effective and enduring knowledge acquisition. […] In simple terms, having a higher IQ means being able to learn faster. […] Another reason why IQ tests predict job performance is that higher GMA is linked to higher job role breadth, enabling brighter employees to perform a wider range of tasks and, in turn, be rated more highly by their supervisors […] Although GMA is a strong predictor of overall job performance (correlating at about r=.50, and thus explaining 25 per cent of the variance in job performance), it matters most in complex or intellectually demanding jobs (where it correlates at about r = .80 with job performance) and least in unintellectual or cognitively simple jobs (where it correlates with job performance at about r = .20). Objective measures of performance correlate more highly with GMA measures than subjective assessments of performance, such as supervisory ratings, do. […] Specific abilities, that is, variance in cognitive abilities unaccounted for by the general GMA factor, are insignificant predictors of job performance and related outcomes once GMA is taken into account. This is counterintuitive to most people because the layperson tends to overestimate the importance of situational and job-specific factors when interpreting the determinants of work performance.”
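The ‘explaining 25 per cent of the variance’ phrasing in the passage above is just the squared correlation; the same conversion applied to all three quoted figures:

```python
# share of variance "explained" at each correlation is r squared
variance_explained = {r: round(r * r, 2) for r in (0.20, 0.50, 0.80)}
print(variance_explained)   # {0.2: 0.04, 0.5: 0.25, 0.8: 0.64}
```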

“GMA is measured or tested via objective performance tests […], whereas personality traits are assessed via subjective inventories, notably self- or other-reports (but especially self-reports). In that sense, one can distinguish between cognitive abilities and personality traits on the basis of assessment methods, whereby the former reflect individual differences in the capacity to identify correct responses to a standardised test (verbal or non-verbal), whereas the latter reflect individual differences in general behavioural tendencies, assessed only subjectively, that is, through people’s accounts (one’s own or others’). This led to a now well-established distinction in psychology to refer to cognitive abilities in terms of maximal performance and personality traits in terms of typical performance […] With regard to job and training performance, which have been the criteria par excellence in personnel selection for over a century, it is interesting that although GMA is a good predictor of job and training performance, we use maximal performance measures (ability tests) to predict typical performance (aggregate levels of achievement at work for instance income or occupational level). […] It should be noted that personality traits are not only assessed via self-report inventories (though that is indeed the most common way of assessing them).
Observation, situational tests, projective techniques and even objective measures can also serve as measures of personality […] The most commonly used forms of observation in personnel selection are interviews […] and biodata […] Employers may not explicitly state that what they are assessing in an interview or looking for in biodata is indeed traces of personality, but there is longstanding evidence for the fact that candidates’/interviewees’ personality traits affect employers’ decisions (Wagner, 1949), despite the fact that most interviewees fake […] Although it has long been argued that personality traits – or indeed any psychological construct – should not be assessed only with one method, e.g., self-report or interview […], but with a multi-method approach […], most researchers and practitioners continue to rely on single methods of assessment and, in the case of personality, that method is self-report inventories. However, it is important to disentangle the variance that is caused by the method of assessment and the actual trait or construct that is being assessed. This is a complex theoretical and methodological issue: for example, self-reports of cognitive ability tend to correlate with performance tests of cognitive ability only at r = .40 (Chamorro-Premuzic, Moutafi & Furnham, 2005), meaning they share less than 20 per cent of the variance.”

When I read that last sentence I thought to myself that r = .40 is very low. Yeah, I know about the Dunning–Kruger effect and all that stuff – I’ve written about stuff like that before here on several occasions (see e.g. this) – but even so. This is strange to me, considering how good people usually are (‘seem to me to be?’) at figuring out where they belong in the social hierarchies they are members of. Note that a correlation like that is not explained by an ‘everybody overestimates themselves, so there isn’t a very good correspondence between actual scores and self-evaluations’ argument – if everybody overestimated themselves to the same extent, the correlation would be completely unaffected, because a correlation is insensitive to a uniform shift in one of the variables. To see correlations like these you need smart people to overestimate themselves less than the not-so-smart people do, or to be more likely to underestimate themselves. Okay, anyway, back to the book. Quite a bit of the personality trait chapter covered things which I’ve previously read about in much more detail in e.g. Funder or Leary & Hoyle, but not all of it was review, and I do want to talk a bit about some of the stuff in the book which I either have not read about before or have at least not talked about much here on the blog:

“The question of whether personality inventories should be used or not in the context of personnel selection has divided practitioners and researchers for decades. Practitioners tend to assign much more weight to personality than to abilities, but are reluctant to accept the validity of self-reports because common sense indicates that people can and will fake. On the other hand, researchers are still debating whether faking is really a problem and whether the validities of personality inventories are acceptable, meaningless or high. […] Thus the answer to the question of whether personality tests should be used in personnel selection will depend mostly on who you ask, even if answers are based on exactly the same data. [I consider this to be a big red flag, but perhaps that’s just me…] […] What is beyond debate is that personality inventories are weaker predictors of job and training performance than are cognitive ability tests”

“Regardless of where one stands in relation to the use of personality inventories […], it is clear that Conscientiousness is the most important personality predictor of job performance […], and thus the most important non-ability factor in personnel selection, at least among the Big Five personality traits. […] Agreeableness seems to be advantageous in jobs requiring interpersonal interactions or where getting along is paramount […] A typical case is customer service jobs, and indeed Agreeableness has been found to predict performance on these jobs quite well […], especially if based on teamwork rather than individualistic tasks”

“Ever since personality inventories were developed there have been objections to the use of such tests in personnel selection […] In the context of work psychology the two main criticisms are that it is easy to fake responses to a personality inventory and that personality traits are only weak predictors of occupational outcomes. […] In an attempt to provide a comprehensive review of the literature, Michael Campion (in Morgeson et al., 2007) examined the salient studies on faking […], concluding that:
‘Four overall conclusions can be drawn from this review of the research literature on faking in personality tests. First, the total number of studies on the topic is large, suggesting that faking has been viewed as an important problem. Second, people can and apparently do fake their responses on personality tests. Third, almost half the studies where criterion-related validity was studied found some effect of faking on criterion-related validity. Fourth, there has been substantial research devoted to techniques for detecting and mitigating faking, but no techniques appear to solve the problem adequately.'” […]

“Even if faking can be overcome, or in cases where it does not seriously threaten the validity of personality traits as predictors of work-related outcomes, critics of the use of personality inventories in personnel selection have another, often more fundamental, objection, namely the fact that the magnitude of the association between personality traits and the predicted criteria is modest at best, and often non-significant […] The irony is that opposite conclusions are often drawn from exactly the same data. […] Regardless of the magnitude of the correlation between personality scores and work-related outcomes, it is clear that the validity of personality inventories is largely dependent on the type of criterion we chose to predict. Thus, unlike with GMA, many factors moderate the effects of personality on job and training performance […] It is plausible to predict that the validities of personality traits will increase substantially if the correct criteria (or predictors) are chosen.”

May 19, 2014 Posted by | Books, Psychology | Leave a comment

The Well of Lost Plots

I don’t really feel like blogging anything which takes any effort at the moment. So despite having enough material from The Psychology of Personnel Selection, which I finished a few days ago, and Impact of Sleep and Sleep Disturbances on Obesity and Cancer, which I’ve yet to finish but have spent some time on, for at least a couple of posts, I’ll cover a novel instead. I read two novels this weekend, Harper Lee’s To Kill a Mockingbird and Jasper Fforde’s The Well of Lost Plots – I’ll blog the book I liked best. I found Lee sort of boring in a way, although I don’t exactly think the book is awful (it’s probably overrated, but that’s different) – there was way too little George R.R. Martin in that book and way too much Tolkien.

Jasper Fforde’s book was brilliant though – I think I liked this one better than the second book in the series. I’ve added a few sample quotes from it below. If you haven’t read along for very long, here’s incidentally a post I wrote about the first book in Fforde’s series, The Eyre Affair.

“I found the correct door. It opened to a vast waiting room full of bored people who all clutched numbered tickets and stared vacantly at the ceiling. There was another door at the far end with a desk next to it manned by a single receptionist. He stared at my sheet when I presented it, sniffed and said:
‘How did you know I was single?’
‘Just then, in your description of me.’
‘I meant single as in solitary.'”

“‘Good. Item seven. The had had and that that problem. […] The use of had had and that that has to be strictly controlled; they can interrupt the ImaginoTransference quite dramatically, causing readers to go back over the sentence in confusion, something we try to avoid.’
‘Go on.’
‘It’s mostly just an unlicenced usage problem. At the last count David Copperfield alone had had had had sixty-three times, all but ten unapproved. Pilgrim’s Progress may also be a problem owing to its had had / that that ratio.’
‘So what’s the problem in Progress?’
‘That that had that that ten times but had had had had only thrice. Increased had had usage had had to be overlooked but not if the number exceeds that that that usage.’
‘Hmm,’ said the Bellman. ‘I thought had had had had TGC’s approval for use in Dickens? What’s the problem?’
‘Take the first had had and that that in the book by way of example,’ […] ‘You would have thought that that first had had had had good occasion to be seen as had, had you not? Had had had approval but had had had not; equally it is true to say that that that that had had approval but that that other that that had not.’
‘So the problem with that other that that was that—?’
‘That that other-other that that had had approval.'” (it doesn’t stop there, but I think you’ve got the general idea…)

“He handed me a letter. I unfolded it and read:

Dear Mr Spratt,

It has come to our attention that you may be attempting to give up the booze and reconcile with your wife. While we approve of this as a plot device to generate more friction and inner conflicts, we most strongly advise you not to carry it through to a happy reconciliation, as this would put you in direct contravention of Rule 11C of the Union of Sad Loner Detective’s Code, as ratified by the Union of Literary Detectives, and it will ultimately result in your expulsion from the association with subsequent loss of benefits.
I trust you will do the decent thing and halt this damaging and abnormal behaviour before it leads to your downfall.
PS. Despite repeated demands, you have failed to drive a classic car or pursue an unusual hobby. Please do so at once or face the consequences.”

“Fire was not an option in a published work; they had tried it once in Samuel Pepys’ Diary and burnt down half of London.” (relevant link)

“The twentieth century has seen books being written and published at an unprecedented rate – even the introduction of the Procrastination1.3 and Writer’sBlock2.4 Outlander viruses couldn’t slow the authors down.”

May 18, 2014 Posted by | Books | 2 Comments

The Psychology of Personnel Selection (1)

“Ideally those in the business of selection want to use reliable and valid measures to accurately assess a person’s abilities, motives, values and traits. There are many techniques available and at least a century of research trying to determine the psychometric properties of these methods. Over the past twenty years there have been excellent meta-analyses of the predictive validity of various techniques.
In this chapter we considered some assessment and selection techniques that have alas ‘stood the test of time’ despite being consistently shown to be both unreliable and invalid. They are perhaps more a testament to the credulity, naivety and desperation of people who should know better.
However, it is important to explain why these methods are still used. One explanation is the Barnum effect [see also this] whereby people accept as valid about themselves and others high base-rate, positive information. The use to a client of personal validation of a test by a consultant or test publisher should thus be questioned.
Part of the problem for selectors is their relative ignorance of the issues which are the subject of this book. Even specialists in human resources remain uninformed about research showing the poor validity and reliability of different methods.”

Like the body language book, this book belongs in the category of ‘books that might actually contain information that will be useful for me to know at some point’ – the stuff covered is certainly related to thoughts I posted here on the blog not long ago.

I’ve read the first half of the book by now, which covers methods of personnel selection; the second half deals with the (psychological) constructs used for personnel selection. I’m not particularly impressed at this point, but it’s not a bad book and there are some useful observations in here. The analysis provided in the coverage is not very sophisticated: they’ll talk about the usual correlations between methods and job performance and other outcome variables, and they’ll have a word or two about the ‘variance explained’, perhaps derived from meta-reviews – but detailed analysis is often missing, or at least the analysis is less detailed than I’d have liked it to be. And it’s not just at the analytical level that the coverage is not too impressive; it’s also at the more conceptual level. Biographical data – i.e. information about a person’s background and life history (/work history) and the like (they call it ‘biodata’) – may for example not play a large role overall and may have rather limited power in terms of explaining future performance, but some of the specific variables which might enter the hiring analysis through such aspects of an applicant’s background may still be very high-impact; one example would be past prison sentences. They do talk a little about the different scoring mechanisms employers apply, which at least implicitly relates to the types of decision rules firms might use to integrate data like this into the hiring process – they don’t really talk about decision rules or implementation at all, but you sort of know these things are lurking in the background – yet they only talk about that stuff in the abstract, and I’m not sure I understand why they don’t go into more detail here in terms of identifying variables of particular interest.
Another problematic aspect to me was the almost total absence of cost-benefit considerations in the chapter about interviews. More specifically, I remember reading a study perhaps a year or two ago which found that using two interviewers rather than one seemed not to be cost-efficient (in that sample at least, or in the baseline model setting – something like that…), as the greater accuracy obtained was not sufficient to cover the increased cost of having an extra employee interview people rather than doing other stuff instead. As the job-relevant information you can obtain from an interview is not particularly impressive to start with, compared to the information you may be able to get from other mechanisms, this is not too surprising, but such aspects were nevertheless not covered. I’m too lazy to find the study, but the main point is that tradeoffs like these do exist and likely inform, or at least should inform, some firms’ interview practices – to me it seems as if it would make a lot of sense to include such information in a book like this as well.

On another note, even though they do talk about how to improve upon methods which might have some promise but are often used in a suboptimal manner, how to optimally combine methods is not really addressed in the book at the analytical level, certainly not in any detail. Perhaps it’s not really fair to criticize the authors for that, as I’d probably have had some critical remarks either way, but it feels a little strange to me that such aspects are not covered. The authors talk about specific methods, and they talk about how good these are – interviews are this good, references are that good. They compare the methods with other methods which might be able to provide the same information, which is something you’d expect them to do. It’s made clear in the book, as you should be able to tell from the quotes below, that some methods overlap in terms of the information you’ll get out of them. However, might it not also be the case that a combination of methods will sometimes provide more value to the firm than the sum of the coefficients might indicate? One might conceptualize this in terms of a hurdle model where the firm will only hire someone once it is convinced that the individual is ‘good enough’; before some individual has cleared the hurdle and proven himself worthy, all efforts aimed at finding the right candidate are basically wasted – and it may turn out that the firm needs GPA, biodata and interview information before anyone can be said to have cleared the hurdle, because these data sources are deemed sufficiently different from each other to enable the employer to assess the candidate along all the relevant dimensions (or something like that).
Maybe this is not a good way to think about it, but either way many employers do use multiple methods during the same selection process, and not taking interactions among methods into account at the analytical level – in particular, implicitly focusing on only one type of interaction in the coverage – seems problematic in light of that. You want to compare the methods, but in order to make proper comparisons you need to somehow include the relevant potential combinations of methods in the analysis as well, and the authors don’t engage in this type of analysis (there are a few remarks about ‘incremental validity’ in a couple of places, but that’s it). I don’t know – maybe they’ll talk about this stuff in more detail in the second part of the book.
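The hurdle idea can be sketched very simply. The dimensions, weights and cutoffs below are all made up for illustration; real selection systems are obviously messier:

```python
# A compensatory rule sums weighted scores; a multiple-hurdle rule
# requires a minimum on every single dimension before anyone is hired.
# All weights and cutoffs here are hypothetical.

def compensatory(scores, weights, threshold):
    """Hire if the weighted sum of (0-100) scores clears a threshold."""
    total = sum(weights[k] * scores[k] for k in scores)
    return total >= threshold

def multiple_hurdle(scores, cutoffs):
    """Hire only if every dimension clears its own cutoff."""
    return all(scores[k] >= cutoffs[k] for k in cutoffs)

candidate = {'gma': 85, 'interview': 40, 'biodata': 70}
weights = {'gma': 0.5, 'interview': 0.2, 'biodata': 0.3}
cutoffs = {'gma': 60, 'interview': 50, 'biodata': 50}

print(compensatory(candidate, weights, threshold=60))  # True: strong GMA compensates
print(multiple_hurdle(candidate, cutoffs))             # False: fails the interview hurdle
```

Under a compensatory rule a strong GMA score can offset a weak interview; under a multiple-hurdle rule the same candidate is rejected the moment any one hurdle is failed, which is one way a combination of methods can matter in a manner that the individual validity coefficients don’t capture.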

I’ve added a few observations from the book below, as well as some more comments. The main thing I’ve taken away from the first part of the book is that even the most informative methods available to future employers don’t really tell you nearly as much as you might think they do (and it seems clear from the coverage that most of them probably tell the employers/interviewers much less than these people think they do) – there’s a lot of variation in performance etc. which is in some sense unaccounted for.

“Whatever we might believe about physiognomy and personology, it is clear that many people make inferences about others based on appearance. […] Considerable experimental evidence suggests that people can and do infer personality traits from faces […] Taken as a whole, this research shows that the process of inferring traits from faces is highly reliable. That is, different judges tend to infer similar traits from given faces. [see incidentally Funder for more details on this kind of stuff; the details are messier than the authors let on here – for example it matters a lot which traits we’re talking about here, as some are much easier to observe than are others] […] However, the picture that emerges regarding the validity of physiognomic judgements is more ambiguous. […] There remains very little evidence that body shape is a robust marker of temperament or ability and should therefore be used for personnel selection. That said, it is to be expected that people’s (for example, interviewers’) perceptions of others’ (e.g., interviewees’ or job applicants’) psychological traits will be influenced by physical traits, but this will inevitably represent a distorted and erroneous source of information and should therefore be avoided.”

“The result of an interview is usually a decision. Ideally this process involves collecting, evaluating and integrating specific salient information into a logical algorithm that has been shown to be predictive.
However, there is an academic literature on impression formation that has examined experimentally how precisely people select particular pieces of information. Studies looking at the process in selection interviews have shown all too often how interviewers may make their minds up before the interview even occurs (based on the application form or CV of the candidate), or that they make up their minds too quickly based on first impression (superficial data) or their own personal implicit theories of personality. Equally, they overweigh or overemphasise negative information or bias information not in line with the algorithm they use. […] Research in this area has gone on for fifty years at least. Over the years small, relatively unsophisticated studies have been replaced by ever more useful and important meta-analyses. There are now a sufficient number of meta-analyses that some have done helpful summaries of them. Thus Cook (2004) reviewed Hunter and Hunter (1984) (30 studies); Wiesner and Cronshaw (1988) (160 studies); Huffcutt and Arthur (1994) (114 studies) and McDaniel, Whetzel, Schmidt and Maurer (1994) (245 studies). These meta-analyses covered many different studies done in different countries over different jobs and different time periods, but the results were surprisingly consistent. Results were clear: the validity coefficient for unstructured interviews as predictors of job performance is around r = .15 (range .11 – .18), while that for structured interviews is around r = .28 (range .24 – .34). Cook (2004) calculates the overall validity of all interviews over three recent meta-analyses – taking job performance as the common denominator of all criteria examined – to be around r = .23.”

“given that interviews are used to infer information about candidates’ abilities or personality traits […], they provide very little unique information about a candidate and show little incremental validity over established psychometric tests (of ability and personality) in the prediction of future job performance […] All sorts of extraneous factors like the perfume a person wears at interview have been shown to influence ratings.”
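The ‘little incremental validity’ claim is easy to illustrate with a back-of-envelope calculation using the standard formula for the squared multiple correlation with two predictors. The criterion validities below (.50 for GMA, .23 for interviews overall) are figures from the book’s coverage; the GMA–interview correlation of .40 is my own assumed value, so treat the result as a sketch rather than an estimate:

```python
# How much extra variance in job performance does an interview explain
# once GMA is already in the model? Uses the two-predictor multiple-
# correlation formula; r_12 (the GMA-interview overlap) is assumed.

def r2_two_predictors(r_y1, r_y2, r_12):
    """Squared multiple correlation of y on predictors 1 and 2."""
    return (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)

r_gma, r_int, r_12 = 0.50, 0.23, 0.40  # r_12 = 0.40 is an assumption

r2_gma_only = r_gma**2
r2_both = r2_two_predictors(r_gma, r_int, r_12)
delta = r2_both - r2_gma_only

print(f"GMA alone: {r2_gma_only:.3f}, GMA + interview: {r2_both:.3f}, "
      f"increment: {delta:.4f}")
```

With these inputs the interview adds roughly a tenth of a percentage point of explained variance on top of GMA – which is the flavour of result behind the quote’s ‘little incremental validity’ remark, at least whenever the interview mostly measures things the psychometric tests already capture.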

There’s a literature on this stuff, and they provide a few samples of the sort of findings that may pop up when people look at these things. I’ve talked about some of these before, but I’ll add them here anyway. Attractive people get higher evaluations – this is not surprising. Female interviewers gave higher ratings than male interviewers. Early impressions were more important than factual information for interviewer ratings. There’s a contrast effect at work where your rating may be influenced by the rating of the guy who came before you. Non-verbal communication clearly matters – ‘applicants who looked straight ahead, as opposed to downwards, were rated as being more alert, assertive and dependable; they were also more likely to be hired. Applicants who demonstrated a greater amount of eye contact, head moving and smiling received higher evaluations.’ Interviewers give higher ratings to applicants they perceive to be similar to themselves, and/or applicants they find ‘likeable’. Interviewers may weigh negative information more heavily than positive information. They have been found to talk more when they’ve formed a favourable decision. Pre-interview impressions have been found to have strong effects on the outcome of an interview; if the interviewer is favourably inclined before the interview starts, the interviewer is more likely to rate you highly and to think you handled the interview well. In terms of the time interviewers spend making a decision, it’s noteworthy that the decision to hire may be made very fast: “Interviewers reached a final decision early in the interview process; some studies have indicated the decision is made after an average of 4 minutes. Decisions to hire were made sooner than decisions not to hire.”

A bit more job interview stuff:

“Interpersonal skills manifest in interviewing can be characterised by:

Fluency: smooth, controlled, unflustered progress. 
Rapidity: speedy responses to answers and issues.
Automaticity: performing tasks without having to think.
Simultaneity: the ability to mesh and coordinate multiple, verbal and non-verbal tasks at the same time.
Knowledge: Knowing the what, how, when and why of the whole interview process.

Skills also involve understanding the real goal of the interview, being perceptive, understanding what is and what is not being said, and empathy. Recent research in the past decade has argued that the key issue assessed by the employment interview is the person–organisational fit […] [however] most interviewers try to assess candidates’ personality traits, followed closely by social or interpersonal skills, and not that closely by intelligence and knowledge. On a few occasions, interviewers focus on assessing interviewees’ preferences or interests and physical attributes, and the variable of least interest appears to be fit […] It is noteworthy that all these variables can be assessed via reliable and valid psychometric tests […], which begs the question of what if any unique information (that is reliable and valid) can be extracted from employment interviews.”

What about references? One funny observation I hadn’t thought about in this context is that if you’re an employer who wants to get rid of a guy, a ‘good way’ to help him on his way is to write a nice reference letter. In a way you have a much stronger incentive to provide the low-productivity worker with a very nice letter of reference than you do your star employee; you’d much rather the latter didn’t go anywhere and kept working in your company. More generally, references tend to say nice things about the people who ask for them (and not only nice things, but the same nice things – ‘referees tend to write similar references for all candidates’), meaning that the variance is low – references, in particular unstructured ones, don’t actually tell you very much because they tend to look very similar. Here’s part of what they write in the book:

“References are almost as widely used in personnel selection as the interview […] Yet there has been a surprising dearth of research on the reliability and validity of the reference letter; and, as shown in this chapter, an assessment of the existing evidence suggests that the reference is a poor indicator of candidates’ potential. Thus Judge and Higgins (1998) concluded that ‘despite widespread use, reference reports also appear to rank among the least valid selection measures’ […] The low reliability of references has been explained in terms of evaluative biases (Feldman, 1981) attributable to personality characteristics of the referee […] Most notably, the referee’s mood when writing a reference will influence whether it is more or less positive […] Some of the sources of such mood states are arguably dispositional […] and personality characteristics can have other (non-affective) effects on evaluations, too. For example, agreeable referees […] can be expected to provide more positive evaluations”

“the question remains as to whether […] referees can provide any additional information to, say, psychometric tests”

A little more from the book:

“Although the wider literature has provided compelling evidence for the fact that cognitive ability tests, particularly general mental ability scores, are the best single predictor of work performance […], Dean and Russell’s (2005) results provide a robust source of evidence in support of the validity of coherently constructed and scored biodata scales”

One big problem is that although that may be the case, that validity is mostly academic, as that’s mostly not how employers handle the data – i.e. using ‘coherently constructed and scored scales’: “Biodata are typically obtained through application forms […] It is […] noteworthy that application forms are generally not treated or scored as biodata. Rather, they represent the collection method for obtaining biographical information and employers or recruiters often assess this information in non-structured, informal, intuitive ways”. So, yeah.

“The most important conclusion with regard to biodata is no doubt that they represent a valid approach for predicting occupational success (in its various forms). Indeed, meta-analytic estimates provided validities for biodata in the region of .25 […] In any case, this means that biodata are as valid predictors as the best personality scales, though the fact that biodata scales overlap with both personality and cognitive ability measures limits the appeal of biodata.”

“GPA-based selection has been the target of recurrent criticisms over the years and there are still many employers and recruiters who are reluctant to select on the basis of GPA. […] [however] many selection strategies use GPA to ‘sift’ or select out candidates during the early stages of the selection process […] In the past ten years meta-analysis has provided compelling evidence for the validity of GPA in occupational settings. Most notably, Roth and colleagues reported corrected validities above .30 for job performance […] and .20 for initial level of earnings […] the highest validity was found for job performance one year after graduating, with validities decreasing thereafter […] When salary is taken as the criterion, the highest validity was found for current salary, followed by starting salary, and last for salary growth […] the overall corrected validity above .30 for performance and around .20 for salary is at least comparable to and often higher than that of personality traits […] the causes of individual differences in GPA are at least in part similar to the causes of individual differences in job outcomes. […] GPA can be conceptually linked to occupational performance in that it carries variance from both ability and non-ability factors that are determinants of individual differences in real-world success”

May 14, 2014 Posted by | Books, Psychology | Leave a comment

The Origin and Evolution of Cultures (II)

“Brain tissue is quite expensive. All else equal, selection will favor the stupidest possible creatures.”

I really liked that quote. Here’s a related one from the book:

“On the cost side, selection will favor as small a nervous system as possible. If our hypothesis is correct, animals with complex cognition foot the cost of a large brain by adapting more swiftly and accurately to variable environments.”

This post doesn’t really deal in much more detail with the observations above; I just liked those quotes, and they didn’t really fit in with the rest of the coverage, though I could probably have put them in there somewhere. Before moving on to the main coverage I should note that it would make a lot of sense for people who read this post to read my first post about the book before reading this one. If you’ve already done so, do carry on.

After I’d read the first couple of hundred pages I was a bit exhausted, so I took a break from this book for a while; as I pointed out on goodreads when I started, “I’m far from certain I’ll manage to get through this one in one go.” Yesterday I decided to pick up the book again, and fortunately the next few chapters seem less technical than the ones that had me putting it away.

The book is really nice, but it feels hard for me to blog because of the technical nature of the coverage (much of this stuff is really just applied game theory). Most chapters will deal with a specific model and talk about the model results, and unless I actually tell you all about what the models are doing and which assumptions are made (i.e., basically repost the entire book here) a lot of critical details will be left out – there are a lot of caveats and nuances, and not including them in the coverage might give people the wrong idea about what’s going on in the book. Sometimes a complex model is compared to a simple model in a chapter and the complex model is the more interesting one; in those cases you may need to cover the simple model as well for it to make sense to talk about the details of the complex model, and we’re back to ‘it’s hard to exclude anything’. A general ‘problem’ with this book in terms of these things – which is of course properly to be considered a strength – is that there aren’t really that many pages with fluffy stuff you can just leave out. Fortunately they occasionally draw conclusions from the models and try to give a big-picture account of what’s going on, and I’ve disproportionately quoted from those passages in the post below. I’ve left a lot of details out, but there was no alternative to doing that. A lot of crucial context which I’ve not realized is missing is probably missing anyway – do ask questions if something is unclear here.

“Human brains […] are adapted to life in small-scale hunting and gathering societies of the Pleistocene. They will guide behavior within such societies with considerable precision, but behave unpredictably in other situations. […] Learning devices will be favored only when environments are variable in time or space in difficult to predict ways. Social learning is a device for multiplying the power of individual learning. […] Social learning can economize on the trial and error part of learning. […] Selection will favor individual learners who add social learning [‘learn from others, e.g. by imitating them‘] to their repertoire so long as copying is fairly accurate and the extra overhead cost of the capacity to copy is not too high. In some circumstances, the models suggest that social learning will be quite important relative to individual learning. It can be a great advantage compared to a system that relies on genes only to transmit information and individual learning to adapt to the variation. Selection will also favor heuristics that bias social learning in adaptive directions. When the behavior of models [‘people you might copy’] is variable, individuals who try to choose the best model by using simple heuristics like “copy dominants” or “go with the majority,” or by using complex cognitive analyses, are more likely to do well than those who blindly copy. Contrarily, if it is easy for individuals to learn the right thing to do by themselves, or if environments vary little, then social learning is of no utility.”

“We believe that the lessons of [the] model [they just talked about] are robust. It formalizes three basic assumptions:

1. The environment varies.
2. Cues about the environment are imperfect, so individuals make errors.
3. Imitation increases the accuracy (or reduces the cost) of learning.

We have analyzed several models that incorporate these assumptions but differ in other features. All of these models lead to the same qualitative conclusion: when learning is difficult and environments do not change too fast, most individuals imitate at evolutionary equilibrium. At that equilibrium, an optimally imitating population is better off, on average, than a population that does not imitate. […] for something to be a norm, there has to be a conformist element. People must agree on the appropriate behavior and disapprove of others who do not behave appropriately. We […] show that individuals who respond to such disapproval by conforming to the social norm are more likely to acquire the best behavior. […] as the tendency to conform increases, so does the equilibrium amount of imitation. […] all conditions that lead a substantial fraction of the population to rely on imitation also lead to very strong conformity. […] a tendency to conform increases the number of people who follow social norms and decreases the numbers who think for themselves.”

“Human populations are richly subdivided into groups marked by seemingly arbitrary symbolic traits, including distinctive styles of dress, cuisine, or dialect. Such symbolically marked groups often have distinctive moral codes and norms of behavior, and sometimes exhibit economic specialization. […] The following two chapters explore the idea that symbolically marked groups arise and are maintained because dress, dialect, and other markers allow people to identify in-group members. In chapter 6, we analyze a model that assumes that identifying in-group members is useful because it allows selective imitation. Rapid cultural adaption makes the local population a valuable source of information about what is adaptive in the local environment. Individuals are well advised to imitate locals and avoid learning from immigrants […] studies like those of Fredrik Barth […] suggest that contemporary ethnic groups often occupy different ecological niches. […] In chapter 7, we […] study a model in which markers allow selective social interaction. […] These models have several interesting and, at least to us, less-than-obvious properties. First, the same nonrandom interaction that makes markers useful also creates and maintains variation in symbolic marker traits as an unintended by-product. Nonrandom interaction acts to increase correlation between arbitrary markers and locally adaptive behaviors. This, in turn, makes markers more useful, setting up a positive feedback process that can amplify small differences in markers between groups. […] once groups have become sharply marked, the feedback process is sufficient by itself to maintain group marking even if groups are perfectly mixed and there is no population structure other than that caused by the markers. 
[…] processes closely related to those modeled here can lead to the “runaway” evolution of marker and preference traits, which have no adaptive or functional explanation […] It is easy to imagine that the adaptive uses of cultural markers are common enough so that selection on genes maintains a cognitive capacity to use them despite the runaway process carrying some to maladaptive extremes. We are convinced that complexities of this sort are a pervasive feature of the coevolutionary process that links genes and culture. If this idea is correct, any attempt to reduce the problems of human evolution to binary choices between sociobiological and cultural explanations is bound to fail.”

“Studies of the diffusion of innovations […] suggest that people often use two simple rules to increase the likelihood that they acquire locally adaptive beliefs by imitation. The chance that individual A will adopt an innovation modeled by individual B [i.e., ‘do as B does’] often seems to depend upon (1) how successful B is, and (2) the similarity of A and B.”
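
Those two rules can be written down as a one-line heuristic. The function below is purely my own illustration – the functional form and the weights are invented, not taken from the diffusion-of-innovations literature:

```python
def adoption_probability(success_b, similarity_ab, base=0.05,
                         w_success=0.6, w_similarity=0.35):
    """Chance that A adopts an innovation modeled by B, increasing in
    (1) B's success and (2) A-B similarity, both scaled to [0, 1].
    The baseline and weights are made-up illustration values."""
    p = base + w_success * success_b + w_similarity * similarity_ab
    return min(1.0, max(0.0, p))  # clamp to a valid probability
```

So a highly successful, highly similar model B is almost certain to be copied, while an unsuccessful, dissimilar one is copied only at the small baseline rate.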

“Many anthropologists believe that people follow the social norms of their society without much thought. According to this view, human behavior is mainly the result of social norms and rarely the result of considered decisions. […] Many anthropologists also believe that social norms lead to adaptive behaviors; by following norms, people can behave sensibly without having to understand why they do what they do. […] Norms will change behavior only if they prescribe behavior that differs from what people would do in the absence of norms. […] By this notion, people obey norms because they are rewarded by others if they do and punished if they do not. As long as the rewards and punishments are sufficiently large, norms can stabilize a vast range of different behaviors.”

One thing to note, both in relation to the paragraph above and the passage quoted below, is that there’s a big conceptual difference between strategies that punish defectors by withholding future cooperation and strategies that ‘actively’ punish them (e.g. by beating them up, killing them…). One way to conceptualize the difference is to think of the former as strategies where punished individuals are limited to a payoff of 0, whereas punished individuals in the latter context might experience (unbounded?) negative payoffs as well. Boyd and Richerson first look at reciprocating strategies, where you cooperate when others do and sanction defection with non-cooperation in the future, and it turns out that such strategies actually don’t do very well in large groups: in their models it seems implausible that reciprocity on its own could support cooperative equilibria when n is large, which is the motivation for looking at actual ‘punishment strategies’ that go a bit further than that. A problem with punishment strategies is that they’re often (but not always) altruistic: if punishment works by making defectors switch to ‘cooperate’ in future periods, and it’s costly for an individual to punish someone, then punishing is quite likely to mostly benefit other people (especially as n grows) while the punisher is the one incurring the cost – punishment is a public good.
So people may decide to become ‘reluctant punishers’ who let the others do the punishing, and if enough people go that route these equilibria become unstable. This is the so-called problem of second-order cooperation: you can defect at any stage of the game, and in this particular case it’s a two-stage game where you can either defect from the start, or defect at the second stage by refusing to punish those who defected during the first period. If n is small, punishment strategies need not be altruistic – you may meet and interact with the guy enough times in the future for it to make sense to punish him now – and if the cost of punishment is small compared to the benefits of cooperation, that will of course also help support equilibria of that nature. A general thing to note here, which is perhaps not made perfectly clear in the stuff above, is that finding out how ‘cooperative equilibria’ of one kind or another may come about, and under which conditions they’re stable, is really a big part of understanding what culture is all about and how it works, when you look at it from a certain point of view – it’s puzzling that humans cooperate with other humans to the extent that they do, and as people who’ve done theoretical work on this stuff have found out over the years, it’s actually not at all easy to figure out why they (we) do that. It’s certainly a lot more complicated than people who don’t know anything about such topics presumably think it is.
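
The ‘punishment is a public good’ point is easy to see with a back-of-the-envelope payoff calculation. This is my own illustrative arithmetic, not a model from the book; the benefit `b` and the cost of punishing are made-up numbers:

```python
def punisher_net_gain(n, b=2.0, cost_of_punishing=0.4):
    """Per-punishment payoff to a lone punisher when inducing one
    defector to cooperate yields a benefit b shared equally among all
    n group members, while the punisher alone bears the cost.
    Illustrative numbers only."""
    return b / n - cost_of_punishing

# In a small group punishing can pay for itself; in a large group the
# punisher's share of the induced benefit no longer covers the cost,
# so punishment becomes altruistic (a second-order public good):
small = punisher_net_gain(4)    # positive: punishing pays
large = punisher_net_gain(40)   # negative: punishing is altruistic
```

The crossover happens where b/n equals the cost of punishing, which is exactly why growing n makes punishment-based cooperation harder to sustain.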

I really liked the stuff they had on moralistic strategies, a subset of the punishment strategies analyzed in chapter 9, and I’ve quoted from this below:

“Moralistic strategies [are] strategies that punish defectors, individuals who do not punish noncooperators, and individuals who do not punish nonpunishers […] moralistic strategies can cause any individually costly behavior to be evolutionarily stable, whether or not it creates a group benefit. Once enough individuals are prepared to punish any behavior, even the most absurd, and to punish those who do not punish, then everyone is best off conforming to the norm. Moralistic strategies are a potential mechanism for stabilizing a wide range of behaviors. […] moralistic punishment is inherently diversifying in the sense that many different behaviors may be stabilized in exactly the same environment. It may also provide the basis for stable among-group variation. […] In the model studied here, punishers collect private benefit by inducing cooperation in their group that compensates them for punishing, while providing a public good for reluctant cooperators. There are often polymorphic equilibria in which punishers are relatively rare, generating a simple political division of labor […] This finding invites study of further punishment strategies. Consider, for example, strategies that punish but do not cooperate. Such individuals might be able to coerce more reluctant cooperators than cooperator-punishers and therefore support cooperation in still larger groups.”
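
The core stabilizing condition can be written in one line. This is a heavily simplified sketch of my own – the chapter’s model also covers punishment of non-punishers and polymorphic equilibria, none of which is captured here; I assume deviation is detected with certainty and each punisher inflicts a fixed penalty:

```python
def norm_is_stable(cost_of_conforming, punishment, frac_punishers):
    """An individually costly behavior (with no assumed group benefit)
    is a best response whenever the expected punishment for deviating
    exceeds the cost of conforming -- regardless of what the behavior
    actually is.  All parameter values below are made up."""
    return punishment * frac_punishers > cost_of_conforming

# The same machinery stabilizes a mildly costly norm and an 'absurd',
# very costly one, as long as enough punishers are around:
useful = norm_is_stable(cost_of_conforming=0.2, punishment=3.0,
                        frac_punishers=0.5)     # True
absurd = norm_is_stable(cost_of_conforming=1.0, punishment=3.0,
                        frac_punishers=0.5)     # True
unenforced = norm_is_stable(cost_of_conforming=1.0, punishment=3.0,
                            frac_punishers=0.1)  # False
```

Note that nothing in the condition refers to what the behavior is, which is precisely the ‘inherently diversifying’ point: many different behaviors can be stabilized in exactly the same environment.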

That chapter has a lot more detail on these things. Anyway, behavioural strategies that look terribly maladaptive ‘from the outside’ (and/or may in fact be terribly maladaptive… at the group level – do note that these two do not necessarily overlap) may become fixed in a population even so, and such equilibria, once reached, may be very hard to break. This isn’t exactly an uplifting story, but of course if you’ve had a look around the world this shouldn’t be news. As mentioned, it’s very much worth keeping in mind that a strategy which outsiders might think is really quite awful, because it leads to behaviours the outsiders don’t like, may still be highly adaptive – the adaptiveness of a behavioural strategy set and whether said strategy set gives you a good feeling in your stomach have nothing to do with each other, and there’s no Eternal Law of Progress, whatever that latter word might mean, guiding which strategy sets ‘win’.

May 11, 2014 Posted by | Anthropology, Books, culture, Evolutionary biology

Geophysical Hazards

Yesterday I started reading the Springer publication Geophysical Hazards: Minimizing Risk, Maximizing Awareness. I didn’t get very far because the book is really bad, and I’ve now decided I’m not going to finish this book – if you want to know the short version of what I thought about the book, read my review on goodreads. In this post I want to include a few observations I made along the way while reading this book, in order to at least somehow justify having read the parts of it I did; there was a little bit of interesting stuff, but you had to go through a lot of crap to get to it and it’s just not worth it.

I’ve decided mostly to talk about the book in this post rather than quote from it; however, I do want to start out with a quote from the book:

“the Budapest Manifesto on Risk Science and Sustainability[2] provided a generic framework suitable for environmental risk management across a variety of disciplines, including hazards studies. […] It can be summarised by the following list of items that need to be examined:

• Consultation
• Concerns
• Consequences
• Calculations
• Certainties, uncertainties, probabilities
• Comparing against pre-determined criteria
• Control, mitigate and adapt
• Communicate
• Monitor
• Review.

There are 10 items listed above. Risk management praxis consists in undertaking steps 2–8 in sequential order, and then re-checking to make sure that any residual risk has been dealt with adequately. […] certain terms have not been incorporated into the Budapest Manifesto framework. In particular, the idea that one needs to determine the context within which a risk management activity takes place. The first step in risk management is always to establish the context in which to operate.”

Okay, on to my own observations. The first one is that mobile phones are actually really neat things which in theory could become excellent tools in a disaster setting – for example, a government agency could notify/warn people of a disaster in progress via a text message (according to the book, Finland decided as early as 2005 that this was the way to go), or, say, victims trapped under debris in a disaster setting might be identified/found through their phone signals. One major issue is that most phone networks have little excess capacity, and widespread disasters are therefore likely to interrupt such lines of communication completely (lots of people calling their loved ones all at the same time, or calling relevant local government agencies), making mobile phones much better suited for small-scale disasters than for large-scale ones. I have encountered the ‘phones may not be all that helpful in a big disaster setting’ point before, e.g. here and here, but I hadn’t really thought about the fact that these sorts of dynamics don’t really play any role in small-scale disasters, where these tools may actually be immensely useful.

There’s a lot of geographic variation in the pattern of environmental hazard losses, which is the result of the interplay of a large number of variables. It turns out that we don’t know nearly as much about those variables as I’d have thought. Anyway, in some areas floods are most relevant, in others seismic activity is important, and in some areas droughts are key. Roughly half of the human fatalities from loss-causing geophysical hazards affecting the United States during the period 1960-2007, according to SHELDUS (see link below), were due to ‘severe weather’ (33.2%) and ‘winter weather’ (15.5%); most of the remaining fatalities were due to ‘tornados’ (14.5%), ‘flooding’ (12.2%) and ‘heat and droughts’ (15.8%) – the website is here, if you’re interested in having a closer look at this stuff. My own opinion would be that when a tornado forms it’s accurate to say that the weather is acting up a bit (if that’s not ‘severe weather’ then…) – but of course this is all just a matter of coding and figuring out how best to categorize these things. Although the above might indicate that we have a lot of data on this type of stuff, we really don’t; there are huge data problems in this area which make it very difficult to decide on the proper risk-reduction strategies to engage in. Naturally, data problems tend to be most severe in the areas where fatalities are most likely to be high, because the populations at risk in those places are particularly vulnerable. It’s important to note that missing data and vulnerability variables are not independent as such: missing decision-relevant data may be conceptualized as a factor impacting the vulnerability of the people affected by the disaster in an undesirable manner; if rescue workers have poor information, matching aid supplies to needs may be difficult, which can cost lives.
Getting the distribution of disaster victims wrong can lead to misallocation of resources, and misallocation of resources often costs lives in disaster settings (“It is not uncommon for relief teams to be deployed to a disaster area without full knowledge of how many people will need aid or where they’re located relative to the impact area, let alone have information on age and gender – characteristics that are vital in the delivery of food, water, shelter, and other assistance needs.”). Simply knowing where people live is in some contexts difficult; a great majority of people on Earth have been ‘accounted for’ in a relatively recent census (“More than 85% of the world’s population has been enumerated within a national census since 2000 (NRC 2007).”), but people move around, have children and die all the time, and some countries are much better at tracking that kind of stuff than are others (which is both a good thing and a bad thing – in this context keeping track of people is a good thing). One fundamental problem, in terms of optimizing potential relief efforts, is that the more fine-grained information you have, the less likely your data is to be up-to-date; there’s a tradeoff between how often you can gather the relevant information from people and how much information you gather from each person, as obtaining each bit of information takes time and money.

Geographical variation in loss profiles is partly due to different physical processes playing a role in different places, but many other variables mediate how severe the consequences of a given event may be for a given population of people, and how likely a hazard is to become a disaster. Having good data on things like gender and age isn’t just helpful because it makes relief efforts easier – it’s also helpful because not all people are equally vulnerable, and variables such as these actually have an impact on the pre-disaster risk assessment as well. Population density (if more people live in an area, the likelihood of someone – or many someones – dying from a given cause goes up; megacities are particularly vulnerable), age profile (old and infirm people as well as children may be more likely to die in disaster contexts because of their limited mobility – which translates into a limited ability to get out of harm’s way – and their care-taking requirements), socioeconomic factors (rich people are generally better able to handle/absorb the consequences of a disaster; this is the case both at the community level and at the international level – disasters in poor countries tend to cause more fatalities and lower economic losses than disasters in rich countries, although the book notes that relative economic losses (disaster losses compared to national GDP) seem to be higher in poor countries than in rich ones), ‘disaster preparedness’, infrastructure, etc. all play important roles, but little seems to be known about precisely how they mediate risks in different contexts.
As one author puts it, “At present, most of the vulnerability metrics are descriptive, not predictive indices and are mainly used as a representation of multi-dimensional phenomena, such as those pre-existing conditions in communities that make them susceptible to harm.” It’s incidentally an assumption taken as a given throughout the part of the book I read that governments have important roles to play in assessing such risks and trying to counter them in various ways, although at no point do the authors even attempt to justify this assumption – one of many reasons why I dislike the book. From a political economy point of view I think it’s safe to assume that justifying public involvement in handling/preventing/… large-scale disasters which are hard to insure against is conceptually much easier than is justifying government efforts aimed at saving 10 people from dying of heat-stroke during a warm summer, and when your disaster data includes data of the latter type, cost-effectiveness considerations are bound to come up during the discussion. At least if you ask people who aren’t a member of some four-letter UN organization.

May 10, 2014 Posted by | Books

Military Geography: For Professionals and the Public (III)

I’ve finished the book. You can read my other posts about the book here and here. I ended up at three stars on goodreads, even though the author actually commits what in my book is a capital offence: he uses acronyms/abbreviations without explaining what they mean in the text. There’s an addendum with that sort of thing included in the book, but who the hell would want to keep jumping 100 pages ahead just to understand what’s being said in a chapter? I sure wouldn’t – I’d rather beat up the author for organizing this in such a stupid manner. There aren’t that many terms that go unexplained, but you really can’t afford to have any of them when a chapter is written in this manner:

“Option A left both nations within Pacific Command’s area of responsibility, where they had been since their establishment in 1954, and activated a unified command subordinate to CINCPAC; Option B envisaged an independent command on the same level as PACOM. The JCS recommended Option A, CINCPAC concurred, the Secretary of Defense approved, and U.S. Military Assistance Command Vietnam (MACV) emerged in 1962, but the new lashup never worked the way official “wiring diagrams” indicated. COMUSMACV often bypassed CINCPAC to deal directly with superiors in Washington, DC, including the President and Secretary of Defense, who played active parts in daily operations. CINCPAC conducted the air war and surface naval operations, while COMUSMACV took charge on the ground. […] The Navy, backed by the Marine Corps, resisted change because CINCNELM was thoroughly familiar with Middle East problems and the likelihood of major U.S. military involvement anywhere in Black Africa seemed remote. The Secretary of Defense found the Chairman’s arguments persuasive, added MEAFSA to CINCSTRIKE’s responsibilities, and disbanded NELM on December 1, 1963. […] CINCSOUTH and the Army Chief of Staff postulated that general war was a remote possibility, but if it did occur, LANTCOM would have to rivet attention on the Atlantic Ocean whereas Southern Command, armed with a wealth of Latin American experience, was ready, willing, and able to counter Communist activities that posed clear and present dangers to U.S. interests throughout the Caribbean. CINCLANT contended that it would be imprudent to pass responsibility for the Caribbean from his command to SOUTHCOM…”

(The quote is from chapter 16, on military areas of responsibility. I should point out that this chapter is far worse than any other chapter in this respect, and none of the others even come close).

Part three on political-military geography – the above quote is from this section – was in my opinion the weakest part of the coverage, whereas the last part – on area analyses – was okay and quite interesting at times; the latter contains one introductory chapter and then analyzes in some detail the area-analysis aspects of Operation Neptune and Operation Plan El Paso (a military operation plan thought up during the Vietnam War, the goal of which was to stop traffic along the Ho Chi Minh Trail; as the author put it, the plan was stillborn – it never made it off the drawing board). I liked the chapter on Operation Neptune better than the one on the Vietnam War stuff, but both contain important insights into how geographic factors can affect military operations and how important such aspects may be. Part three wasn’t bad as such; I just suspect that it wasn’t quite as interesting to me as it might have been given a different approach – some aspects covered there are really quite important for understanding how this kind of stuff works, and I’ve added some of that material below. On the other hand, some parts of it seemed a bit out of place; the chapter on geopolitical friction included paragraphs on atmospheric pollution, hazardous waste disposal, and oil spills, and although such activities may well increase friction among neighbouring countries, I think you’d be hard pressed to imagine a scenario where they’d lead to outright war. Singapore has, for example, suffered badly in recent years from the smoke caused by Indonesian forest fires, but it doesn’t seem like the military is getting ready to strike back anytime soon – they seem rather to deal with this in a different manner (“The Singaporean military has also reportedly suspended all outdoor training”).

Below I’ve added some final observations/quotes from the book. I decided against covering the chapter on the Normandy landings below despite really liking that chapter, mostly because at least some of the most important relevant geographic factors, including reasons for picking Normandy and the important role weather phenomena played in the decision process, are actually included in the wiki article to which I link above. I’m not really sure if I’d recommend the book, but if you’re interested in the kind of stuff that I’ve quoted in the previous posts and this one, and you want to learn more about stuff like this, it may be worth giving it a shot.

“Spokesmen for each Armed Service, who advise chiefs of state, foreign ministers, and senior defense officials, commonly possess dissimilar views concerning political-military problems and corrective actions, because they operate in distinctive geographic mediums and genuflect before different geopolitical gurus who variously advocate land, sea, air, or space power. Many (not all) members of each service are firmly convinced that their convictions are correct and believe competing opinions are flawed. The dominant school of thought in any country or long-standing coalition (such as NATO and the now defunct Warsaw Pact) consequently exerts profound effects on military roles, missions, strategies, tactics, plans, programs, and force postures. […] Army generals […] subdivide continents into theaters, areas of operation, and zones of action within which terrain features limit deployments, schemes of maneuver, weapon effectiveness, and logistical support. Ground forces engaged in conventional combat are loath to lose contact with adversaries until they emerge victorious and, if necessary, impose political-military control by occupying hostile territory. Armies once were self-sufficient, but dependence on aerial firepower currently is pronounced and, unless circumstances allow them to move overland, they can neither reach distant objective areas nor sustain themselves after arrival without adequate airlift and sealift. Senior army officials consequently tend to favor command structures and relationships that assure essential interservice support whenever and wherever required.”

“Free-wheeling marecentric forces, unlike armies, rely little on joint service cooperation, enjoy a global reach channelized only by geographic choke points, and generally determine unilaterally whether, where, and when to fight, because they most often are able to make or break contact with enemy formations as they see fit. Admirals as a rule accordingly resent bureaucratic restrictions on naval freedom of action and defy anybody to draw recognizable boundaries across their watery domain, which is a featureless plane except along littorals where land and sea meet […] Land-based air forces operate in a medium that surface navies might envy, where there are three dimensions rather than two, no choke points, no topographic impediments, and visibility to far distant horizons, being less limited by Earth’s curvature, is restricted only by clouds except in mountainous terrain. […] aerocentric generals (like admirals) prefer the greatest possible autonomy and are leery of boundaries that limit flexibility because, in the main, they believe that unfettered air power could be the decisive military instrument and make protracted wars obsolete. All services attach top priority to air superiority, without which most combat missions ashore or afloat become excessively costly, even infeasible.[11]”

“Each service as it stands is superior in some environments and inferior in others. Armies generally function more efficiently than air forces in heavily forested regions and rugged terrain, whereas air power is especially advantageous over sparsely covered plains. Ballistic missile submarines at sea, being mobile as well as invisible to enemy targeteers, are less vulnerable to prelaunch attacks than “sitting duck” intercontinental ballistic missiles (ICBMs) in concrete silos ashore. Reasonable degrees of centralized control coupled with joint doctrines, joint education, and joint training programs that effectively integrate multiservice capabilities thus seem desirable.”

“Boundary disputes [between USSR and China] bubbled in earnest about 1960, when the Sino-Soviet entente started to split. The first large-scale clashes occurred in Xinjiang Province during early autumn 1964, when Muslim resentment against repressive Chinese rule motivated about 50,000 Kazakhs, Uighurs, and other ethnic groups to riot, then take shelter in the Soviet Union. Tensions along the Far Eastern frontier reached a fever pitch in 1967 after howling mobs besieged the Soviet Embassy in Beijing for more than 2 weeks. Both sides briefly massed a total of 600,000 troops along the border—nearly 40 divisions on the Soviet side and perhaps 50 or 60 Chinese counterparts. Damansky Island (Zhanbao to the Chinese) was twice the site of stiff fighting in March 1969, followed in August by confrontations at Xinjiang’s Dzungarian Gate, after which both sides took pains to defuse situations, partly because each at that point possessed nuclear weapons with delivery systems that could reach the other’s core areas.16 China, however, has never renounced its claims, which future leaders might vigorously pursue if Chinese military power continues to expand while Russian armed strength subsides.” (I never knew about that stuff. The wiki has more here and here.)

“Water requirements often outstrip sources in regions where agricultural and industrial expansion coincide with arid climates and rampant population growth creates unprecedented demands. Poor sanitation practices, contaminated runoff from tilled fields, industrial pollutants, and raw sewage discharged upstream make potable supplies a luxury in many such countries.[50]
Scarcities accompanied by fierce competition have spawned the term “hydropolitics” in the Middle East, where more than half of the people depend on water that originates in or passes through at least one foreign country before it reaches consumers [my emphasis]. […] Nearly all water in Egypt flows down the Nile from catch basins in eight other countries […]
Central and South Asia experience similar water supply problems. Deforestation in Nepal intensifies flooding along the Ganges while India, in turn, pursues water diversion projects that deprive delta dwellers in Bangladesh.”

“Theater commanders in chief, who exercise operational control over land, sea, air, and amphibious forces within respective jurisdictions, as a rule delegate to major subordinate commands authority and accountability over parts of their AORs [Areas Of Responsibility – US] for operational, logistical, and administrative purposes. Tactical areas of responsibility (TAORs) facilitate control and coordination at lower levels. The boundaries that CINCs [Commanders IN Chief – US] and other commanders draw are designed to facilitate freedom of action within assigned zones, ensure adequate coverage of objectives and target suites yet avoid undesirable duplication of effort, prevent confusion, and reduce risks of fratricide from so-called “friendly fire.”
Theater and tactical AORs differ from global and regional subdivisions in several important respects: international sensitivities tend to diminish (but do not disappear), whereas interservice rivalries remain strong; areas of interest and influence tend to blur boundary lines; and TAORs are subject to frequent change during fluid operations. […] Operation plans and orders employed by land and amphibious forces at every level commonly prescribe boundaries and other control lines to prevent gaps and forestall interference by combat and support forces with friendly formations on either flank, to the front, or toward the rear. Well drafted boundaries wherever possible follow ridges, rivers, roads, city streets, and other geographic features that are clearly recognizable on maps as well as on the ground. They neither divide responsibility for dominant terrain between two or more commands nor position forces from one command on both sides of formidable obstacles unless sensible alternatives seem unavailable. […] Area analyses developed for combat forces emphasize critical terrain, avenues of approach, natural and manmade obstacles, cover, concealment, observation, and fields of fire […] Marked advantages accrue to armed forces that hold, control, or destroy critical (sometimes decisive) terrain, which is a lower level analog of strategically crucial core areas. Typical examples range from commanding heights and military headquarters to geographic choke points, telecommunication centers, logistical installations, power plants, dams, locks, airfields, seaports, railway marshaling yards, and road junctions. Features that qualify differ at each echelon, because senior commanders and their subordinates have different perspectives. 
Three- and four-star officers, for example, might see an entire peninsula as critical terrain while successively lower levels focus first on one coastal city, then on the naval shipyard therein, the next layer down on harbor facilities, and finally on pierside warehouses.”

“Perhaps the single most important lesson to be learned from the previous pages is the folly of slighting geographic factors during the preparation of any military plan, the conduct of any military operation, or the expenditure of scarce resources and funds on any military program.”

May 7, 2014 Posted by | Books, Geography, History | 3 Comments

Open Thread

I was recently reminded that I probably ought to revive these things.

First a few ‘meta’ observations regarding content on this blog, then I’ll leave the word to you:

I was recently reminded of the fact that people sometimes read stuff I’ve written a long time ago (usually I try – mostly successfully – to put it out of my mind that this ever happens). So it’s worth pointing out two things here. First, if I wrote something 5 years ago, I’ve probably changed my mind about it at least three times since then. Very occasionally I’ll stumble upon something awful I wrote a long time ago, and if that happens I may decide to delete the post or make corrections to it; but usually that doesn’t happen, so there’s a lot of crud in the archives which I’ve simply neglected to get rid of, because it takes a lot of time and effort to deal with that kind of stuff unless you just delete everything indiscriminately – a move I’ve been hesitant to make (perhaps with little justification). Second, if I’ve just published a post (people who subscribe to the blog in one way or another will, as far as I’ve been made aware, usually be able to tell when a post was published) and you’re reading it right after publication, there’s a high likelihood the post will change later on. I often feel a desire to make corrections, or to add or delete stuff, within the first hour or two of a post’s existence. If you only plan on reading a post once, then reading it right after I published it might not be the optimal strategy, as at that point it’ll likely still be a work in progress. The more words I’ve written on my own, the more likely it is that corrections will be made later on. Sometimes the time lag between publication and correction can be significant; e.g. it sometimes happens that I post something before I go to bed and then make adjustments to the post the next day.
Some people would probably argue that this procedural approach is inefficient and that I ought to finish the post, with corrections, before I publish it, but one reason why I’m not that careful about such things is that I consider most of the stuff published here to be relatively ‘fluid’ anyway; this isn’t a book, I always retain the right and the opportunity to correct errors, delete a paragraph or a post I don’t like, or really whatever strikes my fancy. If a few people happen to come across a few error-ridden and in retrospect only half-finished posts along the way, I don’t really mind that. Usually those errors will get corrected in time and thoughts will be developed in more (or, as the case may be, less) detail. I have considered how to approach major adjustments before – one option I’ve considered is to rename the posts and add a ‘revised’ in a parenthesis to the post title, perhaps with a little note at the beginning as well, in order to inform readers who’ve only read what later turned out to be an early draft that the post has changed – but I haven’t really found a solution I like. So for now it is the way it is. Feedback and ideas are welcome.

Okay, the word is yours. Read anything interesting?

May 4, 2014 Posted by | meta, Open Thread | 3 Comments

Military Geography: For Professionals and the Public (II)

“Portions of this book may be quoted or reprinted without permission, provided that a standard source credit line is included.”

I allow myself to cover Collins’ book in a bit more detail than I do many other books, for the above reason (I shall assume that the link – along with the fact that I actually report the title of the book in the blog post title – corresponds to a ‘standard source credit line’).

I ended my coverage of the book in my first post when I got to chapter 7, although I’d read a bit further than that at that point. Chapters 7, 8, 9, 10, 11, 12, and 13 deal with: Inner and outer space, natural resources and raw materials, populations (this chapter is the first in the second part of the book, on cultural geography), urbanization, lines of communication, military bases, and fortresses and field fortifications, respectively. A few minor errors/inaccuracies pop up here and there, but this is to be expected of a book this size written by a single author, and as I’m learning something new in each chapter I don’t mind so much that he occasionally by mistake adds a zero to his kilometer conversion of distances reported in miles (“an area 350 by 250 miles — 5,630 by 3,220 kilometers”), because it’s not hard to tell when a mistake has been made and there aren’t that many of them. Though I have found some topics more interesting to read about than others, in general I’d say I like this book.

Although the chapter on warfare in space is in some sense the ‘least relevant’ chapter included in the book in terms of understanding the kinds of wars humans have engaged in so far, I thought it was a rather interesting chapter, especially as most media portrayals take some liberties when dealing with such topics, which makes it hard to appreciate how this kind of stuff might actually play out (and no, I don’t think reading Ender’s Game is quite enough to get a good sense of these things). Here’s some stuff from the chapter:

“Space and the seas are superficially similar, but differences are dramatic:
• Continents bound all five oceans, which are liquid and almost opaque, whereas space has no shape and little substance.
• Earth’s curvature limits sea surface visibility to line-of-sight, whereas visibility as well as maneuver room are virtually limitless in space.
• Acoustics, an antisubmarine warfare staple, play no part in space, because sound cannot survive in a vacuum.
• Space welcomes electromagnetic radiation, whereas water is practically impervious to radio and radar waves.
• Day-night cycles and shock waves, which are prevalent everywhere on Earth, are nonexistent in space.
• Atmospheric phenomena and salt water interfere with light and focused energy rays on Earth, but neither refract in space.
Space moreover has no north, east, south, or west to designate locations and directions. A nonrotating celestial sphere of infinite radius, with its center at Earth’s core, is the reference frame. Declination, the astronomical analog of latitude, is the angular distance north or south of the celestial equator, right ascension is the counterpart of longitude, and the constellation Aries, against which spectators on Earth see the sun when it crosses Earth’s Equator in springtime, defines the prime meridian. Angular positions in space are measured from that celestial counterpart of Greenwich Observatory.”
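The declination/right-ascension scheme described in that quote is just spherical coordinates on the celestial sphere. As a quick illustration (the conversion math is standard astronomy, but the function and example values below are mine, not the book’s), here is how a right-ascension/declination pair maps to a Cartesian direction vector in that non-rotating, Earth-centered frame:

```python
import math

def celestial_to_unit_vector(ra_deg: float, dec_deg: float):
    """Convert right ascension and declination (both in degrees) to a
    unit direction vector in the non-rotating celestial frame:
    x points toward the First Point of Aries (RA 0, dec 0),
    z points toward the north celestial pole."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))

# RA 0, dec 0 points at the vernal equinox direction (toward Aries):
print(celestial_to_unit_vector(0.0, 0.0))  # → (1.0, 0.0, 0.0)
```

With one axis anchored on Aries and another on the celestial pole, two angles fully specify any direction, which is why the frame needs no north, east, south, or west.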

“Geographic influences on nuclear, directed energy, chemical, biological, and conventional weapon effects are far-reaching and fundamental. Atmospheric interfaces, gravity, and vacuum are the most important factors. […]
Nuclear weapons detonated in Earth’s atmosphere create shock waves, violent winds, and intense heat that inflict severe damage and casualties well beyond ground zero.[13] No such effects would occur in space, because winds never blow in a vacuum, shock waves cannot develop where no air, water, or soil resists compression, and neither fireballs nor superheated atmosphere could develop more than 65 miles (105 kilometers) above Earth’s surface. Consequently, it would take direct hits or near misses to achieve required results with nuclear blast and thermal radiation. […] Self-contained biospheres in space afford a superlative environment for chemical and biological warfare compared with Earth, where weather and terrain virtually dictate delivery times, places, and techniques.[15] Most spacecraft and installations on the Moon, which must rely on closed-circuit life support systems that continuously recirculate air and recycle water, are conceivable targets for special operations forces armed with colorless, odorless, lethal, or incapacitating agents that would be almost impossible to spot before symptoms appear [for some coverage of (marginally?) related topics, see this]. Cumbersome masks and suits could protect individuals only if worn constantly. Sanctuaries comparable to the toxic-free citadels that eat up precious room on some ships would be infeasible for most spacecraft and safeguard only a few selected personnel. Any vehicle or structure victimized by persistent chemicals probably would become permanently uninhabitable, because vast quantities of water and solvents required for decontamination would be unavailable.”

“A one-month supply of oxygen, food, and drinking water just for a crew of three amounts to more than a ton stored at the expense of precious propellant and military payloads. Each crew member in turn would deposit an equal amount of waste in the form of feces, urine, perspiration, internal gases, carbon dioxide, and other exhalation vapors that could quickly reach toxic proportions in a sealed capsule unless quelled, expelled, or sterilized. Life support systems currently dump or stow organic waste on short missions, but such practices do little to alleviate long-term resupply problems. […] Motion sickness, somewhat like an aggravated form of sea sickness, afflicts about half of all space travelers whose responses to medical suppressants are unpredictable. It conceivably might undermine mission proficiency enough during the first few days of each flight to mark the difference between military success and failure”

“Military space forces at the bottom of Earth’s “gravity well” need immense energy to leave launch pads and climb quickly into space. Adversaries at the top, in positions analogous to “high ground,” have far greater maneuver room and freedom of action. Put simply, it is easier to drop objects down a well than to throw them out. […] L4 and L5, the two stable libration points, […] theoretically could dominate Earth and Moon because they look down both gravity wells. No other location is equally commanding.”

One assumption underlying much of the analysis in that chapter is that humans will actually be the ones doing the physical fighting that takes place in space as well – i.e. there’ll be humans in that space shuttle, and the humans will be the ones directing their lasers (or whatever) at the enemy. This is a weakness of the coverage in that chapter, I think, as strategies involving robotics as a key element seem to me an obvious way to try to work around a lot of the constraints imposed on organisms conducting war in space. Humans need a lot of stuff to survive in space, and if you can get machines to do the unpleasant stuff via remote control that seems like a no-brainer; yet that aspect isn’t really covered in the chapter. Humans have used space probes for decades and although we weren’t as far along in 1998 as we are now, we had been making progress for a long time. Humans sent a probe to the moon to pick up some lunar soil and return it to Earth – successfully – almost 30 years before the book was written. However it is just one chapter, and I think I’ve said enough about space for now – despite this shortcoming it’s still an interesting chapter.

Let’s talk a little bit about disease instead. Did you know that in 1944, 67% of U.S. military casualties in Europe were disease casualties? I didn’t. Only 23% were classified as combat casualties, with the remaining 10% being ‘noncombat casualties’. Presumably some of these were people who went insane because of the psychological strain – some of them committing suicide, others ‘just’ becoming unable to perform their military duties – so it’s probably debatable to what extent they were all truly ‘non-combat’. As Fussell observed in his book:

“In war it is not just the weak soldiers, or the sensitive ones, or the highly imaginative or cowardly ones, who will break down. Inevitably, all will break down if in combat long enough […] As medical observers have reported, “There is no such thing as ‘getting used to combat’ … Each moment of combat imposes a strain so great that men will break down in direct relation to the intensity and duration of their experience.” Thus – and this is unequivocal: ‘Psychiatric casualties are as inevitable as gunshot and shrapnel wounds in warfare.’”

On the other hand the 10% figure probably also includes stuff like accidents and desertions, at least some of which were not ‘battle-related’; the American army incidentally had 19,000 acknowledged deserters during WW2, and less than half of them (9,000) had been found by 1948 – I again refer to Fussell’s book, p. 151.

Disease casualties made up an even larger proportion of the US WW2 casualties in the Southwest Pacific (83% disease, 5% combat, 12% non-combat) and China-Burma (90% disease, 2% combat, 8% non-combat). It’s perhaps important to keep in mind in this context that ‘casualties’ != ‘soldiers who died’ (it’s rather best thought of as ‘soldiers put out of action’) and that the proportion of dead people in the three groups may well be dissimilar. There’s also some conceptual overlap between the groups (bacterial infections in gunshot wounds), and it’s unclear how the source cited in the book dealt with this. However it’s still thought-provoking data. Of course Stevenson covered this kind of stuff as well, and in much greater detail – if you want to know more about these things, that’s a place to start. According to the source in the book, disease casualties incidentally accounted for two-thirds of casualties during the Vietnam War in 1969 (just like in Europe 25 years earlier – combat was 19% and noncombat 14%). Diseases vary a lot and different diseases will call for different prevention strategies; for example Fussell mentions in his book that contracting a sexually transmitted disease during WW2 was actually a punishable offence for US soldiers (p. 108).

More quotes from Collins’ book below:

“Urban combat […] disrupts unit cohesion, complicates control, blunts offensive momentum, and causes casualties to soar on both sides.
Most military doctrines the world over consequently advise land force commanders to isolate or bypass built-up areas, but the subjugation of political, industrial, commercial, transportation, and communication centers even so may sometimes decisively affect the outcome of battles, campaigns, even wars. Military commanders in such events face an endless variety of structures and facilities the seizure or control of which demands esoteric plans, programs, and procedures, since no two cities are quite alike.”

“Street fighting ensues whenever armed forces try to wrest urbanized terrain from stubborn defenders. It can be brutal but brief in villages and a lengthy, agonizing struggle between small, isolated units in cities where concrete canyons and culs-de-sac degrade technological advantages, severely limit vehicular mobility, render tactical communications unreliable, complicate intelligence collection, and swallow troops wholesale. Restrictive rules of engagement designed to reduce collateral damage and casualties may further decrease benefits obtainable from aerial firepower as well as artillery and magnify dependence on foot soldiers.[22] […] Motorized troops must stick to streets and open spaces, whereas infantrymen fight three-dimensional wars at ground level, on rooftops, and in subterranean structures such as subways, sewers, and cellars, creeping over, under, or around each structure, blasting “mouseholes” through walls, ceilings, and floors when more convenient avenues are unavailable. Mines, booby traps, barbed wire, road blocks, rubble, and other obstacles abound […] Every inner city building becomes a potential strong point, particularly those that overlook key intersections or open spaces.[26] Clear fields of fire for flat-trajectory weapons seldom exceed 200 yards (185 meters) even in suburbs, where ornamental shrubbery and sweeping curves often limit lines-of-sight.”

“Tanks and other armored vehicles inch through inner cities at a snail’s pace, find little room to maneuver on narrow or rubble-clogged streets, cannot turn sharp corners, and are vulnerable beneath enemy-occupied buildings unless they “button up,” which limits visibility and invites ambush. Many lucrative targets remain beyond reach, because most range-finders produce fuzzy images close up, tank turrets cannot swivel freely in cramped quarters, and main guns on level ground can neither elevate nor depress enough to blast upper stories or basements nearby. […] Conventional urban combat consequently calls for few rather than many tanks, mainly to furnish close support for frontline infantry.[28] Exceptions to that rule normally involve opponents in disarray or other special circumstances […] Urban jungles, like their leafy analogs, discourage artillery. […] High-angle artillery fire in urban areas […] is often used mainly to clear rooftops and target troops in the open while mortars, which are more maneuverable and less destructive, handle most close support missions. […] Urban combat inhibits lighter crew-served arms as well. Backblast makes it dangerous to emplace recoilless weapons in small, unvented rooms or other cramped spaces where loose objects, glass, and combustible materials must be covered or removed. Enclosures so amplify explosive sounds that personnel without earplugs become deaf after a few experiences.”

“Low-level U.S. [air] raids against Japan, all at night, slighted high explosives in favor of incendiaries, mainly magnesium, white phosphorus, and jellied gasoline […] Successes destroyed 40 percent of 66 cities, left almost one-third of Japan’s population homeless, and inflicted far more casualties than Japanese Armed Forces suffered during all of World War II. The cataclysmic Tokyo raid of March 9 and 10, 1945, killed 83,000 when high winds among flimsy wooden and rice paper structures whipped up uncontrollable fire storms […] Japanese noncombatants felt shock effects many times greater than those that accompanied urban bombing campaigns against Germany, because attacks were concentrated in a much shorter period.[58]”

“Military requirements determine the number, characteristics, essential service life, and acceptable construction time of airfields in any area of operations. Topography, climatic conditions, vegetation, hydrology, soils, and logistical convenience strongly influence locations. Preferable sites feature the flattest terrain, the clearest weather, the most favorable winds, the fewest obstructions, the freest drainage, and easiest access to prominent land lines of communication but, if that ideal is unattainable for political, military, geographic, or cultural reasons, decisionmakers must compromise.
Primary runways generally parallel the direction of prevailing winds, taking high-velocity cross-currents into account. Runway lengths required by any given type aircraft would be standard everywhere if Planet Earth were a perfectly flat plain at sea level, all thermometers consistently registered any given temperature, the surface never was slick with rain, sleet, or snow, and all pilots were equally competent. Military airfield designers in the real world, however, must extend runways to compensate for increases in altitude and do likewise where temperatures of the warmest month average more than 59 °F (15 °C), because those factors singly or in combination create rarefied air that degrades engine performance and affords less lift. Takeoffs up inclines and landings downhill also require longer runways.”

Yes, there are actually algorithms for how to account for these variables in the field:
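As a toy stand-in for those algorithms (the 10-percent-per-1,000-feet and 1-percent-per-degree figures below are common rules of thumb I’m assuming for illustration, not the book’s actual numbers), the altitude and temperature corrections might be sketched like this:

```python
# Toy sketch of a runway-length correction, NOT the book's algorithm:
# a common rule of thumb says required takeoff distance grows roughly
# 10% per 1,000 ft of field elevation, plus a few percent per degree C
# above the ISA standard temperature for that elevation.

def isa_temp_c(elevation_ft: float) -> float:
    """ISA standard temperature at a given elevation: 15 °C at sea
    level, lapsing about 2 °C per 1,000 ft."""
    return 15.0 - 2.0 * elevation_ft / 1000.0

def required_runway_ft(sea_level_runway_ft: float,
                       elevation_ft: float,
                       temp_c: float,
                       pct_per_1000ft: float = 0.10,
                       pct_per_deg_c: float = 0.01) -> float:
    """Scale a sea-level runway requirement for altitude and heat.
    The two percentage factors are illustrative assumptions."""
    altitude_factor = 1.0 + pct_per_1000ft * elevation_ft / 1000.0
    excess_heat = max(0.0, temp_c - isa_temp_c(elevation_ft))
    heat_factor = 1.0 + pct_per_deg_c * excess_heat
    return sea_level_runway_ft * altitude_factor * heat_factor

# A 6,000 ft sea-level requirement at a hot airfield 5,000 ft up:
print(round(required_runway_ft(6000, 5000, 35)))  # → 11700
```

Real planning aids (the Koch chart used in civil aviation, for instance) encode the same idea, but with empirically fitted curves rather than fixed percentages.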


“No nation, not even the British Empire at its zenith, deployed armed forces at as many military installations beyond its borders as the United States of America did during the Cold War. They were unusual compared with most bases abroad, being sited on the sovereign territory of allies and other friends with whom the U.S. Government negotiated mutually acceptable Status of Forces Agreements that legally prescribed U.S. rights, privileges, and limitations. All such bases and facilities exploited geographical positions that promoted U.S. security interests, affirmed U.S. global involvement, extended U.S. military reach, and strengthened U.S. alliance systems. They also positioned U.S. Armed Forces to deter Soviet aggression and respond most effectively if required. […] Nearly 1,700 U.S. installations, large and small, eventually circled the Northern Hemisphere in locations selected especially to monitor military activities inside the Soviet Union, ensure early warning if Soviet Armed Forces attacked, and block the most likely land, sea, and air avenues of Soviet advance. […] Eighty-one Distant Early Warning (DEW) Stations, draped 4,000 miles (6,435 kilometers) along the 70th Parallel from the Aleutian Islands to the Atlantic Ocean, watched for enemy bombers in the early 1960s […] Mid-Canada and Pine Tree Lines, augmented by a generous group of gap-filler radars, provided back-ups farther south, but that complex shrank considerably as soon as better technologies became available.”

May 4, 2014 Posted by | Books, Geography | Leave a comment

Military Geography: For Professionals and the Public (I)

I’m currently reading this book by John Collins, ‘a retired U.S. Army colonel and a distinguished visiting research fellow at the National Defense University’, as they describe him here – perhaps other parts of that description are more impressive: He’s also a former ‘chief of the Strategic Research Group at the National War College’ and he was for 24 years ‘the senior specialist in national defense at the Congressional Research Service’. Long story short: It seems as if he knows what he’s talking about. I have encountered a few minor problems/inaccuracies, including this interesting remark: “Viking raiders invaded Ireland in the 6th century A.D” – but they haven’t detracted much from the coverage. In that specific case I certainly felt compelled to give the author the benefit of the doubt and assume that this was a typo, with a ‘9’ turning into a ‘6’; the sixth century is a bit too early – see also this… The very early (supposed) Viking invasion mentioned in that sentence is incidentally completely unrelated to the rest of the coverage in that chapter, so the damage is very minimal anyway. In general it’s a rather neat little book so far.

The author argues in his introduction that military geography is a neglected topic:

“few schools and colleges conduct courses in military geography, none confers a degree, instructional materials seldom emphasize fundamentals, and most service manuals have tunnel vision […] My contacts in the Pentagon and Congress were bemused when I began to write this book, because they had never heard of a discipline called “military geography.””

…and I actually found this really quite interesting, because the kind of stuff covered in this book so far is precisely the kind of stuff I’d expect any general to know and to have received education in. Things may have changed in the meantime, as the book wasn’t written yesterday (it’s from 1998, so obviously some aspects of military doctrine have changed since then), but even so it’s still very interesting to me that this was the state of affairs when the book was written.

Waging war in an optimal manner is a lot more complicated than it looks in the movies, and this book provides a lot of information and detail that makes the complexity easier to appreciate. Most people know that soldiers deployed in a desert will require different types of equipment and supplies than soldiers deployed in a rainforest, and that fighting conditions in flat terrain will differ from those in hilly terrain, but beyond a few obvious observations of that kind most people would probably be hard pressed to say much about how environmental constraints affect deployment strategies, optimal supply chain management, fuel consumption, etc. Even an observation as simple as ‘it is hot in the desert, so soldiers fighting there will need a lot of water’ arguably to some extent eluded German strategists during the North African Campaign, as this quote from the book illustrates:

“Sweat evaporates so rapidly in dry desert heat that humans commonly lose about 1 pint of water per hour even at rest, yet never notice adverse effects or feel thirsty until the deficit reaches four times that amount (2 quarts, or 2 liters), by which time heat prostration may be imminent. Heavy exertion requires much greater intake, but Rommel’s Afrika Korps in the summer of 1942 carried only 15 quarts per day for trucks and tanks as well as personnel. His parched troops made every drop count, yet still ran dry during one offensive and survived only because they captured British water supplies.[58] U.S. military personnel in Saudi Arabia and Kuwait, who were much better endowed logistically, consumed approximately 11 gallons per day (42 liters), plus 10 to 12 gallons more per vehicle.” (There’s more on this topic below).

One could also talk about the obvious need to provide soldiers with proper winter clothes if you’re planning on invading Russia (ditto)… The book goes into a lot of detail about these kinds of things, because the physical environment affects all kinds of aspects of warfare in ways that are actually quite hard to imagine. I’ve added some observations from the book below. I’ve read roughly the first 200 pages and so far I like it.

“This consolidated guide, designed to fill undesirable gaps, has a threefold purpose:

• To provide a textbook for academic use
• To provide a handbook for use by political-military professionals
• To enhance public appreciation for the impact of geography on military affairs.

Parts One and Two, both of which are primers, view physical and cultural geography from military perspectives. Part Three probes the influence of political-military geography on service roles and missions, geographic causes of conflict, and complex factors that affect military areas of responsibility. Part Four describes analytical techniques that relate geography to sensible courses of military action, then puts principles into practice with two dissimilar case studies—one emphasizes geographic influences on combat operations, while the other stresses logistics.”

“Soil conditions and rock affect the performance of many conventional weapons and delivery vehicles. Rocky outcroppings and gravel magnify the lethal radius of conventional munitions, which ricochet on impact and scatter stone splinters like shrapnel, whereas mushy soil smothers high explosives that burrow before they detonate. Even light artillery pieces leave fairly heavy “footprints” in saturated earth, a peculiarity that limits (sometimes eliminates) desirable firing positions. Gunners struggled to keep towed artillery pieces on targets when they worked at or near maximum tube elevations on wet ground in Vietnam where it didn’t take many rounds to drive 155-mm howitzer trails so deeply into the mire that recoil mechanisms malfunctioned. […] Surface conditions likewise amplify or mute nuclear weapon effects. The diameters and depths of craters are less when soil is dry than when soaked, nuclear shock waves transmitted through wet clay are perhaps 50 times more powerful than those through loose sand, and the intensities as well as decay rates of nuclear radiation reflect soil compositions and densities.”

“Legitimate terrors confront warriors in dark woods, where armed forces battle like blindfolded boxers who cannot see their opponents, small-unit actions by foot troops predominate, control is uncertain, and fluid maneuvers are infeasible. State-of-the-art technologies confer few advantages regardless of the day and age:
• Vehicles of any kind are virtually useless, except on beaten paths.
• Tree trunks deflect flat-trajectory projectiles. […]
• Tanks can bulldoze small trees, but the vegetative pileups impede or stop progress.
• The lethal radius of conventional bombs and artillery shells is much less than in open terrain, although the “bonus” effect of flying wood splinters can be considerable.
• Hand grenades bounce aimlessly unless rolled at short ranges that sometimes endanger the senders.
• Napalm burns out rapidly in moist greenery; flares illuminate very little; and dense foliage deadens radio communications.”

“Atmospheric phenomena significantly affect the performance of weapon systems and munitions. Pressure changes and relative humidity alter barometric fusing and arming calculations, dense air reduces maximum effective ranges, gusty crosswinds near Earth’s surface make free rockets and guided missiles wobble erratically, while winds aloft influence ballistic trajectories. Rain-soaked soils deaden artillery rounds, but frozen ground increases fragmentation from contact-fused shells. Dense fog, which degrades visual surveillance and target acquisition capabilities, also makes life difficult for forward observers, whose mission is to adjust artillery fire. Line-of-sight weapons, such as tube-launched, optically tracked, wire-guided (TOW) antitank missiles, are worthless where visibility is very limited. Exhaust plumes that follow TOWs moreover form ice fog in cold, damp air, which conceals targets from gunners even on clear days, and reveals firing positions to enemy sharpshooters. Scorching heat makes armored vehicles too hot to touch without gloves, reduces sustained rates of fire for automatic weapons, artillery, and tank guns, and renders white phosphorus ammunition unstable. […] Nuclear weapons respond to weather in several ways, of which winds on the surface and aloft perhaps are most important. […] Low air bursts beneath clouds amplify thermal radiation by reflection, whereas the heat from bursts above cloud blankets bounces back into space. Heavy precipitation raises the temperature at which thermal radiation will ignite given materials and reduces the spread of secondary fires. Detonations after dark increase the range at which flashes from nuclear explosions blind unprotected viewers. Blasts on, beneath, or at low altitudes above Earth’s surface suck enormous amounts of debris up the stems of mushroom clouds that drift downwind. 
The heaviest, most contaminated chaff falls back near ground zero within a few minutes, but winds aloft waft a deadly mist hundreds or thousands of miles. The size, shape, and potency of resultant radioactive fallout patterns differ with wind speeds and directions, because terrain shadows, crosswinds, and local precipitation sometimes create hot spots and skip zones within each fan. Fallout from one test conducted atop a tower in Nevada, for example, drifted northeast and retained strong radioactive concentrations around ground zero, while a second test from the same tower on a different date featured a “furnace” that was seven times hotter than its immediate surroundings 60 miles (95 kilometers) northwest of the test (figure 18). Such erratic results are hard to predict even under ideal conditions.[35]”

“Dry cold below freezing encourages frostbite among poorly clothed and trained personnel. German Armed Forces in Russia suffered 100,000 casualties from that cause during the winter of 1941-1942, of which 15,000 required amputations. Human breath turned to icicles in that brutal cold, eyelids froze together, flesh that touched metal cold-welded, gasoline accidentally sprayed on bare skin raised blisters the size of golf balls, butchers’ axes rebounded like boomerangs from horse meat as solid as stone, and cooks sliced butter with saws. Dehydration, contrary to popular misconceptions, can be prevalent in frigid weather when personnel exhale bodily moisture with every breath. Low temperatures, which inhibit clotting, cause wounds to bleed more freely, and severe shock due to slow circulation sets in early unless treated expeditiously. […] Armed forces in enervating heat face a different set of difficulties. Water consumption soars to prevent dehydration, since exertions over an 8-hour period in 100 °F (38°C) heat demand about 15 quarts a day (14 liters). Logisticians in the desert are hard pressed to supply huge loads, which amount to 30 pounds per person, or 270 tons for an 18,000-man U.S. armored division. […] Myriad other matters attract concerted attention. […] The rate of gum accumulations in stored gasoline quadruples with each 20°F increase in temperature, which clogs filters and lowers octane ratings when forces deplete stockpiles slowly.”
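The hot-weather logistics figures in that last quote are easy to sanity-check. A quick back-of-the-envelope sketch in Python, using only numbers from the passage (15 quarts per person per day, an 18,000-man division, and the "quadruples with each 20°F" gum rule); the ~2 lb weight of a quart of water is my own assumption, not the book's:

```python
# Sanity-checking the desert logistics arithmetic quoted above.
QUARTS_PER_PERSON_PER_DAY = 15
LB_PER_QUART = 2.0          # assumed: a US quart of water weighs roughly 2 lb
DIVISION_STRENGTH = 18_000
LB_PER_SHORT_TON = 2_000

daily_load_lb = QUARTS_PER_PERSON_PER_DAY * LB_PER_QUART       # per person
division_load_tons = daily_load_lb * DIVISION_STRENGTH / LB_PER_SHORT_TON

print(daily_load_lb)        # 30.0 lb per person per day
print(division_load_tons)   # 270.0 short tons per day for the division

# The gum-accumulation rule ("quadruples with each 20°F increase") is a
# simple exponential: multiplier = 4 ** (temperature_rise_F / 20)
def gum_rate_multiplier(delta_f: float) -> float:
    return 4 ** (delta_f / 20)

print(gum_rate_multiplier(40))  # 16.0 — two 20°F steps, i.e. 4 * 4
```

So the book's "30 pounds per person, or 270 tons" figures are internally consistent, and the gum rule implies stored gasoline degrades 16 times faster after a 40°F temperature rise.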

“Cold clime logistical loads expand prodigiously in response to requirements for more of almost everything from rations, clothing, tents, water heaters, and stoves to whitewash, snow plows, antifreeze, batteries, repair parts, construction materials, and specialized accouterments such as snow shoes and skis. Armed forces in wintry weather burn fuel at outrageous rates. Motor vehicles churning through snow, for example, consume perhaps 25 percent more than on solid ground. It takes 10 gallons (38 liters) of diesel per day to keep a 10-man squad tent habitable when the thermometer registers -20 °F (-29 °C). […] Generous, lightweight, well-balanced, nutritious, and preferably warm rations are essential in very cold weather, especially for troops engaged in strenuous activities. The U.S. Army sets 4,500 calories per day as a goal, although Finnish counterparts with greater practical experience recommend 6,000. Sweets make excellent instant-energy snacks between regular meals. Commanders and cooks must constantly bear in mind that food not in well-insulated containers will freeze in transit between kitchens and consumers. Each individual moreover requires 4 to 6 quarts (liters) of drinking water per day to prevent dehydration in cold weather, although adequate sources are difficult to tap when streams turn to ice. Five-gallon (18-liter) cans as well as canteens freeze fast in subzero temperatures, even when first filled with hot water. […] Combat and support troops engaged in strenuous activities must guard against overdressing, which can be just as injurious as overexposure if excessive perspiration leads to exhaustion or evaporation causes bodies to cool too rapidly. […]

Big maintenance problems begin to develop at about -10°F (-23 °C) and intensify with every degree that thermometers drop thereafter:
• Lubricants stiffen.
• Metals lose tensile strength.
• Rubber loses plasticity.
• Plastics and ceramics become less ductile.
• Battery efficiencies decrease dramatically.
• Fuels vaporize incompletely.
• Glass cracks when suddenly heated.
• Seals are subject to failure.
• Static electricity increases.
• Gauges and dials stick.
Combustion engines are hard to start, partly because battery output at best is far below normal (practically zero at -40 °F and -40 °C).”
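The cold-weather fuel figures quoted above also lend themselves to a quick illustration. A minimal sketch, taking the book's numbers as given (10 gallons of diesel per day to heat a 10-man squad tent at -20°F, and roughly 25 percent extra motor fuel when churning through snow); the company-sized example is mine:

```python
import math

# Figures from the passage quoted above:
GALLONS_PER_TENT_PER_DAY = 10   # diesel to keep a squad tent habitable at -20°F
MEN_PER_TENT = 10
SNOW_FUEL_PENALTY = 0.25        # vehicles burn ~25% more fuel in snow

def tent_heating_fuel(troops: int) -> int:
    """Gallons of diesel per day to heat enough squad tents for `troops` men."""
    tents = math.ceil(troops / MEN_PER_TENT)
    return tents * GALLONS_PER_TENT_PER_DAY

def snow_fuel(base_gallons: float) -> float:
    """Vehicle fuel burned in snow, given consumption on solid ground."""
    return base_gallons * (1 + SNOW_FUEL_PENALTY)

print(tent_heating_fuel(180))   # 180 gallons/day just to heat a 180-man company
print(snow_fuel(100))           # 125.0 gallons where 100 would do on dry ground
```

Put that way, it's easy to see why the book says cold-clime logistical loads "expand prodigiously": tent heating alone for a 180-man company costs as much diesel per day as a small convoy.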

“Tropical rain forests, which never are neutral, favor well-prepared forces and penalize military leaders who fail to understand that:
• Small unit actions predominate.
• Overland movement invariably is slow and laborious.
• Troops mounted on horseback and motor vehicles are less mobile than foot soldiers.
• Natural drop zones, landing zones, and potential airstrips are small and scarce.
• Visibility and fields of fire for flat trajectory weapons are severely limited.
• Land navigation requires specialized techniques.
• Tanks, artillery, other heavy weapons, and close air support aircraft are inhibited.
• Command, control, communications, and logistics are especially difficult.
• Special operations forces and defenders enjoy distinctive advantages.
• Quantitative and technological superiority count less than adaptability. […]

Overland travel in jungles averages about ½ mile an hour where the going is good and ½ mile a day where it is not, unless troops follow well-trodden trails that invite adversaries to install mines, booby traps, road blocks, and ambushes. […] Jungles are filled with animate and inanimate objects that bite, sting, and stick, a host of microorganisms that are harmful to humans, fungus infections that troops affectionately call “jungle rot,” and steamy atmosphere that encourages profuse perspiration, body rashes, and heat exhaustion. […] More casualties could be traced to malaria than to hostile fire during World War II campaigns in the South Pacific.”
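The quoted jungle movement rates differ by a factor of several hundred, which is worth making concrete. A trivial worked example using the book's two rates; the 10-mile distance is an arbitrary illustration of mine:

```python
# Time to cover 10 miles at the two movement rates quoted above.
GOOD_GOING_MPH = 0.5            # "about ½ mile an hour where the going is good"
BAD_GOING_MILES_PER_DAY = 0.5   # "½ mile a day where it is not"

distance_miles = 10

hours_good = distance_miles / GOOD_GOING_MPH           # 20 hours
days_bad = distance_miles / BAD_GOING_MILES_PER_DAY    # 20 days

print(hours_good, days_bad)  # 20.0 20.0 — under a day versus nearly three weeks
```

The same march is a long day's work in favourable terrain and a three-week ordeal in unfavourable terrain, which goes a long way toward explaining why troops take the well-trodden (and mined, and ambushed) trails anyway.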


May 2, 2014 Posted by | Books, Geography