The Origin and Evolution of Cultures (II)
“Brain tissue is quite expensive. All else equal, selection will favor the stupidest possible creatures.”
I really liked that quote. Here’s a related one from the book:
“On the cost side, selection will favor as small a nervous system as possible. If our hypothesis is correct, animals with complex cognition foot the cost of a large brain by adapting more swiftly and accurately to variable environments.”
This post doesn’t deal with the observations above in much more detail; I just liked those quotes, and they didn’t really fit in with the rest of the coverage, though I could probably have put them in somewhere. Before moving on to the main coverage I should note that it makes a lot of sense to read my first post about the book before reading this one. If you’ve already done so, do carry on.
After I’d read the first couple hundred pages I was a bit exhausted, and I took a break from the book; as I pointed out on goodreads when I started, “I’m far from certain I’ll manage to get through this one in one go.” Yesterday I decided to pick it up again, and fortunately the next few chapters seem less technical than the ones that had me putting it away for a while.
The book is really nice, but it’s hard for me to blog about because of the technical nature of the coverage (much of this stuff is really just applied game theory). Most chapters deal with a specific model and discuss its results, and unless I tell you what the models are doing and which assumptions are made (i.e., basically repost the entire book here), a lot of critical details will be left out; there are a lot of caveats and nuances, and omitting them might give people the wrong idea about what’s going on in the book. Sometimes a chapter compares a complex model to a simple one, and the complex model is the more interesting of the two; in those cases you may need to cover the simple model as well for the details of the complex model to make sense, and we’re back to ‘it’s hard to exclude anything’. A general ‘problem’ with this book in these terms, which is of course properly considered a strength, is that there aren’t many pages of fluffy stuff you can just leave out. Fortunately the authors occasionally draw conclusions from the models and try to give a big-picture account of what’s going on, and I’ve disproportionately quoted from those passages in the post below. I’ve left a lot of details out, but there was no alternative to doing that. A lot of crucial context is probably missing without my having realized it; do ask questions if something is unclear here.
“Human brains […] are adapted to life in small-scale hunting and gathering societies of the Pleistocene. They will guide behavior within such societies with considerable precision, but behave unpredictably in other situations. […] Learning devices will be favored only when environments are variable in time or space in difficult to predict ways. Social learning is a device for multiplying the power of individual learning. […] Social learning can economize on the trial and error part of learning. […] Selection will favor individual learners who add social learning [‘learn from others, e.g. by imitating them‘] to their repertoire so long as copying is fairly accurate and the extra overhead cost of the capacity to copy is not too high. In some circumstances, the models suggest that social learning will be quite important relative to individual learning. It can be a great advantage compared to a system that relies on genes only to transmit information and individual learning to adapt to the variation. Selection will also favor heuristics that bias social learning in adaptive directions. When the behavior of models [‘people you might copy’] is variable, individuals who try to choose the best model by using simple heuristics like “copy dominants” or “go with the majority,” or by using complex cognitive analyses, are more likely to do well than those who blindly copy. Contrarily, if it is easy for individuals to learn the right thing to do by themselves, or if environments vary little, then social learning is of no utility.”
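The trade-off described above can be illustrated with a toy agent-based sketch. To be clear, this is my own illustration in the spirit of these models, not one of the book’s; all parameter values and the fitness accounting are made up for the example:

```python
import random

def share_of_social_learners(p_change=0.05, accuracy=0.8, learn_cost=0.2,
                             n=200, generations=200, seed=0):
    """Toy sketch: agents are either individual learners (pay a cost,
    track the environment with some accuracy) or social learners (copy
    a random agent from the previous generation for free). Learning
    strategies reproduce in proportion to payoff; returns the final
    share of social learners in the population."""
    rng = random.Random(seed)
    env = 0
    # each agent: (is_social_learner, behavior)
    pop = [(rng.random() < 0.5, rng.randrange(2)) for _ in range(n)]
    for _ in range(generations):
        if rng.random() < p_change:          # environment occasionally shifts
            env = 1 - env
        scored = []
        for is_social, _ in pop:
            if is_social:
                behavior = rng.choice(pop)[1]                 # imitate someone
                payoff = 1.0 if behavior == env else 0.0
            else:
                behavior = env if rng.random() < accuracy else 1 - env
                payoff = (1.0 if behavior == env else 0.0) - learn_cost
            scored.append((is_social, behavior, payoff))
        weights = [max(p, 0.01) for _, _, p in scored]        # keep weights positive
        pop = [(s, b) for s, b, _ in rng.choices(scored, weights=weights, k=n)]
    return sum(1 for is_social, _ in pop if is_social) / n
```

Playing with `p_change` and `learn_cost` gives a feel for the qualitative claim: social learning pays when individual learning is costly and the environment doesn’t change too fast, and is useless when environments are stable or learning is cheap.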
“We believe that the lessons of [the] model [they just talked about] are robust. It formalizes three basic assumptions:
1. The environment varies.
2. Cues about the environment are imperfect, so individuals make errors.
3. Imitation increases the accuracy (or reduces the cost) of learning.
We have analyzed several models that incorporate these assumptions but differ in other features. All of these models lead to the same qualitative conclusion: when learning is difficult and environments do not change too fast, most individuals imitate at evolutionary equilibrium. At that equilibrium, an optimally imitating population is better off, on average, than a population that does not imitate. […] for something to be a norm, there has to be a conformist element. People must agree on the appropriate behavior and disapprove of others who do not behave appropriately. We […] show that individuals who respond to such disapproval by conforming to the social norm are more likely to acquire the best behavior. […] as the tendency to conform increases, so does the equilibrium amount of imitation. […] all conditions that lead a substantial fraction of the population to rely on imitation also lead to very strong conformity. […] a tendency to conform increases the number of people who follow social norms and decreases the numbers who think for themselves.”
“Human populations are richly subdivided into groups marked by seemingly arbitrary symbolic traits, including distinctive styles of dress, cuisine, or dialect. Such symbolically marked groups often have distinctive moral codes and norms of behavior, and sometimes exhibit economic specialization. […] The following two chapters explore the idea that symbolically marked groups arise and are maintained because dress, dialect, and other markers allow people to identify in-group members. In chapter 6, we analyze a model that assumes that identifying in-group members is useful because it allows selective imitation. Rapid cultural adaption makes the local population a valuable source of information about what is adaptive in the local environment. Individuals are well advised to imitate locals and avoid learning from immigrants […] studies like those of Fredrik Barth […] suggest that contemporary ethnic groups often occupy different ecological niches. […] In chapter 7, we […] study a model in which markers allow selective social interaction. […] These models have several interesting and, at least to us, less-than-obvious properties. First, the same nonrandom interaction that makes markers useful also creates and maintains variation in symbolic marker traits as an unintended by-product. Nonrandom interaction acts to increase correlation between arbitrary markers and locally adaptive behaviors. This, in turn, makes markers more useful, setting up a positive feedback process that can amplify small differences in markers between groups. […] once groups have become sharply marked, the feedback process is sufficient by itself to maintain group marking even if groups are perfectly mixed and there is no population structure other than that caused by the markers. 
[…] processes closely related to those modeled here can lead to the “runaway” evolution of marker and preference traits, which have no adaptive or functional explanation […] It is easy to imagine that the adaptive uses of cultural markers are common enough so that selection on genes maintains a cognitive capacity to use them despite the runaway process carrying some to maladaptive extremes. We are convinced that complexities of this sort are a pervasive feature of the coevolutionary process that links genes and culture. If this idea is correct, any attempt to reduce the problems of human evolution to binary choices between sociobiological and cultural explanations is bound to fail.”
“Studies of the diffusion of innovations […] suggest that people often use two simple rules to increase the likelihood that they acquire locally adaptive beliefs by imitation. The chance that individual A will adopt an innovation modeled by individual B [i.e., ‘do as B does’] often seems to depend upon (1) how successful B is, and (2) the similarity of A and B.”
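As a gloss on those two rules, here is a crude ‘copy successful people who resemble you’ heuristic. The function name, the trait encoding, and the multiplicative weighting are all my own illustrative choices, not anything from the book:

```python
def choose_model(candidates, own_traits):
    """Pick whom to imitate by weighting each candidate's observed
    success by their trait similarity to the imitator."""
    def similarity(traits):
        # fraction of traits shared with the imitator
        shared = sum(1 for a, b in zip(own_traits, traits) if a == b)
        return shared / len(own_traits)
    return max(candidates, key=lambda c: c["success"] * similarity(c["traits"]))

models = [
    {"name": "B1", "success": 2.0, "traits": [1, 1, 0]},  # similar, modest success
    {"name": "B2", "success": 3.0, "traits": [0, 0, 1]},  # successful but dissimilar
]
# choose_model(models, own_traits=[1, 1, 0]) picks B1
```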
“Many anthropologists believe that people follow the social norms of their society without much thought. According to this view, human behavior is mainly the result of social norms and rarely the result of considered decisions. […] Many anthropologists also believe that social norms lead to adaptive behaviors; by following norms, people can behave sensibly without having to understand why they do what they do. […] Norms will change behavior only if they prescribe behavior that differs from what people would do in the absence of norms. […] By this notion, people obey norms because they are rewarded by others if they do and punished if they do not. As long as the rewards and punishments are sufficiently large, norms can stabilize a vast range of different behaviors.”
One thing to note, both in relation to the paragraph above and the passage quoted below, is that there’s a big conceptual difference between strategies which punish defection by withholding future cooperation, and strategies which ‘actively’ punish defectors (presumably e.g. by beating them up, killing them…). One way to conceptualize the difference is that under the former set of strategies punished individuals are limited to a payoff of 0, whereas punished individuals in the latter context might experience (unbounded?) negative payoffs as well. Reciprocating strategies, where you cooperate when others do and sanction defection with non-cooperation in the future, are what Boyd and Richerson look at first, and it turns out that such strategies don’t do very well in large groups: in their models it seems implausible that reciprocity on its own would support cooperative equilibria when n is large. That is the motivation for looking at actual ‘punishment strategies’ that go a bit further. A problem with punishment strategies is that they’re often (but not always) altruistic: if punishment works by making defectors switch to ‘cooperate’ in future periods, and it’s costly for an individual to punish someone, then punishing is quite likely to mostly benefit other people (especially as n grows) while the person doing the punishing incurs the cost. Punishment is a public good.
So people may decide to become ‘reluctant punishers’ who let others do the punishing, and if enough people go that route these equilibria become unstable. This is termed ‘the problem of second-order cooperation’: you can defect at any stage of the game, and in this particular case it’s a two-stage game where you can either defect from the start, or defect at the second stage by refusing to punish those who defected during the first. If n is small, punishment strategies may not be altruistic, since you may interact with the guy enough times in the future for it to make sense to punish him now; and if the cost of punishment is small compared to the benefits from cooperation, that will of course also help support equilibria of this kind. A general thing to note here, which is perhaps not made perfectly clear above, is that finding out how ‘cooperative equilibria’ of one kind or another may come about, and under which conditions they’re stable, is a big part of understanding what culture is all about and how it works. It’s puzzling that humans cooperate with other humans to the extent that they do, and as people who’ve done theoretical work on this stuff have found out over the years, it’s actually not at all easy to figure out why we do. It’s certainly a lot more complicated than people unfamiliar with these topics presumably think it is.
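The public-good nature of punishment is visible even in a single-round payoff sketch. This is a toy parameterization of my own, not the book’s model:

```python
def round_payoffs(n_punishers, n_reluctant, n_defectors,
                  b=5.0, c=1.0, punish_cost=0.2, fine=1.0):
    """One round of a toy n-person public-goods game with punishment.
    Cooperators (both punishers and 'reluctant' non-punishing
    cooperators) pay c; the produced benefit, b per cooperator, is
    shared equally by all; punishers additionally pay punish_cost per
    defector sanctioned, and each defector is fined once by every
    punisher. Returns (punisher, reluctant cooperator, defector) payoffs."""
    n = n_punishers + n_reluctant + n_defectors
    n_coop = n_punishers + n_reluctant
    share = b * n_coop / n
    return (share - c - punish_cost * n_defectors,   # punisher
            share - c,                               # reluctant cooperator
            share - fine * n_punishers)              # defector
```

With any defectors present, the punisher earns strictly less than the reluctant cooperator, which is exactly the second-order problem: everyone enjoys the cooperation that punishment induces, but only the punishers pay for it.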
I really liked the stuff they had on moralistic strategies, a subset of the punishment strategies analyzed in chapter 9, and I’ve quoted from this below:
“Moralistic strategies [are] strategies that punish defectors, individuals who do not punish noncooperators, and individuals who do not punish nonpunishers […] moralistic strategies can cause any individually costly behavior to be evolutionarily stable, whether or not it creates a group benefit. Once enough individuals are prepared to punish any behavior, even the most absurd, and to punish those who do not punish, then everyone is best off conforming to the norm. Moralistic strategies are a potential mechanism for stabilizing a wide range of behaviors. […] moralistic punishment is inherently diversifying in the sense that many different behaviors may be stabilized in exactly the same environment. It may also provide the basis for stable among-group variation. […] In the model studied here, punishers collect private benefit by inducing cooperation in their group that compensates them for punishing, while providing a public good for reluctant cooperators. There are often polymorphic equilibria in which punishers are relatively rare, generating a simple political division of labor […] This finding invites study of further punishment strategies. Consider, for example, strategies that punish but do not cooperate. Such individuals might be able to coerce more reluctant cooperators than cooperator-punishers and therefore support cooperation in still larger groups.”
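The core stability logic here can be reduced to a one-line comparison. This is my gloss on the chapter’s argument, with made-up numbers: complying with an arbitrary norm beats defecting whenever the expected sanctions exceed the cost of compliance.

```python
def norm_is_stable(compliance_cost, fine, n_punishers):
    """A norm-following equilibrium is (crudely) stable when the cost
    of complying is smaller than the total fines a lone defector would
    face. Note that nothing in the condition requires the norm itself
    to be group-beneficial."""
    return compliance_cost < fine * n_punishers

# a costly, even 'absurd' norm is stable once punishers are common enough
norm_is_stable(compliance_cost=2.0, fine=0.5, n_punishers=5)   # True
norm_is_stable(compliance_cost=2.0, fine=0.5, n_punishers=3)   # False
```

Which behavior gets stabilized is left completely open by the condition, which is the sense in which moralistic punishment is ‘inherently diversifying’.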
That chapter has a lot more details about those things. Anyway, behavioural strategies that look terribly maladaptive ‘from the outside’ (and/or may in fact be terribly maladaptive at the group level; note that these two do not necessarily overlap) may become fixed in a population even so, and such equilibria, once reached, may be very hard to break. This isn’t exactly an uplifting story, but of course if you’ve had a look around the world it shouldn’t be news. As mentioned, it’s very much worth keeping in mind that a strategy outsiders might think quite awful, because it leads to behaviours they don’t like, may still be highly adaptive; the adaptiveness of a behavioural strategy set and whether that strategy set gives you a good feeling in your stomach have nothing to do with each other, and there’s no Eternal Law of Progress, whatever that latter word might mean, guiding which strategy sets ‘win’.