I have previously posted multiple lectures in a single ‘lecture post’ here on the blog, or combined a lecture with other material (e.g. links such as those in the previous ‘random stuff’ post). I think such approaches have made me less likely to post lectures at all (if I don’t post a lecture soon after I’ve watched it, experience tells me I not infrequently simply never get around to posting it), and combined with this issue is the fact that I don’t really watch a lot of lectures these days. For these reasons I have decided to start posting single-lecture posts here on the blog. When I start thinking about the time expenditure of people reading along here, this approach actually also seems justified: although it might take me as much time/work to watch and cover, say, 4 lectures as it would take me to read and cover 100 pages of a textbook, the time expenditure required of a reader of the blog would be very different in those two cases. You’ll usually be able to read a post that took me multiple hours to write in a short amount of time, whereas ‘the time advantage’ of the reader is close to negligible in the case of lectures (maybe not completely; search costs are not completely irrelevant). By posting multiple lectures in the same post I probably decrease the expected value of the time readers spend watching the content I upload, which seems suboptimal.
Here’s the YouTube description of the lecture, which was posted a few days ago on the IAS YouTube account:
“Over the past two decades, information theory has reemerged within computational complexity theory as a mathematical tool for obtaining unconditional lower bounds in a number of models, including streaming algorithms, data structures, and communication complexity. Many of these applications can be systematized and extended via the study of information complexity – which treats information revealed or transmitted as the resource to be conserved. In this overview talk we will discuss the two-party information complexity and its properties – and the interactive analogues of classical source coding theorems. We will then discuss applications to exact communication complexity bounds, hardness amplification, and quantum communication complexity.”
He actually decided to skip the quantum communication complexity stuff because of the time constraint. I should note that the lecture was ‘easy enough’ for me to follow most of it, so it is not really that difficult, at least not if you know some basic information theory.
A few links to related stuff (you can take these links as indications of what sort of stuff the lecture is about/discusses, if you’re on the fence about whether or not to watch it):
Computational complexity theory.
Shannon’s source coding theorem.
From Information to Exact Communication (in the lecture he discusses some aspects covered in this paper).
Unique games conjecture (Two-prover proof systems).
A Counterexample to Strong Parallel Repetition (another paper mentioned/briefly discussed during the lecture).
An interesting aspect I once again noted during this lecture is the sort of loose linkage you sometimes observe between the topics of game theory/microeconomics and computer science. Of course the link is made explicit a few minutes later in the talk, when he discusses the unique games conjecture to which I link above, but it’s perhaps worth noting that the link is on display even before that point is reached. Around 38 minutes into the lecture he mentions that one of the relevant proofs ‘involves such things as Lagrange multipliers and optimization’. I was far from surprised, as from a certain point of view the problem he discusses at that point is conceptually very similar to some problems encountered in auction theory, where Lagrange multipliers and optimization problems are frequently encountered. If you are too unfamiliar with that field to see how a similar problem might appear in an auction theory context: what you have there are auction participants who prefer not to reveal their true willingness to pay, and some auction designs actually work in a manner very similar to the (pseudo-)protocol described in the lecture, and are thus used to reveal it (for some subset of participants, at least).
“Put very crudely, the main thesis of this book is that certain types of norms are possible solutions to problems posed by certain types of social interaction situations. […] Three types of paradigmatic situations are dealt with. They are referred to as (1) Prisoner’s Dilemma-type situations; (2) Co-ordination situations; (3) Inequality (or Partiality) situations. Each of them, it is claimed, poses a basic difficulty, to some or all of the individuals involved in them. Three types of norms, respectively, are offered as solutions to these situational problems. It is shown how, and in what sense, the adoption of these norms of social behaviour can indeed resolve the specified problem.”
Before moving on I should probably apologize for the infrequent updates – you should expect blogging to be light in the months to come as well. With that out of the way, the book to which the title of this post refers, and from which the above quote is taken, is this Oxford University Press publication. Here’s what I wrote about the book on goodreads:
“The last chapter wasn’t in my opinion nearly as good as the others, presumably in part because I was unfamiliar with a lot of the literature to which she referred, but also because I could not really agree with all the distinctions and arguments made, and I was close to giving the book 3 stars as a result of this [I gave the book 4 stars on goodreads]. I think she overplays the ‘impersonal’ nature of norms in that chapter; if a norm based on sanctions is not enforced then it is irrelevant, and to the extent that it is enforced *someone* needs to impose the sanction on the transgressor. The fact that it’s actually in some contexts considered ‘a problem that needs explaining’ to figure out exactly how to support a model with sanctioning in a context where enforcement is costly to the individual (it’s a problem because of the free-riding issue – it’s always easier to let someone else do the sanctioning…) seems to have eluded Margalit (for details on this topic, see e.g. Boyd and Richerson).
It’s probably helpful to be familiar with basic game theoretic concepts if you’re planning on reading this book (it has a lot of game theory, though most of it is quite simple stuff), as well as perhaps having some familiarity with basic economics (rationality assumptions, utility functions, etc.) but I’m not sure it’s strictly necessary – I think the author does cover most of the basic things you need to know to be able to follow the arguments. The first three chapters are quite good.”
I should point out here that when I was writing the review above I was completely unaware of how long ago the book was written; the book is pretty self-contained and I hadn’t really noticed when I picked it up that it’s actually a rather old book. Had I been aware of this I would not have been nearly as vocal in my criticism of the last chapter, given that some of the insights I blame the author for being unaware of were only discussed in the literature after the publication of this book; the unaddressed problems do remain unaddressed and they are problematic, but it’s probably unfair to blame the author for not thinking about stuff which probably nobody had given any thought at the time of publication.
In the post below I’ll talk a little bit about the book and add some more quotes. It probably makes sense to start out by giving a brief outline of the problems encountered in the three settings mentioned above. The basic problem encountered in prisoner’s dilemma-type situations is that unilateral defection is an attractive proposition, but if everybody yields to this temptation and defects then that will lead to a bad outcome. The problem faced is thus to figure out some way to make sure that defection is not an attractive option. In the co-ordination setting, there are several mutually beneficial states, none of which are strictly preferred to the others; that is, there is a coincidence of interests among the parties involved. The problem is that it’s difficult to come to an explicit agreement as to which of the states to aim for. An example could be whether to drive on the right side of the road or the left side. It probably doesn’t really matter much which side of the road you’re driving on, as long as you’re driving on the same side as the other drivers. The coincidence of interests here need not be perfect; one person might slightly prefer to drive on the right side of the road, all else equal, but even so it’ll be in his or her interest to drive on the same side as the other drivers; there’s no incentive for unilateral defection, and the main problem is figuring out how to achieve the outcome where behaviour is coordinated so that one of the available equilibria is reached. In the third setting, there’s some inequality present and one party is at an advantage; the problem here is how to maintain this advantageous position and how to fortify it so that it’s stable.
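To make the structural difference between the first two settings concrete, here’s a small Python sketch; the payoff numbers are my own illustrative choices, not taken from the book:

```python
# Toy 2x2 games illustrating the difference between a prisoner's dilemma
# and a pure co-ordination problem. Each cell holds (row payoff, column
# payoff); the specific numbers are illustrative assumptions, not the book's.

def profitable_deviations(game):
    """Return the strategy profiles in which some player can gain
    by unilaterally switching to their other strategy."""
    unstable = []
    for r in (0, 1):
        for c in (0, 1):
            row_pay, col_pay = game[r][c]
            if game[1 - r][c][0] > row_pay or game[r][1 - c][1] > col_pay:
                unstable.append((r, c))
    return unstable

# Prisoner's dilemma: strategy 0 = co-operate, 1 = defect.
pd = [[(3, 3), (0, 5)],
      [(5, 0), (1, 1)]]

# Co-ordination game: strategy 0 = drive right, 1 = drive left.
coord = [[(1, 1), (0, 0)],
         [(0, 0), (1, 1)]]

# In the PD, mutual co-operation (0, 0) is unstable: each player gains by
# defecting unilaterally. Only mutual defection (1, 1) survives the check.
print(profitable_deviations(pd))      # -> [(0, 0), (0, 1), (1, 0)]

# In the co-ordination game, both 'everyone right' and 'everyone left'
# are stable; nobody gains from deviating unilaterally.
print(profitable_deviations(coord))   # -> [(0, 1), (1, 0)]
```

In the PD the only stable profile is mutual defection, which is exactly the bad outcome described above; in the co-ordination game both symmetric profiles are stable, and the problem is merely selecting one of them.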
Some quotes and a few more comments:
“[One] angle from which it may be illuminating to view the account of norms offered here is that of evolutionary explanations. […] I propose to regard the argument underlying this book as, in a borrowed and somewhat metaphorical sense, a natural selection theory of the development of norms.”
“Norms do not as a rule come into existence at a definite point in time, nor are they the result of a manageable number of identifiable acts. They are, rather, the resultant of complex patterns of behaviour of a large number of people over a protracted period of time.”
“it is proposed that the main elements in the characterization of norms of obligation be: a significant social pressure for conformity to them and against deviation – actual or potential – from them; the belief by the people concerned in their indispensability for the proper functioning of society; and the expected clashes between their dictates on the one hand and personal interests and desires on the other.”
It should be noted here that far from all norms qualify as norms of obligation; this is but one norm subgroup, though it’s an important one. The author notes explicitly that norms encountered in the context of coordination problems are not norms of obligation.
“A situation of the generalized PD variety poses a problem to the participants involved. The problem is that of protecting an unstable yet jointly beneficial state of affairs from deteriorating, so to speak, into a stable yet jointly destructive one. My contention concerning such a situation is that a norm, backed by appropriate sanctions, could solve this problem. In this sense it can be said that such situations ‘call for’ norms. It can further be said that a norm solving the problem inherent in a situation of this type is generated by it. Such norms I shall call PD norms. […] the smaller and the more determinate the class of participants in a generalized PD-structured situation, and the more isolated the occurrence of the dilemma among them, the more likely it is that there might be solutions other than (PD) norms to the pertinent problem […] And conversely, the larger and the more indeterminate the class of participants, and the more frequent the occurrence of the dilemma among them, the more likely it is that a solution, if any, would be in the form of a PD norm. […] the more difficult (or costly) it is to ensure […] personal contact, […] the more acute the need for some impersonal device, such as social norms, which would induce the desired co-operation.”
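The mechanism described in the quote, a norm backed by sanctions protecting the jointly beneficial state, can be sketched in payoff terms; the numbers and the sanction size below are my own illustrative assumptions:

```python
# Illustrative sketch (my numbers, not the book's): a sanction attached to
# defection can make mutual co-operation in a prisoner's dilemma stable.

def cooperate_is_stable(temptation, reward, sanction):
    """Mutual co-operation is stable against unilateral defection when the
    defector's payoff net of the sanction no longer exceeds the reward
    from co-operating."""
    return temptation - sanction <= reward

# Without a sanction, defecting against a co-operator pays 5 vs. 3 for
# co-operating, so co-operation is unstable.
print(cooperate_is_stable(temptation=5, reward=3, sanction=0))  # -> False

# A sanction of at least 2 removes the gain from unilateral defection.
print(cooperate_is_stable(temptation=5, reward=3, sanction=2))  # -> True
```

The point of the sketch is only that the sanction changes the payoff structure of the situation; whether such sanctions can actually be enforced at reasonable cost is the separate problem discussed below.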
You can easily add more details to the conceptual framework underlying the analysis in order to refine it in various ways, and the author does talk a little bit about how you might go about doing that; for example it might not be realistic to assume that nobody ever deviates, and so you might replace that stability condition with a weaker one, e.g. that at most some percentage, say X, of the population deviates. Such refined theoretical models can incidentally yield very interesting and non-trivial theoretical results – Boyd and Richerson cover such models in The Origin and Evolution of Cultures. It should perhaps be noted that even relatively simple models dealing with these sorts of topics may easily end up being sufficiently complicated that analytical solutions are not forthcoming.
“there are norms whose function is to maintain social control on certain groups of people through preventing them from solving the problem inherent in the PD-structured situation in which they are placed. That is, these norms are designed to help keep these people in a state of affairs which, while disadvantageous to them […] is considered beneficial to society as a whole. A conspicuous example of norms of this type are anti-trust laws.”
In the context of coordination problems, the author distinguishes between two solution mechanisms/norms: conventions and decrees. Broadly speaking, conventions can be thought of as established solutions to coordination problems encountered in the past, whereas decrees are solutions to novel problems where no equilibrium has yet been established – see also the more detailed quotes below. In the context of sanctions, an important difference between coordination norms and PD norms is that sanctions play a primary role in the context of PD norms but only a secondary role in the context of coordination norms; nobody has a unilateral incentive to deviate in coordination-type situations, and so defection so to speak carries its own punishment, independent of the level of any associated sanction. If everybody else drives on the right side of the road, you don’t gain anything from driving on the left – and it’s unlikely to be the size of the fine which is the primary reason why you don’t drive on the left side of the road in such a context.
“It is worth noting that within the large class of problems of strategy (i.e. problems of interdependent decision), the problems of co-ordination stand in opposition to problems of conflict, the contrast being particularly acute between the extreme cases of pure co-ordination on the one hand and of pure conflict (the so-called zero-sum problems) on the other. Whereas in the pure co-ordination case the parties’ interests converge completely, and the agents win or lose together, in the pure conflict case the parties’ interests diverge completely, and one person’s gain is the other’s loss. […] [Schelling argues] that games of strategy range over a continuum with games of pure conflict […] and games of pure co-ordination as opposite limits. All other games […] involve mixtures in varying proportions of conflict and co-ordination, of competition and partnership, and are referred to as mixed-motive games.”
One thing to add here, which is of course not mentioned in the book, is that whereas the situation does play a sometimes major role in terms of which setting you find yourself in, there’s also a relevant mental/psychological aspect to consider here; in the context of bargaining, it’s a very well-established result that bargainers who conceive of the bargaining situation as a zero-sum (‘conflict’) game do worse than bargainers who do not.
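Schelling’s continuum from the quote above can be illustrated with a toy classifier for 2x2 games; the example games and payoff numbers are my own illustrative choices:

```python
# A toy classifier for 2x2 games along Schelling's continuum: pure conflict
# (zero-sum), pure co-ordination (identical payoffs), or mixed-motive.
# Payoff cells are (row player, column player); numbers are illustrative.

def classify(game):
    cells = [cell for row in game for cell in row]
    if all(a + b == 0 for a, b in cells):
        return "pure conflict (zero-sum)"
    if all(a == b for a, b in cells):
        return "pure co-ordination"
    return "mixed-motive"

matching_pennies = [[(1, -1), (-1, 1)],
                    [(-1, 1), (1, -1)]]    # one player's gain is the other's loss
driving_sides = [[(1, 1), (0, 0)],
                 [(0, 0), (1, 1)]]         # the parties win or lose together
prisoners_dilemma = [[(3, 3), (0, 5)],
                     [(5, 0), (1, 1)]]     # competition mixed with partnership

print(classify(matching_pennies))    # -> pure conflict (zero-sum)
print(classify(driving_sides))       # -> pure co-ordination
print(classify(prisoners_dilemma))   # -> mixed-motive
```

Most real interactions, including the bargaining situations mentioned above, fall in the mixed-motive middle of the continuum rather than at either extreme.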
“Very generally, where communities which have their own ways of going about things – their own arrangements, regularities, conventions – come into contact, and where the situation demands that barriers between them be dropped, or that one – any one – of them absorb the other, various co-ordination problems are likely to crop up and to call for […] decree-type co-ordination norms to solve them.”
“Conventions are, typically:
(1) Non-statutory norms, which need not be enacted, formulated, or promulgated.
(2) They are neither issued nor promulgated by any identifiable authority, and are hence what is usually called impersonal, or anonymous norms.
(3) They involve in the main non-institutionalized, non-organized, and informal sanctions (i.e. punishments or rewards).
Decrees, in contrast, are, typically:
(2) Issued and promulgated by some appropriately endowed authority (not necessarily at the level of the state);
(3) The sanctions they involve might be organized, institutionalized, and formal, even physical.”
Conventions and decrees are quite different, but in terms of what they do they solve similar problems:
“Since a co-ordination problem is a situation such that any of its co-ordination equilibria is preferred, by all involved, to any combination of actions which is not a co-ordination equilibrium, each of those involved is interested in there being something which will point – in a way conspicuous to all and perceived to be conspicuous to all – to one particular co-ordination equilibrium as the solution. This precisely is what our co-ordination norms, whether conventions or decrees, do.”
“Thibaut and Kelley note that norms ‘will develop more rapidly and more surely in highly cohesive groups than in less cohesive groups’ – assuming that the majority of the members have about the same degree of dependence on the group […] To the extent that norms reduce interference, cut communication costs, heighten value similarity and insure the interaction sequence necessary for task performance, norms improve the reward-cost positions attained by the members of a dyad and thus increase the cohesiveness of the dyad”
“[I]n so far as conformity to a co-ordination norm ensures the achievement of some co-ordination equilibrium, which for everyone involved in the corresponding co-ordination problem belongs of necessity to the group of preferred outcomes, it is rational for everyone to conform to it. Are we to conclude from this, however, that the social choice to which the co-ordination norm is instrumental is itself rational? My answer to this question is that although it is rational to conform to a prevailing co-ordination norm, the social choice resulting from it is not necessarily rational. […] it may not be optimal, for some or for all involved. It can in principle be changed into a better one, only this involves an explicit process which is not always feasible. […] The changing of an existing convention in favour of a ‘better’, more rational one, has to be explicit. It can be achieved through an explicit agreement of all concerned, or through a regulation (decree) issued and properly promulgated by some appropriately endowed authority. Where communication, or promulgation, is impossible, it is difficult to see how an existing convention (which is a co-ordination norm) might be changed. It is of some interest to note that whereas an ‘act of convening’ is not necessary for a convention to form, it might be necessary for an existing convention to be exchanged for an alternative one.”
“The difference in the role played by the two types of norms might now be formulated thus: a co-ordination norm helps those involved ‘meet’ each other; a PD norm helps those involved protect themselves from damaging, even ruining, each other.”
“[T]here are states of inequality which appear on the surface to be stable but which are, in a somewhat subtle and complicated way, strategically unstable. They may be in equilibrium, but it is a rather flimsy one; far from being self-perpetuating, they are susceptible to threats. Now the assumption that the party discriminated in favour of is interested in the preservation of such a status quo leads reasonably to the assumption that he will seek to fortify it against its potential undermining. […] it is the central thesis of this chapter that [a] significant device to render the status quo stable [is] to fortify it by norms. The idea is that once it is in some sense normatively required that the status quo endure, the nature of the possible calculations and considerations of deviance fundamentally changes: it is no longer evaluated only in terms of being ‘costly’ or ‘risky’, but as being ‘wrong’ or ‘subversive’. […] the methods of norms and force as possible fortifiers of the status quo in question are functionally equivalent […] provided the norms are effective, they both amount to making deviance from the status quo more costly through the impositions of sanctions.”
“Once norms are internalized, one abides by them not out of fear of the pending sanctions associated with them, but out of some inner conviction. And when this is so, one is likely to conform to the norms even in one’s thoughts, intentions, and in what one does in private.”
“The function of norms, generally speaking, is to put restraints on possible courses of conduct, to restrict the number of alternatives open for action. When a certain course of conduct is normatively denounced (is considered ‘wrong’), it becomes a less eligible course of conduct than it might otherwise have been: although through lying, for example, one might quite conveniently get away with some misdeed, its being recognized and acknowledged as normatively (morally) prohibited normally makes it a less attractive way out, or even precludes its having been considered an alternative in the first place. In this sense, then, norms might be said to be coercive, to the extent that they function as constraints on actions; that is, to the extent that they prevent one from doing an action one might have done had there been no norm denouncing it, or at least to the extent that they render a certain course of action less eligible than it might otherwise have been.”
“[N]orms are rather easily accepted as part of the ‘natural order of things’. To be sure, one might be quite resentful of this natural order, or of one’s lot therein, and regard it as discriminating against one. But usually there is very little one is going to do about it unless – and until – the object of one’s resentment is personified: only few will start a revolution against an elusive oppressive ‘system’; many more might revolt against an identifiable oppressive ruler. […] These norms have to apply to the privileged as well as to the deprived, or else they lose much of their effectiveness as a disguise for the real exercise of power underlying them. […] The absence of any precedents in which someone privileged was spared the sanction, the absence of any loopholes which might facilitate a discriminatory application of the norms, contribute to their deterrence value”.
Below are three new lectures from the Institute for Advanced Study. As far as I’ve gathered they’re all from an IAS symposium called ‘Lens of Computation on the Sciences’ – all three lecturers are computer scientists, but you don’t have to be a computer scientist to watch these lectures.
Should computer scientists and economists band together more and try to use the insights from one field to help solve problems in the other field? Roughgarden thinks so, and provides examples of how this might be done/has been done. Applications discussed in the lecture include traffic management and auction design. I’m not sure how much of this lecture is easy to follow for people who don’t know anything about either topic (i.e., computer science and economics), but I found it not too difficult to follow – it probably helped that I’ve actually done work on a few of the things he touches upon in the lecture, such as basic auction theory, the fixed point theorems and related proofs, basic queueing theory and basic discrete maths/graph theory. Either way there are certainly much more technical lectures than this one available at the IAS channel.
I don’t have Facebook and I’m not planning on ever getting a FB account, so I’m not really sure I care about the things this guy is trying to do, but the lecturer does touch upon some interesting topics in network theory. Not a great lecture in my opinion and occasionally I think the lecturer ‘drifts’ a bit, talking without saying very much, but it’s also not a terrible lecture. A few times I was really annoyed that you can’t see where he’s pointing that damn laser pointer, but this issue should not stop you from watching the video, especially not if you have an interest in analytical aspects of how to approach and make sense of ‘Big Data’.
I’ve noticed that Scott Alexander has said some nice things about Scott Aaronson a few times, but until now I’ve never actually read any of the latter guy’s stuff or watched any lectures by him. I agree with Scott (Alexander) that Scott (Aaronson) is definitely a smart guy. This is an interesting lecture; I won’t pretend I understood all of it, but it has some thought-provoking ideas and important points in the context of quantum computing and it’s actually a quite entertaining lecture; I was close to laughing a couple of times.
“A commonplace argument in contemporary writing on trust is that we would all be better off if we were all more trusting, and therefore we should all trust more […] Current writings commonly focus on trust as somehow the relevant variable in explaining differences across cases of successful cooperation. Typically, however, the crucial variable is the trustworthiness of those who are to be trusted or relied upon. […] It is not trust per se, but trusting the right people that makes for successful relationships and happiness.”
“If we wish to understand the role of trust in society […], we must get beyond the flaccid – and often wrong – assumption that trust is simply good. This supposition must be heavily qualified, because trusting the malevolent or the radically incompetent can be foolish and often even grossly harmful. […] trust only make[s] sense in dealings with those who are or who could be induced to be trustworthy. To trust the untrustworthy can be disastrous.”
That it’s stupid to trust people who cannot be trusted should in my opinion be blatantly obvious, yet somehow it doesn’t seem to be at all obvious to a lot of people; in light of this problem (…I maintain that this is indeed a problem) the above observations are probably among the most important ones included in Hardin’s book. The book includes some strong criticism of much of the extant literature on trust. The two most common approaches within this area of research are game-theoretic ‘trust games’, which according to the author are ill-named as they don’t really seem to deal much, if at all, with the topic of trust, and (poor) survey research which asks people questions that are hard to answer and tends to yield answers that are even harder to interpret. I have included below a few concluding remarks from the chapter on these topics:
“Both of the current empirical research programs on trust are largely misguided. The T-games [‘trust-games’], as played […] do not elicit or measure anything resembling ordinary trust relations; and their findings are basically irrelevant to the modeling and assessment of trust and trustworthiness. The only thing that relates the so-called trust game […] to trust is its name, which is wrong and misleading. Survey questions currently in wide use are radically unconstrained. They therefore force subjects to assume the relevant degrees of constraint, such as how costly the risk of failed cooperation would be. […] In sum, therefore, there is relatively little to learn about trust from these two massive research programs. Without returning their protocols to address standard conceptions of trust, they cannot contribute much to understanding trust as we generally know it, and they cannot play a very constructive role in explaining social behavior, institutions, or social and political change. These are distressing conclusions because both these enterprises have been enormous, and in many ways they have been carried out with admirable care.”
There is ‘relatively little to learn about trust from these two massive research programs’, but one to me potentially important observation, hidden away in the notes at the end of the book, is perhaps worth mentioning here: “There is a commonplace claim that trust will beget trustworthiness […] Schotter [as an aside this guy was incidentally the author of the Micro textbook we used in introductory Microeconomics] and Sopher (2006) do not find this to be true in game experiments that they run, while they do find that trustworthiness (cooperativeness in the play of games) does beget trust (or cooperation).”
There were a few parts of the coverage which confused me somewhat until it occurred to me that the author might not have read Boyd and Richerson, or other people who might have familiarized him with their line of thinking and research (once again, you should read Boyd and Richerson).
Moving on, a few remarks on social capital:
“Like other forms of capital and human capital, social capital is not completely fungible but may be specific to certain activities. A given form of social capital that is valuable in facilitating certain actions may be useless or even harmful for others. […] [A] mistake is the tendency to speak of social capital as though it were a particular kind of thing that has generalized value, as money very nearly does […] it[‘s value] must vary in the sense that what is functional in one context may not be in another.”
It is important to keep in mind that trust which leads to increased cooperation can end up leading to both good outcomes and bad:
“Widespread customs and even very local practices of personal networks can impose destructive norms on people, norms that have all of the structural qualities of interpersonal capital. […] in general, social capital has no normative valence […] It is generally about means for doing things, and the things can be hideously bad as well as good, although the literature on social capital focuses almost exclusively on the good things it can enable and it often lauds social capital as itself a wonderful thing to develop […] Community and social capital are not per se good. It is a grand normative fiction of our time to suppose that they are.”
The book has a chapter specifically about trust on the internet which relates to the coverage included in Barak et al.’s book, a publication which I have unfortunately neglected to blog (that book of course goes into a lot more detail). A key point in that chapter is that the internet is not really all that special in terms of these things, in the sense that to the extent that it facilitates coordination etc., it can be used to accomplish beneficial things as well as harmful things – i.e. it’s also neutrally valenced. Barak et al.’s book has a lot more material about how this medium impacts communication and optimal communication strategies, which links in quite a bit with trust aspects, but I won’t go into this here; I’m pretty sure I’ve covered related topics before on the blog, e.g. back when I covered Hargie.
The chapter about terrorism and distrust had some interesting observations. A few quotes:
“We know from varied contexts that people can have a more positive view of individuals from a group than they have of the group.”
“Mere statistical doubt in the likely trustworthiness of the members of some identifiable group can be sufficient to induce distrust of all members of the group with whom one has no personal relationship on which to have established trust. […] This statistical doubt can trump relational considerations and can block the initial risk-taking that might allow for a test of another individual’s trustworthiness by stereotyping that individual as primarily a member of some group. If there are many people with whom one can have a particular beneficial interaction, narrowing the set by excluding certain stereotypes is efficient […] Unfortunately, however, excluding very systematically on the basis of ethnicity or race becomes pervasively destructive of community relations.”
One thing to keep in mind here is that people’s stereotypes are often quite accurate. When groups don’t trust each other it’s always a lot of fun to argue about who’s to blame for that state of affairs, but it’s important here to keep in mind that both groups will always have mental models of both the in-group and the out-group (see also the coverage below). Also it should be kept in mind that to the extent that people’s stereotypes are accurate, blaming stereotyping behaviours for the problems of the people who get stereotyped is conceptually equivalent to blaming people for discriminating against untrustworthy people by not trusting people who are not trustworthy. You always come back to the problem that what’s at the heart of the matter is never just trust, but rather trustworthiness. To the extent that the two are related, trust follows trustworthiness, not the other way around.
“There’s a fairly extensive literature on so-called generalized trust, which is trust in the anonymous or general other person, including strangers, whom we might encounter, perhaps with some restrictions on what issues would come under that trust. […] [Generalized trust] is an implausible notion. In any real-world context, I trust some more than others and I trust any given person more about some things than about others and more in some contexts than in others. […] Whereas generalized trust or group-generalized trust makes little or no sense (other than as a claim of optimism), group-generalized distrust in many contexts makes very good sense. If you were Jewish, Gypsy, or gay, you had good reason to distrust all officers of the Nazi state and probably most citizens in Nazi Germany as well. American Indians of the western plains had very good reason to distrust whites. During Milosevic’s wars and pogroms, Serbs, Croatians, and Muslims in then Yugoslavia had increasingly good reasons to distrust most members of the other groups, especially while the latter were acting as groups. […] In all of these cases, distrust is defined by the belief that members of the other groups and their representatives are hostile to one’s interests. Trust relationships between members of these various groups are the unusual cases that require explanation; the relatively group-generalized distrust is easy to understand and justify.”
“In the current circumstances of mostly Arab and Islamic terrorism against Israel and the West and much of the rest of the world, it is surely a very tiny fraction of all Arabs and Islamists who are genuinely a threat, but the scale of their threat may make many Israelis and westerners wary of virtually all Arabs and Islamists […] many who are not prospects for taking terrorist action evidently sympathize with and even support these actions”
“When cooperation is organized by communal norms, it can become highly exclusionary, so that only members of the community can have cooperative relations with those in the community. In such a case, the norms of cooperativeness are norms of exclusion […] For many fundamentalist groups, continued loyalty to the group and its beliefs is secured by isolating the group and its members from many other influences so that relations within the community are governed by extensive norms of exclusion. When this happens, it is not only trust relations but also basic beliefs that are constrained. If we encounter no one with contrary beliefs our own beliefs will tend to prevail by inertia and lack of questioning and they will be reinforced by our secluded, exclusionary community. There are many strong, extreme beliefs about religious issues as well as about many other things. […] The two matters for which such staunch loyalty to unquestioned beliefs are politically most important are probably religious and nationalist commitments […] Such beliefs are often maintained by blocking out alternative views and by sanctioning those within the group who stray. […] Narrowing one’s associations to others in an isolated extremist group cripples one’s epistemology by blocking out general questioning of the group’s beliefs […] To an outsider those beliefs might be utterly crazy. Indeed, virtually all strong religious beliefs sound crazy or silly to those who do not share them. […] In some ways, the internet allows individuals and small groups to be quite isolated while nevertheless maintaining substantial contact with others of like mind. Islamic terrorists in the West can be almost completely isolated individually while maintaining nearly instant, frequent contact with others and with groups in the Middle East, Pakistan, or Afghanistan, as well as with groups of other potential terrorists in target nations.”
A few recent examples:
i. I played Citadels with my little brother this Christmas. I spotted two obvious instances of poor modelling during the game.
The game is complex and I won’t go over all the rules here – though it should be noted that the complexity is probably part of why the errors described below were made in the first place. Anyway, we were in a situation where my brother had picked a specific card. Having picked that card he had to try to guess which card I had on my hand – if he guessed correctly, I’d lose my turn and the income that turn would have generated (which would benefit him and harm me, making him more likely to win the game). There were two obvious candidates: one card generating a potential income of 2 and another generating a potential income of 5. He knew I’d taken one of these cards but not which of them I’d picked – if I randomized my pick completely, there’d thus be a 50% chance for him to guess the right card. The situation took place during one ‘round’ (subgame) of the game, and both of us knew that this would not be the last round. But we did not know how many more rounds would be played – a conservative estimate would be at least 4 or 5. Whether it would make sense to consider the round one of several in a semi-‘pure’ repeated game, and which type of repeated game we’re talking about, depended to some extent on which cards would be picked in future rounds (as I mentioned, the game is complicated – the fundamentals of the stage game can change during gameplay; e.g. I might end up in my brother’s position, i.e. as the player guessing which card the other player had taken, in a future round); but it would make little sense to consider it a single-shot game.
Now the first thing to note here is that if you consider it a repeated game, it probably makes sense to at least consider mixing strategies. You could probably make an even stronger argument: if I played ‘2’ (the card giving me an income of 2) with a probability of 100%, my brother would pick up on that relatively fast and guess that card every round, and I’d end up with an income of zero – and if I always played ‘5’, he’d always guess ‘5’. So the second player, the one picking the card to be guessed, has to add some uncertainty to the table or he’s going to be in trouble. Now let’s think about how one might best mix strategies in this situation. An important theoretical aspect here is that while it’s certainly a finite game, the length of the game is still unknown, or at least uncertain, to the players (they do have some idea how long it’ll take to finish). This uncertainty adds complexity, and even though only relatively few rounds of the game are left, the game is still much too complex for the players to solve by backwards induction while they play it, even if such a solution might exist. Incidentally, in the specific subgame in question I evaluated the costs of reversing the roles of the players (so that I’d get to be the one guessing, which would be a permissible change to the stage game given a specific subgame strategy constellation) to be too high to implement – but my brother didn’t know that.
The first modelling error here was made by me when I was deciding which card to pick. I randomized purely – basically I shuffled the two cards and picked one at random. This was just me being stupid, because a 50/50 mix is obviously not the best mixed strategy (it is only optimal when the expected incomes of the two cards are equal). One way to think about it is that a 50% likelihood of picking either card gives you an expected income of 0.5 × (0.5 × 2 + 0.5 × 5) = 1.75 if your opponent also mixes 50/50 – the leading 0.5 is the probability his guess misses – and foolishly I’d considered only that strategic response to my mixing strategy. The problem is that of course the opponent needn’t mix at all! Against my 50/50 mix, his best response is the pure strategy of always guessing ‘5’ – if he always guesses ‘5’, I end up with an average income of only 1 (I collect an income of 2 every second round). I realized this 5 seconds after I’d picked my card…
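The arithmetic of this subgame can be sketched in a few lines. The card values (2 and 5) are from the game above; everything else is plain expected-value arithmetic, remembering that a correct guess leaves the hider with nothing that round:

```python
# Expected income for the 'hider' in the card-guessing subgame.
# If the guesser names the card the hider actually picked, the
# hider's income that round is 0; otherwise the hider collects
# the card's value (2 or 5).

def hider_income(p_hide_2, p_guess_2):
    """Hider's expected income, given each player's probability of
    choosing (resp. guessing) the '2' card."""
    return (p_hide_2 * (1 - p_guess_2) * 2        # hid '2', guess missed
            + (1 - p_hide_2) * p_guess_2 * 5)     # hid '5', guess missed

print(hider_income(0.5, 0.5))  # 1.75 - both players mix 50/50
print(hider_income(0.5, 0.0))  # 1.0  - guesser always names '5'
print(hider_income(0.5, 1.0))  # 2.5  - guesser always names '2'
```

Against the 50/50 shuffle, always guessing ‘5’ is the guesser’s best response, holding the hider to an average income of 1.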
This is where we get to the second modelling error. After that specific round had been played – he’d guessed ‘5’ and I’d gotten lucky and randomly picked ‘2’, so the inferior strategy did not cost me anything in this case – my little brother said that ‘of course he’d picked 5, it was the dominant strategy’. I thought that this was true against a 50/50 mix on my part, but that it would not be an optimal response to mixing strategies with a low probability of playing ‘5’ (nor to the pure strategy of always playing ‘2’). I assumed we’d play at least four more rounds, and in that case it would probably be optimal to go with a mixing strategy of roughly 30/70 or something along those lines (i.e. one ‘5’ and three ‘2’s in the rounds to come). I figured that 5 is 2.5 times as much as 2, so in equilibrium I should play ‘2’ 2.5 times as often as ‘5’; i.e. 2.5 ‘2’s for every ‘5’, meaning I should play ‘2’ in 2.5 out of 3.5 rounds, which is about 70% of the time. I assumed my little brother would mix as well in the rounds to come, when I would no longer obviously be mixing 50/50, and that he’d reach a similar conclusion: that he should guess ‘5’ more often than ‘2’ to minimize my potential income, ending up near the (assumed) long-run equilibrium. After the game my little brother made it clear that he had not mixed but had guessed ‘5’ every time, and he stated that he’d picked that strategy because it was ‘the dominant strategy’ and would be his best response to any strategy I could come up with. Which it clearly wasn’t.
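The ~70/30 heuristic above can be checked directly: in the mixed-strategy equilibrium the hider’s probabilities make the guesser indifferent between his two guesses. A small sketch of that calculation, using the card values from the game:

```python
from fractions import Fraction

# With p = P(hide '2'), the hider's expected income is
#   (1 - p) * 5  if the guesser always names '2' (he only misses the '5')
#   p * 2        if the guesser always names '5' (he only misses the '2')
# Setting the two equal gives the equilibrium mix.
p = Fraction(5, 7)  # solves p * 2 == (1 - p) * 5

income_if_guess_2 = (1 - p) * 5
income_if_guess_5 = p * 2
print(float(p))                  # ≈ 0.714, i.e. roughly the 70/30 split
print(float(income_if_guess_2))  # ≈ 1.43, secured whatever the guesser does
```

Playing ‘2’ five rounds out of seven guarantees an expected income of 10/7 ≈ 1.43 per round against any guessing strategy – better than the 1 the exploitable 50/50 shuffle can be held to.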
ii. I went shopping yesterday. I got to the store and it was full of people. I generally dislike shopping when there are a lot of people around, and I usually avoid this by strategically shopping at times of day when I know few people go shopping. I have previously arrived at a store, decided it was too full of people, and postponed my shopping because of that, but yesterday I decided instead to just get it over with fast. When I came back home I remembered that it’s been mentioned in the papers that a lot of people in Aarhus are sick with influenza, and so I realized that I’d just exposed myself to a huge health risk considering how many people were in the store. If asked about this kind of thing before I left my home, I’d have said that such a risk would be completely unacceptable to me, because I have exams before long and it would thus be very inconvenient for me to get sick at this point. If I’d included that health risk in my model, I would not have gone shopping yesterday.
I will often avoid taking public transportation when it’s possible for me to do so, for similar health-related reasons – diseases are easily transmitted in such environments. People often forget to include risks like these in their mental models. That’s poor modelling.
Even (reasonably) simple card games and everyday decisions about stuff like when and where to go grocery shopping can include models too complex for humans to handle well; our cognitive limitations are easy to ignore if we don’t think about them, but they’re there just the same. Social dynamics are usually a lot more complex to model than the stuff in the post. Sometimes it seems almost unbelievable to me that people somehow make all this stuff work – taking all those decisions they do on an average day, interacting with all those other people along the way… Given how complex the world is and how even very simple things like a card game can cause us all kinds of problems when we start thinking about them, I find this pretty amazing to think about.
“SUMMARY AND CONCLUSIONS
Documents provided by the Department of Energy reveal the frequent and systematic use of human subjects as guinea pigs for radiation experiments. Some experiments were conducted in the 1940s at the dawn of the nuclear age, and might be attributed to an ignorance of the long term effects of radiation exposure, or to the atomic hubris that accompanied the making of the first nuclear bombs. But other experiments were conducted during the supposedly more enlightened 1960s and 1970s. In either event, such experiments cannot be excused.
These experiments were conducted under the sponsorship of the Manhattan Project, the Atomic Energy Commission, or the Energy Research and Development Administration, all predecessor agencies of the Department of Energy. These experiments spanned roughly thirty years. This report presents the findings of the Subcommittee staff on this project.
Literally hundreds of individuals were exposed to radiation in experiments which provided little or no medical benefit to the subjects. The chief objectives of these experiments were to directly measure the biological effects of radioactive material; to measure doses from injected, ingested, or inhaled radioactive substances; or to measure the time it took radioactive substances to pass through the human body. American citizens thus became nuclear calibration devices.
In many cases, subjects willingly participated in experiments, but they became willing guinea pigs nonetheless. In some cases, the human subjects were captive audiences or populations that experimenters might frighteningly have considered “expendable”: the elderly, prisoners, hospital patients suffering from terminal diseases or who might not have retained their full faculties for informed consent. For some human subjects, informed consent was not obtained or there is no evidence that informed consent was granted. For a number of these same subjects, the government covered up the nature of the experiments and deceived the families of deceased victims as to what had transpired. In many experiments, subjects received doses that approached or even exceeded presently recognized limits for occupational radiation exposure. Doses were as great as 98 times the body burden recognized at the time the experiments were conducted.”
It seems that the Tuskegee syphilis experiment wasn’t quite as unique as I’d thought.
ii. Diuretic Treatment of Hypertension. Interesting, lots of stuff there I didn’t know.
“After adjusting for age, sex, education, and race/ethnicity, risk of death was higher in low-income than high-income group for both all-cause mortality (Hazard ratio [HR], 1.98; 95% confidence interval [CI]: 1.37, 2.85) and cardiovascular disease (CVD)/diabetes mortality (HR, 3.68; 95% CI: 1.64, 8.27). The combination of the four pathways attenuated 58% of the association between income and all-cause mortality and 35% of that of CVD/diabetes mortality. Health behaviors attenuated the risk of all-cause and CVD/diabetes mortality by 30% and 21%, respectively, in the low-income group. Health status attenuated 39% of all-cause mortality and 18% of CVD/diabetes mortality, whereas, health insurance and inflammation accounted for only a small portion of the income-associated mortality (≤6%).
Excess mortality associated with lower income can be largely accounted for by poor health status and unhealthy behaviors. Future studies should address behavioral modification, as well as possible strategies to improve health status in low-income people.”
iv. Influence of Opinion Dynamics on the Evolution of Games. I’ve only just skimmed this, but it looks interesting. Here’s the abstract:
“Under certain circumstances such as lack of information or bounded rationality, human players can take decisions on which strategy to choose in a game on the basis of simple opinions. These opinions can be modified after each round by observing own or others’ payoff results but can also be modified after interchanging impressions with other players. In this way, the update of the strategies can become a question that goes beyond simple evolutionary rules based on fitness and become a social issue. In this work, we explore this scenario by coupling a game with an opinion dynamics model. The opinion is represented by a continuous variable that corresponds to the certainty of the agents with respect to which strategy is best. The opinions transform into actions by making the selection of a strategy a stochastic event with a probability regulated by the opinion. A certain regard for the previous round payoff is included but the main update rules of the opinion are given by a model inspired by social interchanges. We find that the fixed points of the dynamics of the coupled model are different from those of the evolutionary game or the opinion models alone. Furthermore, new features emerge such as the independence of the fraction of cooperators with respect to the topology of the social interaction network or the presence of a small fraction of extremist players.”
v. This is awesome.
“Determining the fitness consequences of sibling interactions is pivotal for understanding the evolution of family living, but studies investigating them across lifetime are lacking. We used a large demographic dataset on preindustrial humans from Finland to study the effect of elder siblings on key life-history traits. The presence of elder siblings improved the chances of younger siblings surviving to sexual maturity, suggesting that despite a competition for parental resources, they may help rearing their younger siblings. After reaching sexual maturity however, same-sex elder siblings’ presence was associated with reduced reproductive success in the focal individual, indicating the existence of competition among same-sex siblings. Overall, lifetime fitness was reduced by same-sex elder siblings’ presence and increased by opposite-sex elder siblings’ presence. Our study shows opposite effects of sibling interactions depending on the life-history stage, and highlights the need for using long-term fitness measures to understand the selection pressures acting on sibling interactions.”
Where did they get their data? Well, it was hard for people living in the 17th and 18th century to avoid death or taxes too:
“The demographic dataset from historical Finnish populations was compiled from records of the Lutheran church, which was obliged by law to document all dates of births, marriages and deaths in the population for tax purposes [25–29]. As migration events were relatively rare and the migration records maintained by the church allowed us to follow dispersers in the majority of the cases, these records provide us with relatively accurate information on individual survival and reproductive histories (e.g. 91% of individuals with known birth date were followed to sexual maturity at age 15 years). Our study period is limited to the eighteenth and nineteenth centuries, before the transition to reduced birth and mortality rates.”
vii. I’ve posted about this topic before, here’s a new study on cancer screening procedures: Effect of Three Decades of Screening Mammography on Breast-Cancer Incidence. I think the results are depressing:
“The introduction of screening mammography in the United States has been associated with a doubling in the number of cases of early-stage breast cancer that are detected each year, from 112 to 234 cases per 100,000 women — an absolute increase of 122 cases per 100,000 women. Concomitantly, the rate at which women present with late-stage cancer has decreased by 8%, from 102 to 94 cases per 100,000 women — an absolute decrease of 8 cases per 100,000 women. With the assumption of a constant underlying disease burden, only 8 of the 122 additional early-stage cancers diagnosed were expected to progress to advanced disease. After excluding the transient excess incidence associated with hormone-replacement therapy and adjusting for trends in the incidence of breast cancer among women younger than 40 years of age, we estimated that breast cancer was overdiagnosed (i.e., tumors were detected on screening that would never have led to clinical symptoms) in 1.3 million U.S. women in the past 30 years. We estimated that in 2008, breast cancer was overdiagnosed in more than 70,000 women; this accounted for 31% of all breast cancers diagnosed.
Despite substantial increases in the number of cases of early-stage breast cancer detected, screening mammography has only marginally reduced the rate at which women present with advanced cancer. Although it is not certain which women have been affected, the imbalance suggests that there is substantial overdiagnosis, accounting for nearly a third of all newly diagnosed breast cancers, and that screening is having, at best, only a small effect on the rate of death from breast cancer.”
i. Tasmanian Devil (featured).
“The Tasmanian devil (Sarcophilus harrisii) is a carnivorous marsupial of the family Dasyuridae, now found in the wild only on the Australian island state of Tasmania. The size of a small dog, it became the largest carnivorous marsupial in the world following the extinction of the thylacine in 1936. It is characterised by its stocky and muscular build, black fur, pungent odour, extremely loud and disturbing screech, keen sense of smell, and ferocity when feeding. The Tasmanian devil’s large head and neck allow it to generate amongst the strongest bite per unit body mass of any extant mammal land predator, and it hunts prey and scavenges carrion as well as eating household products if humans are living nearby. Although it usually is solitary, it sometimes eats with other devils and defecates in a communal location. Unlike most other dasyurids, the devil thermoregulates effectively and is active during the middle of the day without overheating. Despite its rotund appearance, the devil is capable of surprising speed and endurance, and can climb trees and swim across rivers. […]
On average, devils eat about 15% of their body weight each day, although they can eat up to 40% of their body weight in 30 minutes if the opportunity arises. This means they can become very heavy and lethargic after a large meal; in this state they tend to waddle away slowly and lie down, becoming easy to approach. […]
Since the late 1990s, devil facial tumour disease has drastically reduced the devil population and now threatens the survival of the species, which in 2008 was declared to be endangered. Programs are currently being undertaken by the Government of Tasmania to reduce the impact of the disease, including an initiative to build up a group of healthy devils in captivity, isolated from the disease. […] First seen in 1996, devil facial tumour disease (DFTD) has ravaged Tasmania’s wild devils, and estimates of the impact range from 20% to as much as a 50% decline in the devil population, with over 65% of the state affected. The state’s west coast area and far north-west are the only places where devils are tumour free. Individual devils die within months of infection.
The disease is an example of a transmissible cancer, which means that it is contagious and passed from one animal to another. Short of a cure, scientists are removing the sick animals and quarantining healthy devils in case the wild population dies out. Because Tasmanian devils have extremely low levels of genetic diversity and a chromosomal mutation unique among carnivorous mammals, they are more prone to the infectious cancer.”
ii. Mengistu Haile Mariam. A bad guy.
He “is an Ethiopian politician who was the most prominent officer of the Derg, the Communist military junta that governed Ethiopia from 1974 to 1987, and President of the People’s Democratic Republic of Ethiopia from 1987 to 1991. He oversaw the Ethiopian Red Terror of 1977–1978, a campaign of repression against the Ethiopian People’s Revolutionary Party and other anti-Derg factions. Mengistu fled to Zimbabwe in 1991 at the conclusion of the Ethiopian Civil War, and remains there despite an Ethiopian court verdict finding him guilty in absentia of genocide. Some estimates of the number of deaths for which his regime was responsible run as high as 1.285 million.”
iii. Waldseemüller map.
The full version is at the link, 29,700 × 16,500 pixels and almost 100 MB.
“The Waldseemüller map, Universalis Cosmographia, is a printed wall map of the world by German cartographer Martin Waldseemüller, originally published in April 1507. It is known as the first map to use the name “America“. The map is drafted on a modification of Ptolemy’s second projection, expanded to accommodate the Americas and the high latitudes. A single copy of the map survives, presently housed at the Library of Congress in Washington, D.C. […]
While some maps after 1500 show, with ambiguity, an eastern coastline for Asia distinct from the Americas, the Waldseemüller map apparently indicates the existence of a new ocean between the trans-Atlantic regions of the Spanish discoveries and the Asia of Ptolemy and Marco Polo as exhibited on the 1492 Behaim globe. The first historical records of Europeans to set eyes on this ocean, the Pacific, are recorded as Vasco Núñez de Balboa in 1513, or Ponce de León in 1512 or 1513. Those dates are five to six years after Waldseemüller made his map. […] The historian Peter Whitfield has theorized that Waldseemüller incorporated the ocean into his map because Vespucci’s accounts of the Americas, with their so-called “savage” peoples, could not be reconciled with contemporary knowledge of India, China, and the islands of the Indies. Thus, in the view of Whitfield, Waldseemüller reasoned that the newly discovered lands could not be part of Asia, but must be separate from it, a leap of intuition that was later proved uncannily precise.”
iv. Battle of Arnhem. The Wikipedia community thinks it’s a ‘good article’, I think it’s great.
“In game theory, the centipede game, first introduced by Rosenthal (1981), is an extensive form game in which two players take turns choosing either to take a slightly larger share of a slowly increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one’s opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round. Although the traditional centipede game had a limit of 100 rounds (hence the name), any game with this structure but a different number of rounds is called a centipede game. The unique subgame perfect equilibrium (and every Nash equilibrium) of these games indicates that the first player take the pot on the very first round of the game; however, in empirical tests relatively few players do so, and as a result achieve a higher payoff than the payoff predicted by the equilibria analysis. These results are taken to show that subgame perfect equilibria and Nash equilibria fail to predict human play in some circumstances. The centipede game is commonly used in introductory game theory courses and texts to highlight the concept of backward induction and the iterated elimination of dominated strategies, which show a standard way of providing a solution to the game.”
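The backward-induction argument in the quote is mechanical enough to write down. A minimal sketch, with an invented textbook-style payoff scheme (taking in round r gives the mover r + 2 and the other player r; if the final mover passes instead, the mover gets n and the other player n + 2) rather than the payoffs of any particular source:

```python
def solve_centipede(n_rounds):
    """Backward induction on a small centipede game.

    Taking in round r (0-indexed) pays the current mover r + 2 and the
    other player r; if the last mover passes, the payoffs are
    (n_rounds, n_rounds + 2). Returns the optimal action per round and
    the (mover, other) payoffs at round 0 in the subgame perfect
    equilibrium.
    """
    actions = [None] * n_rounds
    value = None
    for r in reversed(range(n_rounds)):
        take = (r + 2, r)  # (current mover, other) if the mover takes now
        if r == n_rounds - 1:
            passed = (n_rounds, n_rounds + 2)  # terminal split, opponent favoured
        else:
            passed = (value[1], value[0])  # next round's mover is this round's opponent
        if take[0] >= passed[0]:
            actions[r], value = "take", take
        else:
            actions[r], value = "pass", passed
    return actions, value

actions, value = solve_centipede(100)
print(actions[0], value)  # take (2, 0): the first player stops immediately
```

Induction makes ‘take’ optimal in every round, so the first player pockets 2 and the other player 0 – even though passing all the way would have left both with far more, which is exactly why real players so rarely follow the equilibrium.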
vii. Compromise of 1850.
“The Compromise of 1850 was a package of five bills, passed in September 1850, which defused a four-year confrontation between the slave states of the South and the free states of the North regarding the status of territories acquired during the Mexican-American War (1846–1848). The compromise, drafted by Whig Senator Henry Clay of Kentucky and brokered by Clay and Democrat Stephen Douglas, avoided secession or civil war and reduced sectional conflict for four years.
The Compromise was greeted with relief, although each side disliked specific provisions.”
The article has much more, including plenty of relevant maps.
Just some random notes; I probably shouldn’t publish this, but I decided to do it anyway even though it’s not very structured.
So, I started out just by thinking about a simple question: Why do people talk with/to each other?
Now, we all know that there’s no simple answer to that question. There are answers – many of them. Categories like information exchange and social bonding/social relations management probably cover many of the reasons, though there are others. Theoretically there’s probably a distinction to be made between conversations where people are very aware of what they want to accomplish and how the conversation can be expected to proceed (a conversation with a coworker about the new DHL-standards, a board meeting with a 12-point agenda, a doctor’s conversation with a patient), and conversations where the goal(s) is (are) more hazy and the expected duration is much more uncertain. Many of the conversations people would, if asked directly, be uncertain why they even engaged in can probably be argued to have quite clear goals if perceived in a certain light; goals having to do with social relations management and bonding. If you find yourself in a situation where you don’t know why you’re talking, you’re probably doing it for reasons having to do with social relations management/bonding. And if you feel the need to ask yourself why you’re talking with the person with whom you’re talking (‘why am I even talking to this guy?’), you probably won’t be for long.
Conversations usually evolve over time because of interaction effects; new inputs are delivered along the way, shaping the direction of the conversation. Two conversations with roughly the same starting point can end up in very different places. It’s worth noting that the inputs supplied can be verbal as well as non-verbal, and people often underestimate the impact non-verbal behaviour may have on a conversation/social interaction.
Human interaction is too complex for it to be optimal for people engaging in conversations to always think hard about stuff like what to say and what not to say, or how and when to say whatever it is that (perhaps?) needs saying. Conversations proceed at a much faster speed than the human brain can process all the potentially relevant information, and so a lot of information gets excluded by default. Conveniently, we do not think much about the fact that there are a lot of things we don’t think about when interacting with others. Excluding a lot of information and ideas means that the communication gets more efficient, at least if measured in terms of words per minute or similar metrics. Body language can convey a lot of information fast, so people who are good at using it (and at reading it) will, ceteris paribus, be better communicators than people who are not.
Many conversations follow, at least to some extent, some basic scripts people have internalized. Most people know pretty well how to react when asked a question like ‘how are you?’ and they know the general direction in which a conversation starting in such a manner may be expected to proceed, just as they know what to say when a person shares the information that he recently got one day older than he was the day before. We often don’t think very much about the meta-aspects related to what to say in any given social situation, because if we had to do that all the time we couldn’t really do anything else.
However, even though both a lot of the stuff we talk about and the way we talk about it to a very large extent follow scripts, a lot of feedback still takes place along the way; you need to be aware at all times of whether the other person is following the script, and you need to be aware of which script is the right one to apply to the specific part of the conversation in question (is the secretary bringing up her weekend plans because she’s trying to tell you she can’t work overtime this Saturday, or because she wants you to ask her out?). Human behaviour is incredibly complex, but we’re much too used to all this complexity to ever truly notice it. When one starts to think about how conversations work, it becomes clear that there are all kinds of ‘crazy’ ways for people to break the script along the way: shouting loud inappropriate remarks in the middle of a sentence, turning your back on the person with whom you converse, asking a random question having nothing to do with the topic discussed, sitting down on the floor while the other person is talking, moving your elbows up and down randomly while the other person is talking, punching the other guy in the stomach… The fact that people don’t even think about how inappropriate it would be to just sit down on the floor while talking to a coworker at the watercooler is an indication of just how narrow the range of acceptable behaviour is. But we don’t notice, because we don’t think about such things. Which I find interesting.
In game theory a well-known concept is the idea of a zero-sum game. I like to think of many arguments – especially political and similar arguments – as zero-sum games. X and Y will start out with different sets of arguments supporting their causes. The ‘winner’ of the argument will say that his set of arguments was better than that of the other party. Rarely will X and Y meet and discuss how to improve the argument sets of both X and Y. The idea is not to weed out bad arguments and replace them with good ones; the idea is to win, and that’s often easier to do with many arguments than with just a few. If X cedes the point that one of his arguments was not convincing, it will generally harm the cause of X and help Y win the argument.
Now one might argue here that human interaction would be more pleasant if people didn’t engage in ‘zero-sum conversation games’ such as the ones described above, but rather always tried to make human interaction positive-sum. In case you were in doubt, this is not where I am heading. The truth is that as long as there are surpluses of some kind somewhere, someone will try to grab part of that surplus if it is within that person’s reach. Organisms which behave that way have more children in the long run, and when it comes to human behaviour there’s a limit to how much culture matters. Another way to think about such ‘political arguments as zero-sum games’ is to think of them as a huge and important technical innovation – a great improvement upon the kinds of zero-sum games people engaged in before the advent of political debates as conflict-resolution mechanisms.
i. Ironclad warship.
“An ironclad was a steam-propelled warship in the early part of the second half of the 19th century, protected by iron or steel armor plates. The ironclad was developed as a result of the vulnerability of wooden warships to explosive or incendiary shells. The first ironclad battleship, La Gloire, was launched by the French Navy in November 1859. […]
The rapid evolution of warship design in the late 19th century transformed the ironclad from a wooden-hulled vessel that carried sails to supplement its steam engines into the steel-built, turreted battleships and cruisers familiar in the 20th century. This change was pushed forward by the development of heavier naval guns (the ironclads of the 1880s carried some of the heaviest guns ever mounted at sea), more sophisticated steam engines, and advances in metallurgy which made steel shipbuilding possible.
The rapid pace of change in the ironclad period meant that many ships were obsolete as soon as they were complete, and that naval tactics were in a state of flux. Many ironclads were built to make use of the ram or the torpedo, which a number of naval designers considered the crucial weapons of naval combat. There is no clear end to the ironclad period, but towards the end of the 1890s the term ironclad dropped out of use. New ships were increasingly constructed to a standard pattern and designated battleships or armored cruisers. […]
From the 1860s to the 1880s many naval designers believed that the development of the ironclad meant that the ram was again the most important weapon in naval warfare. With steam power freeing ships from the wind, and armor making them invulnerable to shellfire, the ram seemed to offer the opportunity to strike a decisive blow.
The scant damage inflicted by the guns of Monitor and Virginia at the Battle of Hampton Roads and the spectacular but lucky success of the Austrian flagship Ferdinand Max in sinking the Italian Re d’Italia at Lissa gave strength to the ramming craze. From the early 1870s to early 1880s most British naval officers thought that guns were about to be replaced as the main naval armament by the ram. Those who noted the tiny number of ships that had actually been sunk by ramming struggled to be heard.
The revival of ramming had a significant effect on naval tactics. Since the 17th century the predominant tactic of naval warfare had been the line of battle, where a fleet formed a long line to give it the best fire from its broadside guns. This tactic was totally unsuited to ramming, and the ram threw fleet tactics into disarray. The question of how an ironclad fleet should deploy in battle to make best use of the ram was never tested in battle, and if it had been, combat might have shown that rams could only be used against ships which were already stopped dead in the water.”
This is what one of them looked like; click to view full size*:
ii. Allometry. John Hawks talked about this a bit in one of his lectures, I decided to look it up:
“Allometry is the study of the relationship of body size to shape, anatomy, physiology and finally behaviour […] Allometry often studies shape differences in terms of ratios of the objects’ dimensions. Two objects of different size but common shape will have their dimensions in the same ratio. Take, for example, a biological object that grows as it matures. Its size changes with age but the shapes are similar. […]
In addition to studies that focus on growth, allometry also examines shape variation among individuals of a given age (and sex), which is referred to as static allometry. Comparisons of species are used to examine interspecific or evolutionary allometry […]
Isometric scaling occurs when changes in size (during growth or over evolutionary time) do not lead to changes in proportion. […] Isometric scaling is governed by the square-cube law. An organism which doubles in length isometrically will find that the surface area available to it will increase fourfold, while its volume and mass will increase by a factor of eight. This can present problems for organisms. In the example above, the animal now has eight times the biologically active tissue to support, but the surface area of its respiratory organs has only increased fourfold, creating a mismatch between scaling and physical demands. Similarly, the organism in the above example now has eight times the mass to support on its legs, but the strength of its bones and muscles is dependent upon their cross-sectional area, which has only increased fourfold. Therefore, this hypothetical organism would experience twice the bone and muscle loads of its smaller version. This mismatch can be avoided either by being “overbuilt” when small or by changing proportions during growth […] Allometric scaling is any change that deviates from isometry. […]
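The square-cube arithmetic in the quote is easy to verify; here’s a small sketch (the function name and numbers are my own illustration):

```python
def isometric_scaling(length_factor):
    """Under isometry, surface area scales with the square of length and
    volume (hence mass) with the cube - the square-cube law."""
    return length_factor ** 2, length_factor ** 3

# Doubling length: 4x the surface area, 8x the volume/mass.
area_factor, volume_factor = isometric_scaling(2)
print(area_factor, volume_factor)  # 4 8
# Load per unit of cross-sectional area grows by 8/4 = 2, which is the
# 'twice the bone and muscle loads' figure in the quote.
```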
“In plotting an animal’s basal metabolic rate (BMR) against the animal’s own body mass, a logarithmic straight line is obtained. Overall metabolic rate in animals is generally accepted to show negative allometry, scaling to mass to a power ≈ 0.75, known as Kleiber’s law (1932). This means that larger-bodied species (e.g., elephants) have lower mass-specific metabolic rates and lower heart rates, as compared with smaller-bodied species (e.g., mice); this straight line is known as the “mouse to elephant curve”.
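The ≈ 0.75 exponent has a neat consequence: per-kilogram metabolic rate falls as M^(-0.25) with body size. A minimal sketch (the normalisation constant k is my own rough assumption, not from the article):

```python
def kleiber_bmr(mass_kg, k=3.4):
    """Basal metabolic rate ~ k * M^0.75 (Kleiber's law). The constant k
    (in watts, roughly right for mammals) is an assumed value."""
    return k * mass_kg ** 0.75

def mass_specific_bmr(mass_kg, k=3.4):
    # Per-kilogram metabolic rate scales as M^(-0.25): it falls with size.
    return kleiber_bmr(mass_kg, k) / mass_kg

# The 'mouse to elephant curve': a mouse burns far more energy per kg
# of body mass than an elephant does.
print(mass_specific_bmr(0.03) > mass_specific_bmr(4000))  # True
```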
iii. Arthropod.

“An arthropod is an invertebrate animal having an exoskeleton (external skeleton), a segmented body, and jointed appendages. Arthropods are members of the phylum Arthropoda (from Greek ἄρθρον árthron, “joint”, and ποδός podós “leg”, which together mean “jointed leg”), and include the insects, arachnids, crustaceans, and others. Arthropods are characterized by their jointed limbs and cuticles, which are mainly made of α-chitin; the cuticles of crustaceans are also biomineralized with calcium carbonate. The rigid cuticle inhibits growth, so arthropods replace it periodically by molting. The arthropod body plan consists of repeated segments, each with a pair of appendages. It is so versatile that arthropods have been compared to Swiss Army knives, and it has enabled them to become the most species-rich members of all ecological guilds in most environments. They have over a million described species, making up more than 80% of all described living animal species, and are one of only two animal groups that are very successful in dry environments – the other being the amniotes. They range in size from microscopic plankton up to forms a few meters long.”
Another way to put it – it’s these guys:
I thought the stuff on molting (Ecdysis) was interesting:
“The exoskeleton cannot stretch and thus restricts growth. Arthropods therefore replace their exoskeletons by molting, or shedding the old exoskeleton after growing a new one that is not yet hardened. Molting cycles run nearly continuously until an arthropod reaches full size. […] In the initial phase of molting, the animal stops feeding and its epidermis releases molting fluid, a mixture of enzymes that digests the endocuticle and thus detaches the old cuticle. This phase begins when the epidermis has secreted a new epicuticle to protect it from the enzymes, and the epidermis secretes the new exocuticle while the old cuticle is detaching. When this stage is complete, the animal makes its body swell by taking in a large quantity of water or air, and this makes the old cuticle split along predefined weaknesses where the old exocuticle was thinnest. It commonly takes several minutes for the animal to struggle out of the old cuticle. At this point the new one is wrinkled and so soft that the animal cannot support itself and finds it very difficult to move, and the new endocuticle has not yet formed. The animal continues to pump itself up to stretch the new cuticle as much as possible, then hardens the new exocuticle and eliminates the excess air or water. By the end of this phase the new endocuticle has formed. Many arthropods then eat the discarded cuticle to reclaim its materials.
Because arthropods are unprotected and nearly immobilized until the new cuticle has hardened, they are in danger both of being trapped in the old cuticle and of being attacked by predators. Molting may be responsible for 80 to 90% of all arthropod deaths.”
It’s a long article, and it has a lot of good stuff (and lots of links).
iv. Scottish independence referendum, 2014. I did not know about this.
v. Coordination game.

“In game theory, coordination games are a class of games with multiple pure strategy Nash equilibria in which players choose the same or corresponding strategies. Coordination games are a formalization of the idea of a coordination problem, which is widespread in the social sciences, including economics, meaning situations in which all parties can realize mutual gains, but only by making mutually consistent decisions. […]
A typical case for a coordination game is choosing the side of the road upon which to drive, a social standard which can save lives if it is widely adhered to. […] In a simplified example, assume that two drivers meet on a narrow dirt road. Both have to swerve in order to avoid a head-on collision. If both execute the same swerving maneuver they will manage to pass each other, but if they choose differing maneuvers they will collide. […] In this case there are two pure Nash equilibria: either both swerve to the left, or both swerve to the right. In this example, it doesn’t matter which side both players pick, as long as they both pick the same. Both solutions are Pareto efficient. This is not true for all coordination games”
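The two pure Nash equilibria in the swerving example can be found mechanically by checking, for each strategy pair, whether either player could gain by deviating alone. A minimal sketch (the 1/0 payoff numbers are my own illustrative choice):

```python
# Strategies: 0 = swerve left, 1 = swerve right.
# payoffs[(a, b)] = (utility to driver 1, utility to driver 2).
payoffs = {
    (0, 0): (1, 1), (0, 1): (0, 0),
    (1, 0): (0, 0), (1, 1): (1, 1),
}

def pure_nash(payoffs, strategies=(0, 1)):
    """Return all strategy pairs where neither player gains by deviating alone."""
    equilibria = []
    for a in strategies:
        for b in strategies:
            u1, u2 = payoffs[(a, b)]
            best_for_1 = all(u1 >= payoffs[(alt, b)][0] for alt in strategies)
            best_for_2 = all(u2 >= payoffs[(a, alt)][1] for alt in strategies)
            if best_for_1 and best_for_2:
                equilibria.append((a, b))
    return equilibria

print(pure_nash(payoffs))  # [(0, 0), (1, 1)]: both-left and both-right
```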
vi. Ostrogothic Kingdom.

I have not yet read all of the relevant material covering this subject in Heather, so I don’t know the extent to which he (or others) disagrees with Bury (who seems to be the main source of the article). But if you didn’t know there was such a thing as an Ostrogothic Kingdom in the first place, reading the article will probably not be a step in the wrong direction.
vii. Speleology. Yet another one of those areas of research you have probably never thought about:
“Speleology (also spelled spelæology or spelaeology) is the scientific study of caves and other karst features, their make-up, structure, physical properties, history, life forms, and the processes by which they form (speleogenesis) and change over time (speleomorphology). The term speleology is also sometimes applied to the recreational activity of exploring caves, but this is more properly known as caving, spelunking or potholing. Speleology and caving are often connected, as the physical skills required for in situ study are the same.
Speleology is a cross-disciplinary field that combines the knowledge of chemistry, biology, geology, physics, meteorology and cartography to develop portraits of caves as complex, evolving systems.”
I thought the article on troglobites (small cave-dwelling animals which live permanently underground and cannot survive outside the cave environment), which it links to, was interesting too.
* I decided to present readers with an alternative way of posting images on the blog, which I’m considering applying in the future. I have been made aware that the current modus operandi – posting pictures full-size in the posts – is not always optimal given readers’ browser preferences and the tools they use to access the site (‘modern gadgets’ vs PC). I should make it clear that if you read this blog on a PC in a Firefox browser with a pretty standard screen resolution, it looks fine. Because that’s how I access and view the site.
I am, and have been for a very long time, afraid that the blog will turn too much into a wall of text and I keep reminding myself that I should take active countermeasures to prevent this from happening. I don’t care that much about illustrations and images, but I know that many people do. Is this way of presenting images which I have applied in the post – relatively small thumbs which you can click if you want to see them in full size – (much) better than the alternative?
One more thing. I know that it’s quite possible that the reason stuff like images sometimes look like crap is because the chosen theme for the blog is not optimal. But I also know that the last time I changed the theme, everything went to hell and it took me days to handle the problems which the theme change caused. That was, mind you, at a point in time where the number of posts was less than a fourth of what it is today. If I change the theme, it affects at least every post I’ve written in the last 4 years. I have no idea how it will impact stuff like videos. So even if the theme is not optimal, changing it is not an option if I can avoid it.
2. Effect of psychoactive drugs on animals. It’s not a long article, but I had to link to it because of these awesome images:
If you’d rather read about the caffeine that’s having such a huge effect on spiders, here’s the article. Here’s one bit that I found interesting:
“Extreme overdose can result in death. The median lethal dose (LD50) given orally, is 192 milligrams per kilogram in rats. The LD50 of caffeine in humans is dependent on individual sensitivity, but is estimated to be about 150 to 200 milligrams per kilogram of body mass or roughly 80 to 100 cups of coffee for an average adult. Though achieving lethal dose with caffeine would be exceptionally difficult with regular coffee, there have been reported deaths from overdosing on caffeine pills, with serious symptoms of overdose requiring hospitalization occurring from as little as 2 grams of caffeine. An exception to this would be taking a drug such as fluvoxamine or levofloxacin, which blocks the liver enzyme responsible for the metabolism of caffeine, thus increasing the central effects and blood concentrations of caffeine five-fold. Death typically occurs due to ventricular fibrillation brought about by effects of caffeine on the cardiovascular system.”
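The ‘80 to 100 cups’ figure is easy to sanity-check from the quoted LD50; a quick back-of-the-envelope sketch (the 70 kg body mass and the mg-of-caffeine-per-cup figure are my own assumptions, not from the article):

```python
def lethal_cups(ld50_mg_per_kg, body_mass_kg, mg_caffeine_per_cup):
    """Number of cups of coffee needed to reach the LD50 dose."""
    return ld50_mg_per_kg * body_mass_kg / mg_caffeine_per_cup

# Assumed inputs: a 70 kg adult and roughly 130 mg of caffeine per cup.
print(round(lethal_cups(150, 70, 130)))  # ~81 cups, at the low end of the LD50 range
```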
3. Schwarzschild radius.

“The Schwarzschild radius (sometimes historically referred to as the gravitational radius) is the distance from the center of an object such that, if all the mass of the object were compressed within that sphere, the escape speed from the surface would equal the speed of light.” (The article has much more.)
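The definition translates directly into the formula r_s = 2GM/c². A quick sketch (the constants are standard textbook values, rounded by me):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # mass of the Sun, kg

def schwarzschild_radius(mass_kg):
    """r_s = 2*G*M/c^2: compress the mass inside this radius and the
    escape speed from the surface equals the speed of light."""
    return 2 * G * mass_kg / C ** 2

# For the Sun this comes out to roughly 3 km.
print(schwarzschild_radius(M_SUN))
```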
4. Peasants’ Revolt.
“The Peasants’ Revolt, Wat Tyler’s Rebellion, or the Great Rising of 1381 was one of a number of popular revolts in late medieval Europe and is a major event in the history of England. Tyler’s Rebellion was not only the most extreme and widespread insurrection in English history but also the best-documented popular rebellion to have occurred during medieval times. The names of some of its leaders, John Ball, Wat Tyler and Jack Straw, are still familiar in popular culture, although little is known of them.
The revolt later came to be seen as a mark of the beginning of the end of serfdom in medieval England, although the revolt itself was a failure. It increased awareness in the upper classes of the need for the reform of feudalism in England and the appalling misery felt by the lower classes as a result of their enforced near-slavery.”
5. Przewalski’s horse. I found the information about the conservation efforts fascinating – this species was saved even though it was about as close to extinction as a species could possibly get. And not only did they survive, some have even been successfully reintroduced into the wild:
“Przewalski’s horse […] or Dzungarian horse, is a rare and endangered subspecies of wild horse (Equus ferus) native to the steppes of central Asia, specifically China and Mongolia. At one time extinct in the wild, it has been reintroduced to its native habitat in Mongolia at the Khustain Nuruu National Park, Takhin Tal Nature Reserve and Khomiin Tal. […]
“The world population of these horses is descended from 9 of the 31 horses in captivity in 1945. These nine horses were mostly descended from approximately 15 captured around 1900. A cooperative venture between the Zoological Society of London and Mongolian scientists has resulted in successful reintroduction of these horses from zoos into their natural habitat in Mongolia; and as of 2011 there is an estimated free-ranging population of over 300 in the wild. The total number of these horses, according to a 2005 census, was about 1,500.”
6. Strategic dominance (game theory).
7. Collatz conjecture.

“Take any natural number n. If n is even, divide it by 2 to get n / 2. If n is odd, multiply it by 3 and add 1 to obtain 3n + 1. Repeat the process (which has been called “Half Or Triple Plus One”, or HOTPO) indefinitely. The conjecture is that no matter what number you start with, you will always eventually reach 1. The property has also been called oneness.” (Long article, lots of stuff, including several examples.)
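The HOTPO iteration is a one-liner to play with; a quick sketch:

```python
def hotpo_steps(n):
    """Apply the 'Half Or Triple Plus One' map until reaching 1; return
    the whole trajectory (assumes the conjecture holds for this n)."""
    if n < 1:
        raise ValueError("n must be a natural number")
    trajectory = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        trajectory.append(n)
    return trajectory

print(hotpo_steps(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```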
Players: i,j (think: male, female)
Preferences: U(IO, II).
IO: Interest overlap.
II: Interest Intensity.
(i,j) have (n,m) interests (they don’t necessarily have equally many), (ni,mj). Let (ki) be the subset of individual i’s interests from the total interest set (ni) which is non-overlapping with the interest set (mj) (non-shared interests), and let (li) be the subset of interests from (ni) which do overlap with (mj) (shared interests). Assume that individual i’s total (negative) utility contribution from the interest set (ki) is equal to [-ki*(aiNO*qiNO)], where II enters the model as a scaling vector aiNO with 0 < aiNO < 1 (0 denotes no interest, 1 denotes high interest), where the NO-part denotes ‘Non-Overlapping’ interests and where q is a relevance factor – some interests are intense, but we don’t care whether the partner shares them. To get a model one can always solve you probably need to assume q is bounded, but in the real world it often isn’t (‘dealbreakers’). Similarly, the interest set (li), which enters both utility functions Ui and Uj, contributes [li*(aiO*qiO)] to individual i’s total utility from entering the relationship, where the O denotes the interests of individual i which ‘Overlap’ with interests from the interest set (mj). Let the reservation utility be zero, and let the total utility from entering the relationship for individual i be li*(aiO*qiO) – ki*(aiNO*qiNO). Do note that the problem is not perfectly symmetric, as the scaling parameter qi is in general not equal to qj, even if (li) = (lj). There’s also the problem that the common interest factor might enter (at least in part) the utility function as a share of total interest space – 2 common interests out of 4 might be better than 2 common interests out of 30. Though you might in some cases be able to let this effect enter the model via q.
Utility matters, but we need a matching likelihood (ML) as well. Let the likelihood that (i,j) meet be a function of l*(aC), where dML/dl and dML/daC are both positive – so people are more likely to meet if they have many common interests, and they are more likely to meet the more intense those interests are (the latter assumption is more dubious than the former; e.g. compare internet chess with ballet). Arguably one might include qC in the ML, because some people’s interest choices are ‘potential partner-relevant’, but it’s easier if we leave that out for now. Assume further that…
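The utility side of the sketch above can be written out in code; this is just a restatement of the expression li*(aiO*qiO) – ki*(aiNO*qiNO), summed interest by interest (the function name and the example numbers are my own illustration):

```python
def partner_utility(shared, nonshared):
    """Individual i's net utility from entering the relationship.

    shared / nonshared: lists of (a, q) pairs - intensity a in (0,1) and
    relevance q >= 0 - for interests that do / do not overlap with the
    partner's interests.
    """
    gain = sum(a * q for a, q in shared)       # shared interests add utility
    loss = sum(a * q for a, q in nonshared)    # non-shared interests subtract
    return gain - loss

# Reservation utility is zero: enter the relationship only if U_i > 0.
# Illustration: two mildly shared interests can't offset one intense,
# highly relevant non-shared interest (a 'dealbreaker'-ish case).
shared = [(0.5, 2.0), (0.25, 2.0)]
nonshared = [(0.75, 4.0)]
print(partner_utility(shared, nonshared) > 0)  # False
```

Note the asymmetry the outline mentions: j's utility would be computed from j's own (a, q) pairs, which in general differ from i's even over the same shared interests.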
The model I was beginning to outline above had zero dynamics, no risk, no ‘family preferences’, no ‘income/status’ variables, no ‘age/looks’ variables, no geography, no beliefs… You might want to remember this model outline next time you hear a social scientist talk about this or that. A very simple model like the one above, with few variables and simple relations between the variables, can still be quite difficult to solve, because you have to think very hard about what’s going on, what you’re assuming along the way, and how to implement decision rules in the model that make the resulting equilibrium(/equilibria) appear plausible (and how to get rid of implausible equilibria). Social behaviour is difficult to model, and it’s hard to get good results in micro setups like these because there are too many variables at play and way too much interaction going on.
Mostly to make clear that even though a low posting frequency often means that I’m feeling less well than I sometimes do, that is not the reason for last week’s low posting frequency. I’m simply too busy to blog much or do stuff that’s blog-worthy. I didn’t really have a weekend this week at all.
Some random stuff/links:
2. How to mate with King vs King + 2 bishops:
3. Ever wondered what a Vickrey auction is and what the optimal bidding strategy in such an auction is? No? Now you know.
4. How long can people hold their breath under water? (and many other things. The answer of course is: ‘It depends…’)