Mechanism and Causality in Biology and Economics

First, here’s a link. Some quotes below, some comments in the last part of the post:

“I refer to an account of causal order based on Simon’s seminal analysis as the structural account. It is structural in the sense that what matters for determining the causal order is the relationship among the parameters and the variables and among the variables themselves. The parameterization – that is, the identification of a privileged set of parameters that govern the functional relationships – is the source of the causal asymmetries that define the causal order. The idea of a privileged parameterization can be made more precise by noting that a set of parameters is privileged when its members are, in the terminology of the econometricians, variation-free. A parameter is variation-free if, and only if, the fact that other parameters take some particular values in their ranges does not restrict the range of admissible values for that parameter.
Defining parameters as variation-free variables has a similar flavor to Hans Reichenbach’s (1956) Principle of the Common Cause: any genuine correlation among variables has a causal explanation – either one causes the other, they are mutual causes, or they have a common cause. Since we represent causal connections as obtaining only between variables simpliciter, we insist that parameters not display any mutual constraints. […] the variation-freeness of parameters is only a representational convention. Any situation in which it appears that putative parameters are mutually constraining can always be rewritten so that the constraints are moved into the functional forms that connect variables to each other.” […]
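To make the variation-free idea concrete, here’s a minimal sketch of my own (not from the book) of a two-variable recursive system in the spirit of Simon’s causal ordering – the parameter names and values are invented for the illustration:

```python
# Toy illustration (my own, not the book's): a two-variable recursive
# system in which the parameters (a, b, c) are variation-free -- each
# can be set anywhere in its range regardless of what the others take.

def solve(a, b, c):
    """Solve the system  X = a,  Y = b + c*X  for given parameters."""
    x = a          # X depends only on the parameter a
    y = b + c * x  # Y depends on X and on the parameters b, c
    return x, y

# Intervening on a changes both X and Y; intervening on b or c changes
# only Y. That asymmetry, fixed by the privileged parameterization, is
# what makes X a cause of Y and not the other way around.
x0, y0 = solve(a=2.0, b=1.0, c=3.0)   # baseline -> (2.0, 7.0)
x1, y1 = solve(a=4.0, b=1.0, c=3.0)   # intervene on a: X and Y both move
x2, y2 = solve(a=2.0, b=5.0, c=3.0)   # intervene on b: only Y moves
```

The asymmetry comes entirely from which quantities are treated as free parameters – rewrite the parameterization and you rewrite the causal order.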

“John Anderson’s (1938, p. 128) notion of a causal field is helpful (see also Mackie 1980, p. 35; Hoover 2001, pp. 41–49). The causal field consists of background conditions that, for analytical or pragmatic reasons, we would like to set aside in order to focus on some more salient causal system. We are justified in doing so when, in fact, they do not change or when the changes are causally irrelevant. In terms of representation within the structural account, setting aside causes amounts to fixing certain parameters to constant values. The effect is not unlike Pearl’s or Woodward’s wiping out of a causal arrow, though somewhat more delicate. The replacement of a parameter by a constant amounts to absorbing that part of the causal mechanism into the functional form that connects the remaining parameters and variables.” (From chapter 3, ‘Identity, Structure, and Causal Representation in Scientific Models’. This was one of the better chapters).
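The ‘absorbing a parameter into the functional form’ move can be sketched the same way (again my own toy illustration, not the book’s): if a background parameter belongs to the causal field and never varies, we fix it to a constant and fold it into the function connecting the remaining variables.

```python
# Toy illustration: start with Y = b + c*X, where b is a background
# parameter in the causal field. If b never varies, fix b = 1.0 and
# absorb it into the functional form linking X to Y; b then disappears
# as a separate causal handle.

B_FIXED = 1.0  # background condition held constant

def y_given_x(x, c):
    # b no longer appears as a free parameter of the model
    return B_FIXED + c * x

print(y_given_x(2.0, 3.0))  # 7.0
```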

“Certain substantial idealisations need to be taken also when the RD model [replicator dynamics model, US] is interpreted biologically. A different set of substantial idealisations needs to be taken when the RD model is interpreted socially. By making these different idealisations, we adapt the model for its respective representative uses. This is standard scientific practice: most, and possibly all, model uses involve idealisations. Yet when the same formal structure is employed to construct different, more specific mechanistic models, and each of these models involves different idealisations, one has to be careful when inferring purported similarities between these different mechanisms based on the common formal structure. […] the RD equation is adapted for its respective representative tasks. In the course of each adaptation, certain features of the RD are drawn on – others are accepted as useful or at least harmless idealisations. Which features are drawn on and which are accepted as idealisations differ with each adaptation. The mechanisms that the adaptations of the RD represent are substantially different from each other and share little or no causal structure.” (From Chapter 5: ‘Models of Mechanisms: The Case of the Replicator Dynamics’).
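For readers who haven’t met it, the replicator dynamics equation at issue is dxᵢ/dt = xᵢ(fᵢ(x) − φ(x)), where fᵢ is the fitness of type i and φ is the population-average fitness. Here’s a minimal simulation sketch – the Hawk–Dove payoff matrix, starting frequencies, and Euler step size are my illustrative choices, not the chapter’s:

```python
# Replicator dynamics sketch:  dx_i/dt = x_i * (f_i(x) - phi(x)),
# with fitness f_i = sum_j A[i][j] * x_j and phi = sum_i x_i * f_i.
# Payoff matrix, starting point, and step size are illustrative choices.

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics for frequencies x."""
    f = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    phi = sum(xi * fi for xi, fi in zip(x, f))          # average fitness
    return [xi + dt * xi * (fi - phi) for xi, fi in zip(x, f)]

# Hawk-Dove payoffs with V=2, C=4 (made-up numbers for the example)
A = [[-1.0, 2.0],
     [ 0.0, 1.0]]

x = [0.2, 0.8]            # initial frequencies of Hawk, Dove
for _ in range(5000):
    x = replicator_step(x, A)
print(x)                  # converges toward the mixed equilibrium [0.5, 0.5]
```

The point of the chapter is precisely that this formal skeleton says nothing about the mechanism: the same update rule can be filled in with genetic inheritance or with social imitation, and the idealisations differ in each case.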

“Before formulating [a] claim, it is necessary first to clear up some terminology. Leuridan[‘s] definition ignores three traditional distinctions that have brought much-needed clarity to the discussions of laws in the philosophy of science. First, we distinguish laws (metaphysical entities that produce or are responsible for regularities) and law statements (descriptions of laws). If one does not respect this distinction, one runs the risk (as Leuridan does) of unintentionally suggesting that sentences, equations, or models are responsible for the fact that certain stable regularities hold. In like fashion, we distinguish regularities, which are statistical patterns of dependence and independence among magnitudes, from generalizations, which describe regularities. Finally, we distinguish regularities from laws, which produce or otherwise explain the patterns of dependence and independence among magnitudes (or so one might hold). […]

Strict law statements, as Leuridan understands them, are nonvacuous, universally quantified, and exceptionless statements that are unlimited in scope, apply in all times and places, and contain only purely qualitative predicates (2010, p. 318). Noting that few law statements in any science live up to these standards, Leuridan argues that the focus on strict law statements (and presumably also on strict laws) is unhelpful for understanding science. Instead, he focuses on the concept of a pragmatic law (or p-law). Following Sandra Mitchell (1997, 2000, 2003, 2009), Leuridan understands p-law statements as descriptions of stable and strong regularities that can be used to predict, explain, and manipulate phenomena. A regularity is stable in proportion to the range of conditions under which it continues to hold and to the size of the space-time region in which it holds (2010, p. 325). A regularity is strong if it is deterministic or frequent. p-law statements need not satisfy the criteria for strict law statements.” (From chapter 7: ‘Mechanisms and Laws: Clarifying the Debate’)

“This section has illustrated two central points concerning extrapolation. First, it is not necessary that the causal relationship to be extrapolated is the same in the model as in the target. Given knowledge of the probability distributions for the model and target along with the selection diagram, it can be possible to make adjustments to account for differences. Secondly, the conditions needed for extrapolation vary with the type of claim to be extrapolated. In general, the more informative the causal claim, the more stringent the background assumptions needed to justify its transfer. This second point is very important for explaining how extrapolation can remain possible even when substantial uncertainty exists about the selection diagram. […]

I should emphasize that the point here is definitely not to insist upon the infirmity of causal inferences grounded in extrapolation and observational data. Uncertainties frequently arise in experiments too, especially those involving human subjects (for instance, due to noncompliance, i.e., the failure of some subjects in the experiment to follow the experimental protocol). Such uncertainties are inherent in any attempts to learn about causation in large complex systems wherein numerous practical and ethical concerns restrict the types of studies that are possible. Consequently, scientific inference in such situations usually must build a cumulative case from a variety of lines of evidence none of which is decisive in isolation. Although that may seem a rather obvious point, it does seem to get overlooked in some critical discussions of extrapolation. […] critiques which observe that extrapolations rarely if ever constitute definitive evidence sail wide of the mark. Building a case based on the coherence of multiple lines of imperfect evidence is the norm for social science and other sciences that study complex systems that are widely diffused across space and time. To insist otherwise is to misconstrue the nature of science and to obstruct applications of scientific knowledge to many pressing real-world problems.” (From chapter 10: ‘Mechanisms and Extrapolation in the Abortion-Crime Controversy’.)
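The ‘adjustments to account for differences’ mentioned above can be illustrated with Pearl and Bareinboim’s transport formula, P*(y | do(x)) = Σ_z P(y | do(x), z) · P*(z): reweight the experimentally identified stratum-specific effect by the target population’s covariate distribution. The probabilities below are made-up numbers for illustration only:

```python
# Transport-formula sketch (illustrative numbers, not from the book).
# Assume the selection diagram says the model and target populations
# differ only in the distribution of a covariate Z.

p_y_do_x_given_z = {0: 0.30, 1: 0.70}   # P(Y=1 | do(X=1), Z=z), from the experiment

p_z_model  = {0: 0.8, 1: 0.2}           # P(Z=z) in the model population
p_z_target = {0: 0.3, 1: 0.7}           # P*(Z=z) in the target population

def transported_effect(p_y_z, p_z):
    """P*(y | do(x)) = sum over z of P(y | do(x), z) * P*(z)."""
    return sum(p_y_z[z] * p_z[z] for z in p_z)

print(transported_effect(p_y_do_x_given_z, p_z_model))   # 0.38 in the model population
print(transported_effect(p_y_do_x_given_z, p_z_target))  # 0.58 in the target population
```

The effect differs between populations even though the stratum-specific effects are identical – which is the sense in which the extrapolated relationship need not be ‘the same’ in model and target.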

“In 1992, Heckman published a seminal paper containing ‘most of the standard objections’ against randomised experiments in the social sciences. Heckman focused on the non-comparative evaluation of social policy programmes, where randomisation simply decided who would join them (without allocating the rest to a control group). Heckman claimed that even if randomisation allows the experimenters to reduce selection biases, it may produce a different bias. Specifically, experimental subjects might behave differently if joining the programme did not require ‘a lottery’. Randomisation can thus interfere with the decision patterns (the causes of action) presupposed in the programme under evaluation. […] Heckman’s main objection is that randomisation tends to eliminate risk-averse persons. This is only acceptable if risk aversion is an irrelevant trait for the outcome under investigation […] However, even if irrelevant, it compels experimenters to deal with bigger pools of potential participants in order to meet the desired sample size, so the exclusion of risk-averse subjects does not disrupt recruitment. But bigger pools may affect in turn the quality of the experiment, if it implies higher costs. One way or another, argues Heckman, randomisation is not neutral regarding the results of the experiment.” (…known stuff, but I figured I should quote it anyway as it’s unlikely that all readers are familiar with this problem. From chapter 11: ‘Causality, Impartiality and Evidence-Based Policy’. How to deal with the problem? Here’s what they conclude:)

“To sum up, in RFEs [Randomized Field Experiments – US], randomisation may generate a self-selection bias, which we can only avoid with a partial or total masking of the allocation procedure. We have argued that this is a viable solution only insofar as the trial participants do not have strong preferences about the trial outcome. If they do, we cannot assume that blinded randomisation will be a control for their preferences unless we test for its success. We will only be able to claim that the trial has been impartial regarding the participants’ preferences if we have positive proof of their being ignorant of the comparative nature of the experiment. Hence, in RFEs, randomisation is not a strong warrant of impartiality per se: we need to prove in addition that it has been masked successfully.”

On a general note, I found some of the stuff in this book interesting, but there was some confusing stuff in there as well. I had at least some background knowledge about quite a few of the subjects covered, but a lot of the stuff in the book is written by people with a completely different background (many of the contributors are philosophers of science), and in some chapters I had a hard time ‘translating’ a specific contributor’s prose (gibberish? It’s not a nice word to use, but I’m tempted to use it here anyway) into stuff related to the science/the real world – I was quite close to walking away from the book while reading chapters 8 and 9, dealing with natural selection and causal processes. I didn’t, but you should most certainly not pick up this book in order to figure out how natural selection ‘actually works’; if that’s your goal, read Dawkins instead. A few times I had an ‘I knew this, but that’s actually an interesting way to think about it’-experience, and I generally like having those. As in all books with multiple contributors, there’s some variation in the quality of the material across chapters – and as you might infer from the comments above, I didn’t think very highly of chapters 8 and 9. There were other chapters as well which didn’t really interest me much. I did read it all, though.

Overall I’m a little disappointed, but it’s not all bad. I gave it 2 stars on goodreads, and towards the end I moved significantly closer to the 3 star rating than the one star rating. I wouldn’t recommend it though; considering how much you’re likely to get out of this, it’s probably for most people simply too much work – it’s not an easy book to read.

August 19, 2013 - Posted by | books, economics, philosophy, science, statistics
