Peripheral Neuropathy & Neuropathic Pain: Into the light (I)

“Peripheral neuropathy is a common medical condition, the diagnosis of which is often protracted or delayed. It is not always easy to relate a neuropathy to a specific cause. Many people do not receive a full diagnosis, their neuropathy often being described as ‘idiopathic’ or ‘cryptogenic’. It is said that in Europe, one of the most common causes is diabetes mellitus but there are also many other known potential causes. The difficulty of diagnosis, the limited number of treatment options, a perceived lack of knowledge of the subject — except in specialised clinics, the number of which are limited — all add to the difficulties which many neuropathy patients have to face. Another additional problem for many patients is that once having received a full, or even a partial diagnosis, they are then often discharged back to their primary healthcare team who, in many instances, know little about this condition and how it may impact upon their patients’ lives. In order to help bridge this gap in medical knowledge and to give healthcare providers a better understanding of this often distressing condition, The Neuropathy Trust has commissioned a new book on this complex topic.

As well as covering the anatomy of the nervous system and the basic pathological processes that may affect the peripheral nerves, the book covers a whole range of neuropathic conditions. These include, for example, Guillain Barre syndrome, chronic inflammatory demyelinating polyneuropathy, vasculitic neuropathies, infectious neuropathies, diabetic and other metabolic neuropathies, hereditary neuropathies and neuropathies in patients with cancer.”

The stuff above is the part of the amazon book description I decided to include when I added the book to goodreads.

The book is dense. A lot of terms are defined and a lot of topics are covered. Despite being quite a short book, only a couple of hundred pages long (compare for example with related books like this one), it’s still the sort of book which many people might consider using as a reference work (I certainly consider doing that). The author really knows his stuff. According to the website of the European Neurological Society, “The ENS has now become the most prominent society of neurologists on the European Continent with a total of 2300 (including all categories) members from 60 countires [sic] worldwide.” I mention this because five years ago Gérard Said, the author, became the President of the ENS. He’s accomplished a lot besides that – the link has more details about him and what he’s done – but what it boils down to is that, as already mentioned, this guy really knows his stuff. I disliked the comment on the front cover that the book was ‘Written by one of the world’s leading experts’, and at first I considered it a decent argument against reading the book, but it’s probably both a fair and accurate statement; it seems this guy really is one of the top people in his field (I have no clue why someone like this does not have a wikipedia page whereas [random celebrity whose name I don’t know] does – well, I do have a clue, but…).

I don’t find the book particularly hard to read, but I’m frequently looking stuff up, and I’ve read textbooks dealing with similar topics before (see e.g. here and here) – maybe I’m underestimating how difficult the book might be to read and understand for someone without much medical knowledge, but I think you should be perfectly able to get through it without already having a detailed understanding of the nervous system; in my opinion the book is potentially useful for patients as well as medical practitioners, at least if the patient is willing to put in some work.

An extensive glossary is included at the beginning of the book, defining most of the terms with which people might be unfamiliar. If you were wondering why I looked up so many words and concepts on wikipedia and other online sources (see below) in spite of the glossary, I should note that this is how I generally read books like this one; wiki or google will often provide additional details compared to the information included in standard glossaries, and often it’s even faster to look such stuff up online than to locate the definition in the book. Another big reason for looking up key terms online was that I decided early on that a link collection like the one included below might be the best way to illustrate here on the blog what kind of content is covered in the book. Regardless of how you decide to look things up along the way, you should definitely not skip the glossary before reading the book proper – you won’t be able to remember many of the terms just from having read the words and definitions once or twice, but it’s definitely a good idea to have a look even so before moving on; this is probably the first book I’ve read in which the glossary was located at the front of the book instead of somewhere in the back, and it’s not a coincidence that the author decided to organize the book this way.

As a small aside, I thought this might be a reasonable place to add a ‘meta’ comment related to my book posts more generally. I’ve been considering writing slightly shorter posts about the non-fiction books I’m reading/have read; ‘classical posts’ of the kind I’ve written a lot of in the past can easily take four or five hours to write and edit, and this means that if I don’t write short posts about the books I may easily end up not blogging them at all. This is an undesirable outcome for me. What I’ve been doing instead lately is to review more books on goodreads than I used to; the idea being that if I end up not blogging a book, I’ll at least have reviewed it on goodreads. This incidentally means that if you want to keep track of my reading these days and would like to know what I think about the books I’m reading, the front page of this blog is no longer enough; you may need to also pay attention to my activities on goodreads or keep track of my reading via this link (I update that book list very often, usually every time I’ve finished a book). I don’t like to ‘branch out’ like that, but I also don’t like the idea of cross-posting goodreads reviews on the blog, and recently I’ve found it hard to know how to handle these things optimally – this is where I’ve ended up. These days I’ll usually add a goodreads review of a non-fiction book quite shortly after I’ve finished it, especially if I’m not sure whether I’ll blog it later.

Okay, back to the book: I think I’ll limit semi-detailed discussion of the book’s contents to the material on diabetic/metabolic neuropathies, and although I’ve already encountered some relevant content and useful observations on that topic, I have not yet read the chapter devoted to it. So expect another post about this book some time in the future. I’ve read roughly half the book at this point, and as mentioned in an earlier update on goodreads I’m seriously considering giving it a five-star rating. The book has way too much stuff to talk about all of it in detail, so what I’ll do below is add some links to topics/terms/etc. discussed in the coverage so far which I looked up along the way, to give you a few more details than the quote at the beginning did:

Peripheral neuropathy.
Spinal nerves.
Anterior grey column.
Motor neuron.
Afferent nerve fiber.
Nodes of Ranvier.
Charcot–Marie–Tooth neuropathy.
Guillain–Barré syndrome.
Acute motor axonal neuropathy.
POEMS syndrome.
Monoclonal gammopathy of undetermined significance.
Vasa nervorum.
Vasculitic neuropathy.
Granulomatosis with polyangiitis.
Churg–Strauss syndrome.
Mononeuritis multiplex.

October 12, 2015 Posted by | books, medicine, meta

Understanding Other-Oriented Hope

“This monograph introduces, defines, exemplifies, and characterizes hope that is directed toward others rather than toward the self. […] Because vicarious hope remains a relatively neglected topic within hope theory and research, the current work aims to provide, for the first time, a robust conceptualization of other-oriented hope, and to review and critically examine existing literature on other-oriented hope.”

I really should be blogging more interesting books here instead, such as Gigerenzer’s book, but this one is easy to blog.

I’ll make this post short, but I do want to make sure no-one misses the most important observation to be made in the context of this book: it is a terrible book. Given that I’ve already shared (some of) my negative views about it on goodreads, I won’t repeat here all the many reasons why you probably shouldn’t read it; instead I’ll share below a few observations from the book which might be of interest to some of the people reading along.

“Whereas other-interest encapsulates a broad and generalized orientation toward valuing, recognizing, facilitating, promoting, and celebrating positive outcomes for others that have occurred in the past or present, or that may occur in the future, other-oriented hope cleaves that portion of other-interest specific to the harbouring of future-oriented hope for others and (where possible) attendant strivings toward meeting those ends. […] Other-oriented hope is viewed as a specific form of other-interest, one in which we reveal our interest in the welfare of others by apportioning some of our future-oriented mental imaginings to others’ welfare in addition to our own, more self-focused, hope. […] we define other-oriented hope as future-oriented belief, desire, and mental imagining surrounding a valued outcome of another person that is uncertain but possible. […] The dimensions emphasized by Novotny (1989) within an illness context are that hope: is future-oriented; involves active engagement; is an inner resource; reflects possibility; is relational; and concerns issues of importance.”

“Schrank et al. (2010) factor analyzed 60-items taken from three existing hope scales. Four dimensions of hope arose, labelled trust and confidence (e.g., goal striving, positive past experience), positive future orientation (e.g., looking forward, making plans), social relations and personal value (e.g., feeling loved and needed), and lack of perspective (e.g., feeling trapped, becoming uninvolved). […] In the most influential psychological perspective on hope, […] Snyder and colleagues posit that hope is “a positive motivational state that is based on an interactively derived sense of successful (a) agency (goal-directed energy), and (b) pathways (planning to meet goals)” […]. According to this view, hope-agency beliefs provide motivation to pursue valued goals, and hope-pathways beliefs provide plausible routes to meet those goals. […] hope is most often construed as an emotion or as an emotion-based coping process.”

“Lapierre et al. (1993) report that wishes for others is a more frequent category among relatively younger elderly participants and among non-impaired relative to impaired participants. The authors suggest that less healthy individuals (i.e., relatively older and impaired) are more self-focused in their aspirations, emphasizing such fundamental goals as preserving their health. […] Herth identified changes [in hope patterns] as a function of age and impairment level of respondents, with those older than 80 and experiencing mild to moderate impairment being more likely to harbour hope focused on others compared to those who were higher functioning. Moreover, those living in long-term care facilities with moderate to severe impairment directed their hope almost entirely toward others. […] [research] strongly points to the element of vulnerability in another person as a situational influence on other-oriented hope. Learning about others’ vulnerability likely triggers compassion or empathy which, in turn, elicits other-oriented hope. […] In addition to other-oriented hope occurring in response to another’s vulnerability, vicarious hope appears also to be triggered by one’s own vulnerability. […] In related work, Hollis et al. (2007) discuss borrowed hope; for those with no hope, others who have hope for them can be impactful, because hope can be viewed as ‘contagious’.”

“Similar to recognized drawbacks or risks of self-oriented hope, other-oriented hope may be associated with a failure to accept things the way they are, frustration upon hope being dashed, risk taking, or the failure to limit losses […] There is also an opportunity-cost to other-oriented hope: Time spent hoping for another is time not spent generating, contemplating, or acting toward either one’s own hope or to yet other people’s hope. […] There may be costs to the recipient of other-oriented hope in the form of feeling coerced or controlled by others whose vicarious hope is not shared by the recipient. Therefore, some forms of other-oriented hope may reveal only the desired outcomes of the hoping agent as opposed to the person to whom the hope applies. In the classic example, a parent’s hope for a child may not be hope that is held by the child him- or herself, and therefore may be experienced as a significant source of undue pressure and stress by the child. Such coercive hope is, in turn, likely to be harmful to the relationship between the person harbouring the other-oriented hope and the target of that hope. […] In an extreme form, other-oriented hope bears resemblance to other-oriented perfectionism. Hewitt and Flett (2004) argue that perfectionism can be directed toward the self or others. In the former case, perfectionism involves expectations placed upon oneself for unreasonably high performance, whereas in the latter case, perfectionism involves expecting others to uphold an unreasonably high standard and expressing criticism when others fail to meet this expectation. It is possible that other-oriented hope occasionally takes the form of other-oriented expectations for perfection. For example, a parent may hope that a child performs well in school, but this could take the form of an overly demanding standard of achievement that is difficult or impossible for the child to attain, creating distress in the child’s life and conflict within the parent-child relationship.”

“McGeer (2004) argues for responsive hope being an optimal point between wishful hope, on the one hand (i.e., desire but too little agency, as in wishful thinking) and willful hope, on the other hand (desire but too much agency, as in an incautious or unrealistic pursuit of one’s dreams). To expand on McGeer’s views, responsive other-oriented hope would fall between wishful other-oriented hope, on the one hand (i.e., desires aimed at others but divorced from an action-orientation toward the fulfillment of such desires), and willful other-oriented hope, on the other hand (i.e., desire for, and overzealous facilitation of, others’ future outcomes, ignoring whether such actions are in the other’s best interest or are endorsed by the other). […] Like self-oriented hope, other-oriented hope can be contested and, in extreme instances, such hope may impede coping, such as by encouraging ongoing denial among family members of the objective circumstances faced by their loved one. Hoping against hope for others may, at times, be more costly than beneficial.”

“Acceptance toward others may be exhibited through not judging others, being tolerant of others who are perceived as different than oneself, being willing to engage with others, and not avoiding others who might be predicted to displease us or upset us. It would appear, therefore, that acceptance, like hope, can be directed toward the self or toward others. Interestingly, acceptance of the self and acceptance of others are included, respectively, in measures of psychological well-being and social well-being (Keyes 2005), suggesting that both self-acceptance and other-acceptance are considered key aspects of psychological health.”

“Davis and Asliturk (2011) review research showing that a realistic orientation toward future outcomes, in which one considers both positive and negative possibilities, is associated with coping more effectively with adversity.”

“Weis and Speridakos (2011) conducted a meta-analysis on 27 studies that employed strategies to enhance hope among both mental health clients and community members. They reported modest effects of such psychotherapy on measures of hope and life satisfaction, but not on measures of psychological distress. The authors caution that effects were relatively small in comparison to other psychoeducational or psychotherapeutic interventions.”

October 8, 2015 Posted by | books, Psychology

The Nature of Statistical Evidence

Here’s my goodreads review of the book.

As I’ve observed many times before, a wordpress blog like mine is not a particularly nice place to cover mathematical topics involving equations and lots of Greek letters, so the coverage below will be more or less purely conceptual; don’t take this to mean that the book doesn’t contain formulas. Some parts of the book look like this:

[image of an equation-dense page from the book, not reproduced here]

That of course makes the book hard to blog, for other reasons than just the fact that the equations are typographically hard to deal with. In general it’s hard to talk about the content of a book like this one without going into a lot of details outlining how you get from A to B to C – usually you’re only really interested in C, but you need A and B to make sense of C. At this point I’ve sort of concluded that when covering books like this one I’ll only discuss some of the main themes which are easy to handle in a blog post, and skip (potentially important) points which are difficult to cover in a small amount of space – which is unfortunately often the case. I should perhaps observe that although I noted in my goodreads review that there was in a way a bit too much philosophy and a bit too little statistics in the coverage for my taste, you should definitely not take that objection to mean that this book is full of fluff; a lot of the philosophical material is ‘formal logic’ type stuff and related comments, and the book in general is quite dense. As I also noted in the goodreads review, I didn’t read this book as carefully as I might have done – for example I skipped a couple of the technical proofs because they didn’t seem worth the effort – and I’d probably need to read it again to fully understand some of the minor points made in the more technical parts of the coverage; that is of course a related reason why I don’t cover the book in great detail here. It’s hard work just to read the damn thing, and talking about the technical stuff in detail here as well would definitely be overkill, even if it would surely make me understand the material better.

I have added some observations from the coverage below. I’ve tried to clarify beforehand which question/topic the quote in question deals with, to ease reading/understanding of the topics covered.

On how statistical methods are related to experimental science:

“statistical methods have aims similar to the process of experimental science. But statistics is not itself an experimental science, it consists of models of how to do experimental science. Statistical theory is a logical — mostly mathematical — discipline; its findings are not subject to experimental test. […] The primary sense in which statistical theory is a science is that it guides and explains statistical methods. A sharpened statement of the purpose of this book is to provide explanations of the senses in which some statistical methods provide scientific evidence.”

On mathematics and axiomatic systems (the book goes into much more detail than this):

“It is not sufficiently appreciated that a link is needed between mathematics and methods. Mathematics is not about the world until it is interpreted and then it is only about models of the world […]. No contradiction is introduced by either interpreting the same theory in different ways or by modeling the same concept by different theories. […] In general, a primitive undefined term is said to be interpreted when a meaning is assigned to it and when all such terms are interpreted we have an interpretation of the axiomatic system. It makes no sense to ask which is the correct interpretation of an axiom system. This is a primary strength of the axiomatic method; we can use it to organize and structure our thoughts and knowledge by simultaneously and economically treating all interpretations of an axiom system. It is also a weakness in that failure to define or interpret terms leads to much confusion about the implications of theory for application.”

It’s all about models:

“The scientific method of theory checking is to compare predictions deduced from a theoretical model with observations on nature. Thus science must predict what happens in nature but it need not explain why. […] whether experiment is consistent with theory is relative to accuracy and purpose. All theories are simplifications of reality and hence no theory will be expected to be a perfect predictor. Theories of statistical inference become relevant to scientific process at precisely this point. […] Scientific method is a practice developed to deal with experiments on nature. Probability theory is a deductive study of the properties of models of such experiments. All of the theorems of probability are results about models of experiments.”

But given a frequentist interpretation you can test your statistical theories with the real world, right? Right? Well…

“How might we check the long run stability of relative frequency? If we are to compare mathematical theory with experiment then only finite sequences can be observed. But for the Bernoulli case, the event that frequency approaches probability is stochastically independent of any sequence of finite length. […] Long-run stability of relative frequency cannot be checked experimentally. There are neither theoretical nor empirical guarantees that, a priori, one can recognize experiments performed under uniform conditions and that under these circumstances one will obtain stable frequencies.” [related link]
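
The ‘stochastically independent of any sequence of finite length’ point is perhaps easier to appreciate with a little notation. Here’s a brief sketch of the underlying mathematics as I understand it – my gloss and notation, not the book’s:

```latex
% A sketch (my notation, not the book's) of why long-run frequency
% stability cannot be checked by any finite experiment.
% Let $X_1, X_2, \dots$ be i.i.d. Bernoulli($p$) trials. The strong law
% of large numbers says
\[
  P\!\left( \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} X_i = p \right) = 1 .
\]
% The convergence event is a tail event: changing any finite number of
% the $X_i$ leaves it unaffected, so it is independent of
% $(X_1, \dots, X_m)$ for every finite $m$. By Kolmogorov's zero-one law
% its probability is 0 or 1 no matter what any finite data set shows,
% which is the formal sense in which no finite observation bears on the
% long-run stability of relative frequency.
```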

What should we expect to get out of mathematical and statistical theories of inference?

“What can we expect of a theory of statistical inference? We can expect an internally consistent explanation of why certain conclusions follow from certain data. The theory will not be about inductive rationality but about a model of inductive rationality. Statisticians are used to thinking that they apply their logic to models of the physical world; less common is the realization that their logic itself is only a model. Explanation will be in terms of introduced concepts which do not exist in nature. Properties of the concepts will be derived from assumptions which merely seem reasonable. This is the only sense in which the axioms of any mathematical theory are true […] We can expect these concepts, assumptions, and properties to be intuitive but, unlike natural science, they cannot be checked by experiment. Different people have different ideas about what “seems reasonable,” so we can expect different explanations and different properties. We should not be surprised if the theorems of two different theories of statistical evidence differ. If two models had no different properties then they would be different versions of the same model […] We should not expect to achieve, by mathematics alone, a single coherent theory of inference, for mathematical truth is conditional and the assumptions are not “self-evident.” Faith in a set of assumptions would be needed to achieve a single coherent theory.”

On disagreements about the nature of statistical evidence:

“The context of this section is that there is disagreement among experts about the nature of statistical evidence and consequently much use of one formulation to criticize another. Neyman (1950) maintains that, from his behavioral hypothesis testing point of view, Fisherian significance tests do not express evidence. Royall (1997) employs the “law” of likelihood to criticize hypothesis as well as significance testing. Pratt (1965), Berger and Selke (1987), Berger and Berry (1988), and Casella and Berger (1987) employ Bayesian theory to criticize sampling theory. […] Critics assume that their findings are about evidence, but they are at most about models of evidence. Many theoretical statistical criticisms, when stated in terms of evidence, have the following outline: According to model A, evidence satisfies proposition P. But according to model B, which is correct since it is derived from “self-evident truths,” P is not true. Now evidence can’t be two different ways so, since B is right, A must be wrong. Note that the argument is symmetric: since A appears “self-evident” (to adherents of A) B must be wrong. But both conclusions are invalid since evidence can be modeled in different ways, perhaps useful in different contexts and for different purposes. From the observation that P is a theorem of A but not of B, all we can properly conclude is that A and B are different models of evidence. […] The common practice of using one theory of inference to critique another is a misleading activity.”
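
To make the ‘different models of evidence’ point concrete, here’s a small illustration of my own (not an example from the book): for the very same binomial data, a sampling-theory p-value suggests rejecting the null hypothesis while a Bayes factor with a uniform prior under the alternative favours it – the well-known Jeffreys–Lindley paradox. The data below (10,000 hypothetical coin flips) are made up for the purpose.

```python
# My own illustration (not from the book): the same data judged by two
# different models of evidence - an exact two-sided p-value and a Bayes
# factor with a uniform prior on p under the alternative.
import math

n, k = 10_000, 5_098   # hypothetical data: k heads in n coin flips

def log_pmf(j: int) -> float:
    """Log of the Binomial(n, 1/2) probability mass at j."""
    return (math.lgamma(n + 1) - math.lgamma(j + 1)
            - math.lgamma(n - j + 1) + n * math.log(0.5))

# Exact two-sided p-value for H0: p = 1/2 (the null is symmetric and
# k > n/2, so we can simply double the upper tail probability).
p_value = 2 * sum(math.exp(log_pmf(j)) for j in range(k, n + 1))

# Bayes factor BF01 = P(data | H0) / P(data | H1). With a uniform prior
# on p, the marginal likelihood under H1 is 1 / (n + 1) for every k,
# so BF01 = pmf(k) * (n + 1).
bf01 = math.exp(log_pmf(k) + math.log(n + 1))

print(f"p-value = {p_value:.4f}")   # ~0.051: borderline 'significant'
print(f"BF01    = {bf01:.1f}")      # ~11.7: the data favour H0
```

Both numbers are internally consistent; they are simply theorems of different models of evidence, which is exactly the author’s point about the symmetry of such criticisms.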

Is mathematics a science?

“Is mathematics a science? It is certainly systematized knowledge much concerned with structure, but then so is history. Does it employ the scientific method? Well, partly; hypothesis and deduction are the essence of mathematics and the search for counter examples is a mathematical counterpart of experimentation; but the question is not put to nature. Is mathematics about nature? In part. The hypotheses of most mathematics are suggested by some natural primitive concept, for it is difficult to think of interesting hypotheses concerning nonsense syllables and to check their consistency. However, it often happens that as a mathematical subject matures it tends to evolve away from the original concept which motivated it. Mathematics in its purest form is probably not natural science since it lacks the experimental aspect. Art is sometimes defined to be creative work displaying form, beauty and unusual perception. By this definition pure mathematics is clearly an art. On the other hand, applied mathematics, taking its hypotheses from real world concepts, is an attempt to describe nature. Applied mathematics, without regard to experimental verification, is in fact largely the “conditional truth” portion of science. If a body of applied mathematics has survived experimental test to become trustworthy belief then it is the essence of natural science.”

Then what about statistics – is statistics a science?

“Statisticians can and do make contributions to subject matter fields such as physics, and demography but statistical theory and methods proper, distinguished from their findings, are not like physics in that they are not about nature. […] Applied statistics is natural science but the findings are about the subject matter field not statistical theory or method. […] Statistical theory helps with how to do natural science but it is not itself a natural science.”

I should note that I am, and have for a long time been, in broad agreement with the author’s remarks above on the nature of science and mathematics. Popper, among many others, discussed this topic a long time ago, e.g. in The Logic of Scientific Discovery, and for probably a decade I’ve basically been of the opinion that (‘pure’) mathematics is not science but rather ‘something else’ – which doesn’t mean it’s not useful. I’ve had a harder time coming to terms with how precisely to think about statistics in these terms, and in that context the book has been conceptually helpful.

Below I’ve added a few links to other stuff also covered in the book:
Propositional calculus.
Kolmogorov’s axioms.
Neyman–Pearson lemma.
Radon–Nikodym theorem. (Not covered in the book, but at one point the author remarked that ‘a Radon–Nikodym derivative’ was needed to answer a question being asked, and I had no clue what he was talking about – it seems the stuff in the link is what he meant.)
A very specific and relevant link: Berger and Wolpert (1984). The stuff about Birnbaum’s argument from p.24 (p.40) onward is covered in some detail in the book. The author is critical of the model and explains in some detail why that is. See also: On the foundations of statistical inference (Birnbaum, 1962).

October 6, 2015 Posted by | books, mathematics, papers, philosophy, science, statistics

A few lectures

This one was mostly review for me, but there was also some new stuff, and it was a ‘sort of okay’ lecture even if I was highly skeptical about a few points covered. I was debating whether to even post the lecture on account of those points of contention, but I figured that by adding a few remarks below I could justify doing so. So below are a few skeptical comments relating to content covered in the lecture:

a) Around 28-29 minutes in he mentions that the cutoff for hypertension in diabetics is a systolic pressure above 130. Opinions definitely differ here, both about diagnostic cutoffs and about treatment targets; in the annual report from the Danish Diabetes Database, which follows up on whether hospitals and other medical decision-making units are adhering to guidelines (I’ve talked about the data on the blog, e.g. here), the BP indicator currently evaluated is whether diabetics with a systolic BP above 140 receive antihypertensive treatment. This recent Cochrane review concluded that: “At the present time, evidence from randomized trials does not support blood pressure targets lower than the standard targets in people with elevated blood pressure and diabetes” and noted that: “The effect of SBP targets on mortality was compatible with both a reduction and increase in risk […] Trying to achieve the ‘lower’ SBP target was associated with a significant increase in the number of other serious adverse events”.

b) Whether retinopathy screenings should be conducted yearly or biennially is also contested – this is not mentioned in the lecture, but I sort of figure maybe it should have been. There’s some evidence that annual screening is better (see e.g. this recent review), but the evidence base is not great and clinical outcomes do not seem to differ much in general; as noted in the review, “Observational and economic modelling studies in low-risk patients show little difference in clinical outcomes between screening intervals of 1 year or 2 years”. Stratifying based on risk seems desirable from a cost-effectiveness standpoint, but how to stratify optimally does not yet seem completely clear.

c) The Somogyi phenomenon is highly contested, and I was very surprised by his coverage of this topic – he’s a doctor lecturing on this topic; he should know better. As the wiki notes: “Although this theory is well known among clinicians and individuals with diabetes, there is little scientific evidence to support it.” I’m highly skeptical, and I seriously question the advice of lowering insulin in the context of morning hyperglycemia. As observed in Cryer’s text: “there is now considerable evidence against the Somogyi hypothesis (Guillod et al. 2007); morning hyperglycemia is the result of insulin lack, not post-hypoglycemic insulin resistance (Havlin and Cryer 1987; Tordjman et al. 1987; Hirsch et al. 1990). There is a dawn phenomenon—a growth hormone–mediated increase in the nighttime to morning plasma glucose concentration (Campbell et al. 1985)—but its magnitude is small (Periello et al. 1991).”

I decided not to embed this lecture in the post mainly because the resolution is so low that a substantial proportion of the visual content is frankly unintelligible; I figured this would bother others more than it did me, and that a semi-satisfactory compromise in terms of coverage would be to link to the lecture rather than embed it here. You can hear what the lecturer is saying, which was enough for me, but you can’t make out things like effect differences, p-values, or many of the details in the graphic illustrations. Despite the title of the lecture on youtube, it actually mainly consists of a brief overview of pharmacological treatment options for diabetes.

If you want to skip the introduction, the first talk/lecture starts around 5 minutes and 30 seconds into the video. Note that despite the long running time of this video the lectures themselves only take about 50 minutes in total; the rest of it is post-lecture Q&A and discussion.

October 3, 2015 Posted by | diabetes, Lectures, mathematics, medicine

The Origin of Species

I figured I ought to blog this book at some point, and today I decided to take the time to do it. This is the second book by Darwin I’ve read – for blog content dealing with Darwin’s The Voyage of the Beagle, see these posts. The two books are somewhat different; Beagle is sort of a travel book written by a scientist who decided to write down his observations during his travels, whereas Origin is a sort of popular-science research treatise – for more details on Beagle, see the posts linked above. If you plan on reading both the way I did, I think you should aim to read them in the order they were written.

I did not rate the book on goodreads because I could not think of a fair way to rate it; it’s a unique and very important contribution to the history of science, but how do you weigh the other dimensions? I decided not to try. Some of the people reviewing the book on goodreads call it ‘dry’ or ‘dense’, but I’d say I found it quite easy to read compared to quite a few of the other books I’ve been reading this year, and it doesn’t actually take that long to read; I read a quite substantial proportion of it during a one-day trip to Copenhagen and back. The book can be read by most literate people living in the 21st century – you do not need to know any evolutionary biology to read it – but that said, how you read the book will to some extent depend upon how much you know about the topics about which Darwin theorizes.

I had a conversation with my brother about the book a short while after I’d read it, and I recall noting during that conversation that in my opinion one would probably get more out of reading this book with at least some knowledge of geology (for example some knowledge of the history of the theory of continental drift – this book was written long before the theory of plate tectonics was developed), paleontology, Mendel’s laws/genetics/the modern synthesis and modern evolutionary thought, ecology and ethology, etc. Whether or not you actually do ‘get more out of the book’ if you already know some stuff about the topics about which Darwin speaks is perhaps an open question, but a case can certainly be made that someone who already knows a bit about evolution and related topics will read this book in a different manner than someone who knows very little about them. I should perhaps point out to people new to this blog that even though I hardly consider myself an expert on these sorts of topics, I have nevertheless read quite a bit about them in the past – books like this, this, this, this, this, this, this, this, this, this, this, this, this, this, and this one – so I was reading the book from the vantage point of someone at least somewhat familiar both with many of the basic ideas and with a lot of the refinements people have added to the science of biology since Darwin’s time.

One of the things my knowledge of modern biology and related topics had not prepared me for was how moronic some of the ideas of Darwin’s critics were at the time, and how stupid some of the implicit alternatives were – this is actually part of the fun of reading this book; there was a lot of stuff back then about which even many of the people presumably held in high regard really had no clue, and even outrageously idiotic ideas were seemingly taken quite seriously by people involved in the debate. I assume that biologists still to this day have to spend quite a bit of time and effort dealing with ignorant idiots (see also this), but back in Darwin’s day such people were presumably to a much greater extent taken seriously even within the scientific community, if indeed they were not themselves part of it.

Darwin was not right about everything, and there’s a lot that modern biologists know which he had no idea about, so naturally some mistaken ideas made their way into Origin as well; for example, the idea of the inheritance of acquired characteristics (Lamarckian inheritance) occasionally pops up and is implicitly defended as a credible complement to natural selection, as also noted in Oliver Francis’ afterword to the book. On a general note, it seems that Darwin did a better job of convincing people of the importance of the concept of evolution than of convincing them that the mechanism behind it was natural selection; at least that’s what’s argued in wiki’s featured article on the history of evolutionary thought (to which I have linked before here on the blog).

Darwin emphasizes more than once in the book that evolution is a very slow process which takes a lot of time (for example: “I do believe that natural selection will always act very slowly, often only at long intervals of time, and generally on only a very few of the inhabitants of the same region at the same time”, p.123), and arguably this is something about which he was partly right and partly wrong, because the speed with which natural selection ‘makes itself felt’ depends upon a variety of factors, and it can be really quite fast in some contexts (see e.g. this and some of the topics covered in books like this one; the small simulation sketch below illustrates the point) – though you can appreciate why he held the views he did on that topic.
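
To put a little flesh on the claim that the speed of selection depends on circumstances, here’s a minimal toy simulation – my own sketch, obviously nothing of the sort appears in Darwin’s book. It tracks the frequency of a favoured variant in a simple deterministic haploid model with selection coefficient s:

```python
# Toy haploid selection model (my own illustrative sketch, not from the
# book). A variant with relative fitness 1 + s at frequency p changes
# each generation as p' = p(1 + s) / (1 + p*s).
def generations_to_near_fixation(s: float, p0: float = 0.01,
                                 threshold: float = 0.99) -> int:
    """Generations until the favoured variant exceeds `threshold`."""
    p, gens = p0, 0
    while p < threshold:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

# Weak selection needs thousands of generations; strong selection
# needs only dozens.
for s in (0.001, 0.01, 0.1, 0.5):
    print(f"s = {s:<6} -> ~{generations_to_near_fixation(s):,} generations")
```

Under weak selection (s around 0.001) the toy model needs on the order of ten thousand generations to take a variant from 1% to 99% frequency, which fits Darwin’s intuition about slowness; with s = 0.1 it takes fewer than a hundred, which is why observably rapid evolution is possible when selection pressure is strong.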

A big problem confronting Darwin was that he didn’t know how genes work, so in a sense the ‘mechanics of the whole thing’ – the ‘nuts and bolts’ – were more or less a black box to him (I have included a few quotes which indirectly relate to this problem in my coverage of the book below; as can be inferred from those quotes Darwin wasn’t completely clueless, but he might have benefited greatly from a chat with Gregor Mendel…). In a way, a really interesting thing about the book is how plausible the theory of natural selection is made to seem despite this blatantly obvious (at least to the modern reader) problem. Darwin was incidentally well aware that there was a problem; just six pages into the first chapter he observes frankly that: “The laws governing inheritance are quite unknown”. Some of the quotes below, e.g. on reciprocal crosses, illustrate that he was scratching the surface, but in the book he never does more than that.

Below I have added some quotes from the book.

“Certainly no clear line of demarcation has as yet been drawn between species and sub-species […]; or, again, between sub-species and well-marked varieties, or between lesser varieties and individual differences. These differences blend into each other in an insensible series; and a series impresses the mind with the idea of an actual passage. […] I look at individual differences, though of small interest to the systematist, as of high importance […], as being the first step towards such slight varieties as are barely thought worth recording in works on natural history. And I look at varieties which are in any degree more distinct and permanent, as steps leading to more strongly marked and more permanent varieties; and at these latter, as leading to sub-species, and to species. […] I attribute the passage of a variety, from a state in which it differs very slightly from its parent to one in which it differs more, to the action of natural selection in accumulating […] differences of structure in certain definite directions. Hence I believe a well-marked variety may be justly called an incipient species […] I look at the term species as one arbitrarily given, for the sake of convenience, to a set of individuals closely resembling each other, and that it does not essentially differ from the term variety, which is given to less distinct and more fluctuating forms. The term variety, again, in comparison with mere individual differences, is also applied arbitrarily, and for mere convenience’ sake. […] the species of large genera present a strong analogy with varieties. And we can clearly understand these analogies, if species have once existed as varieties, and have thus originated: whereas, these analogies are utterly inexplicable if each species has been independently created.”

“Owing to [the] struggle for life, any variation, however slight and from whatever cause proceeding, if it be in any degree profitable to an individual of any species, in its infinitely complex relations to other organic beings and to external nature, will tend to the preservation of that individual, and will generally be inherited by its offspring. The offspring, also, will thus have a better chance of surviving, for, of the many individuals of any species which are periodically born, but a small number can survive. I have called this principle, by which each slight variation, if useful, is preserved, by the term of Natural Selection, in order to mark its relation to man’s power of selection. We have seen that man by selection can certainly produce great results, and can adapt organic beings to his own uses, through the accumulation of slight but useful variations, given to him by the hand of Nature. But Natural Selection, as we shall hereafter see, is a power incessantly ready for action, and is as immeasurably superior to man’s feeble efforts, as the works of Nature are to those of Art. […] In looking at Nature, it is most necessary to keep the foregoing considerations always in mind – never to forget that every single organic being around us may be said to be striving to the utmost to increase in numbers; that each lives by a struggle at some period of its life; that heavy destruction inevitably falls either on the young or old, during each generation or at recurrent intervals. Lighten any check, mitigate the destruction ever so little, and the number of the species will almost instantaneously increase to any amount. The face of Nature may be compared to a yielding surface, with ten thousand sharp wedges packed close together and driven inwards by incessant blows, sometimes one wedge being struck, and then another with greater force. […] A corollary of the highest importance may be deduced from the foregoing remarks, namely, that the structure of every organic being is related, in the most essential yet often hidden manner, to that of all other organic beings, with which it comes into competition for food or residence, or from which it has to escape, or on which it preys.”

“Under nature, the slightest difference of structure or constitution may well turn the nicely-balanced scale in the struggle for life, and so be preserved. How fleeting are the wishes and efforts of man! how short his time! And consequently how poor will his products be, compared with those accumulated by nature during whole geological periods. […] It may be said that natural selection is daily and hourly scrutinising, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, whenever and wherever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life. We see nothing of these slow changes in progress, until the hand of time has marked the long lapses of ages, and then so imperfect is our view into long past geological ages, that we only see that the forms of life are now different from what they formerly were.”

“I have collected so large a body of facts, showing, in accordance with the almost universal belief of breeders, that with animals and plants a cross between different varieties, or between individuals of the same variety but of another strain, gives vigour and fertility to the offspring; and on the other hand, that close interbreeding diminishes vigour and fertility; that these facts alone incline me to believe that it is a general law of nature (utterly ignorant though we be of the meaning of the law) that no organic being self-fertilises itself for an eternity of generations; but that a cross with another individual is occasionally perhaps at very long intervals — indispensable. […] in many organic beings, a cross between two individuals is an obvious necessity for each birth; in many others it occurs perhaps only at long intervals; but in none, as I suspect, can self-fertilisation go on for perpetuity.”

“as new species in the course of time are formed through natural selection, others will become rarer and rarer, and finally extinct. The forms which stand in closest competition with those undergoing modification and improvement, will naturally suffer most. […] Whatever the cause may be of each slight difference in the offspring from their parents – and a cause for each must exist – it is the steady accumulation, through natural selection, of such differences, when beneficial to the individual, which gives rise to all the more important modifications of structure, by which the innumerable beings on the face of this earth are enabled to struggle with each other, and the best adapted to survive.”

“Natural selection, as has just been remarked, leads to divergence of character and to much extinction of the less improved and intermediate forms of life. On these principles, I believe, the nature of the affinities of all organic beings may be explained. It is a truly wonderful fact – the wonder of which we are apt to overlook from familiarity – that all animals and all plants throughout all time and space should be related to each other in group subordinate to group, in the manner which we everywhere behold – namely, varieties of the same species most closely related together, species of the same genus less closely and unequally related together, forming sections and sub-genera, species of distinct genera much less closely related, and genera related in different degrees, forming sub-families, families, orders, sub-classes, and classes. The several subordinate groups in any class cannot be ranked in a single file, but seem rather to be clustered round points, and these round other points, and so on in almost endless cycles. On the view that each species has been independently created, I can see no explanation of this great fact in the classification of all organic beings; but, to the best of my judgment, it is explained through inheritance and the complex action of natural selection, entailing extinction and divergence of character […] The affinities of all the beings of the same class have sometimes been represented by a great tree. I believe this simile largely speaks the truth. The green and budding twigs may represent existing species; and those produced during each former year may represent the long succession of extinct species. At each period of growth all the growing twigs have tried to branch out on all sides, and to overtop and kill the surrounding twigs and branches, in the same manner as species and groups of species have tried to overmaster other species in the great battle for life. The limbs divided into great branches, and these into lesser and lesser branches, were themselves once, when the tree was small, budding twigs; and this connexion of the former and present buds by ramifying branches may well represent the classification of all extinct and living species in groups subordinate to groups. Of the many twigs which flourished when the tree was a mere bush, only two or three, now grown into great branches, yet survive and bear all the other branches; so with the species which lived during long-past geological periods, very few now have living and modified descendants. From the first growth of the tree, many a limb and branch has decayed and dropped off; and these lost branches of various sizes may represent those whole orders, families, and genera which have now no living representatives, and which are known to us only from having been found in a fossil state. As we here and there see a thin straggling branch springing from a fork low down in a tree, and which by some chance has been favoured and is still alive on its summit, so we occasionally see an animal like the Ornithorhynchus or Lepidosiren, which in some small degree connects by its affinities two large branches of life, and which has apparently been saved from fatal competition by having inhabited a protected station. 
As buds give rise by growth to fresh buds, and these, if vigorous, branch out and overtop on all sides many a feebler branch, so by generation I believe it has been with the great Tree of Life, which fills with its dead and broken branches the crust of the earth, and covers the surface with its ever branching and beautiful ramifications.”

“No one has been able to point out what kind, or what amount, of difference in any recognisable character is sufficient to prevent two species crossing. It can be shown that plants most widely different in habit and general appearance, and having strongly marked differences in every part of the flower, even in the pollen, in the fruit, and in the cotyledons, can be crossed. […] By a reciprocal cross between two species, I mean the case, for instance, of a stallion-horse being first crossed with a female-ass, and then a male-ass with a mare: these two species may then be said to have been reciprocally crossed. There is often the widest possible difference in the facility of making reciprocal crosses. Such cases are highly important, for they prove that the capacity in any two species to cross is often completely independent of their systematic affinity, or of any recognisable difference in their whole organisation. On the other hand, these cases clearly show that the capacity for crossing is connected with constitutional differences imperceptible by us, and confined to the reproductive system. […] fertility in the hybrid is independent of its external resemblance to either pure parent. […] The foregoing rules and facts […] appear to me clearly to indicate that the sterility both of first crosses and of hybrids is simply incidental or dependent on unknown differences, chiefly in the reproductive systems, of the species which are crossed. […] Laying aside the question of fertility and sterility, in all other respects there seems to be a general and close similarity in the offspring of crossed species, and of crossed varieties. If we look at species as having been specially created, and at varieties as having been produced by secondary laws, this similarity would be an astonishing fact. But it harmonizes perfectly with the view that there is no essential distinction between species and varieties. […] the facts briefly given in this chapter do not seem to me opposed to, but even rather to support the view, that there is no fundamental distinction between species and varieties.”

“Believing, from reasons before alluded to, that our continents have long remained in nearly the same relative position, though subjected to large, but partial oscillations of level, I am strongly inclined to…” (…’probably get some things wrong…’, US)

“In considering the distribution of organic beings over the face of the globe, the first great fact which strikes us is, that neither the similarity nor the dissimilarity of the inhabitants of various regions can be accounted for by their climatal and other physical conditions. Of late, almost every author who has studied the subject has come to this conclusion. […] A second great fact which strikes us in our general review is, that barriers of any kind, or obstacles to free migration, are related in a close and important manner to the differences between the productions of various regions. […] A third great fact, partly included in the foregoing statements, is the affinity of the productions of the same continent or sea, though the species themselves are distinct at different points and stations. It is a law of the widest generality, and every continent offers innumerable instances. Nevertheless the naturalist in travelling, for instance, from north to south never fails to be struck by the manner in which successive groups of beings, specifically distinct, yet clearly related, replace each other. […] We see in these facts some deep organic bond, prevailing throughout space and time, over the same areas of land and water, and independent of their physical conditions. The naturalist must feel little curiosity, who is not led to inquire what this bond is.  This bond, on my theory, is simply inheritance […] The dissimilarity of the inhabitants of different regions may be attributed to modification through natural selection, and in a quite subordinate degree to the direct influence of different physical conditions. The degree of dissimilarity will depend on the migration of the more dominant forms of life from one region into another having been effected with more or less ease, at periods more or less remote; on the nature and number of the former immigrants; and on their action and reaction, in their mutual struggles for life; the relation of organism to organism being, as I have already often remarked, the most important of all relations. Thus the high importance of barriers comes into play by checking migration; as does time for the slow process of modification through natural selection. […] On this principle of inheritance with modification, we can understand how it is that sections of genera, whole genera, and even families are confined to the same areas, as is so commonly and notoriously the case.”

“the natural system is founded on descent with modification […] and […] all true classification is genealogical; […] community of descent is the hidden bond which naturalists have been unconsciously seeking, […] not some unknown plan or creation, or the enunciation of general propositions, and the mere putting together and separating objects more or less alike.”

September 27, 2015 Posted by | biology, books, evolution, genetics, Geology

Mathematically Speaking

This is a book full of quotes on the topic of mathematics. As is always the case with books full of quotations, most of the quotes in this book aren’t very good, but occasionally you come across a quote or two that justifies reading on. I’ll likely include some of the good/interesting quotes in future ‘quotes’ posts. Below I’ve added some sample quotes from the book. I’ve read roughly three-fifths of the book so far, and I’m currently hovering around a two-star rating on goodreads.

“Since authors seldom, if ever, say what they mean, the following glossary is offered to neophytes in mathematical research to help them understand the language that surrounds the formulas …

ANALOGUE. This is an a. of: I have to have some excuse for publishing it.
APPLICATIONS. This is of interest in a.: I have to have some excuse for publishing it.
COMPLETE. The proof is now c.: I can’t finish it. […]
DIFFICULT. This problem is d.: I don’t know the answer. (Cf. Trivial)
GENERALITY. Without loss of g.: I have done an easy special case. […]
INTERESTING. X’s paper is I.: I don’t understand it.
KNOWN. This is a k. result but I reproduce the proof for convenience of the reader: My paper isn’t long enough. […]
NEW. This was proved by X but the following n. proof may present points of interest: I can’t understand X.
NOTATION. To simplify the n.: It is too much trouble to change now.
OBSERVED. It will be o. that: I hope you have not noticed that.
OBVIOUS. It is o.: I can’t prove it.
READER. The details may be left to the r.: I can’t do it. […]
STRAIGHTFORWARD. By a s. computation: I lost my notes.
TRIVIAL. This problem is t.: I know the answer (Cf. Difficult).
WELL-KNOWN. The result is w.: I can’t find the reference.” (Pétard, H. [Pondiczery, E.S.]).

Here are a few quotes similar to the ones above, provided by a different, unknown source:
“BRIEFLY: I’m running out of time, so I’ll just write and talk faster. […]
HE’S ONE OF THE GREAT LIVING MATHEMATICIANS: He’s written 5 papers and I’ve read 2 of them. […]
I’VE HEARD SO MUCH ABOUT YOU: Stalling a minute may give me time to recall who you are. […]
QUANTIFY: I can’t find anything wrong with your proof except that it won’t work if x is a moon of Jupiter (popular in applied math courses). […]
SKETCH OF A PROOF: I couldn’t verify all the details, so I’ll break it down into the parts I couldn’t prove.
YOUR TALK WAS VERY INTERESTING: I can’t think of anything to say about your talk.” (‘Unknown’)

“Mathematics is neither a description of nature nor an explanation of its operation; it is not concerned with physical motion or with the metaphysical generation of quantities. It is merely the symbolic logic of possible relations, and as such is concerned with neither approximate nor absolute truth, but only with hypothetical truth. That is, mathematics determines which conclusions will follow logically from given premises. The conjunction of mathematics and philosophy, or of mathematics and science is frequently of great service in suggesting new problems and points of view.” (Carl Boyer)

“It’s the nature of mathematics to pose more problems than it can solve.” (Ivars Peterson)

“the social scientist who lacks a mathematical mind and regards a mathematical formula as a magic recipe, rather than as the formulation of a supposition, does not hold forth much promise. A mathematical formula is never more than a precise statement. It must not be made into a Procrustean bed […] The chief merit of mathematization is that it compels us to become conscious of what we are assuming.” (Bertrand de Jouvenel)

“As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” (Albert Einstein)

“[Mathematics] includes much that will neither hurt one who does not know it nor help one who does.” (J. B. Mencke)

“Pure mathematics consists entirely of asseverations to the effect that, if such and such a proposition is true of anything, then such and such another proposition is true of that thing. It is essential not to discuss whether the first proposition is really true, and not to mention what the anything is, of which it is supposed to be true … If our hypothesis is about anything, and not about some one or more particular things, then our deductions constitute mathematics. Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.” (Bertrand Russell)

“Mathematical rigor is like clothing; in its style it ought to suit the occasion, and it diminishes comfort and restricts freedom of movement if it is either too loose or too tight.” (G. F. Simmons).

“at a great distance from its empirical source, or after much “abstract” inbreeding, a mathematical subject is in danger of degeneration. At the inception the style is usually classical; when it shows signs of becoming baroque, then the danger signal is up … In any event, whenever this stage is reached, the only remedy seems to me to be the rejuvenating return to the source: the reinjection of more or less directly empirical ideas.” (John von Neumann)

September 26, 2015 Posted by | books, mathematics, quotes | Leave a comment

Cognitive Psychology (I)

I could theoretically write a lot of posts about this handbook, but I’m probably not going to do that. As I’ve mentioned before I own a physical copy of this book, and blogging physical books is a pain in the neck compared to blogging e-books – this is one of the main reasons why I’m only now starting to blog the book, despite having finished it some time ago.

The book is a 600+ page handbook (752 pages if you include the glossary, index, etc.), and it has 16 chapters on various topics. Though I’m far from sure, I’d estimate that I’ve spent something like 50 hours on the book altogether so far – 3 hours per chapter on average – and that’s just for ‘reading the pages’, so to speak; if I do decide to blog this book in any amount of detail, the amount of time spent on the material in there will go up quite a bit.

So what’s the book about – what is ‘cognitive psychology’? Here are a few remarks on these topics from the preface and the first chapter:

“the leading contemporary approach to human cognition involves studying the brain as well as behaviour. We have used the term “cognitive psychology” in the title of this book to refer to this approach, which forms the basis for our coverage of human cognition. Note, however, that the term “cognitive neuroscience” is often used to describe this approach. […] Note that the distinction between cognitive psychology and cognitive neuroscience is often blurred – the term “cognitive psychology” can be used in a broader sense to include cognitive neuroscience. Indeed, it is in that broader sense that it is used in the title of this book.”

The first chapter – about ‘approaches to human cognition’ – is a bit dense, but I decided to talk a little about it anyway because it seemed like a good way to give you some idea about what the book is about and which sort of content you’ll encounter in it. In the chapter the authors outline four different approaches to human cognition and talk about each of these in a bit of detail. Experimental cognitive psychology is an approach which basically limits itself to behavioural evidence. What they term cognitive neuroscience is an approach using evidence from both behaviour and the brain (that can be accomplished by having people do stuff while their brain activity is being monitored). Cognitive neuropsychology is an approach where you try to use data from brain-damaged individuals to help understand how normal cognition works. The last approach, computational cognitive science, I recently dealt with in the Science of Reading handbook – this approach involves constructing computational models to understand/simulate specific aspects of human cognition. All four approaches are used throughout the book to obtain a greater understanding of the topics covered.

The introductory chapter also gives the reader some information about what the brain looks like and how it’s structured, adds some comments about distinctions between various forms of processing – bottom-up vs. top-down processing, serial vs. parallel processing – and describes common techniques used to study brain activity in neuroscience (single-unit recording, event-related potentials, positron emission tomography, fMRI, efMRI, magnetoencephalography, and transcranial magnetic stimulation). I don’t want to go too much into the specifics of all those topics here, but I should note that I was unaware of the existence of TMS (transcranial magnetic stimulation) research methodologies and that it sounds like an interesting approach; basically what people do when they use this approach is to use magnetic pulses to briefly disrupt the functioning of some area of the brain and then evaluate performance on cognitive tasks performed while that area is disrupted – if people perform more poorly on a given task under those conditions, it may indicate that the disrupted brain area is involved in that task. For various reasons it’s not unproblematic to interpret the results of TMS research and there are various limitations to the application of this method, but this is experimental manipulation of a kind I’d basically assumed did not exist in this field before I started out reading the book.

It’s noted in the first chapter that: “much research in cognitive psychology suffers from a relative lack of ecological validity […] and paradigm specificity (findings do not generalise from one paradigm to others). The same limitations apply to cognitive neuroscience since cognitive neuroscientists generally use tasks previously developed by cognitive psychologists. Indeed, the problem of ecological validity may be greater in cognitive neuroscience.” In the context of cognitive neuropsychology, there are also various problems which I’m reasonably sure I’ve talked about here before – for example brain damage is rarely conveniently localized to just the one brain area the researcher happens to be interested in, and the use of compensatory strategies by individuals with brain damage may cause problems with interpretation. Small sample sizes and large patient heterogeneity within these samples do not help either. As for the last approach, computational cognitive science, the problems mentioned are probably mostly the ones you’d expect: the models developed are rarely used to make new predictions, because they’re often too general to be easy to evaluate one way or the other (lots of free parameters you can fit however you like), and despite their complexity they tend to ignore a lot of presumably highly relevant details.

The above was an outline of some stuff covered in the first chapter. The book as mentioned has 16 chapters. ‘Part 1’ deals with visual perception and attention – there’s a lot of stuff about that kind of thing in the book, almost 200 pages – and includes chapters about ‘basic processes in visual perception’, ‘object and face recognition’, ‘perception, motion, and action’, and ‘attention and performance’. Part 2 deals with memory, including chapters about ‘learning, memory, and forgetting’, ‘long-term memory systems’ and ‘everyday memory’. That part I found interesting and I hope I’ll manage to find the time to cover some of that stuff here later on. Part 3 deals with language and includes chapters about ‘reading and speech perception’, ‘language comprehension’, and ‘language production’. I recall wondering a long time ago on this blog whether people doing research on those kinds of topics distinguished between language production and language comprehension; it’s pretty obvious that they do. Part 4 deals with ‘thinking and reasoning’ and includes chapters about ‘problem solving and expertise’, ‘judgment and decision making’, and ‘inductive and deductive reasoning’. Interestingly the first of these chapters talks quite a bit about chess, because chess expertise is one of the areas researchers have used to study expertise in general. I may decide to talk about these things later on, but I’m not sure I’ll cover the stuff in part 4 in much detail because Gigerenzer (whose research the authors discuss in chapter 13) covers some related topics in his book Simply Rational, which I’m currently reading, and I frankly like his coverage better (I should perhaps clarify in light of the previous remarks that Gigerenzer does not cover chess, but rather talks about other topics also covered in that section – the coverage overlap relates to Gigerenzer’s work on heuristics). The last part of the book has a chapter on cognition and emotion and a chapter about consciousness.

In each chapter the authors start out by outlining some key features/distinctions of interest. They talk about what the specific theory/hypothesis/etc. is about, then they talk about the research results, and then they give their own evaluation of the research and conclude the coverage by outlining some limitations of the available research. Multiple topics are covered this way – presentation, research, evaluation, limitations – in each chapter, and when multiple competing hypotheses/approaches have been presented the evaluations will highlight strengths and weaknesses of each approach. Along the way you’ll encounter boxes at the bottom of the pages with bolded ‘key terms’ and definitions of those terms, as well as figures and tables with research results and illustrations of brain areas involved; key terms are also bolded in the text, so even if you don’t completely destroy the book by painting all over the pages with highlighters of different colours the way I do, it should be reasonably easy to navigate the content on a second reading. Usually the research on a given topic will be divided into sections if multiple approaches have been used to elucidate problems of interest; so there’ll be one section dealing with the cognitive neuropsychology research, and another section about the cognitive neuroscience results. All chapters end with a brief outline of key terms/models/approaches encountered in the chapter and some of the main results discussed. The book is well structured. Coverage is in my opinion a bit superficial, which is one of the main reasons why I only gave the book three stars, and the authors are not always as skeptical as I’d have liked them to be – I did not always agree with the conclusions they drew from the research they discussed in the chapters, and occasionally I think they miss alternative explanations or misinterpret what the data is telling us. Some of the theoretical approaches they discuss in the text I frankly considered (next to) worthless and a waste of time. It’s been a while since I finished the book and of course I don’t recall details as well as I’d like, but from what I remember and from briefly skimming it again while writing this post it’s far from a terrible book, and on a general note it covers some interesting stuff – we’ll see how much of it I’ll manage to talk about here on the blog in the time to come. Regardless of how much more time I’ll be able to devote to the book here on the blog, this post should at least have given you some idea about which topics are covered in the book and how they’re covered.

September 24, 2015 Posted by | books, Psychology | Leave a comment


Quotes

i. “If we keep an open mind, too much is likely to fall into it.” (Natalie Clifford Barney)

ii. “The advantage of love at first sight is that it delays a second sight.” (-ll-)

iii. “They used to call it the ‘Great War’. But I’ll be damned if I could tell you what was so ‘great’ about it. They also called it ‘the war to end all wars’…’cause they figured it was so big and awful that the world’d just have to come to its senses and make damn sure we never fought another one ever again.
That woulda been a helluva nice story.
But the truth’s got an ugly way of killin’ nice stories.” (Max Brooks)

iv. “Bromidic though it may sound, some questions don’t have answers, which is a terribly difficult lesson to learn.” (Katharine Graham)

v. “Cynicism is an unpleasant way of saying the truth.” (Lillian Hellman)

vi. “Lonely people, in talking to each other can make each other lonelier.” (-ll-)

vii. “When they [Hugh Walpole and Arnold Bennett] had gone, Plum [P. G. Wodehouse] and Guy [Guy Bolton] looked at each other with that glassy expression in their eyes which visiting literary men so often induce. They were feeling a little faint.
‘These authors!’ said Guy […Bolton, the author].
‘One really ought to meet them only in their books’, said Plum.” (quote from the book ‘Bring on the Girls’, written by Wodehouse and Bolton… The humour in this book is delightfully ‘meta’ at times. See also my review of the book here).

viii. “Illness must be considered to be as natural as health.” (William Saroyan)

ix. “An age is called Dark not because the light fails to shine, but because people refuse to see it.” (James Michener)

x. “I am terrified of restrictive religious doctrine, having learned from history that when men who adhere to any form of it are in control, common men like me are in peril.” (-ll-)

xi. “You can safely assume you’ve created God in your own image when it turns out that God hates all the same people you do.” (Anne Lamott)

xii. “People don’t ever seem to realise that doing what’s right’s no guarantee against misfortune.” (William McFee)

xiii. “If once a man indulges himself in murder, very soon he comes to think little of robbing; and from robbing he comes next to drinking and Sabbath-breaking, and from that to incivility and procrastination. Once begun upon this downward path, you never know where you are to stop. Many a man has dated his ruin from some murder or other that perhaps he thought little of at the time.” (Thomas De Quincey)

xiv. “In many walks of life, a conscience is a more expensive encumbrance than a wife or a carriage.” (-ll-)

xv. “A promise is binding in the inverse ratio of the numbers to whom it is made.” (-ll-)

xvi. “No safety without risk, and what you risk reveals what you value.” (Jeanette Winterson)

xvii. “When was the last time you looked at anything, solely, and concentratedly, and for its own sake? Ordinary life passes in a near blur. If we go to the theatre or the cinema, the images before us change constantly, and there is the distraction of language. Our loved ones are so well known to us that there is no need to look at them, and one of the gentle jokes of married life is that we do not.” (-ll-)

xviii. “Because we don’t know when we will die, we get to think of life as an inexhaustible well. Yet everything happens only a certain number of times, and a very small number really. How many more times will you remember a certain afternoon of your childhood, some afternoon that is so deeply a part of your being that you can’t even conceive of your life without it? Perhaps four or five times more, perhaps not even that. How many more times will you watch the full moon rise? Perhaps twenty. And yet it all seems limitless.” (Paul Bowles)

xix. “Praise out of season, or tactlessly bestowed, can freeze the heart as much as blame.” (Pearl S. Buck)

xx. “You cannot make yourself feel something you do not feel, but you can make yourself do right in spite of your feelings.” (-ll-).

September 15, 2015 Posted by | quotes | Leave a comment

Cost-effectiveness analysis in health care (III)

This will be my last post about the book. Yesterday I finished reading Darwin’s Origin of Species, which was my 100th book this year (here’s the list), but I can’t face blogging that book at the moment so coverage of that one will have to wait a bit.

In my second post about this book I had originally planned to cover chapter 7 – ‘Analysing costs’ – but as I didn’t want to spend too much time on the post I ended up cutting it short. This omission means that some themes to be discussed below are closely related to stuff covered in the second post, whereas most of the remaining material, more specifically the material from chapters 8, 9 and 10, deals with decision analytic modelling, a quite different topic; in other words the coverage will be slightly more fragmented and less structured than I’d have liked it to be, but there’s not really much to do about that (it doesn’t help in this respect that I decided not to cover chapter 8, but covering that chapter as well was out of the question).

I’ll start with coverage of some of the things they talk about in chapter 7, which as mentioned deals with how to analyze costs in a cost-effectiveness analysis context. They observe in the chapter that health cost data are often skewed to the right, for several reasons (costs incurred by an individual cannot be negative; for many patients the costs may be zero; some study participants may require much more care than the rest, creating a long tail). One way to address skewness is to use the median instead of the mean as the variable of interest, but a problem with this approach is that the median will not be as useful to policy-makers as the mean: the mean multiplied by the size of the population of interest gives a good estimate of the total costs of an intervention, whereas the median is not a very useful variable in that context. Doing data transformations and analyzing transformed data is another way to deal with skewness, but the use of transformations in cost-effectiveness analysis has been questioned for a variety of reasons discussed in the chapter (to give a couple of examples, data transformation methods perform badly if inappropriate transformations are used, and many transformations cannot be used if there are data points with zero costs in the data, which is very common). Of the non-parametric methods aimed at dealing with skewness they discuss a variety of tests which are rarely used, as well as the bootstrap, the latter being one approach which has gained widespread use. They observe in the context of the bootstrap that “it has increasingly been recognized that the conditions the bootstrap requires to produce reliable parameter estimates are not fundamentally different from the conditions required by parametric methods” and note in a later chapter (chapter 11) that: “it is not clear that bootstrap results in the presence of severe skewness are likely to be any more or less valid than parametric results […] bootstrap and parametric methods both rely on sufficient sample sizes and are likely to be valid or invalid in similar circumstances. Instead, interest in the bootstrap has increasingly focused on its usefulness in dealing simultaneously with issues such as censoring, missing data, multiple statistics of interest such as costs and effects, and non-normality.” Going back to the coverage in chapter 7, in the context of skewness they also briefly touch upon the potential use of a GLM framework to address this problem.
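As an aside, the bootstrap is easy to illustrate. Here’s a minimal Python sketch (my own, not from the book, and with made-up numbers) of a percentile bootstrap confidence interval for the mean of right-skewed cost data with a point mass at zero:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated cost data (all numbers hypothetical): a point mass at zero
# plus a long right tail - the typical shape of health cost data.
n = 300
costs = np.where(rng.random(n) < 0.25, 0.0,
                 rng.lognormal(mean=7.0, sigma=1.2, size=n))

def bootstrap_ci(data, stat=np.mean, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap: resample with replacement, recompute the statistic."""
    stats = np.array([stat(rng.choice(data, size=data.size, replace=True))
                      for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

print(f"mean cost:   {costs.mean():8.1f}")      # what the policy-maker needs
print(f"median cost: {np.median(costs):8.1f}")  # much lower under right skew
lo, hi = bootstrap_ci(costs)
print(f"95% bootstrap CI for the mean: ({lo:.1f}, {hi:.1f})")
```

Note how much lower the median is than the mean in data like these; as mentioned above, it’s the mean which – multiplied by the size of the relevant population – is useful for estimating the total costs of an intervention.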

Data is often missing in cost datasets. Some parts of their coverage of these topics were to me but a review of stuff already covered in Bartholomew. Data can be missing for different reasons and through different mechanisms; one distinction is among data missing completely at random (MCAR), missing at random (MAR) (“missing data are correlated in an observable way with the mechanism that generates the cost, i.e. after adjusting the data for observable differences between complete and missing cases, the cost for those with missing data is the same, except for random variation, as for those with complete data”), and not missing at random (NMAR); the last type is also called non-ignorably missing data, and if you have that sort of data the implication is that the costs of those in the observed and unobserved groups differ in unpredictable ways, and if you ignore the process that drives these differences you’ll probably end up with a biased estimator. Another way to distinguish between different types of missing data is to look at patterns within the dataset, where you have:
“*univariate missingness – a single variable in a dataset is causing a problem through missing values, while the remaining variables contain complete information
*unit non-response – no data are recorded for any of the variables for some patients
*monotone missing – caused, for example, by drop-out in panel or longitudinal studies, resulting in variables observed up to a certain time point or wave but not beyond that
*multivariate missing – also called item non-response or general missingness, where some but not all of the variables are missing for some of the subjects.”
The authors note that the most common types of missingness in cost information analyses are the latter two. They discuss some techniques for dealing with missing data, such as complete-case analysis, available-case analysis, and imputation, but I won’t go into the details here. In the last parts of the chapter they talk a little bit about censoring, which can be viewed as a specific type of missing data, and ways to deal with it. Censoring happens when follow-up information on some subjects is not available for the full duration of interest, which may be caused e.g. by attrition (people dropping out of the trial) or insufficient follow-up (the final date of follow-up might be set before all patients reach the endpoint of interest, e.g. death). The two most common methods for dealing with censored cost data are the Kaplan-Meier sample average (-KMSA) estimator and the inverse probability weighting (-IPW) estimator, both of which are non-parametric interval methods. “Comparisons of the IPW and KMSA estimators have shown that they both perform well over different levels of censoring […], and both are considered reasonable approaches for dealing with censoring.” One difference between the two is that the KMSA, unlike the IPW, is not appropriate for dealing with censoring due to attrition unless the attrition is MCAR (and it almost never is), because the KM estimator, and by extension the KMSA estimator, assumes that censoring is independent of the event of interest.
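The IPW approach is simple enough to sketch in code. Below is a rough Python illustration (mine, not the book’s; it uses made-up data and glosses over ties and other details a serious implementation would have to handle) in which each fully observed (‘complete’) cost is weighted by the inverse of the Kaplan-Meier estimate of the probability of remaining uncensored up to that subject’s completion time:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: costs accumulate in proportion to survival time,
# and follow-up may be cut short by an independent censoring time.
event_time = rng.exponential(3.0, n)
censor_time = rng.exponential(5.0, n)
time = np.minimum(event_time, censor_time)
complete = event_time <= censor_time   # cost fully observed only for these subjects
cost = 1000.0 * event_time             # true total cost (known here because we simulate)

# Kaplan-Meier estimate of K(t) = P(censoring time > t): treat *censoring*
# as the event of interest, so the curve steps down at censored observations.
order = np.argsort(time)
t_sorted, complete_sorted = time[order], complete[order]
at_risk = n - np.arange(n)
K = np.cumprod(np.where(~complete_sorted, 1.0 - 1.0 / at_risk, 1.0))

# Weight each complete case by 1/K at its own completion time.
K_at = np.maximum(np.interp(time, t_sorted, K), 1e-6)  # crude step-function lookup
ipw_mean = np.sum(cost[complete] / K_at[complete]) / n

print(f"complete-case mean cost: {cost[complete].mean():8.1f}  (biased downwards)")
print(f"IPW mean cost:           {ipw_mean:8.1f}")
print(f"true mean cost:          {1000.0 * 3.0:8.1f}")
```

The complete-case mean is biased downwards here because subjects with long survival times (and hence high costs) are exactly the ones most likely to be censored; the weighting corrects for that.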

The focus in chapter 8 is on decision tree models, and I decided to skip that chapter as most of it is known stuff which I felt no need to review here (do remember that I to a large extent use this blog as an extended memory, so I’m not only (/mainly?) writing this stuff for other people…). Chapter 9 deals with Markov models, and I’ll talk a little bit about those in the following.

“Markov models analyse uncertain processes over time. They are suited to decisions where the timing of events is important and when events may happen more than once, and therefore they are appropriate where the strategies being evaluated are of a sequential or repetitive nature. Whereas decision trees model uncertain events at chance nodes, Markov models differ in modelling uncertain events as transitions between health states. In particular, Markov models are suited to modelling long-term outcomes, where costs and effects are spread over a long period of time. Therefore Markov models are particularly suited to chronic diseases or situations where events are likely to recur over time […] Over the last decade there has been an increase in the use of Markov models for conducting economic evaluations in a health-care setting […]

A Markov model comprises a finite set of health states in which an individual can be found. The states are such that in any given time interval, the individual will be in only one health state. All individuals in a particular health state have identical characteristics. The number and nature of the states are governed by the decision problem. […] Markov models are concerned with transitions during a series of cycles consisting of short time intervals. The model is run for several cycles, and patients move between states or remain in the same state between cycles […] Movements between states are defined by transition probabilities which can be time dependent or constant over time. All individuals within a given health state are assumed to be identical, and this leads to a limitation of Markov models in that the transition probabilities only depend on the current health state and not on past health states […the process is memoryless…] – this is known as the Markovian assumption”.

They note that in order to build and analyze a Markov model, you need to do the following: *define states and allowable transitions [for example from ‘non-dead’ to ‘dead’ is okay, but going the other way is, well… For a Markov process to end, you need at least one state that cannot be left after it has been reached, and those states are termed ‘absorbing states’], *specify initial conditions in terms of starting probabilities/initial distribution of patients, *specify transition probabilities, *specify a cycle length, *set a stopping rule, *determine rewards, *implement discounting if required, *analyze and evaluate the model, and *explore uncertainties. They talk about each step in more detail in the book, but I won’t go too much into this.

Markov models may be governed by transitions that are either constant over time or time-dependent. In a Markov chain transition probabilities are constant over time, whereas in a Markov process transition probabilities vary over time (/from cycle to cycle). In a simple Markov model the baseline assumption is that transitions only occur once in each cycle and usually the transition is modelled as taking place either at the beginning or the end of cycles, but in reality transitions can take place at any point in time during the cycle. One way to deal with the problem of misidentification (people assumed to be in one health state throughout the cycle even though they’ve transferred to another health state during the cycle) is to use half-cycle corrections, in which an assumption is made that on average state transitions occur halfway through the cycle, instead of at the beginning or the end of a cycle. They note that: “the important principle with the half-cycle correction is not when the transitions occur, but when state membership (i.e. the proportion of the cohort in that state) is counted. The longer the cycle length, the more important it may be to use half-cycle corrections.” When state transitions are assumed to take place may influence factors such as cost discounting (if the cycle is long, it can be important to get the state transition timing reasonably right).
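To make the mechanics concrete, here’s a toy Markov cohort model in Python – a hypothetical three-state (well/sick/dead) setup with made-up transition probabilities, costs, and QALY weights, a fixed 40-cycle stopping rule, discounting, and a half-cycle correction. This is just my own sketch of the steps listed above, not something taken from the book:

```python
import numpy as np

# Hypothetical three-state model: Well -> Sick -> Dead, with Dead absorbing.
P = np.array([[0.85, 0.10, 0.05],    # per-cycle transition probabilities; rows sum to 1
              [0.00, 0.70, 0.30],
              [0.00, 0.00, 1.00]])   # the absorbing state cannot be left
cost = np.array([500.0, 3000.0, 0.0])    # cost per cycle spent in each state
utility = np.array([0.90, 0.60, 0.00])   # QALY weight per cycle in each state
disc_rate, n_cycles = 0.035, 40          # discounting and stopping rule

dist = np.array([1.0, 0.0, 0.0])         # initial conditions: everyone starts Well
membership = [dist]
for _ in range(n_cycles):
    dist = dist @ P                      # one cycle: redistribute the cohort
    membership.append(dist)
M = np.array(membership)                 # state membership at each cycle boundary

# Half-cycle correction: count membership as the average of the distributions
# at the start and at the end of each cycle.
mid = 0.5 * (M[:-1] + M[1:])
w = (1 + disc_rate) ** -np.arange(1, n_cycles + 1)   # per-cycle discount weights

print(f"expected discounted cost per patient:  {np.sum(w * (mid @ cost)):9.2f}")
print(f"expected discounted QALYs per patient: {np.sum(w * (mid @ utility)):9.3f}")
```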

When time dependency is introduced into the model, there are in general two types of time dependency that impact on transition probabilities. One depends on the number of cycles since the start of the model (this is e.g. how you deal with transition probabilities depending on factors like age), whereas the other, which is more difficult to implement, deals with state dependence (curiously they don’t use these two words, but I’ve worked with state dependence models before in labour economics and this is what we’re dealing with here); i.e. here the transition probability will depend upon how long you’ve been in a given state.
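The first kind of time dependency is easy to add to a sketch like the one above – the transition matrix simply becomes a function of the cycle number (made-up numbers again):

```python
import numpy as np

def p_die(cycle, base=0.05, growth=0.05):
    """Hypothetical death probability rising with time since model start (~age)."""
    return min(0.9, base * (1.0 + growth) ** cycle)

dist = np.array([1.0, 0.0, 0.0])
for cycle in range(40):
    d = p_die(cycle)
    P_t = np.array([[0.90 - d, 0.10, d],   # rows must still sum to 1
                    [0.00, max(0.0, 0.70 - d), min(1.0, 0.30 + d)],
                    [0.00, 0.00, 1.00]])
    dist = dist @ P_t
print(f"share of cohort dead after 40 cycles: {dist[2]:.3f}")
```

State dependence (the second kind) cannot be handled this way, precisely because of the Markovian assumption; the usual workaround is to expand the state space – e.g. splitting ‘sick’ into ‘sick for one cycle’, ‘sick for two cycles’, etc. (so-called tunnel states) – so that the time already spent in a state is encoded in the state itself.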

Below I mostly discuss stuff covered in chapter 10; however, I also include a few observations from the final chapter, chapter 11 (on ‘Presenting cost-effectiveness results’). Chapter 10 deals with how to represent uncertainty in decision analytic models. This is an important topic because, as noted later in the book, “The primary objective of economic evaluation should not be hypothesis testing, but rather the estimation of the central parameter of interest—the incremental cost-effectiveness ratio—along with appropriate representation of the uncertainty surrounding that estimate.” In chapter 10 a distinction is made between variability, heterogeneity, and uncertainty. Variability has also been termed first-order uncertainty or stochastic uncertainty, and pertains to variation observed when recording information on resource use or outcomes within a homogeneous sample of individuals. Heterogeneity relates to differences between patients which can be explained, at least in part. They distinguish between two types of uncertainty: structural uncertainty – dealing with decisions and assumptions made about the structure of the model – and parameter uncertainty, which of course relates to the precision of the parameters estimated. After briefly talking about ways to deal with these, they talk about sensitivity analysis.

“Sensitivity analysis involves varying parameter estimates across a range and seeing how this impacts on the model’s results. […] The simplest form is a one-way analysis where each parameter estimate is varied independently and singly to observe the impact on the model results. […] One-way sensitivity analysis can give some insight into the factors influencing the results, and may provide a validity check to assess what happens when particular variables take extreme values. However, it is likely to grossly underestimate overall uncertainty, and ignores correlation between parameters.”

Multi-way sensitivity analysis is a more refined approach, in which more than one parameter estimate is varied – this is sometimes termed scenario analysis. A different approach is threshold analysis, where one attempts to identify the critical value of one or more variables at which the conclusion/decision changes. All of these approaches are deterministic, and they are not without problems. “They fail to take account of the joint parameter uncertainty and correlation between parameters, and rather than providing the decision-maker with a useful indication of the likelihood of a result, they simply provide a range of results associated with varying one or more input estimates.” So of course an alternative has been developed, namely probabilistic sensitivity analysis (-PSA), which already in the mid-1980s started to be used in health economic decision analyses.

“PSA permits the joint uncertainty across all the parameters in the model to be addressed at the same time. It involves sampling model parameter values from distributions imposed on variables in the model. […] The types of distribution imposed are dependent on the nature of the input parameters [but] decision analytic models for the purpose of economic evaluation tend to use homogenous types of input parameters, namely costs, life-years, QALYs, probabilities, and relative treatment effects, and consequently the number of distributions that are frequently used, such as the beta, gamma, and log-normal distributions, is relatively small. […] Uncertainty is then propagated through the model by randomly selecting values from these distributions for each model parameter using Monte Carlo simulation”.
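To give an idea of what this looks like in practice, here’s a minimal Python sketch of PSA on a made-up decision problem (all numbers invented; the point is the mechanics: beta distributions for probabilities, a gamma distribution for costs, and Monte Carlo propagation of the joint parameter uncertainty):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000   # number of Monte Carlo draws

# Hypothetical input distributions of the types named above:
p_event = rng.beta(20, 80, n)                # baseline event probability (mean 0.20)
rr = rng.lognormal(np.log(0.75), 0.1, n)     # relative treatment effect (log-normal)
c_event = rng.gamma(16.0, 250.0, n)          # cost of an event (mean 4000, right-skewed)
q_loss = 0.5 * rng.beta(30, 70, n)           # QALY loss per event
c_treat = 500.0                              # treatment cost, assumed known

# One model evaluation per draw: the treatment avoids p*(1-rr) events per patient.
avoided = p_event * (1.0 - rr)
d_cost = c_treat - avoided * c_event         # incremental cost of treating
d_qaly = avoided * q_loss                    # incremental QALYs gained

print(f"mean incremental cost:  {d_cost.mean():8.1f}")
print(f"mean incremental QALYs: {d_qaly.mean():8.4f}")
print(f"ICER: {d_cost.mean() / d_qaly.mean():,.0f} per QALY")
wtp = 20_000   # willingness-to-pay threshold per QALY
print(f"P(cost-effective at {wtp:,}/QALY): {np.mean(wtp * d_qaly - d_cost > 0):.2f}")
```

The simulated (incremental cost, incremental QALY) pairs are what get plotted on the cost-effectiveness plane, and the share of draws with positive net benefit at a given willingness-to-pay threshold is the basis of cost-effectiveness acceptability curves.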

September 7, 2015 Posted by | econometrics, economics, medicine, statistics | Leave a comment

Loneliness (III)

The last part of the book was disappointing, as the coverage was generally weak and chapter 13 basically devolved into a self-help chapter; I dislike self-help books immensely. I gave the book 2 stars on goodreads, but ended up significantly closer to one star than three. The truth of the matter is that if the book had covered a different topic, one in which I had only a more fleeting interest, there’s no way I’d have read it to the end.

A few observations from the last part of the book below.

“In 2006 we set out to test the impact of loneliness on responses to inequitable treatment. Our strategy involved a game in which the researcher designates one player as “proposer” and the other as “decider” and gives the proposer ten dollars. The proposer must split the money with the decider—along whatever lines he can get the decider to accept. If the decider rejects the proposal, neither player gets any money. […] It will probably come as no surprise that most people are sensitive to whether or not another person is dealing with them fairly, and that they agree to accept more fair offers than unfair ones. They do this even when, as in our experiment, rejecting an offer leaves them with no reward but their pride and their sense of right and wrong. Lonely players generally followed this pattern, and lonely and non-lonely participants in our game accepted comparable numbers of fair offers. However, lonely players accepted more unfair offers than did nonlonely players. They went along more often when their partner treated them unfairly, even though both lonely and nonlonely players rated the offers as equally and profoundly unfair.

This willingness to endure exploitation even when we have a clear sense that the other person is treating us unfairly does not bode well for our chances of achieving satisfying social connections in the long run, and it can place lonely individuals at greater risk of being scammed, or at least disappointed. Over time, the bad experiences that follow can contribute to the lonely person’s impression that, when you come right down to it, betrayal or rejection is lurking around every corner—a perception that plays into fear, hostility, learned helplessness, and passive coping. […] With an impaired ability to discriminate, persevere, and self-regulate, the lonely, both as children and as adults, often engage in extremes. Sometimes, in an effort to belong, they allow themselves to be pushed around, as in our “proposer/decider” game, when a lonely adult feels resentment, but goes ahead and accepts unfair offers. […] At other times, fear might lead […] to almost paranoid levels of self-protection […] whether driven by loneliness or by other factors, it is usually maladaptive to allow yourself to be taken advantage of. […] the most adaptive strategy is to maintain both the ability to detect cheating or betrayal and the ability to carefully modulate one’s response. The dysregulation caused by loneliness consigns us to the extremes of either suffering passively (responding too little) or being “difficult” (responding too intensely).”

“Among bonobos, if a low-ranking female commits some offense against a dominant female’s child, or grabs a piece of food that an older female had her eye on, or fails to surrender ground when a matriarch moves in to groom a male, the higher-ranking female may refuse to share food with or to accept grooming from her subordinate. This kind of rebuke can throw the younger animal into a tantrum right in front of the cold and rejecting elder. The affront is so stressful that it makes the subordinate physically sick, often causing her to vomit at the feet of her nemesis. It appears that apes do not enjoy social rejection any more than humans do.”

“The solution to loneliness is not quantity but quality of relationships. Human connections have to be meaningful and satisfying for each of the people involved, and not according to some external measure. Moreover, relationships are necessarily mutual and require fairly similar levels of intimacy and intensity on both sides. Even casual chitchat […] needs to proceed at a pace that is comfortable for everyone. Coming on too strong, oblivious to the other person’s response, is the quickest way to push someone away. So part of selection is sensing which prospective relationships are promising, and which would be climbing the wrong tree. Loneliness makes us very attentive to social signals. The trick is to be sufficiently calm and “in the moment” to interpret those signals accurately.”

“The kinds of connections — pets, computers — we substitute for human contact are called “parasocial relationships.” You can form a parasocial relationship with television characters, with people you “meet” online, or with your Yorkshire terrier. Is this an effective way to fill the void when connection with other humans, face to face, is thwarted?

The Greeks […] used the term “anthropomorphism” […] to describe the projection of specifically human attributes onto nonhuman entities. Increasing the strength of anthropomorphic beliefs appears to be a useful tactic for coping with loneliness, divorce, widowhood, or merely being single.16 Pet owners project all sorts of human attributes onto their animal companions, and elderly people who have pets appear to be buffered somewhat from the negative impact of stressful life events. They visit their doctors less often than do their petless age-mates. Individuals diagnosed with AIDS are less likely to become depressed if they own a pet. […] whether it’s a god, a devil, an animal, a machine […], a landmark, or a piece of cast-off sports equipment, the anthropomorphized being becomes a social surrogate, and the same neural systems that are activated when we make judgments about other humans are activated when we assess these parasocial relationships.21 […] Our parasocial relationships follow certain patterns based on aspects of our human relationships. People with insecure, anxious attachment styles are more likely than those with secure attachment styles to form perceived social bonds with television characters. They are also more likely than those with secure attachment styles to report an intensification of religious belief over a given time period, including sudden religious conversions later in life. […] Many proponents of technology tell us that computer-mediated social encounters will fill the void left by the decline of community in the real world. […] Studies have shown that the richer the medium […] the more it fosters social cohesion. This may be why, for those who do choose to connect electronically, multiplayer sites […] are becoming popular meeting places. […] forming connections with pets or online friends or even God is a noble attempt by an obligatorily gregarious creature to satisfy a compelling need. But surrogates can never make up completely for the absence of the real thing.”

August 31, 2015 Posted by | books, Psychology | Leave a comment

Peter Svidler Banter Blitz (post for the chess enthusiasts only)

This will be a brief post, but I forgot to add this link in my recent random stuff/open thread post and it’s arguably important enough to deserve a post of its own, so instead of adding a link to the old post – with the inherent risk of some people who’d be interested missing it – I’ll add a link here.

Svidler has won the Russian Championship 7 (7!) times, he’s currently number 19 in the world among active players and has been a top player for as long as I can remember, he’s very charismatic and an excellent communicator, and, most importantly, he’s recently started producing videos where he plays short time control games against people on chess24 while at the same time explaining his thoughts and ideas along the way. Blitz games with live commentary aren’t a new thing – people like IM Sielecki have done this stuff for years – but long sessions like the one above with a player as strong as Svidler definitely are.

Each session (the link above is just to his latest session – he’s produced others before (I’m sure you can find them via google or the chess24 website, I’m unfortunately too lazy to look up the links myself)) lasts somewhere between an hour and a half and two hours. In terms of ‘chess as entertainment’, it does not get much better than this.

August 30, 2015 Posted by | Chess | Leave a comment

Loneliness (II)

Here’s my first post about the book. I’d probably have liked the book better if I hadn’t read the Cognitive Psychology text before this one, as knowledge from that book has made me think a few times in specific contexts that ‘that’s a bit more complicated than you’re making it out to be’ – as I also mentioned in the first post, the book is a bit too popular science-y for my taste. I have been reading other books in the last few days – for example I started reading Darwin a couple of days ago – and so I haven’t really spent much time on this one since my first post; however I have read the first 10 chapters (out of 14) by now, and below I’ve added a few observations from the chapters in the middle.

“In 1958, in a now-legendary, perhaps infamous experiment, the psychologist Harry Harlow of the University of Wisconsin removed newborn rhesus monkeys from their mothers. He presented these newborns instead with two surrogates, one made of wire and one made of cloth […]. Either stand-in could be rigged with a milk bottle, but regardless of which “mother” provided food, infant monkeys spent most of their time clinging to the one made of cloth, running to it immediately when startled or upset. They visited the wire mother only when that surrogate provided food, and then, only for as long as it took to feed.2

Harlow found that monkeys deprived of tactile comfort showed significant delays in their progress, both mentally and emotionally. Those deprived of tactile comfort and also raised in isolation from other monkeys developed additional behavioral aberrations, often severe, from which they never recovered. Even after they had rejoined the troop, these deprived monkeys would sit alone and rock back and forth. They were overly aggressive with their playmates, and later in life they remained unable to form normal attachments. They were, in fact, socially inept — a deficiency that extended down into the most basic biological behaviors. If a socially deprived female was approached by a normal male during the time when hormones made her sexually receptive, she would squat on the floor rather than present her hindquarters. When a previously isolated male approached a receptive female, he would clasp her head instead of her hindquarters, then engage in pelvic thrusts. […] Females raised in isolation became either incompetent or abusive mothers. Even monkeys raised in cages where they could see, smell, and hear — but not touch — other monkeys developed what the neuroscientist Mary Carlson has called an “autistic-like syndrome,” with excessive grooming, self-clasping, social withdrawal, and rocking. As Carlson told a reporter, “You were not really a monkey unless you were raised in an interactive monkey environment.””

In the authors’ coverage of oxytocin’s various roles in human and animal social interaction they’re laying it on a bit thick in my opinion, and the less-than-skeptical coverage there leads me to be somewhat skeptical of their coverage of the topic of mirror neurons as well, also on account of stuff like this. However I decided to add a little of the coverage of this topic anyway:

“In the 1980s the neurophysiologist Giacomo Rizzolatti began experimenting with macaque monkeys, running electrodes directly into their brains and giving them various objects to handle. The wiring was so precise that it allowed Rizzolatti and his colleagues to identify the specific monkey neurons that were activated at any moment.

When the monkeys carried out an action, such as reaching for a peanut, an area in the premotor cortex called F5 would fire […]. But then the scientists noticed something quite unexpected. When one of the researchers picked up a peanut to hand it to the monkey, those same motor neurons in the monkey’s brain fired. It was as if the animal itself had picked up the peanut. Likewise, the same neurons that fired when the monkey put a peanut in its mouth would fire when the monkey watched a researcher put a peanut in his mouth. […] Rizzolatti gave these structures the name “mirror neurons.” They fire even when the critical point of the action—the person’s hand grasping the peanut, for instance — is hidden from view behind some object, provided that the monkey knows there is a peanut back there. Even simply hearing the action — a peanut shell being cracked — can trigger the response. In all these instances, it is the goal rather than the observed action itself that is being mirrored in the monkey’s neural response. […] Rizzolatti and his colleagues confirmed the role of goals […] by performing brain scans while people watched humans, monkeys, and dogs opening and closing their jaws as if biting. Then they repeated the scans while the study subjects watched humans speak, monkeys smack their lips, and dogs bark.9 When the participants watched any of the three species carrying out the biting motion, the same areas of their brains were activated that activate when humans themselves bite. That is, observing actions that could reasonably be performed by humans, even when the performers were monkeys or dogs, activated the appropriate portion of the mirror neuron system in the human brain. […] the mirror neuron system isn’t simply “monkey see, monkey do,” or even “human see, human do.” It functions to give the observing individual knowledge of the observed action from a “personal” perspective. This “personal” understanding of others’ actions, it appears, promotes our understanding of and resonance with others.”

“In a study of how people monitor social cues, when researchers gave participants facts related to interpersonal or collective social ties presented in a diary format, those who were lonely remembered a greater proportion of this information than did those who were not lonely. Feeling lonely increases a person’s attentiveness to social cues just as being hungry increases a person’s attentiveness to food cues.28 […] They [later] presented images of twenty-four male and female faces depicting four emotions — anger, fear, happiness, and sadness — in two modes, high intensity and low intensity. The faces appeared individually for only one second, during which participants had to judge the emotional timbre. The higher the participants’ level of loneliness, the less accurate their interpretation of the facial expressions.”

“As we try to determine the meaning of events around us, we humans are not particularly good at knowing the causes of our own feelings or behavior. We overestimate our own strengths and underestimate our faults. We overestimate the importance of our contribution to group activities, the pervasiveness of our beliefs within the wider population, and the likelihood that an event we desire will occur.3 At the same time we underestimate the contribution of others, as well as the likelihood that risks in the world apply to us. Events that unfold unexpectedly are not reasoned about as much as they are rationalized, and the act of remembering itself […] is far more of a biased reconstruction than an accurate recollection of events. […] Amid all the standard distortions we engage in, […] loneliness also sets us apart by making us more fragile, negative, and self-critical. […] One of the distinguishing characteristics of people who have become chronically lonely is the perception that they are doomed to social failure, with little if any control over external circumstances. Awash in pessimism, and feeling the need to protect themselves at every turn, they tend to withdraw, or to rely on the passive forms of coping under stress […] The social strategy that loneliness induces — high in social avoidance, low in social approach — also predicts future loneliness. The cynical worldview induced by loneliness, which consists of alienation and little faith in others, in turn, has been shown to contribute to actual social rejection. This is how feeling lonely creates self-fulfilling prophecies. If you maintain a subjective sense of rejection long enough, over time you are far more likely to confront the actual social rejection that you dread.8 […] In an effort to protect themselves against disappointment and the pain of rejection, the lonely can come up with endless numbers of reasons why a particular effort to reach out will be pointless, or why a particular relationship will never work. This may help explain why, when we’re feeling lonely, we undermine ourselves by assuming that we lack social skills that, in fact, we do have available.”

“Because the emotional system that governs human self-preservation was built for a primitive environment and simple, direct dangers, it can be extremely naïve. It is impressionable and prefers shallow, social, and anecdotal information to abstract data. […] A sense of isolation can make [humans] feel unsafe. When we feel unsafe, we do the same thing a hunter-gatherer on the plains of Africa would do — we scan the horizon for threats. And just like a hunter-gatherer hearing an ominous sound in the brush, the lonely person too often assumes the worst, tightens up, and goes into the psychological equivalent of a protective crouch.”

“One might expect that a lonely person, hungry to fulfill unmet social needs, would be very accepting of a new acquaintance, just as a famished person might take pleasure in food that was not perfectly prepared or her favorite item on the menu. However, when people feel lonely they are actually far less accepting of potential new friends than when they feel socially contented.17 Studies show that lonely undergraduates hold more negative perceptions of their roommates than do their nonlonely peers.”

August 30, 2015 Posted by | biology, books, Psychology, Zoology | Leave a comment

Random Stuff / Open Thread

This is not a very ‘meaty’ post, but it’s been a long time since I had one of these and I figured it was time for another one. As always links and comments are welcome.

i. The unbearable accuracy of stereotypes. A long time ago I made a mental note to read this paper later, but I’ve been busy with other things. Today I skimmed it and decided that it looks interesting enough to give it a detailed read later. Some remarks from the summary towards the end of the paper:

“The scientific evidence provides more evidence of accuracy than of inaccuracy in social stereotypes. The most appropriate generalization based on the evidence is that people’s beliefs about groups are usually moderately to highly accurate, and are occasionally highly inaccurate. […] This pattern of empirical support for moderate to high stereotype accuracy is not unique to any particular target or perceiver group. Accuracy has been found with racial and ethnic groups, gender, occupations, and college groups. […] The pattern of moderate to high stereotype accuracy is not unique to any particular research team or methodology. […] This pattern of moderate to high stereotype accuracy is not unique to the substance of the stereotype belief. It occurs for stereotypes regarding personality traits, demographic characteristics, achievement, attitudes, and behavior. […] The strong form of the exaggeration hypothesis – either defining stereotypes as exaggerations or as claiming that stereotypes usually lead to exaggeration – is not supported by data. Exaggeration does sometimes occur, but it does not appear to occur much more frequently than does accuracy or underestimation, and may even occur less frequently.”

I should perhaps note that this research is closely linked to Funder’s research on personality judgment, which I’ve previously covered on the blog here and here.

ii. I’ve spent approximately 150 hours on the site altogether at this point (having ‘mastered’ ~10,200 words in the process). A few words I’ve recently encountered on the site: Nescience (note to self: if someone calls you ‘nescient’ during a conversation, in many contexts that’ll be an insult, not a compliment) (Related note to self: I should find myself some smarter enemies, who use words like ‘nescient’…), eristic, carrel, oleaginous, decal, gable, epigone, armoire, chalet, cashmere, arrogate, ovine.

iii. why p = .048 should be rare (and why this feels counterintuitive).

iv. A while back I posted a few comments on SSC and I figured I might as well link to them here (at least it’ll make it easier for me to find them later on). Here is where I posted a few comments on a recent study dealing with Ramadan-related IQ effects, a topic which I’ve covered here on the blog before, and here I discuss some of the benefits of not having low self-esteem.

On a completely unrelated note, today I left a comment in a reddit thread about ‘Books That Challenged You / Made You See the World Differently’ which may also be of interest to readers of this blog. I realized while writing the comment that this question is probably getting more and more difficult for me to answer as time goes by. It really all depends upon what part of the world you want to see in a different light; which aspects you’re most interested in. For people wondering about where the books about mathematics and statistics were in that comment (I do like to think these fields play some role in terms of ‘how I see the world‘), I wasn’t really sure which book to include on such topics, if any; I can’t think of any single math or stats textbook that’s dramatically changed the way I thought about the world – to the extent that my knowledge about these topics has changed how I think about the world, it’s been a long drawn-out process.

v. Chess…

People who care the least bit about such things probably already know that a really strong tournament is currently being played in St. Louis, the so-called Sinquefield Cup, so I’m not going to talk about that here (for resources and relevant links, go here).

I talked about the strong rating pools on ICC not too long ago, but one thing I did not mention when discussing this topic back then was that yes, I also occasionally win against some of those grandmasters the rating pool throws at me – at least I’ve won a few times against GMs by now in bullet. I’m aware that for many ‘serious chess players’ bullet ‘doesn’t really count’ because the time dimension is much more important than it is in other chess settings, but to people who think skill doesn’t matter much in bullet I’d say they should have a match with Hikaru Nakamura and see how well they do against him (if you’re interested in how that might turn out, see e.g. this video – and keep in mind that at the beginning of the video Nakamura had already won the first 8 games of the session, out of 8, against his opponent, who incidentally is not exactly a beginner). The skill-sets required do not overlap perfectly between bullet and classical time control games, but when I started playing bullet online I quickly realized that good players really require very little time to completely outplay people who just play random moves (fast). Below I have posted a screencap I took while kibitzing a game of one of my former opponents, an anonymous GM from Germany, against whom I currently have a 2.5/6 score, with two wins, one draw, and three losses (see the ‘My score vs CPE’ box).

Kibitzing GMs (click to view full size).

I like to think of a score like this as at least some kind of accomplishment, though admittedly perhaps not a very big one.

Also in chess-related news, I’m currently reading Jesús de la Villa’s 100 Endgames book, which Christof Sielecki has said some very nice things about. A lot of the material I’ve encountered so far is familiar – positions I’ve already worked on, endgame principles I know, etc. – but not all of it is, and I really like the structure of the book. There are a lot of pages left, and as it is I’m planning to read this book from cover to cover, which is something I usually do not do with chess books (few people do, judging from various comments I’ve seen people make in all kinds of different contexts).

Lastly, a lecture:

August 25, 2015 Posted by | biology, books, Chess, Lectures, personal, Psychology, statistics | Leave a comment

Cost-effectiveness analysis in health care (II)

Here’s my first post about the book.

As with the first post, I cannot promise that I have not already covered some of the topics below elsewhere on the blog. In this post I’ll include and discuss material from two chapters of the book: the chapter on how to measure, value, and analyze health outcomes, and the chapter on how to define, measure, and value costs. In the last part of the post I’ll also talk a little bit about some related research which I’ve recently looked at in a different context.

In terms of how to measure health outcomes, the first thing to note is that there are lots and lots of different measures (‘thousands’) in use. The symptoms causing problems for an elderly man with an enlarged prostate are not the same as those bothering a young child with asthma, and so it can be very difficult to ‘standardize’ across measures (more on this below).

A general distinction in this area is that between non-preference-based measures and preference-based measures. Many researchers working with health data are mostly interested in measuring symptoms, and metrics which do (‘only’) this are examples of non-preference-based measures. Non-preference-based measures can again be subdivided into disease- and symptom-specific measures and non-disease-specific/generic measures; an example of the latter would be the SF-36, ‘the most widely used and best-known example of a generic or non-disease-specific measure of general health’.

Economists will often want to put a value on symptoms or quality-of-life states, and in order to do this you need to work with preference-based measures – there are a lot of limitations one confronts when dealing with non-preference-based measures. Non-preference-based measures tend for example to be very different in design and purpose (because asthma is not the same thing as, say, bulimia), which means that there is often a lack of comparability across measures. It is also difficult to know how to properly trade off the various dimensions included in such metrics (for example pain relief can be the result of a drug which also increases nausea, and it’s not perfectly clear when you use such measures whether such a change is to be considered desirable or not); similar problems occur when taking the time dimension into account, where questions about how to aggregate outcomes over time pop up. Various problems related to weighting are recurring: for example, when using such measures one can ask which of the included symptoms/dimensions are more important – are they all equally important? This goes both for the weighting of the various domains included in the metric and for the weighting of individual questions within a given domain. Many non-preference-based measures contain an implicit equal-interval assumption, so that a move from (e.g.) level one to level two on the metric (e.g. from ‘no pain at all’ to ‘a little’) is considered the same as a move from (e.g.) level three to level four (e.g. ‘quite a bit’ to ‘very much’), and it’s not actually clear that the people who supply the information that goes into these metrics would consider such an approach a correct reflection of how they perceive these things. Conceptually related to the aggregation problem mentioned above is the problem that people may have different attitudes toward short-term and long-term health effects/outcomes, whereas non-preference-based measures usually give equal weight to a health state regardless of its timing. The issue of some patients dying is not addressed at all by these measures, as they contain no information about mortality, which may be an important variable. For all these reasons the authors argue in the text that:

“In summary, non-preference-based health status measures, whether disease specific or generic, are not suitable as outcome measures in economic evaluation. Instead, economists require a measure that combines quality and quantity of life, and that also incorporates the valuations that individuals place on particular states of health.
The outcome metric that is currently favoured as meeting these requirements and facilitating the widest possible comparison between alternative uses of health resources is the quality-adjusted life year”.

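Since the QALY is the outcome metric favoured in the rest of the coverage, a minimal sketch of the underlying arithmetic may be helpful. This is just the generic ‘utility weight times duration’ idea with optional discounting, not code from the book; the function name and the numbers are mine:

```python
def qalys(utilities, durations, discount_rate=0.0):
    """Quality-adjusted life years: each health state contributes
    utility * duration, here (crudely) discounted back to the start
    of the first period."""
    total, elapsed = 0.0, 0.0
    for u, t in zip(utilities, durations):
        total += u * t / (1 + discount_rate) ** elapsed
        elapsed += t
    return total

# 5 years at utility 0.8 followed by 5 years at utility 0.6:
print(qalys([0.8, 0.6], [5, 5]))         # 7.0 undiscounted QALYs
print(qalys([0.8, 0.6], [5, 5], 0.035))  # same profile, discounted at 3.5%
```
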
Non-preference-based tools may be useful, but you will usually need to go ‘further’ than those to be able to handle the problems economists will tend to care the most about. Some more observations from the chapter below:

“the most important challenge [when valuing health states] is to find a reliable way of quantifying the quality of life associated with any particular health state. There are two elements to this: describing the health state, which […] could be either a disease-specific description or a generic description intended to cover many different diseases, and placing a valuation on the health state. […] these weights or valuations are related to utility theory and are frequently referred to as utilities or utility values.
Obtaining utility values almost invariably involves some process by which individuals are given descriptions of a number of health states and then directly or indirectly express their preferences for these states. It is relatively simple to measure ordinal preferences by asking respondents to rank-order different health states. However, these give no information on strength of preference and a simple ranking suffers from the equal interval assumption […]; as a result they are not suitable for economic evaluation. Instead, analysts make use of cardinal preference measurement. Three main methods have been used to obtain cardinal measures of health state preferences: the rating scale, the time trade-off, and the standard gamble. […] The large differences typically observed between RS [rating scale] and TTO [time trade-off] or SG [standard gamble] valuations, and the fact that the TTO and SG methods are choice based and therefore have stronger foundations in decision theory, have led most standard texts and guidelines for technology appraisal to recommend choice-based valuation methods [The methods are briefly described here, where the ‘VAS’ corresponds to the rating scale method mentioned – the book covers the methods in much more detail, but I won’t go into those details here].”

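To make the two choice-based methods concrete: in a time trade-off exercise the respondent states how many years in full health they consider equivalent to a longer period in the health state, and in a standard gamble they state the winning probability at which they are indifferent between the state for certain and a gamble between full health and death. Under the standard textbook assumptions the utilities then fall out directly – a purely illustrative sketch:

```python
def tto_utility(years_full_health, years_in_state):
    """Time trade-off: respondent is indifferent between `years_in_state`
    lived in the health state and `years_full_health` lived in full
    health; the utility of the state is the ratio of the two."""
    return years_full_health / years_in_state

def sg_utility(p_full_health):
    """Standard gamble: respondent is indifferent between the health state
    for certain and a gamble giving full health with probability p and
    immediate death with probability 1 - p; the utility is simply p."""
    return p_full_health

# Indifferent between 10 years in the state and 6 years in full health:
print(tto_utility(6, 10))   # 0.6
# Indifferent at a 90% chance of full health vs the state for certain:
print(sg_utility(0.9))      # 0.9
```
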
“Controversies over health state valuation are not confined to the valuation method; there are also several strands of opinion concerning who should provide valuations. In principle, valuations could be provided by patients who have had first-hand experience of the health state in question, or by experts such as clinicians with relevant scientific or clinical expertise, or by members of the public. […] there is good evidence that the valuations made by population samples and patients frequently vary quite substantially [and] the direction of difference is not always consistent. […] current practice has moved towards the use of valuations obtained from the general public […], an approach endorsed by recent guidelines in the UK and USA[, which] explicitly recommend that population valuations are used”.

Given the very large number of studies which have been based on non-preference-based instruments, it would be desirable for economists working in this field to somehow ‘translate’ the information contained in those studies so that it can also be used in cost-effectiveness evaluations. As a result, an increasing number of so-called ‘mapping studies’ have been conducted over the years, the desired goal of which is to translate non-preference-based measures into health state utilities, allowing outcomes and effects derived from those studies to be expressed in terms of QALYs. There’s more than one way to try to get from a non-preference-based metric to a preference-based one; the authors describe three approaches in some detail, though I’ll not discuss those details here. They make this concluding assessment of mapping studies in the text:

“Mapping studies are continuing to proliferate, and the literature on new mapping algorithms and methods, and comparisons between approaches, is expanding rapidly. In general, mapping methods seem to have reasonable ability to predict group mean utility scores and to differentiate between groups with or without known existing illness. However, they all seem to predict increasingly poorly as health states become more serious. […] all forms of mapping are ‘second best’, and the existence of a range of techniques should not be taken as an argument for relying on mapping instead of obtaining direct preference-based measurements in prospectively designed studies.”

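For illustration, here is roughly what the simplest kind of mapping exercise looks like: a plain linear regression from a non-preference-based score onto directly measured utilities, estimated on a sample where both instruments were administered, then applied to studies which only collected the former. This is a generic sketch with made-up numbers, not one of the specific algorithms covered in the book:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical estimation sample where both a non-preference-based score
# (say, a 0-100 symptom scale) and a directly measured utility exist.
score = rng.uniform(20, 95, 300)
utility = np.clip(0.2 + 0.008 * score + rng.normal(0, 0.05, 300), 0, 1)

# Fit utility = a + b * score by ordinary least squares.
X = np.column_stack([np.ones_like(score), score])
(a, b), *_ = np.linalg.lstsq(X, utility, rcond=None)

# 'Map' scores from a study which only collected the non-preference measure.
new_scores = np.array([30.0, 55.0, 80.0])
print(a + b * new_scores)   # predicted utilities, usable in QALY estimates
```
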
I won’t talk too much about the chapter on how to define, measure and value costs, but I felt that a few observations from the chapter should be included in the coverage:

“When asking patients to complete resource/time questionnaires (or answer interview questions), a particularly important issue is deciding on the optimum recall period. Two types of recall error can be distinguished: simply forgetting an entire episode, or incorrectly recalling when it occurred. […] there is a trade-off between recall bias and complete sampling information. […] the longer the period of recall the greater is the likelihood of recall error, but the shorter the recall period the greater is the problem of missing information.”

“The range of patient-related costs included in economic valuations can vary considerably. Some studies include only the costs incurred by patients in travelling to a hospital or clinic for treatment; others may include a wider range of costs including over-the-counter purchases of medications or equipment. However, in some studies a much broader approach is taken, in which attempts are made to capture both the costs associated with treatments and the consequences of illness in terms of absence from or cessation of work.”

An important note here, which I thought I should add, is that whereas many people unfamiliar with this field may equate ‘medical costs of illness’ with ‘the money that is paid to the doctor(s)’, direct medical costs will in many cases drastically underestimate the ‘true costs’ of disease. To give an example, Ferber et al. (2006), when looking at the costs of diabetes, included two indirect cost components in their analysis – inability to work, and early retirement – and concluded that these two cost components made up approximately half of the total costs of diabetes. I think there are reasons to be skeptical of the specific estimate on account of the way it is made (for example, if diabetics are less productive/earn less than the population in general, which seems likely if the disease is severe enough to cause many people to withdraw prematurely from the labour market, the cost estimate may be argued to be an overestimate), but on the other hand there are multiple other potentially important indirect cost components they do not include in the calculation, such as e.g. disease-related lower productivity while at work (for details on this, see e.g. this paper – that cost component may also be substantial in some contexts) and things like spousal employment spill-over effects (it is known from related research – for an example, see this PhD dissertation – that disease may impact the retirement decisions of the spouse of the individual who is sick, not just those of the sick individual, but effects here are likely to be highly context-dependent and to vary across countries). Another potentially important variable in an indirect cost context is informal care provision. Here’s what the authors say about that one:

“Informal care is often provided by family members, friends, and volunteers. Devoting time and resources to collecting this information may not be worthwhile for interventions where informal care costs are likely to form a very small part of the total costs. However, in other studies non-health-service costs could represent a substantial part of the total costs. For instance, dementia is a disease where the burden of care is likely to fall upon other care agencies and family members rather than entirely on the health and social care services, in which case considering such costs would be important.
To date [however], most economic evaluations have not considered informal care costs.”

August 23, 2015 Posted by | books, diabetes, economics, medicine | Leave a comment

Promoting the unknown, a continuing series

August 22, 2015 Posted by | music | Leave a comment

Loneliness (I)

I’m currently reading this book by John Cacioppo and William Patrick. It’s a bit too soft/popular science-y for my taste, but the material is interesting.

Below some observations from the book’s part one:

“Serving as a prompt to restore social bonds, loneliness increases the sensitivity of our receptors for social signals. At the same time, because of the deeply rooted fear it represents, loneliness disrupts the way those signals are processed, diminishing the accuracy of the message that actually gets through. When we are persistently lonely, this dual influence — higher sensitivity, less accuracy — can leave us misconstruing social signals that others do not even detect, or if they detect, interpret quite differently.

Reading and interpreting social cues is for any of us, at any time, a demanding and cognitively complex activity, which is why our minds embrace any shortcut that simplifies the job. […] We [all] invariably take cognitive shortcuts, but when we are lonely, the social expectations and snap judgments we create are generally pessimistic. We then use them to construct a bulwark against the negative evaluations and ultimate rejection that the fearful nature of loneliness encourages us to anticipate.”

“When we feel socially connected […] we tend to attribute success to our own actions and failure to bad luck. When we feel socially isolated and depressed, we tend to reverse this useful illusion and turn even small errors into catastrophes—at least in our own minds. Meanwhile, we use the same everyday cognitive shortcuts to try to barricade ourselves against criticism and responsibility for our screw-ups. The net result is that, over time, if we get stuck in loneliness, this complex pattern of behavior can contribute to our isolation from other people. […] What makes loneliness especially insidious is that it contains this Catch-22: Real relief from loneliness requires the cooperation of at least one other person, and yet the more chronic our loneliness becomes, the less equipped we may be to entice such cooperation. Other negative states, such as hunger and pain, that motivate us to make changes to modify unpleasant or aversive conditions can be dealt with by simple, individual action. When you feel hungry, you eat. […] But when the unpleasant state is loneliness, the best way to get relief is to form a connection with someone else. Each of the individuals involved must be willing to connect, must be free to do so, and must agree to more or less the same timetable. Frustration with the difficulty imposed by these terms can trigger hostility, depression, despair, impaired skills in social perception, as well as a sense of diminished personal control. This is when failures of self-regulation, combined with the desire to mask pain with whatever pleasure is readily available, can lead to unwise sexual encounters, too much to drink, or a sticky spoon in the bottom of an empty quart of ice cream. Once this negative feedback loop starts rumbling through our lives, others may start to view us less favorably because of our self-protective, sometimes distant, sometimes caustic behavior. This, in turn, merely reinforces our pessimistic social expectations. Now others really are beginning to treat us badly, which seems like adding insult to injury, which spins the cycle of defensive behavior and negative social results even further downhill.”

“In 2002 our team at the University of Chicago began collecting longitudinal data on a representative sample of middle-aged and older citizens in the greater Chicago metropolitan area. We subjected these volunteers to numerous physiological and psychological measurements, including the UCLA Loneliness Scale. […] When we analyzed the diets of these older adults, what they ate week after week, month after month in real life [we found that] older adults who felt lonely in their daily lives had a substantially higher intake of fatty foods. […] we found that the calories of fat they consumed increased by 2.56 percent for each standard deviation increase in loneliness as measured by the UCLA Loneliness Scale.12”

I must admit I found this finding in particular quite interesting, and surprising:

“In another study, researchers asked participants either to describe a personal problem to an assigned partner, or to adopt the role of listener while the partner described his or her problem.17 Lonely individuals, when specifically requested to take the helping role, were just as socially skilled as the others. They were active listeners, they offered assistance to their partners, and they stayed with the conversation longer than those who were describing their troubles. So we retain the ability to be socially adept when we feel lonely. […] [However] [d]espite their display of skill in the experiment, the lonely participants consistently rated themselves as being less socially adept than other people.”

“factor analysis tells us that loneliness and depression are, in fact, two distinct dimensions of experience.10 […] Loneliness reflects how you feel about your relationships. Depression reflects how you feel, period. Although both are aversive, uncomfortable states, loneliness and depression are in many ways opposites. Loneliness, like hunger, is a warning to do something to alter an uncomfortable and possibly dangerous condition. Depression makes us apathetic. Whereas loneliness urges us to move forward, depression holds us back. But where depression and loneliness converge is in a diminished sense of personal control, which leads to passive coping. This induced passivity is one of the reasons that, despite the pain and urgency that loneliness imposes, it does not always lead to effective action. Loss of executive control leads to lack of persistence, and frustration leads to what the psychologist Martin Seligman has termed “learned helplessness.””

“For our cross-sectional analysis, we went back to the large population of Ohio State students that had supplied volunteers for our dichotic listening test. We refined our sample down to 135 participants, 44 of them high in loneliness, 46 average, and 45 low in loneliness, with each subset equally divided between men and women.16 […] this study population gave us a clear picture of the full psychological drama accompanying loneliness as it occurs in the day-to-day lives of a great many people observed during a specific period of time. The cluster of characteristics we found were the ones we had anticipated: depressed affect, shyness, low self-esteem, anxiety, hostility, pessimism, low agreeableness, neuroticism, introversion, and fear of negative evaluation. […] Analysis of the longitudinal data from our middle-aged and older adults showed that a person’s degree of loneliness in the first year of the study predicted changes in that person’s depressive symptoms during the next two years.21 The lonelier that people were at the beginning, the more depressive affect they experienced in the following years, even after we statistically controlled for their depressive feelings in the first year. We also found that a person’s level of depressive symptoms in the first year of the study predicted changes in that person’s loneliness during the next two years. Those who felt depressed withdrew from others and became lonelier over time.”

“In 1988 an article in Science reviewed [research on loneliness], and that meta-analysis indicated that social isolation is on a par with high blood pressure, obesity, lack of exercise, or smoking as a risk factor for illness and early death.4 For some time the most common explanation for this sizeable effect has been the “social control hypothesis.” This theory holds that, in the absence of a spouse or close friends who might provide material help or a more positive influence, individuals may have a greater tendency to gain weight, to drink too much, or to skip exercise. […] But epidemiological research done on the heels of the analysis published in Science determined that the health effect associated with isolation was statistically too large and too dramatic to be attributed entirely to differences in behavior.”

However, behaviour does matter:

“we found that the health-related behaviors of lonely young people were no worse than those of socially embedded young people. In terms of alcohol consumption, their behavior was, in fact, more restrained and healthful. […] our study of older adults did [however] indicate that, by middle age, time had taken its toll, and the health habits of the lonely had indeed become worse than those of socially embedded people of similar age and circumstances.21 Although lonely young adults were no different from others in their exercise habits, measured either by frequency of activity or by total hours per week, the picture changed with our middle-aged and older population. Socially contented older adults were thirty-seven percent more likely than lonely older adults to have engaged in some type of vigorous physical activity in the previous two weeks. On average they exercised ten minutes more per day than their lonelier counterparts.”

“It may be that the decline in healthful behavior in the lonely can be partially explained by the impairment in executive function, and therefore in self-regulation, that we saw in individuals induced to feel socially rejected. Doing what is good for you, rather than what merely feels good in the moment, requires disciplined self-regulation. Going for a run might feel good when you’re finished, but for most of us, getting out the door in the first place requires an act of willpower. The executive control required for such discipline is compromised by loneliness, and loneliness also tends to lower self-esteem. If you perceive that others see you as worthless, you are more likely to engage in self-destructive behaviors and less likely to take good care of yourself.

Moreover, for lonely older adults, it appears that emotional distress about loneliness, combined with a decline in executive function, leads to attempts to manage mood by smoking, drinking, eating too much, or acting out sexually. Exercise would be a far better way to try to achieve a lift in mood, but disciplined exercise, again, requires executive control. Getting down to the gym or the yoga class three times a week also is much easier if you have friends you enjoy seeing there who reinforce your attempts to stay in shape.”

“Our surveys with the undergraduates at Ohio State showed that lonely and non-lonely young adults did not differ in their exposure to major life stressors, or in the number of major changes they had endured in the previous twelve months. […] However, among the older adults we studied, we found that those who were lonelier also reported larger numbers of objective stressors as being “current” in their lives. It appears that, over time, the “self-protective” behavior associated with loneliness leads to greater marital strife, more run-ins with neighbors, and more social problems overall. […] Even setting aside the larger number of objective stressors in their lives, the lonely express greater feelings of helplessness and threat. In our studies, the lonely, both young and old, perceived the hassles and stresses of everyday life to be more severe than did their non-lonely counterparts, even though the objective stressors they encountered were essentially the same. Compounding the problem, the lonely found the small social uplifts of everyday life to be less intense and less gratifying. […] when people feel lonely, they are far less likely to see any given stressor as an invigorating challenge. Instead of responding with realistic optimism and active engagement, they tend to respond with pessimism and avoidance. They are more likely to cope passively, which means enduring without attempting to change the situation.”

“We found loneliness to be associated with higher traces of the stress hormone epinephrine in the morning urine of older adults.30 Other studies have shown that the allostatic load of feeling lonely also affects the body’s immune and cardiovascular function. Years ago, a classic test with medical students showed that the stress of exams could have a dramatic dampening effect on the immune response, leaving the students more vulnerable to infections. Further studies showed that lonely students were far more adversely affected than those who felt socially contented.”

“One clearly demonstrable consequence of social alienation and isolation for physiological resilience and recovery occurs in the context of the quintessential restorative behavior — sleep. […] when we asked participants to wear a device called the “nightcap” to record changes in the depth and quality of their sleep, we found that total sleep time did not differ across the groups. However, lonely young adults reported taking longer to fall asleep and also feeling greater daytime fatigue.39 Our studies of older adults yielded similar findings, and longitudinal analyses confirmed that it was loneliness specifically that was associated with changes in daytime fatigue. Even though the lonely got the same quantity of sleep as the nonlonely, their quality of sleep was greatly diminished.40”

August 18, 2015 Posted by | books, Psychology | Leave a comment

Cost-effectiveness analysis in health care (I)

Yesterday’s SMBC was awesome, and I couldn’t help including it here (click to view full size):


In a way the three words I chose to omit from the post title are rather important for knowing what kind of book this is – the full title of Gray et al.’s work is: Applied Methods of … – but as I won’t be talking much about the ‘applied’ part in my coverage here, focusing instead on broader principles etc. which will be easier for people without a background in economics to follow, I figured I might as well omit those words from the post titles. I should also admit that I personally did not spend much time on the exercises, as this did not seem necessary in view of what I was using the book for. I did, however, reward the authors in my goodreads rating for their occasionally quite detailed coverage of technical aspects; I feel confident from the coverage that if I need to apply some of the methods they talk about in the book later on, the book will do a good job of helping me get things right. All in all, the book’s coverage made it hard for me not to give it 5 stars – so that was what I did.

I own an actual physical copy of the book, which makes blogging it more difficult than usual; I prefer blogging e-books. The greater amount of work involved in covering physical books is also one reason why I have yet to talk about Eysenck & Keane’s Cognitive Psychology text here on the blog, despite having read more than 500 pages of that book (it’s not that the book is boring). My coverage of both this book and the Eysenck & Keane book will (assuming I ever get around to blogging the latter, that is) be less detailed than it could have been, but on the other hand it’ll likely be very focused on key points and observations.

I have talked about cost-effectiveness before here on the blog, e.g. here, but in my coverage of the book below I have not tried to avoid making points or including observations which I’ve already made elsewhere on the blog; it’s too much work to keep track of such things. With those introductory remarks out of the way, let’s move on to some observations made in the book:

“In cost-effectiveness analysis we first calculate the costs and effects of an intervention and one or more alternatives, then calculate the differences in cost and differences in effect, and finally present these differences in the form of a ratio, i.e. the cost per unit of health outcome effect […]. Because the focus is on differences between two (or more) options or treatments, analysts typically refer to incremental costs, incremental effects, and the incremental cost-effectiveness ratio (ICER). Thus, if we have two options a and b, we calculate their respective costs and effects, then calculate the difference in costs and difference in effects, and then calculate the ICER as the difference in costs divided by the difference in effects […] cost-effectiveness analyses which measure outcomes in terms of QALYs are sometimes referred to as cost-utility studies […] but are sometimes simply considered as a subset of cost-effectiveness analysis.”

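The ICER arithmetic itself is trivial; a minimal sketch with hypothetical numbers (a new treatment costing 12,000 vs 7,000 for the comparator, yielding 6.3 vs 5.8 QALYs, gives an ICER of 10,000 per QALY gained):

```python
def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness ratio of option b relative to a:
    difference in costs divided by difference in effects."""
    return (cost_b - cost_a) / (effect_b - effect_a)

print(icer(7_000, 5.8, 12_000, 6.3))   # ~10,000 (cost per QALY gained)
```
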
“Cost-effectiveness analysis places no monetary value on the health outcomes it is comparing. It does not measure or attempt to measure the underlying worth or value to society of gaining additional QALYs, for example, but simply indicates which options will permit more QALYs to be gained than others with the same resources, assuming that gaining QALYs is agreed to be a reasonable objective for the health care system. Therefore the cost-effectiveness approach will never provide a way of determining how much in total it is worth spending on health care and the pursuit of QALYs rather than on other social objectives such as education, defence, or private consumption. It does not permit us to say whether health care spending is too high or too low, but rather confines itself to the question of how any given level of spending can be arranged to maximize the health outcomes yielded.
In contrast, cost-benefit analysis (CBA) does attempt to place some monetary valuation on health outcomes as well as on health care resources. […] The reasons for the more widespread use of cost-effectiveness analysis compared with cost-benefit analysis in health care are discussed extensively elsewhere, […] but two main issues can be identified. Firstly, significant conceptual or practical problems have been encountered with the two principal methods of obtaining monetary valuations of life or quality of life: the human capital approach […] and the willingness to pay approach […] Second, within the health care sector there remains a widespread and intrinsic aversion to the concept of placing explicit monetary values on health or life. […] The cost-benefit approach should […], in principle, permit broad questions of allocative efficiency to be addressed. […] In contrast, cost-effectiveness analysis can address questions of productive or production efficiency, where a specified good or service is being produced at the lowest possible cost – in this context, health gain using the health care budget.”

“when working in the two-dimensional world of cost-effectiveness analysis, there are two uncertainties that will be encountered. Firstly, there will be uncertainty concerning the location of the intervention on the cost-effectiveness plane: how much more or less effective and how much more or less costly it is than current treatment. Second, there is uncertainty concerning how much the decision-maker is willing to pay for health gain […] these two uncertainties can be presented together in the form of the question ‘What is the probability that this intervention is cost-effective?’, a question which effectively divides our cost-effectiveness plane into just two policy spaces – below the maximum acceptable line, and above it”.

“Conventionally, cost-effectiveness ratios that have been calculated against a baseline or do-nothing option without reference to any alternatives are referred to as average cost-effectiveness ratios, while comparisons with the next best alternative are described as incremental cost-effectiveness ratios […] it is quite misleading to calculate average cost-effectiveness ratios, as they ignore the alternatives available.”

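A small illustration of the difference (made-up numbers): with three mutually exclusive options, the most effective option can look attractive on its average ratio even though the incremental cost of moving to it from the next best alternative is high:

```python
# (cost, QALYs) of mutually exclusive options, sorted by effectiveness.
options = {
    "do nothing": (0, 0.0),
    "drug A":     (4_000, 1.00),
    "drug B":     (9_000, 1.25),
}
names = list(options)
for prev, cur in zip(names, names[1:]):
    (c0, e0), (c1, e1) = options[prev], options[cur]
    acer = c1 / e1                 # average ratio, vs the do-nothing baseline
    icer = (c1 - c0) / (e1 - e0)   # incremental ratio, vs the next best option
    print(f"{cur}: average {acer:,.0f} vs incremental {icer:,.0f} per QALY")
```

Here drug B’s average ratio of 7,200 per QALY hides the fact that the extra QALYs it adds over drug A cost 20,000 apiece – which is exactly why average ratios mislead.
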
“A life table provides a method of summarizing the mortality experience of a group of individuals. […] There are two main types of life table. First, there is a cohort life table, which is constructed based on the mortality experience of a group of individuals […]. While this approach can be used to characterize life expectancies of insects and some animals, human longevity makes this approach difficult to apply as the observation period would have to be sufficiently long to be able to observe the death of all members of the cohort. Instead, current life tables are normally constructed using cross-sectional data of observed mortality rates at different ages at a given point in time […] Life tables can also be classified according to the intervals over which changes in mortality occur. A complete life table displays the various rates for each year of life; while an abridged life table deals with greater periods of time, for example 5 year age intervals […] A life table can be used to generate a survival curve S(x) for the population at any point in time. This represents the probability of surviving beyond a certain age x (i.e. S(x)=Pr[X>x]). […] The chance of a male living to the age of 60 years is high (around 0.9) [in the UK, presumably – US] and so the survival curve is comparatively flat up until this age. The proportion dying each year from the age of 60 years rapidly increases, so the curve has a much steeper downward slope. In the last part of the survival curve there is an inflection, indicating a slowing rate of increase in the proportion dying each year among the very old (over 90 years). […] The hazard rate is the slope of the survival curve at any point, given the instantaneous chance of an individual dying.”

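A toy illustration of the mechanics described above: building a survival curve from a vector of annual mortality rates and using it to approximate life expectancy. The rates below are entirely made up (a Gompertz-ish toy hazard), not taken from any actual life table:

```python
import numpy as np

ages = np.arange(101)
q = np.clip(0.0001 * np.exp(0.085 * ages), 0, 1)   # made-up annual death rates

# Survival curve S(x) = Pr[X > x]: probability of surviving beyond age x.
S = np.cumprod(1 - q)

# Life expectancy at birth, approximated as the sum of the survival curve.
print("P(surviving beyond age 60):", round(float(S[60]), 2))
print("life expectancy at birth: ~", round(float(S.sum()), 1), "years")
```
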
“Life tables are a useful tool for estimating changes in life expectancies from interventions that reduce mortality. […] Multiple-cause life tables are a way of quantifying outcomes when there is more than one mutually exclusive cause of death. These life tables can estimate the potential gains from the elimination of a cause of death and are also useful in calculating the benefits of interventions that reduce the risk of a particular cause of death. […] One issue that arises when death is divided into multiple causes in this type of life table is competing risk. […] competing risk can arise ‘when an individual can experience more than one type of event and the occurrence of one type of event hinders the occurrence of other types of events’. Competing risks affect life tables, as those who die from a specific cause have no chance of dying from other causes during the remainder of the interval […]. In practice this will mean that as soon as one cause is eliminated the probabilities of dying of other causes increase […]. Several methods have been proposed to correct for competing risks when calculating life tables.”

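The competing-risk point lends itself to a small numerical illustration: with two constant cause-specific hazards, eliminating one cause leaves the other cause’s hazard unchanged, yet the lifetime probability of dying from the remaining cause necessarily rises. The hazard values below are made up:

```python
import numpy as np

h1, h2 = 0.02, 0.01        # made-up constant annual hazards for two causes
years = np.arange(51)

S_all = np.exp(-(h1 + h2) * years)   # survival with both causes active
S_no1 = np.exp(-h2 * years)          # survival if cause 1 were eliminated

# With both causes active, a share h2/(h1+h2) of deaths is due to cause 2;
# eliminate cause 1 and everyone eventually dies of cause 2 instead.
print("share of deaths due to cause 2, both causes:", h2 / (h1 + h2))
print("P(alive at 50), both causes:       ", round(float(S_all[-1]), 2))
print("P(alive at 50), cause 1 eliminated:", round(float(S_no1[-1]), 2))
```
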
“the use of published life-table methods may have limitations, especially when considering particular populations which may have very different risks from the general population. In these cases, there are a host of techniques referred to as survival analysis which enables risks to be estimated from patient-level data. […] Survival analysis typically involves observing one or more outcomes in a population of interest over a period of time. The outcome, which is often referred to as an event or endpoint could be death, a non-fatal outcome such as a major clinical event (e.g. myocardial infarction), the occurrence of an adverse event, or even the date of first non-compliance with a therapy.”

“A key feature of survival data is censoring, which occurs whenever the event of interest is not observed within the follow-up period. This does not mean that the event will not occur some time in the future, just that it has not occurred while the individual was observed. […] The most common case of censoring is referred to as right censoring. This occurs whenever the observation of interest occurs after the observation period. […] An alternative form of censoring is left censoring, which occurs when there is a period of time when the individuals are at risk prior to the observation period.
A key feature of most survival analysis methods is that they assume that the censoring process is non-informative, meaning that there is no dependence between the time to the event of interest and the process that is causing the censoring. However, if the duration of observation is related to the severity of a patient’s disease, for example if patients with more advanced illness are withdrawn early from the study, the censoring is likely to be informative and other techniques are required”.

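The standard tool for survival data with right censoring is the Kaplan–Meier estimator, which uses censored individuals only while they are still under observation and at risk; note that it relies on precisely the non-informative-censoring assumption discussed above. A hand-rolled sketch on made-up data:

```python
import numpy as np

# Follow-up times in years; event = 1 means the event was observed,
# event = 0 means the individual was right censored at that time.
time  = np.array([2, 3, 3, 5, 6, 7, 9, 9, 11, 12])
event = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])

# Kaplan-Meier: at each observed event time t, multiply survival by
# (1 - d/n), where d = events at t, n = subjects still at risk before t.
S = 1.0
for t in np.unique(time[event == 1]):
    n_at_risk = np.sum(time >= t)
    d = np.sum((time == t) & (event == 1))
    S *= 1 - d / n_at_risk
    print(f"t = {t:>2}: at risk {n_at_risk:>2}, events {d}, S(t) = {S:.3f}")
```
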
“Differences in the composition of the intervention and control groups at the end of follow-up may have important implications for estimating outcomes, especially when we are interested in extrapolation. If we know that the intervention group is older and has a lower proportion of females, we would expect these characteristics to increase the hazard mortality in this group over their remaining lifetimes. However, if the intervention group has experienced a lower number of events, this may significantly reduce the hazard for some individuals. They may also benefit from a past treatment which continues to reduce the hazard of a primary outcome such as death. This effect […] is known as the legacy effect“.

“Changes in life expectancy are a commonly used outcome measure in economic evaluation. […] Table 4.6 shows selected examples of estimates of the gain in life expectancy for various interventions reported by Wright and Weinstein (1998) […] Gains in life expectancy from preventative interventions in populations of average risk generally ranged from a few days to slightly more than a year. […] The gains in life expectancy from preventing or treating disease in persons at elevated risk [this type of prevention is known as ‘secondary-’ and/or ‘tertiary prevention’ (depending on the circumstances), as opposed to ‘primary prevention’ – the distinction between primary prevention and more targeted approaches is often important in public health contexts, because the level of targeting will often interact with the cost-effectiveness dimension – US] are generally greater […one reason why this does not necessarily mean that targeted approaches are always better is that search costs will often be an increasing function of the level of targeting – US]. Interventions that treat established disease vary, with gains in life-expectancy ranging from a few months […] to as long as nine years […] the point that Wright and Weinstein (1998) were making was not that absolute gains vary, but that a gain in life expectancy of a month from a preventive intervention targeted at populations at average risk and a gain of a year from a preventive intervention targeted at populations at elevated risk could both be considered large. It should also be noted that interventions that produce a comparatively small gain in life expectancy when averaged across the population […] may still be very cost-effective.”

August 17, 2015 Posted by | books, economics, medicine, statistics | Leave a comment


i. “Ideas have consequences, and totally erroneous ideas are likely to have destructive consequences.” (Steve Allen)

ii. “I always pass on good advice. It is the only thing to do with it. It is never of any use to oneself.” (Oscar Wilde, An Ideal Husband)

iii. “[Sir Robert Chiltern:] No one should be entirely judged by their past.
[Lady Chiltern, sadly:] One’s past is what one is. It is the only way by which people should be judged.” (-ll-)

iv. “Extremists think “communication” means agreeing with them.” (Leo Rosten)

v. “The purpose of life is not to be happy at all. It is to be useful, to be honorable. It is to be compassionate. It is to matter, to have it make some difference that you lived.” (-ll-)

vi. “Don’t commit suicide, because you might change your mind two weeks later.” (Art Buchwald)

vii. “I honestly believe it is better to know nothing than to know what ain’t so.” (Josh Billings)

viii. “Better make a weak man your enemy than your friend.” (-ll-)

ix. “I hate plays. I’ve never seen the point of paying money to watch people shout a lot and pretend to die, and now that I’m the father of three young children I don’t have to.” (Tim Moore)

x. “Any given generation gives the next generation advice that the given generation should have been given by the previous generation but now it’s too late.” (Roy Blount, Jr.)

xi. “People don’t necessarily want or need to be done unto as you would have them do unto you. They want to be done unto as they want to be done unto.” (-ll-)

xii. “From the moment I picked up your book until I laid it down, I was convulsed with laughter. Someday I intend reading it.” (Groucho Marx, on S. J. Perelman’s novel Dawn Ginsbergh’s Revenge)

xiii. “I find television very educational. Every time someone switches it on I go into another room and read a good book.” (Groucho Marx)

xiv. “This is my perspective and has always been my perspective on life: I have a very grim, pessimistic view of it. I always have, since I was a little boy. It hasn’t gotten worse with age or anything. I do feel that it’s a grim, painful, nightmarish, meaningless experience, and that the only way that you can be happy is if you tell yourself some lies and deceive yourself.” (Woody Allen)

xv. “It’s not that I’m afraid to die, I just don’t want to be there when it happens.” (-ll-)

xvi. “Nine-tenths of the value of a sense of humor in writing is not in the things it makes one write but in the things it keeps one from writing. […] without knowing what is funny, one is constantly in danger of being funny without knowing it.” (Robert Benchley)

xvii. “Only the mediocre are always at their best.” (Jean Giraudoux)

xviii. “The world is divided into people who do things and people who get the credit. Try, if you can, to belong to the first class. There’s far less competition.” (Dwight Morrow)

xix. “We are all inclined to judge ourselves by our ideals; others by their acts.” (-ll-)

xx. “Who speaks the truth stabs Falsehood to the heart.” (James Russell Lowell)

August 16, 2015 Posted by | quotes | 2 Comments

Partner Violence (II)

As mentioned in my first post about the book, I realized late in the writing process that I’d be unable to cover it in one post, so this post will not cover nearly as much of the book as the first post did, and it will not be particularly long. However, I found some of the observations in the last part interesting, so I wanted to talk a little bit about them here.

“The definition of violence indicates that the aggressor is the one who deliberately hurts the partner, and the victim is the one deliberately hurt by the partner. The definition is indifferent to the reasons leading up to the act of violence and its goals. I collaborated in a study that examined how partners perceive the violence between them […] In some cases, the research participants argued that the injury was extremely mild. In other cases, they claimed that the injury was not intentional. Some cases combined both arguments. But even when intentionally hurtful behavior was acknowledged, the tendency to reject responsibility and blame was still identified. In such cases, it was argued that the intentionally hurtful behavior is not to be considered as violence if the offender was not an aggressor or if the offended was not a victim. Such cases emphasize that examining behavior in terms of intentional injury to identify violence produces inadequate results; the causality sequence and the conduct of the offender and offended during the incident should also be examined. […] intentional infliction is insufficient to establish violence. […] Despite the limitations […] of the definition of violence as an intentional hurtful behavior, it was […] and still is used by numerous studies to design the individual behavioral observation unit of partner violence.”

On a related note, this part was really interesting to me:

“I had the opportunity to hold a series of sessions with adolescents at the ages of 12–16 within the framework of a project for coping with school violence, conducted in 2007. […] One of the sessions addressed boys’ and girls’ methods of initiating a dating relationship. The students mentioned that when a boy likes a girl, is attracted to her and would like to have an intimate relationship with her, he can approach her and make a direct intimate proposition. If she accepts, then “everyone is happy,” but if she turns him down, then “it is a huge embarrassment.” The session participants explained that such rejection is usually a difficult, humiliating, and intimidating experience, and therefore, many are deterred from initiating in this way. Many boys and girls avoid a direct, clear, and unequivocal approach and prefer other, more indirect methods of “checking” the other party’s willingness to start a relationship with them. These methods often employ violence, which can be interpreted as expressions of either hostility or affection. For example, the boy can playfully grab the girl’s hand while pinning her against a wall. If the girl chooses a hostile, nonreceptive response, the boy will interpret this as evidence that she is not interested in a relationship with him and in most cases will retreat. If the girl chooses to respond playfully or display vague affection and receptiveness, the boy can interpret this as an invitation. A negative response on behalf of the girl will not be experienced as rejection by the boy because he did not express his interest clearly. A positive, tolerant response by the girl can encourage the boy to continue approaching her, maybe with less aggression next time. The students considered this behavior to be an acceptable and reasonable method of dating initiation. […] It is a widespread behavior which many people, and not only adolescents, do not regard as violent behavior (Playful Violence) (Denzin, 1984). Such behaviors are especially frequent among youth and […] may include holding/grasping/pinning down, pushing, and shoving by boys, and pushing, pinching, hair pulling, and mild blows by girls.” (my bold).

Part of why the above observations were interesting to me was that during my own childhood/youth I had no idea such behaviour was an approach tactic, and I was at a loss to explain it the few times in the past that I observed it myself. While reading the chapter I suddenly came to realize that I may have been the target of such behaviour myself during my childhood (let’s just say that one particular sequence of events which I had a great deal of difficulty making sense of in the past makes a lot more sense in light of the above observations). My lack of awareness of the relevant social dynamics embedded in such interactions of course means that my response to the approach behaviour may not have been the response I would have employed had I known about these things (due to being completely clueless, I probably treated that girl very badly. Oh well, as La Rochefoucauld aptly put it: ‘Il n’y a guère d’homme assez habile pour connaître tout le mal qu’il fait’ – ‘Hardly any man is clever enough to know all the evil he does’).

“Although most of the quantitative research [on violence] is based on data regarding individual single violent behaviors isolated from the immediate situational context, in many cases, the analyses, the interpretations, and conclusions are performed as if the behaviors are sequenced (the hurtful behaviors of one party are regarded as a defensive response to the violence of the other party). This is similar to looking at a series of photos set in no particular order while trying to make sense of the timeline of the incident that they describe. […] Defining the boundaries of a conflict (where it starts and ends) is crucial to the identification of the relevant interactions to be studied.”

“Swann, Pelham, and Roberts (1987) argued that, as a rule, individuals simplify their interactions by forming, arranging, and perceiving them in “discrete causal chunks.” These chunks affect individuals’ awareness of the effect of their actions upon others, and the effect of others’ actions upon themselves. They form “self-causal chunks” when they believe that their behavior has affected others. They form “other-causal chunks” when they believe that others have affected their behavior. It is likely that in partner violence, most individuals feel that they are responding rather than initiating (Winstok, 2008).”

“Same-gender involvement in conflicts may enhance status, and avoiding a same-gender conflict may diminish it. On the other hand, involvement in conflicts with the opposite gender might work the other way around. For example, a man who avoids aggressive conflict with another man can be regarded as weak or cowardly. A man who gets involved in aggressive conflict with a woman can be regarded as a bully, which is also an indication of weakness and cowardice. […] Women in general are aware of men’s chivalry code by which they are expected not to hurt women (Felson, 2002; Felson & Feld, 2009). […] Men’s chivalry code commitment and their female partners’ awareness of it may increase men’s vulnerability in partner conflicts.”

The comments and results below relate to respondents’ answers to questions dealing with how they thought they would respond in various conflict contexts (involving their own partner, or strangers of either gender), with a specific focus on the (hypothetical) willingness to escalate, not actual observed conflict behaviour, so you may want to take the responses with a grain of salt – however I think they are still interesting:

“First, let me begin with the escalatory intention of men in response to the verbal aggression of various aggressors: the highest escalation level was toward male strangers and lower toward female strangers; the lowest escalation level was toward their female partners. The same rates with larger values were found also for the escalatory intentions of men in response to physical aggression by the various opponent types. As to the escalatory intentions of women in response to verbal aggression, the highest level was toward their male partners, and a little less so, but not significantly, toward female strangers. The lowest escalation intention level was toward male strangers. The same rates with similar values were also found in the women’s escalatory intentions in response to physical aggression of various opponents. The most important finding of these comparisons is that relative to the escalation levels of research participants toward strangers, the escalation levels of men toward their partners’ aggression was the lowest, and of women, the highest. […] in intimate relationships, women’s tendency was more escalatory than men’s. […] escalatory intentions of men are more affected by the severity of aggression toward them than those of women. This study provides initial evidence of the lack of gender symmetry in escalatory intentions. In partner conflicts, women tend to escalate more than men.”

August 15, 2015 Posted by | books, personal, Psychology | Leave a comment

Partner Violence (I)

I gave the book one star on goodreads – this may have been too harsh, but if you want to know why, I wrote a reasonably long goodreads review explaining some of my reasons.

Below I have added some observations and quotes from the book. When I started out writing the post my intention was to cover the book in one post, but I realized quite late in the process that this was not feasible, so you can expect me to cover the rest of the book later on (I postponed the last part because some of the material in the final chapters is really quite interesting, and I did not want it to get lost in a very long post, and/or be covered in too little detail). Now that I have written this blog post, I’m actually strongly considering changing my goodreads rating to two stars; this is a very selective account of the material covered in the book, but it did actually include quite a lot of interesting observations. Given the length of the post I decided to bold a few key observations from the book’s coverage (the bolded sections below are not bolded in the book).

“let us focus on the empirical evidence regarding the differences in aggressive tendencies within the couple. The research in this area is led by two groups with opposing outlooks. One is dubbed “feminist scholars,” who view the problem as asymmetric in terms of gender: they maintain that intimate violence is perpetrated by the man against his female partner […] In this case, using the term “asymmetry” reflects the notion that a significant difference exists between men’s tendency toward violence against their female partners and women’s tendency toward violence against their male partners. The second group is referred to as “family violence scholars,” who view the problem of partner violence as gender symmetric: the violence is perpetrated by both men and women […]. They use the term “symmetry” to convey the idea that a significant (not necessarily equal) proportion of both genders use violence in their intimate relationships. […] for feminist scholars, gender is a primary significant factor in predicting partner violence, whereas for family violence scholars, gender is secondary and marginal. […] The only fact on which both approaches agree is that the rates of injury caused by male violence are higher than those caused by female violence […] there is broad agreement that the results of partner violence are more severe for women than for men […]. Most family violence scholars do not view this information as a relevant factor in challenging their approach to the role of gender in partner violence because they focus their attention on aggression. They do not consider victimization to be a straightforward derivative of aggression but rather an issue that warrants independent empirical testing.”

“The cumulative empirical evidence, mostly presented by family violence scholars, supports gender symmetry of violence in intimate relationships. […] An examination of research findings on the gender aspects of partner violence leads many scholars, specifically of family violence, to the conclusion that gender plays a minor, secondary role in the problem: both men and women use violence in their intimate relationships and for the same reasons. Despite the empirical evidence, it is widely accepted that in intimate relationships, the violence is perpetrated by men against their female partners.”

“It is my opinion, based on conversations with social workers treating partner violence, that in Israel, much like in other parts of the Western world, feminist thinking is predominant in intervention. Men’s violence against women is the major, if not the only, problem focused on and addressed by practitioners. Even if the practitioners acknowledge female partner violence, they regard it as marginal and inherently different from male partner violence. Practice, guided by feminist thinking, leads many professionals to assume the following: (1) in partner violence, the woman is the victim and (2) the main goal of intervention in partner violence is to stop the man from perpetrating any kind of mild or severe violence against the woman. These assumptions dictate several widely accepted intervention principles: (1) the treatment must serve primarily what is perceived to be the woman’s needs and wishes; (2) the treatment must change the man’s behavior. The response to the man’s perceived needs is secondary and marginal in the process; and (3) the woman’s treatment is best provided by a woman and not a man.”

“A considerable group of family violence scholars believes that violence against women is a particular case (unique or not) of partner violence. […] They have difficulty understanding why feminist scholars can make theoretical arguments on the one hand and then object to them being empirically tested on the other.” One of the reasons may be this one: “a significant group of feminist scholars view the link between politics and research as unbreakable, and in this reality, feel free to emphasize their association with the feminist agenda. They even regard the seemingly apolitical position of family violence scholars as double standard and a sham, because they do not believe that research can be devoid of politics.”

“[An] association between threats and battering among intimate partners has been extensively documented” (…so if your partner threatens you, it seems like a good idea to take the threat seriously).

“It is incorrect to assume that emotions drive people to behave irrationally, and that if one wants to make a rational decision, emotions must be set aside. Not only are rational choices not devoid of emotions but they also play a vital role in the process of choosing an action to attain a certain goal — from focusing attention on details most relevant in a situation, to choosing the most suitable behavior to achieve the goals called for in that situation […] Emotions are a central component of decision making […]. They help to focus attention on details, such as what the opponent is saying, his/her tone of voice, what his/her facial expression and body gestures convey, what means of defense and offence are available and the possible escape routes. Anger might focus attention on the details most relevant in the case of a fight, whereas fear might focus attention on the details more relevant to flight […] Emotions speed up the information collection process, because they switch it on to automatic, or semi-automatic, pilot mode. […] [Anger and fear are] emotions that [have] received special attention in the study of violence [, as they have been] found to be highly relevant to the development of conflict […]. Fear is future oriented and emerges when a negative event is perceived as possible or imminent. On the other hand, anger is past oriented and emerges when a negative event has already occurred […] Anger is associated with the tendency to fight, whereas fear is associated with the tendency to flight […]. Studies have shown that anger boosts the frequency and severity of aggression […], whereas fear inhibits them […] As in many other fields, men and women differ in the case of emotional experience […]. Women tend to experience emotions more intensely than men […] and this includes negative emotions […]. Campbell (1999) suggested that fear is the mechanism that considers costs. When men and women face the same risks, women would experience fear with greater intensity than men. […] gender differences in the experience of anger are less evident than in experiences of fear (Winstok, 2007). […] interviews [with violent offenders] taught me that we are not dealing with loss of control, but rather with a temporary, voluntary forfeit of control. […] The ability to control the loss of control seriously contradicts the suggestion of irrationality.”

“The study of deterrence in partner violence is mainly focused on men using violence against their female partners. It is maintained that men would avoid violent behavior if they perceive its cost as severe and certain […]. In this context, the first line of deterrence is based on women’s willingness and readiness to act against their violent partners and includes seeking the support of informal and formal agents, and/or leaving the violent partner. […] I conducted a study of a sample of 218 men […]. It examined the association between men’s evaluation of their partner’s willingness to breach the dyadic boundaries in response to aggression, and their evaluation of their own tendency to use aggression against their partner. Findings indicated that the men tended to restrain aggression if they evaluated that in response, their partners would involve informal and formal agents, or would even leave them. Based on these findings, it can be hypothesized that such actions by women threaten, deter, and restrain men’s aggressive tendencies.”

“In most cases, the combination of causes that bring about partner violence is not completely known or clear. Therefore, evaluations of the probability of the occurrence of future violence are based, at least in part, on behavioral history. Predictions based solely on behavioral history are prone to false-negative and false-positive errors, at least in cases in which the unknown causes of past violent behavior have changed. One critical example is that this approach will always fail to predict the first time that violence is used. […] interviews with men and women who were perpetrators or victims of partner violence demonstrate that violence is often part of a behavioral move rather than a single action. The move is based on a series of behaviors resulting from several cycles of information processing […] Studying one incident of violent behavior rather than a series of incidents resembles an attempt to understand a branch (interaction between partners), a tree (an incident), and a forest (a series of incidents) by looking merely at the leaves. […] The term “escalation” is at the core of the discussion on conflict dynamics. Most often, in the context of partner conflicts, escalation describes a trend of increasing aggression severity. The term can describe escalation of aggressive acts within a specific conflict, or escalation of aggression across relationship periods (from one incident to the next) […] It is commonly argued that once partner violence erupts, it continues until the end of the relationship (by separation or death) and increases over time (in frequency, intensity, and form), especially when the violence is against women […] Although these arguments sound plausible, they are not supported by research findings […] [Only] in a small portion of cases [does violence] increase over time. […] in a given conflict, violence is the outcome of escalation. This has led many to believe that from one conflict to the next, escalation itself escalates. Despite evidence showing that most cases of partner violence subside over time […] such statements as “once a batterer, always a batterer” and “violence increases over time” are still frequent and widespread.”
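
The point made above about history-based prediction necessarily missing first-time violence is worth making concrete. Here's a minimal Python sketch of my own (the records and the decision rule are invented for illustration; nothing like this appears in the book) showing why a predictor relying only on behavioral history produces exactly the two error types the quote mentions:

```python
# Toy illustration (mine, not the book's) of why a purely history-based
# predictor must miss first-time violence and misfire on desisters.

# Each record: (has_prior_violence, will_be_violent_next_period)
population = [
    (True,  True),   # repeat perpetrator: correctly flagged
    (True,  False),  # desisted (unknown causes changed): false positive
    (False, True),   # first-time perpetrator: unavoidable false negative
    (False, False),  # never violent: correctly cleared
]

def predict(has_prior_violence):
    """History-based rule: predict future violence iff there is past violence."""
    return has_prior_violence

false_negatives = sum(1 for prior, actual in population
                      if actual and not predict(prior))
false_positives = sum(1 for prior, actual in population
                      if not actual and predict(prior))
print(false_negatives, false_positives)  # -> 1 1

# Every first-time perpetrator lands in the false-negative count, and
# everyone whose (unknown) causes of past violence have changed lands in
# the false-positive count - the two errors described in the quote.
```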

“Those who use violence, as compared to those who do not, invest less time and effort in collecting situational cues, and assign higher value to internal rather than external cues while interpreting a situation. Their attention is more focused on aggressive than on nonaggressive cues. They rely more than others on cues that appear at the end of a social interaction and less on those at its beginning […] Studies of children provide a strong support for a link between the types of responses they generate to particular situations and the behavior that they exhibit in those situations. Aggressive children access a fewer number of responses to social situations than do their peers […] They also access responses that are more aggressive than those accessed by peers for provocation, group entry, object acquisition, and friendship initiation situations”.

“When I started studying partner violence, I expected to be able to identify the aggressor and the victim easily. I was surprised to find that these definitions are often blurred, and this is an understatement. Men and women who used violence against their partners often perceived themselves to be the victims, and not the aggressors.” [I should note that this notion comes across as much less far-fetched/outrageous than you’d think once you read a few of the cases included in the book]. […] Dynamics of partner conflict is a direct result of a series of interactions between the partners. It takes a short step from here to maintain that violence in escalatory conflicts is a result of actions and reactions by both parties. Hence, an examination of these interactions, that is, causal analysis, may lead to the blurring of the distinction between victim and aggressor. For those who associate causality with guilt and accountability, this blur is problematic because they need the clear distinction to allocate guilt and accountability. This, in my view, is why no real attempts are made by scholars to study escalatory dynamics. Their moral stance against violence goes beyond their obligation to examine and propose approaches for effective coping with the problem.”

“Violence […] is age related. […] The use of violence is common in very young children [and] [i]ncreasing evidence indicates that from adolescence onward, the use of interpersonal violence tends to decrease in various life contexts […] A cross-sectional study by Straus, Gelles, and Steinmetz (1980), examining four age groups (18–30, 31–50, 51–65, and 65 and up) in the general population found that with the increase in the age of the partners, the violence between them decreases. Short and mid-range longitudinal studies (3–10 years) […] as well as studies that analyzed life paths […] identified similar trends: over time, there was a significant decrease in the incidence of partner violence. These studies contradict the perception that partner violence persists and even escalates over time. […] no single typical pattern of partner violence over time exists. Violence between intimate partners can become more moderate, can subside, can continue at a steady severity level and, at times, can escalate. However, accumulating evidence indicates that in most cases, in the short term, violence can escalate, and in the long term, it can cease. It is clear that changes in violence patterns over time (severity and frequency) are not random. Conflicts that escalate to violence in which the aggressor draws “positive” results that exceed negative ones may encourage the said party to continue using this tactic. Negative outcomes may encourage the aggressor to increase the severity of violence or stop using it and look for alternative tactics […] Conflict opportunities on the one hand and the perception of violence as an effective or noneffective means of dealing with conflict on the other, shape the problem to a large extent.”

“Many of the studies reporting comparable rates of violence perpetration by men and women do not examine contextual factors, such as who initiated the violence, who was injured, whether the violence was in self-defense, and the psychological impact of victimization […] when contextual factors are examined, a complex picture of gender dynamics […] begins to emerge […] [For example, in Allen and Swan (2009)] the scholars found that women’s use of mild violence exceeds that of men.”

“Studies [have shown] that violence can be a result of [both] low self-control and restraint capability […] as well as a means of achieving some desired goals […]. As the need to control the partner increases and the capability for self-control and restraint decreases, violence erupts and becomes increasingly severe. The use of violence at one level of severity (e.g., verbal aggression), increases the probability that another level of violence, of higher severity, will be used as well (e.g., threatening with physical violence). […] escalation to and of violence is a tactic that ensures minimum investment in achieving a goal, whether it is eventually achieved or not. Escalatory dynamics deteriorates the conflict because it increases the severity of violence, but at the same time, it also puts on the brakes, as it ensures that the violence ceases when it becomes of no value. […] By using mild violence that becomes increasingly severe, the aggressor demonstrates the possibility of imminent severe danger to the victim. Thus, the aggressor ensures that the victim complies long before the threat is fully executed.”
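
The “minimum investment” logic in that quote can be captured in a few lines. Here's a hypothetical Python sketch of my own (the severity levels and the compliance threshold are made-up illustrations, not the author's model):

```python
# Sketch (mine) of escalation as a minimum-investment tactic: the
# aggressor raises severity stepwise and stops as soon as the goal is
# achieved, so no more force is 'invested' than the situation required.

SEVERITY_LEVELS = ["verbal aggression", "threat of violence",
                   "mild physical violence", "severe physical violence"]

def escalate_until_compliance(compliance_level):
    """Return the steps actually used, given the (hypothetical) index of
    the first severity level at which the other party complies."""
    steps_used = []
    for level, name in enumerate(SEVERITY_LEVELS):
        steps_used.append(name)
        if level >= compliance_level:  # goal achieved: escalation stops
            break
    return steps_used

# If a threat suffices, the more severe levels are never reached:
print(escalate_until_compliance(1))
# -> ['verbal aggression', 'threat of violence']
```

The same loop also illustrates the “brakes” the author mentions: escalation halts the moment further violence buys nothing.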

“when force is used according to the tit for tat principle, it [may escalate]. [Research] findings […] support the suggestion that people are more sensitive to the force exerted on them by others than to the force they exert upon others. If we replace the term ‘force’ with ‘injury,’ this would read: people are more sensitive to the injury exerted on them by others than to the injury they exert upon others. In light of this sensitivity gap in interpersonal conflicts, the injured party wishing to retaliate with an equally severe injury (balancing) may generate a more serious injury. This sensitivity gap works the same way on the second party and will cause him/her to retaliate with a more serious injury, even when attempting an equally severe (balancing) response. In this fashion, the actions and injuries escalate. […] Hurt that is perceived as unfair will be evaluated as more severe than an identical hurt (in terms of form, intensity, and duration) that is perceived as fair. It can be assumed that those who hurt their partners believe, at least at the moment of perpetration, that their action is justifiable. […] Whereas the offender perceives the offense as justified at the time of offending, the offended will probably not take it as such. Such perception gaps between the partners regarding the actions taken during their conflict may [also] promote escalation.”
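
That sensitivity-gap argument is essentially arithmetic, and it's easy to see why “balancing” retaliation escalates. Here's a minimal Python sketch of my own (the gap factor of 1.4 and the round count are arbitrary assumptions, not figures from the book):

```python
# Sketch (mine) of tit-for-tat escalation driven by a sensitivity gap:
# each party perceives incoming force as GAP times the force actually
# applied, and retaliates with what they believe is an equal response.

GAP = 1.4      # perceived/actual force ratio; > 1 = overperception of injury
ROUNDS = 6
force = 1.0    # severity of the opening act, in arbitrary units

for round_no in range(1, ROUNDS + 1):
    perceived = GAP * force   # the injured party overestimates the hurt
    force = perceived         # a 'balanced' reply matches the perception
    print(f"round {round_no}: force used = {force:.2f}")

# 'Equal' retaliation multiplies severity by GAP every exchange, so after
# n rounds force = GAP ** n: exponential escalation even though both
# parties believe they are merely balancing the scales.
```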

August 8, 2015 Posted by | books, Psychology

