Bioterrorism and Infectious Agents

I read this book over the weekend. I gave it three stars on goodreads but seriously considered giving it four stars.

The book is from 2005, meaning that some parts of it – particularly, I assume, those related to diagnostics (genotyping etc.) – are presumably a bit dated; progress in vaccine development may also have occurred in the meantime – I wouldn’t know, but some authors assumed in their coverage that such a development would be likely. Most of the material covered is, I think, still as relevant today as it was when it was written.

The book is a Springer publication and contains 10 chapters on various topics related to bioterrorism and specific infectious disease agents which may be used for that purpose. Most chapters deal with specific agents or classes of agents which have the potential to be used in a bioterrorism setting, and only the last two chapters deal with more general topics – the first one of these addresses the bioterrorism setting more generally than do the previous chapters (“When the agent used in a biological attack is known, response to such an attack is considerably simplified. The first eight chapters of this text deal with agent-specific concerns and strategies for dealing with infections due to the intentional release of these agents. A larger problem arises when the identity of an agent is not known. […] in some cases, an attack may be threatened or suspected, but it may remain unclear as to whether such an attack has actually occurred. Moreover, it may be unclear whether casualties are due to a biological agent, a chemical agent, or even a naturally occurring infectious disease process or toxic exposure […] This chapter provides a framework for dealing with outbreaks of unknown origin and etiology. Furthermore, it addresses several related concerns and topics not covered elsewhere in this text.”), whereas the last one very briefly addresses ‘The Economics of Planning and Preparing for Bioterrorism’.

An implicit assumption I’d made before reading this book regarding the bioterrorism setting is that in such a setting we’d know that bioterrorism was taking place – it would be obvious because of all those sick people. But it is far from clear that this would always be the case. Most of the agents have incubation periods measured in days or weeks, and even after symptoms present it may be difficult to realize what’s going on, because these diseases are not commonly seen in clinical practice and may be confused with other, more common conditions. An aerosolized agent introduced into an environment with a large number of people could infect a lot of people who’d not display symptoms until much later. It’d be difficult to figure out what was going on. A long incubation period incidentally doesn’t necessarily mean the disease isn’t severe; it may well mean that once you get symptoms of a severity that’ll lead you to seek medical attention, you’re already screwed. An example:

“Symptoms and physical findings are nonspecific in the beginning of [anthrax] infection. The clinical presentation is usually biphasic. The initial stage begins with the onset of myalgia, malaise, fatigue, nonproductive cough, occasional sensation of retrosternal pressure, and fever. […] anthrax symptoms insidiously mimic flu-like symptoms in the beginning […] In some patients, a brief period of apparent recovery follows. Other patients may progress directly to the second, fulminant stage of illness. The second stage develops suddenly with the onset of acute respiratory distress, hypoxemia, and cyanosis. Death sometimes occurs within hours […] The disease progression from the first manifestation of symptoms until death appears to have a considerable range from a few hours […] to 11 days”

While reading the book, and especially in the beginning, I was a bit surprised that more effort was not put into covering the topics briefly addressed in chapter 9 (the ‘unknown etiology’ chapter above), but actually the coverage that was chosen matches quite well what they state they set out to do. The book is written for health professionals: “this volume will provide health care workers with up-to-date important reviews by world-renowned experts on infectious and biological agents that could be used for bioterrorism”. Mostly the book is about the infectious agents, how people affected by these agents may present, and what can and should be done in terms of treatment/monitoring/isolation etc., so it makes sense that this work does not include a lot of material on what might be termed more general risk management aspects, response modelling, coordination problems and so on; there is a little bit on that in the last chapters, but not much. I’d be very surprised if there are not other books/works published which deal with the risk- and decision-management aspects of this kind of thing in much more detail (especially given the existence of books like this one).

The fact that the book is written for health professionals (“Emergency physicians, Public Health personnel, Internists, Infectious Disease specialists, Microbiologists, Critical care specialists, and even General practitioners”) means that if you’re not a health professional, some of this material will be hard to follow. Patients will not be described as having double vision (they’ll have diplopia), and they won’t be described as ‘sweating a lot’ (they’ll be diaphoretic). The authors assume that when they tell you that the suggested treatment may result in hemolytic anemia you’ll know what that implies, and that you know what G-CSF stands for in the context of adjunctive melioidosis treatment. The use of unexplained abbreviations/acronyms is incidentally part of the reason why this book would never get five stars from me; using acronyms without telling you even once what the letters stand for is a capital offence in my book. Even if you don’t know much about medicine you’ll learn about exposure routes of various substances/diseases (is person-to-person transmission something I should be worried about? Is it airborne?), symptoms (to some extent – you’ll understand some of the words without looking up the medical terms), prognosis in case of exposure, existence (or lack thereof) of a vaccine/treatment, etc. You’ll also learn a little about the history of some of the substances in question; some of them have been used in warfare before, and extensive research was conducted on quite a few of them during the Cold War, when both the US and the Soviet Union worked on weaponizing some of these substances.

The 8 chapters on specific biological agents/diseases deal with anthrax, plague, tularemia, melioidosis and glanders, smallpox, hemorrhagic fever viruses, botulism, and ricin. None of these things are nice, and you can certainly justify covering them in a book like this. The US Centers for Disease Control and Prevention classifies 6 biological agents as ‘Category A’ biowarfare agents, the highest risk category, which includes agents that “can be easily disseminated or transmitted from person-to-person, can cause high mortality, and have the potential for major public health impact. This category includes agents like smallpox, anthrax, plague, botulinum toxin, and Ebola hemorrhagic fever.” All category A agents are covered in this book, as are a few category B agents. The fact that agents such as ricin (“A dose the size of a few grains of table salt can kill an adult human”) are included in category B, rather than category A, provides you with a bit of context as to how awful the agents belonging in category A are. Many of the agents are not just terrible because they kill a lot of people; some of them will also cause really severe and prolonged morbidity in the people who survive. A few examples:

“Patients who require mechanical ventilation, respectively, need average periods of 58 days (type A) and 26 days (type B) for weaning (Hughes et al., 1981). Recovery may not begin for as long as 100 days (Colebatch et al., 1989).” (Botulism. You may not be able to breathe on your own for a month or two.)

“Smallpox is disfiguring. Older texts suggested removing mirrors from patients’ rooms (Dixon, 1962).”

“Deafness is a very common and often permanent result of LASV [Lassa virus] infection, occurring in approximately 30% of patients (Cummins et al., 1990a).”

“Following parenteral treatment, prolonged oral antibiotics are needed to prevent relapse […] The proportion of patients who relapse can be reduced to less than 10%, and probably less than 5%, if appropriate antibiotics are given for 20 weeks.” (They’re talking about melioidosis victims. You may need to treat these people for months to prevent them from relapsing, and some will relapse even if you do. Melioidosis isn’t unique in this respect: “all persons exposed to a bioterrorist incident involving anthrax should be administered one of these [post-exposure prophylaxis] regimens at the earliest possible opportunity. Adherence to the antibiotic prophylaxis program must be strict, as disease can result at any point within 30–60 days after exposure if antibiotics are stopped.”)

Even the class A agents may be said to some extent to belong on a spectrum. Anthrax doesn’t really transmit from person to person, so the total death toll would mostly be limited to people directly exposed to the agent during an attack (‘mostly’ because e.g. people handling the bodies may be exposed to anthrax spores as well). Pneumonic plague is, well, different. Sometimes the very high virulence of an agent may actually implicitly be an argument against using the agent as a biological weapon in some contexts: “F. tularensis is less desirable than other organisms as a weapon because it does not have a stable spore phase and is difficult to handle without infecting those processing and dispersing the pathogen (Cunha, 2002).”

Especially disconcerting in the context of an attack is the idea of widespread panic following the release of one of these agents, causing health services to become overextended and unable to help actual victims – they do address this topic in the book:

“An announced or threatened bioterrorism attack can provoke fear, uncertainty, and anxiety in the population, resulting in overwhelming numbers of patients seeking medical evaluation for unexplained symptoms, and demanding antidotes for feared exposure. Such a scenario could also follow a covert release when the resulting epidemic is characterized as the consequence of a bioterror attack. Symptoms due to anxiety and autonomic arousal, and side effects of postexposure antibiotic prophylaxis may suggest prodromal disease due to biological agent exposure, and pose challenges in differential diagnosis. This “behavioral contagion” is best prevented by risk communication from health and government authorities that includes a realistic assessment of the risk of exposure, information about the resulting disease, and what to do and whom to contact for suspected exposure. Risk communication must be timely, accurate, consistent, and well coordinated.”

One thing I should perhaps note in this context is that anthrax is not the only one of these agents which ‘for practical purposes’ does not transmit from person to person (e.g., “Only two well-documented instances of person-to-person spread are recorded in the [melioidosis] literature”), and that some of those that do actually require quite a bit of exposure to transfer successfully – the ‘everybody who stands next to someone with the Incurable Cough of Death disease and gets coughed at will die horribly within 24 hours and we have no cure’ situation will never happen, because such diseases don’t exist. On a related note, the faster a disease kills/incapacitates you, the less time the infected individual has to actively transfer it to other people; so even severe and fast-acting diseases will often be self-limiting to some extent. Note also that, “With the exception of smallpox, pneumonic plague, and, to a lesser degree, certain viral hemorrhagic fevers, the agents in the Centers for Disease Control and Prevention’s (CDC’s) categories A and B […] are not contagious via the respiratory route.”

I could cover this book in a lot of detail, but I decided to limit my coverage to talking about the stuff above and then add a few remarks about smallpox and plague here, because I figure these two sort of deserve to be covered when dealing with a book like this.

First, plague. This is not just a disease of the past:

“Improved sanitation, hygiene, and modern disease control methods have, since the early 20th century, steadily diminished the impact of plague on public health, to the point that an average of 2,500 cases is now reported annually […] The plague bacillus is, however, entrenched in rodent populations in scattered foci on all inhabited continents except Australia […] and eliminating these natural transmission cycles is unfeasible. Furthermore, although treatment with antimicrobials has reduced the case fatality ratio of bubonic plague to 10% or less, the fatality ratio for pneumonic plague remains high. A review of 420 reported plague cases in the US in the period 1949–2000 identified a total of 55 cases of plague pneumonia, of which 22 (40.0%) were fatal”

Note that even though the annual number of cases is relatively low, you don’t have to go back to Medieval times to find a rather severe outbreak costing millions of lives:

“The third (Modern) pandemic began in southwestern China in the mid-19th century, struck Hong Kong in 1894, and was soon carried by rat-infested steamships to port cities on all inhabited continents, including several in the United States (US) (Link, 1955; Pollitzer, 1954). By 1930, the third pandemic had caused more than 26 million cases and 12 million deaths.”

This is a terrible disease, so of course people have thought about weaponizing it:

“Biological warfare research programs begun by the Soviet Union (USSR) and the US during the Second World War intensified during the Cold War, and in the 1960s both nations had active programs to “weaponize” Y. pestis. In 1970, a World Health Organization (WHO) expert committee on biological warfare warned of the dangers of plague as a weapon, noting that the causative agent was highly infective, that it could be easily grown in large quantities and stored for later use, and that it could be dispersed in a form relatively resistant to desiccation and other adverse environmental conditions […] Models developed by this expert committee predicted that the intentional release of 50 kg of aerosolized Y. pestis over a city of 5 million would, in its primary effects, cause 150,000 cases of pneumonic plague and 36,000 deaths. It was further postulated that, without adequate precautions, an initial outbreak of pneumonic plague involving 50% of a population could result in infection of 90% of the rest of the population in 20–30 days and could cause a case fatality ratio of 60–70%. The work of this committee provided a basis for the 1972 international Biological Weapons and Toxins Convention prohibiting biological weapons development and maintenance, and that went into effect in 1975 […] It is now known that, despite signing this accord, the USSR continued an aggressive clandestine program of research and development that had begun decades earlier, stockpiling battle-ready plague weapons (Alibek, 1999). The Soviets prepared Y. pestis in liquid and dry forms as aerosols to be released by bomblets, and plague was considered by them as one of the most important strategic weapons in their arsenal. […] It is assumed that a terrorist attack would most likely use a Y. pestis aerosol, possibly resulting in large numbers of severe and fatal primary and secondary pneumonic plague cases. 
Especially given plague’s notoriety, even a limited event would likely cause public panic, create large numbers of the “worried-well,” foster irrational evasive behavior, and quickly place an overwhelming stress on medical and other emergency response elements working to save lives and bring about control of its spread”

“Several simulations of a plague attack have been conducted in the US […] these have involved all levels of government, numerous agencies, and a wide range of first responders […] Two of these […] were based on coordinated national and local responses to simulated plague attacks. During these simulations, critical deficiencies in emergency response became obvious, including the following: problems in leadership, authority, and decision-making; difficulties in prioritization and distribution of scarce resources; failures to share information; and overwhelmed health care facilities and staff. The need to formulate in advance sound principles of disease containment, and the administrative and legal authority to carry them out without creating confusing new government procedures were glaringly obvious […] In the US, several “sniffing devices” to detect aerosolized microbial pathogens have been developed and tested. The Department of Homeland Security and the Environmental Protection Agency have deployed a PCR-based detection system named BioWatch to continuously monitor filtered air in major cities for Y. pestis and other select agents.”

One of the ‘interesting’ aspects is how the effect of such an attack might be magnified by a simultaneous attack with conventional weapons targeting the likely first responders. Imagine the bombing of local hospitals combined with a plague outbreak, widespread panic, and a lack of coordination at the higher decision-making levels – societal collapse combined with pneumonic plague seems like a combination that could really elevate the body count.

Okay, lastly: Smallpox. Before going into the details I have to express my opinion on this matter: If a person works towards releasing smallpox in order to infect other human beings (and so reintroduce the disease), that person is in my book an enemy of the human race who should be shot on sight. No trial, just kill him (or her).

“Smallpox […] is one of the six pathogens considered a serious threat for biological terrorism […] Smallpox has several attributes that make it a potential threat. It can be grown in large amounts. It spreads via the respiratory route. It has a 30% mortality rate. […] In summary, variola has several virologic attributes that make it attractive as a terrorist weapon. It is easy to grow. It can be lyophilized to protect it from heat. It can be aerosolized. Its genome is large and theoretically amenable to modification.”

“The clinical illness and fatality rate roughly parallel the density of the skin lesions. When lesions are sparse, cases are unlikely to die and probably are not efficient transmitters. However, their mobility may allow them to have enough social interaction to result in transmission […] As lesions become denser and confluent, the fatality rate increases, the amount of virus in the respiratory secretions increases, and patients are more infectious […] Hemorrhagic smallpox has a fatality rate of nearly 100%, and patients are highly infectious. About 1–5% of unvaccinated patients with V. major get hemorrhagic smallpox […] They are usually very sick, usually unable to get out of bed and thus may not transmit efficiently. The clinical presentation (from mild to discrete to confluent to hemorrhagic) is a function of the host response, not the virus. The clinical types do not breed true, in that transmission from any patient can give rise to any of the clinical presentations, and the virus is the same.”

“The individual lesions undergo a slow and predictable evolution. […] By about the 3rd day, the macules become papular, and the papules progress to fluid-filled vesicles by about the 5th day. These vesicles become large, hard, tense pustules by about the seventh or eighth day. […] The pustules are “in” the skin, not just “on” the skin. They are deep-seated […] About the 8th or 9th day, the lesions begin to dry up and umbilicate. By about 2 weeks after the onset of the rash, lesions are scabbing. About 3 weeks after onset, the scabs begin to separate, leaving pitted and depigmented scars. The causes of death from smallpox are not well elucidated. Massive viral toxemia probably causes a sepsis cascade. Cardiovascular shock may be part of the agonal syndrome. In hemorrhagic cases, disseminated intravascular coagulation probably occurs. Antibacterial agents are not helpful. Loss of fluid and proteins from the exudative rash probably contribute to death. Modern medical care might reduce the fatality rate, but there is no way to prove that contention […] There is no proven therapy. No data exist to show whether modern supportive care could reduce the death rate.”

“When smallpox is known to be circulating, the clinical presentation and characteristic rash make diagnosis fairly easy. Diagnosis can be difficult when smallpox is not high on the index of suspicion. Initial cases after a covert bioterrorist attack will probably be missed, at least until the 4th or 5th day of the rash. Transmission may have already taken place by this time. […] Smallpox does not ordinarily spread rapidly. Transmission requires prolonged face-to-face contact, such as that which occurs among family members or caregivers. Transmission is most efficient when the index patient is less than 6 feet from the recipient, so that the large-droplet respiratory secretions can be inhaled […] Since virus is not secreted from the respiratory tract until the end of the prodrome, patients are usually bedridden when they become infectious and usually do not transmit the disease widely. […] No historical evidence exists that smallpox was an effective bioweapon […] what has been written into historical texts and some medical journals may have been fueled more by fear than plausibility.”

“Smallpox virus currently exists legally in only two laboratories: the CDC in Atlanta and at the State Research Center for Virology and Biotechnology in the Novosibirsk region of Russia. Possession of smallpox virus in any place other than these two laboratories is illegal by international convention. A former Deputy Director of the Soviet Union’s bioweapons program has written that, during the cold war, their laboratories produced smallpox in large amounts, and made efforts to adapt it for loading into intercontinental missiles (Alibek, 1999). Scientists defecting from the former Soviet Union, or leaving Russia seeking work in other nations, may have illegally carried stocks of the virus to “rogue” nations (Alibek, 1999; Gellman, 2002; Mangold et al., 1998; Warrick, 2002). There is no publicly accessible proof that such defectors actually transported smallpox out of Russia, but no way of disproving that they did. […] Terrorists with access to a modern virus laboratory might genetically modify smallpox in ways similar to the published manipulations of ectromelia [mousepox] […] Genetically altered strains might pose problems of transmission; alteration of pathogenicity might have unknown effects on the transmissibility of the virus. Experienced intelligence observers feel that terrorists would avoid creating a strain with enhanced virulence. Such strains could devastate developing countries with poor public health systems, and a widespread outbreak would quickly spread to such countries (Johnson et al., 2003). Natural smallpox could similarly boomerang. Terrorists with the ability to manufacture it would realize that an effective attack might cause widespread disease in nations harboring their colleagues. Many such nations have poor public health systems and little vaccine, and would be more devastated than the nation initially attacked”

“The United States stopped routine vaccination in 1972. It could be resumed if the threat of smallpox becomes considerable. Only in a scenario where smallpox becomes widespread would it be wise to resume mass vaccination. […] The current CDC smallpox response strategy is based on pre-exposure vaccination of carefully screened members of first response teams, epidemiologic response teams, and clinical response teams at designated facilities. […] Readiness to control an outbreak resulting from an attack entails a high index of suspicion among clinicians, a good network of diagnostic laboratory capabilities, and a plan for use of surveillance and isolation techniques to quickly contain outbreaks. […] Resumption of widespread vaccination is dangerous and unnecessary.”

Vaccination is not dangerous because it might cause smallpox to reappear, but rather because of other risks involved in getting vaccinated. It’s important to note that Variola major is not the active ingredient in the vaccines used – rather, the vaccinia virus is used, a virus belonging to the same family.

As implied by the goodreads rating, I liked this book.


March 31, 2014 Posted by | Books, Infectious disease, Medicine, Microbiology

What Did the Romans Know? An Inquiry into Science and Worldmaking (II)

I finished the book.

I did not have a lot of nice things to say about the second half of it on goodreads. I felt it was a bad idea to blog the book right after I’d finished it (I occasionally do this), because I was actually feeling angry at the author at that point; I hope that, having now distanced myself a bit from it, I’m better able to evaluate the book.

The author is a classics professor writing about science. I must say I have by now had some bad experiences with reading authors with backgrounds in the humanities writing about science and scientific history – reading this book at one point reminded me of the experience I had reading the Engelhardt & Jensen book. It also reminded me of this comic – I briefly had a ‘hmmmmm… – Is the reason why I have a hard time following some of this stuff the simple one that the author is a fool who doesn’t know what he’s talking about?’-experience. It’s probably not fair to judge the book as harshly as I did in my goodreads review (or to link to that comic), and this guy is a hell of a lot smarter than Engelhardt and Jensen are (which should not surprise you – classicists are smart), but I frankly felt during the second half of this work that the author was wasting my time, and I get angry when people do that. He spends inordinate amounts of time discussing trivial points which to me seem only marginally related to the topic at hand – he’d argue they’re not ‘marginally related’, of course, but I’d argue that that’s at least in part because he’s picked the wrong title for his book (see also the review to which I linked in the previous post). There’s a lot of material in the second half about things like historiography and ontology, discussions about the proper truth concept to apply in this setting, and so on – somewhat technical stuff, but certainly readable. Still, he spends lots of words on trivial and irrelevant points, and there are a couple of chapters where I’ve basically engaged in extensive fisking in the margin of the book. I don’t really want to cover all that stuff here.

I’ve added some observations from the second half of the book below, as well as some critical remarks. I’ve tried in this post to limit my coverage to the reasonably good stuff in there; if you get a good impression of the book based on the material included in this post I have to caution you that I did not think the book was very good. If you want to read the book because you’re curious to know more about ‘the wisdom of the ancients’, I’ll remind you that on the topic of science at least there simply is no such thing:

“Science is special because there is no ancient wisdom. The ancients were fools, by and large. I mean no disrespect, but if you wish to design a rifle by Aristotelian principles, or treat an illness via the Galenic system, you are a fool, following foolishness.”

Lehoux would, I am sure, disagree somewhat with that assessment (that the ancients were fools), in that he argues throughout the book that the ancients actually often could be argued to be reasonably justified in believing many of the things that they did. I’m not sure to which extent I agree with that assessment, but the argument he makes is not without some merit.

“That magnets attract because of sympathy had long been, and would long continue to be, the standard explanation for their efficacy. That they can be impeded by garlic is brought in to complete the pairing of forces, since strongly sympathetic things are generally also strongly antipathetic with respect to other objects. […] in both Plutarch and Ptolemy, garlic-magnets are being invoked as a familiar example to fill out the range of the powers of the two forces. Sympathy and antipathy, the author is saying, are common — just look at all the examples […] goat’s blood as an active substance is another trope of the sympathy-antipathy argument. […] washing the magnet in goat’s blood, a substance antipathetic to the kind of thing that robs magnets of their power, negates the original antipathetic power of the garlic, and so restores the magnets.[15] […] we should remember that — even for the eccentric empiricist — the test only becomes necessary under the artificial conditions I have created in this chapter.[36] We know the falsity of garlic-magnets so immediately that no test [feels necessary] […] We know exactly where the disproof lies — in experience — and we know that so powerfully as to simply leave it at that. The proof that it is false is empirical. It may be a strange kind of empirical argument that never needs to come to the lab, but it is still empirical for all that. On careful analysis we can argue that this empiricism is indirect […] Our experiences of magnets, and our experiences of garlic, are quietly but very firmly mediated by our understanding of magnets and our understanding of garlic, just as Plutarch’s experiences of those things were mediated by his own understandings. But this is exactly where we hit the big epistemological snag: our argument against the garlic-magnet antipathy is no stronger, and more importantly no more or less empirical, than Plutarch’s argument for it. […]

None of the experience claims in this chapter are disingenuous. Neither we nor Plutarch are avoiding a crucial test out of fear, credulity, or duplicity. We simply don’t need to get our hands dirty. This is in part because the idea of the test becomes problematized only when we realize that there are conflicting claims resting on identical evidential bases — only then does a crucial test even suggest itself. Otherwise, we simply have an epistemological blind spot. At the same time, we recognize (as Plutarch did) how useful and reliable our classification systems are, and so even as the challenge is raised, we remain pretty confident, deep down, about what would happen to the magnet in our kitchen. The generalized appeal to experience has a lot of force, and it still has the power to trick us into thinking that the so-called “empirically obvious” is more properly empirical than it is just obvious. […]

An important part of the point of this chapter is methodological. I have taken as my starting point a question put best by Bas van Fraassen: “Is there any rational way I could come to entertain, seriously, the belief that things are some way that I now classify as absurd?”[45] I have then tried to frame a way of understanding how we can deal with the many apparently — or even transparently — ridiculous claims of premodern science, and it is this: We should take them seriously at face value (within their own contexts). Indeed, they have the exact same epistemological foundations as many of our own beliefs about how the world works (within our own context).”

“On the ancient understanding, astrology covers a lot more ground than a modern newspaper horoscope does. It can account for everything from an individual’s personality quirks and dispositions to large-scale political and social events, to racial characteristics, crop yields, plagues, storms, and earthquakes. Its predictive and explanatory ranges include some of what is covered by the modern disciplines of psychology, economics, sociology, medicine, meteorology, biology, epidemiology, seismology, and more. […] Ancient astrology […] aspires to be […] personal, precise, and specific. It often claims that it can tell someone exactly what they are going to do, when they are going to do it, and why. It is a very powerful tool indeed. So powerful, in fact, that astrology may not leave people much room to make what they would see as their own decisions. On a strong reading of the power of the stars over human affairs, it may be the case that individuals do not have what could be considered to be free will. Accordingly, a strict determinism seems to have been associated quite commonly with astrology in antiquity.”

“Seneca […] cites the multiplicity of astrological causes as leading to uncertainty about the future and inaccuracy of prediction.[41] Where opponents of astrology were fond of parading famous mistaken predictions, Seneca preempts that move by admitting that mistakes not only can be made, but must sometimes be made. However, these are mistakes of interpretation only, and this raises an important point: we may not have complete predictive command of all the myriad effects of the stars and their combinations, but the effects are there nonetheless. Where in Ptolemy and Pliny the effects were moderated by external (i.e., nonastrological) causes, Seneca is saying that the internal effects are all-important, but impossible to control exhaustively. […] Astrology is, in the ancient discourses, both highly rational and eminently empirical. It is surprising how much evidence there was for it, and how well it sustained itself in the face of objections […] Defenders of astrology often wielded formidable arguments that need to be taken very seriously if we are to fully understand the roles of astrology in the worlds in which it operates. The fact is that most ancient thinkers who talk about it seem to think that astrology really did work, and this for very good reasons.” [Lehoux goes into a lot of detail about this stuff, but I decided against covering it in too much detail here.]

I did not have a lot of problems with the stuff covered so far, but this point in the coverage is where I start getting annoyed at the author, so I won’t cover much more of it. Here’s an example of the kind of stuff he covers in the later chapters:

“The pessimistic induction has many minor variants in its exact wording, but all accounts are agreed on the basic argument: if you look at the history of the sciences, you find many instances of successful theories that turn out to have been completely wrong. This means that the success of our current scientific theories is no grounds for supposing that those theories are right. […]

In induction, examples are collected to prove a general point, and in this case we conclude, from the fact that wrong theories have often been successful in the past, that our own successful theories may well be wrong too.”

He talks a lot about this kind of stuff in the book. Stuff like this as well. Not much in those parts about what the Romans knew, aside from reiteration and contextualization of stuff covered earlier on. A problem he’s concerned with – and presumably one of the factors which motivated him to write the book – is how we might convince ourselves that our models of the world are better than those of the ancients, who also thought they had a pretty good idea about what was going on in the world; he argues this is very difficult. He also talks about Kuhn and stuff like that. As mentioned I don’t want to cover the stuff from the book I don’t like in too much detail here, and I added the quotes in the two paragraphs above mostly because they relate, if only marginally, to a point (a few points?) I felt compelled to include in the coverage; this stuff is important for me to underscore, in part because the author seems to be completely oblivious to it:

Science should in my opinion be full of people making mistakes and getting things wrong. This is not a condition to be avoided, this is a desirable state of affairs.

This is because scientists should be proven wrong when they are wrong. And it is because scientists should risk being proven wrong. Looking for errors, problems, mistakes – this is part of the job description.

The fact that scientists are proven wrong is not a problem, it is a consequence of the fact that scientific discovery is taking place. When scientists find out that they’ve been wrong about something, this is good news. It means we’ve learned something we didn’t know.

This line of thinking seems, from my reading of Lehoux, to be unfamiliar to him – the desirability of discovering the ways we’re wrong doesn’t really seem to enter the picture. Somehow Lehoux seems to think that the fact that scientists may be proven wrong later on is an argument which should make us feel less secure about our models of the world. I think this is a very wrongheaded way to think about these things, and if anything I’d argue the opposite – precisely because our theories might be proven wrong we have reason to feel secure in our convictions, because theories which can be proven wrong contain more relevant information about the world (‘are better’) than theories which can’t, and because theories which might in principle be proven wrong but have not yet been, despite our best attempts, should be placed pretty high up in the hierarchy of beliefs. We should feel far less secure in our convictions if there were no risk of their being proven wrong.

Without errors being continually identified and mistakes corrected we’re not learning anything new, and science is all about learning new things about the world. Science shouldn’t be thought of as being about building some big fancy building and protecting it against attacks at all costs, walking around hoping we got everything just right and that there’ll be no problems with water in the basement. Philosophers of science and historians of science in my limited experience seem often to subscribe to a model like that, implicitly, presumably in part due to the methodological differences between philosophy and science – they often seem to want to talk about the risk of getting water in the basement. I think it’s much better to not worry too much about that and instead think about science in terms of unsophisticated cavemen walking around with big clubs or hammers, smashing them repeatedly into the walls of the buildings and observing which parts remain standing, in order to figure out which building materials manage the continual assaults best.

Lastly just to reiterate: Despite being occasionally interesting this book is not worth your time.

March 28, 2014 Posted by | Books, History, Philosophy, Science

What Did the Romans Know? An Inquiry into Science and Worldmaking (I)

“It is not that the Romans knew only a little and were puzzled about a whole lot, [rather] they thought — just as we do — that they had a pretty good idea of what was going on in the world.”

“The main theme of this book […] is about what it means to understand a world […] If we look to the Roman sources, we find an exceedingly rich and complex tangle — every bit as rich and complex as our own, but very, very different. Sometimes startlingly so: different entities, different laws, different tools and motivations for studying the natural world. So, too, different ways of organizing knowledge, and sometimes different ways of understanding even the most basic levels of sensory experience. This book is an inquiry into how and why the Romans saw things differently than we do, or to put it more pointedly, how and why they saw different things when they looked at the world.”

Here’s one (brief) review of the book – I disagree with the last sentence and I would not have given it 4 stars based on what I’ve read so far, but aside from these objections I cannot find much in there with which I disagree.

I’ve read half of the book at this point. If not for the fact that I hadn’t updated the blog in a while, I probably would not have covered this book before I’d read it all – I’m not really sure it ‘deserves’ two blogposts. Incidentally this might be a good reminder that what you read here on the blog is not what I read in order to write these posts – this post is based on 130 pages of academic writing, however much of it I end up including in my coverage here, and reading 130 pages actually takes a while. If you want to update a book blog frequently you need to either read some pretty interesting stuff, or read a lot (preferably, presumably, both).

The book is sort of okay but nothing too special. In my opinion the author uses a lot of words to say not very much, but some of the points he does make are really rather interesting, which is why I’m still reading. The world looked very different to people who lived in Rome around the time of Cicero, and a lot of the ways in which their perceptions of the world differed from ours may well surprise the modern reader, as surely will some of the ways in which specific beliefs about the world were justified – as pointed out in the book, “relatively innocuous-looking assumptions about how phenomena are related, and how those relationships enable possibilities for interaction, can have major effects on how the world itself looks to be put together, and on what kinds of things are possible or impossible, patently obvious or patently ridiculous, in that world.”

The book’s coverage centers around the writings of people such as Cicero, Lucretius, Galen, Ptolemy, and Seneca, and it’s most certainly not a book about what the average guy on the street knew and thought about stuff during Roman times – such a book would be exceedingly hard to write.

Parts of the book are hard to cover here in detail due to what might be termed the contextual nature of the arguments presented, and I’ve actually decided against covering a few things which I’d sort of planned on covering here on account of not wanting to have to bother with explaining terms in the quotes with other quotes, but I have added what I believe to be a few interesting observations from the book below:

“when Cicero finally comes to laying out the details of the specific laws of the ideal state, we find the mapping out of the duties of people to gods as the first order of business. Not just any gods, but public gods, for the public good. Thus at the outset, Cicero establishes not the existence of the gods, for he thinks that is a given, but the parameters and responsibilities of the state religion […] what emerges repeatedly is an insistence that the maintenance of the official cult is absolutely central not just to the maintenance of the state as it stands, but […] to the maintenance of justice itself, and of all human society. […] Only when we come to know nature — perhaps better, Nature — can we fully understand religio, our duty to the gods, and the core of the best possible state. […] careful observation of higher-order aspects of nature (its beauty, its order) leads inevitably to proper ethical behavior, both between people, and between people and the gods. […] today, it is often taken as definitional that ancient science begins where ancient theology ends,[38] and many treatments of ancient political philosophy tend to downplay the foundational roles of the gods, even though natural-law theory is saturated with theology for most of its history. […] the gods are never very far away in ancient science.”

“the big schools of philosophy that had developed in the Hellenistic period were in large part […] dedicated to ethics as the primary focus of the school’s teaching. Many schools saw their physics and their logic as deeply connected with, and in some cases primarily as instruments in the pursuit of, ethical ends. […] Looking to Seneca’s works on nature, we find ethics front and center.”

“Ancient optics is not about light, it is about vision. The modern idea that visual information is carried in the first instance by the action and movement of light has become so ingrained for us that it is often difficult to set this assumption aside and to allow some room for the very foreign mechanisms of sight in ancient optics […] In antiquity light played some very different roles in seeing, and not every account of seeing seems to have even felt the need to invoke or explain the role of light in any detail. Perhaps the oddness of ancient light is seen most clearly in Aristotle, for whom light was nothing more than the actualization of the inherent (but passive) tendency of air to be transparent.[12] That is: air (or water) is potentially, but not always, see-through. At night, the potential transparency is unactivated, and the air is accordingly nontransparent, so we cannot see through it. Light is just the actualization of the air’s potential transparency, which thus allows visual forms to pass.
This is a very foreign idea, indeed.
Turning from physics to mathematical optics, we find virtually universal agreement on a different model. Unlike the modern model, where the eye takes in light and thence information, for ancient mathematical opticians the eye instead sends out some kind of radiative visual force that contacts objects in the world and somehow then passes information back to the eye. The details of this radiation vary from writer to writer, but the basic model is one of extramission out from the eye, rather than intromission into the eye.[13]”

March 26, 2014 Posted by | Books, History, Philosophy, Religion

Pathophysiology of disease – an introduction to clinical medicine (VI)

As I’ve now finished the book this will be the last post in the series.

The way I read this book has been different from the way I usually read books; most books I read, I read in one go over a relatively brief span of time. This one I certainly didn’t read in one go, and I took breaks from it lasting quite a significant amount of time. I’m not really sure why I read it that way, but one obvious contributing factor is that this book is hard to read and takes a lot of mental firepower to handle.

I gave the book five stars on goodreads and added it to my list of favourites. Here’s the review I wrote on that site:

“This review got to be rather longer than usual, but I guess I don’t have a hard time justifying that on account of the nature of the book.

To get this over with from the beginning: If you have never read a medical textbook before, don’t bother with this one. You’ll learn nothing and you’ll never finish it. Unless you speak more or less fluent medical textbook you’ll have to either look up a lot of new words, or you’ll read a lot of words you’ll not understand. The fact that the book is somewhat inaccessible was the most important factor pulling me towards 4 stars. I decided to let it have 5 stars anyway in the end – given how many hours I was willing to spend on this stuff I really couldn’t justify giving it any other rating, although there are also a few other small problems which I might have punished in other contexts.

If you know enough to benefit from reading this book it’s a great book, even though I’d prefer if future doctors – which would presumably make up most of the potential readers who ‘know enough to benefit from reading it’ – read a newer version of it. But in order to read it and get something out of it, you need some basic knowledge about stuff like microbiology, histology, immunology, endocrinology, oncology, (/bio-)chemistry, genetics, pharmacology, etc. And I don’t mean basic knowledge like what you’d get from a couple of wikipedia articles – having read textbooks and/or watched medical lectures on some of these topics is a must.

On top of relevant background knowledge you need to be willing to commit at the very least something like 50 hours of spare time to reading this thing. I spent significantly more time than that, and most people probably need to do that as well if they want to actually understand most of this stuff – you certainly do if you want some of it to actually stick.

There probably exist quite a few similar medical textbooks which are more up to date and which may provide slightly better coverage. But I’m not going to read those books. I read this one. And I’m glad I did. Don’t interpret the 5 stars to mean that this is the best book on this topic – I have no way of knowing whether or not it is, though I assume it isn’t. But it is a highly informative and well-written book which covers a lot of ground and from which I learned a lot.”

The ‘covers a lot of ground’ thing can’t be overemphasized – this book has 23 chapters mainly organized in terms of organ systems. It gives you an overview of how things work in general and some of the ‘classical’ ways in which they may go wrong. It does this very well, and despite being the kind of book where one chapter will cover heart disease and another chapter will cover pulmonary disease they’re very good at ‘connecting the dots’ – that disorders are often interrelated, and e.g. that a failing heart will cause problems with your lungs, is not something they neglect to deal with. Indeed the ‘big-picture view’ the book provides made me aware of multiple connections between ‘human subsystems’ which I’d been completely unaware of, and learning about these kinds of relationships was quite fascinating.

Another fascinating aspect was how much stuff there is to know about these things. It’s quite common for me to read books whose coverage overlaps to some extent with what I’ve read in other books – I’ll often prefer to read such books (though I also take steps to avoid limiting my exposure to unfamiliar material too much) because the information they cover is easier to relate and connect to other stuff up there in my head. One chapter (or a few pages) in one book may cover material which another book spent hundreds of pages dealing with. While reading this book I very often realized that I’d covered a specific topic somewhere else, which gave me a different perspective; ‘this topic is covered in more detail in Hall‘, ‘see Sperling for much more on this topic’, ‘see also Kolonin et al.’, ‘see also Eckel‘, ‘see Holmes et al.‘, and so on and so forth – I’ve added a lot of those kinds of comments along the way. While reading this book you sort of read the big-picture version, and at various points you’re likely to come across places where you can ‘zoom in’, on account of knowing a lot about that topic. What was most amazing to me in this context was how many places I couldn’t zoom in. There’s such a lot of stuff to know and learn.

I won’t cover the last chapters in much detail. The chapters I’ve read over the last few days covered disorders of the hypothalamus and pituitary gland (chapter 19), thyroid disease (chapter 20), disorders of the adrenal cortex (chapter 21), and disorders of the female (chapter 22) and male (chapter 23) reproductive tracts. I think I probably paid a bit more attention to a few of these chapters than I would have done had I not read Sperling (see link above) during one of my ‘breaks’ from this book. One reason for this is that Sperling – or rather Tuomi and Perheentupa, as they were the ones who wrote that specific chapter of the book – spent some time and effort dealing with various combinations of autoimmune conditions involving type 1 diabetes as one of the components, which suddenly makes the chapter on thyroid disease in particular more relevant than it otherwise would have been. Tuomi and Perheentupa covered this stuff because: “Two fundamentally different autoimmune polyendocrine syndromes (APSs) are generally recognized, and type 1 diabetes mellitus is common in both.” One should think the risk of my developing another autoimmune condition on top of my diabetes would be low, and it sort of is (incidentally, it would most likely be significantly higher if I were female); but a key observation here is that other autoimmune conditions usually show up later in life than does the diabetes, so the higher risk I face of developing e.g. Graves’ disease and Hashimoto’s disease (both are covered in chapter 20 of the Pathophysiology text) is not yet really accounted for, and the fact that I haven’t developed any of them yet is not very relevant to my risk of developing these conditions later in life (what is relevant is that I developed diabetes very early in my life – this actually makes it less likely that other organ systems will get hit as well, though it does not make the risk go away).
I’ll include a quote from the relevant chapter from Sperling below as I’m aware this was some of the stuff I did not cover when I read that book and so people may be completely in the dark about what I’m talking about:

“All combinations of adrenocortical insufficiency, thyroid disease (Graves’ disease, goitrous or atrophic thyroiditis), type 1 diabetes, celiac disease, hypogonadism, pernicious anemia (vitamin B12 malabsorption), vitiligo, alopecia, myasthenia gravis, and the collagen vascular diseases, which include at least one of the said endocrine diseases but exclude hypoparathyroidism and mucocutaneous candidiasis, are collectively called APS type 2. The co-occurrence of these diseases is presumably the result of a common genetic background. No exact incidence or prevalence figures are available, and they would probably vary with the population concerned. APS-2 is more common than APS-1, with a general prevalence of at least 1 per 10,000. Females are affected two to four times more often than men. The highest incidence of the components is in the third to the fifth decade of life, but a substantial number of patients develop the first component disease, usually type 1 diabetes, already in the first and second decade”

Note that the uncertain, yet seemingly low, prevalence estimate is easy to misunderstand. I haven’t looked at these numbers recently and I’m not going to go look for them now, but say type 1 diabetes (T1DM) affects 1 out of 300 people. Now combine the ‘at least 1 in 10,000’ estimate with that one and observe that roughly 2 out of 3 patients with APS-2 have T1DM; the risk that a type 1 diabetic will develop another autoimmune condition is then already measured in percent. These numbers incidentally downplay the actual risk – I decided to include a few examples from Sperling to illustrate. It makes sense to start with Graves’ disease as I already mentioned that one: “Graves’ disease has been reported in 9.3% of patients with type 1 diabetes (76).” Also, “Hypothyroid or hyperthyroid AITD [AutoImmune Thyroid Disease] has been observed in 10–24% of patients with type 1 diabetes” – uncertain figures with big error bars, but hardly risks of no import. Especially not when considering that: “In addition, between 5% and 25% of type 1 diabetic patients without clinical thyroid disease have antibodies to thyroid microsomal antigens (TMAb) or thyroid peroxidase (TPOAb)”. Although combination forms with multiple autoimmune disorders are quite rare, they’re not actually that rare (‘not rare enough…’) when you take into account that T1DM is also, well, rare.
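The arithmetic behind the ‘measured in percent’ claim is just conditional probability on the numbers above – here’s a minimal sketch, where the 1-in-300 T1DM prevalence is my own rough stipulation and the APS-2 prevalence is Sperling’s ‘at least’ lower bound:

```python
# Back-of-the-envelope risk calculation, using the illustrative
# numbers from the text (the 1-in-300 T1DM prevalence is a rough
# stipulation, not a figure from Sperling).
p_t1dm = 1 / 300              # assumed population prevalence of type 1 diabetes
p_aps2 = 1 / 10_000           # 'at least 1 per 10,000' prevalence of APS-2
share_aps2_with_t1dm = 2 / 3  # roughly 2 of 3 APS-2 patients have T1DM

# P(APS-2 | T1DM) = P(APS-2 and T1DM) / P(T1DM)
p_aps2_given_t1dm = (p_aps2 * share_aps2_with_t1dm) / p_t1dm

print(f"{p_aps2_given_t1dm:.1%}")  # → 2.0%
```

Since the 1-per-10,000 figure is a lower bound, the 2% is a floor under the stated assumptions, which is exactly why it’s ‘already measured in percent’ despite both component prevalences looking tiny.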

The stuff above was mostly just an aside explaining why I perhaps cared a bit more about the stuff covered in these last chapters than I otherwise would have, but hopefully it was an informative aside. I should note that the ‘more interesting’ stuff was not all of it more interesting on account of dealing with some elevated risk of ugly things happening to me; other parts of the last chapters were ‘particularly relevant’ because of other stuff, like the role cortisol plays in circadian variation in insulin resistance and the role ACTH-excretion plays in hypoglycemia. But I think it would take too much time and effort to go into the details of these things in this post so I’ll cut it short here.

March 22, 2014 Posted by | Books, Diabetes, Immunology, Medicine

Personality Judgment: A Realistic Approach to Person Perception (2)

I’ve finished the book. I ended up at three stars on goodreads; the book has less to say about ‘the really interesting stuff’ than I’d have liked, and although part of the reason for this was that the research simply didn’t exist at the time of publication, it was still a little disappointing. Funder provides some ideas in the second half about where to go look for interesting questions and their answers in this area, but actual answers were few in number when he published the book. I have been wondering along the way how much of this stuff has been looked at since he wrote the book – I don’t know, but I’m getting a bit curious and I may take a closer look at this stuff at a later point in time.

So anyway I ended up liking the book overall somewhat less than I thought I would while reading the first chapters. It is interesting, but many of the answers people reading a book like this are probably looking for aren’t in there. Much of the book, especially the second half, is centered around a simple signalling model used to conceptualize the various elements of the personality judgment process. The model (he calls it RAM – realistic accuracy model) is quite similar to standard signalling models known from e.g. microeconomics; you have a sender and a receiver, and you have noise as well as various variables (relevance, availability, detection and utilization) that impact the information exchange process. It should be noted that the question being asked is not whether or not information gets from A to B, but whether or not the receiver makes a correct inference about the sender, and one might also observe that it is not critical that the sender deliberately supplies the information in question to the receiver; we often send signals about our behavioural patterns and traits that other people might use to get a better understanding of us without being aware that we’re doing it (‘extroverts talk in a louder voice than introverts’ – yep, in case you didn’t know, they seem to do that…). He talks a lot about the model and tries to frame relevant questions so that they fit somewhere into it, but he doesn’t do any actual work with the model; it’s just a way to present his way of thinking about these things (i.e. there are no derivations of equilibria under given conditions or stuff like that).

A few words should perhaps be included here about the variables mentioned above. Relevance relates to whether or not behaviour is relevant to personality perception. Some behaviours are more relevant to specific trait judgments than are others; you learn more about someone’s courage by observing whether or not he enters a burning building to save a child than you do by observing how he behaves in the grocery store. Situational factors play a key role here. Availability relates to whether or not the information provided becomes available to the observer. If the observer/judge is not around when trait-relevant behaviour takes place, he or she cannot use that information. On a related note, different people have different relationships with other people, and so have access to different types of information. A close friend for instance has more (relevant) information available to judge you from than does the local grocery store clerk. In general more information is available to people who have known a person for a longer amount of time and have observed the individual in a wider variety of social contexts; there’s both a quantity and a quality aspect to familiarity. As for the next variable, not all available information gets picked up on by the receiver, and so this is where the detection stage becomes relevant. Even though a friend you’ve known for a while has seen you in a lot of different contexts, that doesn’t mean much if the friend, say, didn’t pay attention. 
Traits we possess ourselves (or at least believe ourselves to possess) are incidentally often easier for us to detect in others; a person who prides himself on his intelligence may be more likely to look for cues of intelligence provided by the sender during a social interaction than may the person who doesn’t think of himself as being particularly intelligent, but rather prides himself on his conscientiousness (I think he mentions this in the book, but stuff like this was certainly covered in Leary & Hoyle. Note that the reverse is true as well: “Research has shown that traits that are central to a person’s self-concept or are seen by the individual as ‘‘personally relevant’’ tend to be easier for others to detect”). The last of the variables, utilization, relates to the receiver’s interpretation of the sender’s signal/observed behaviour; people often have relevant information available to them which they detect, yet misinterpret. Two major problems people encounter when trying to utilize the information provided to them which Funder mentions in this context are that the relevance of a given behaviour depends on the situational context (the exact same behaviour may in one situation be highly relevant to a given trait and in another situation be completely irrelevant), and that any given behaviour may be affected by/motivated by more than one trait at the same time. Something that doesn’t help is that personality traits vary in how easy they are for others to observe (“traits like extraversion and agreeableness are the ones most likely to become visible in overt social behavior” – this dimension is rather important when it comes to the effects related to getting to know people better: “As Paunonen (1989) showed, even less visible traits become more judgable when the judge and the target are closely acquainted. To know somebody longer is not necessarily to learn more and more about how extraverted they are. With longer acquaintance, more and more subtle aspects of personality slowly become visible.”). Naturally an implication of the model is that “any efforts to improve accuracy, to be effective, must have an effect on relevance, availability, detection, or utilization.”
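Funder presents RAM purely conceptually and never formalizes it, but one way to see why every stage matters is to read the four stages, purely as an illustration of my own, as a chain of independent filters: a cue contributes to an accurate judgment only if it is relevant, available, detected, and correctly utilized, so overall accuracy behaves like the product of the four stage probabilities. The numbers below are made up for the sake of the sketch; nothing like them appears in the book.

```python
import random

# Toy reading of RAM as a chain of probabilistic filters (my own
# illustration, not Funder's formalization). A trial counts as an
# accurate judgment only if the cue passes all four stages.
def accurate_judgment(p_relevant, p_available, p_detected, p_utilized, rng):
    return all(rng.random() < p for p in
               (p_relevant, p_available, p_detected, p_utilized))

rng = random.Random(0)  # fixed seed for reproducibility
trials = 100_000
hits = sum(accurate_judgment(0.5, 0.6, 0.7, 0.8, rng) for _ in range(trials))

# Expected accuracy is the product of the stage probabilities:
# 0.5 * 0.6 * 0.7 * 0.8 = 0.168, so even decent per-stage odds
# multiply out to rather poor overall accuracy.
print(hits / trials)
```

The multiplicative structure is what makes the closing point above bite: raising any single stage probability raises overall accuracy, and a failure at any one stage zeroes out the rest, so improvement efforts really do have to target relevance, availability, detection, or utilization.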

Having talked about the general model, Funder then proceeds to talk about moderator variables – variables that affect accuracy. These can be subdivided into four classes: accuracy is affected by properties of the judge, properties of the target (the person who’s sending information), properties of the trait that is judged, and properties of the information supplied. As for the judge, three variables are brought up: “The capacity to detect and to utilize available cues correctly can be divided into three components: knowledge, ability, and motivation.” Other new variables are introduced when talking about the other moderator variables. Various forms of variable interactions are also covered later in the book (to take one example, people are generally poor at judging people they don’t like – this relates to the judge–target interaction term). Much of the discussion is somewhat theoretical, because the research had yet to be performed when Funder wrote the book, but the discussion is helpful even so.

I’ve added a few more observations from the book below.

“Social psychologists have frequently observed that female friends spend much of their time discussing emotions and relationships, whereas male friends are more likely to engage in work or play activities or to discuss less personal matters such as sports or politics […] If this observation is combined with Andersen’s (1984) findings, that conversations that reveal more personal information yield better information on which to base personality judgments, the following prediction can be derived: Well-acquainted women ought to judge each other with more accuracy than do well-acquainted men. Data relevant to this prediction are surprisingly rare, but a sex difference in the predicted direction has been reported by Harackiewicz and DePaulo (1982) as well as in a recent study by Vogt and Colvin (1998). The general (albeit small) superiority of women over men in the detection of emotional states is a long-standing staple of the literature”

“At a very basic level, there is a particularly powerful reason to expect one’s own personality to be particularly difficult to see: It is always there. Kolar, Funder, and Colvin (1996) dubbed this the ‘‘fish and water effect,’’ after the cliché that fish do not know that they are wet because they are always surrounded by water. In a similar fashion, the same personality traits that are most obvious to others might become nearly invisible to ourselves, except under the most unusual circumstances. […] In their experimental study, Kolar et al. obtained personality judgments from subjects’ close acquaintances as well as from the subjects themselves. In nearly every comparison, the acquaintances’ judgments manifested better predictive validity than did the self-judgments. For example, acquaintances’ judgments of assertiveness correlated more highly with assertive behavior measured later in the laboratory than did self-judgments of assertiveness. Although the differences were sometimes quite small, the same finding appeared for talkativeness, initiation of humor, physical attractiveness, feelings of being cheated and victimized by life, and several other traits of personality and behavior. A further study by Spain (1994) showed that the degree of difference in accuracy between the self and others depends on the criterion used. When the criterion for accuracy was the ability to predict overt, social behavior, this latter study found, self-judgments held no advantage over judgments by others (no advantage for the others was found in this study). But when the criterion was on-line reports of emotional experience, self-judgments of personality afforded better predictions than did peers’ judgments.

The bottom line seems to be this: Notwithstanding the obvious advantages of self-observation, in some ways it may be surprisingly difficult. […] Other people have a view of your social behavior that is as good as and sometimes even superior to the view you have of yourself.”

“The tendency to view different situations as similar causes a person to respond to them in a like manner, and the patterns of behavior that result are the overt manifestations of traits. The interpretation of a trait as a subjective, situational-equivalence class offers an idea about phenomenology—about what it feels like to have a trait, to the person who has it […] The answer is that ordinarily it doesn’t really feel like anything. The only subjective manifestation of a trait within a person will be his or her tendency to react and feel similarly across the situations to which the trait is relevant. […] A sociable person does not ordinarily say to him- or herself, ‘‘I am a sociable person; therefore, I shall now act in a sociable fashion.’’ Rather, he or she responds positively to the presence of others in a natural, automatic, unselfconscious way. An unsociable person, who perceives the presence of others differently, accordingly also responds differently. And a highly emotional person is too busy experiencing strong emotions to notice that his or her very emotional responsiveness may be one of his or her strongest, most characteristic and (to others) most obvious personality traits.”

“The improvement of relevance can be attempted in two ways. […] First, the judge can take care to observe the person being judged in the contexts that are most informative for the trait in question […] To judge social traits, one must observe the target person’s behavior in interpersonal situations. To judge occupational competencies, one must observe the target person’s job behavior. This seemingly obvious point is often neglected. People too often infer traits from the observation of behavior in contexts where no relevant information could be expected to occur. […] A second way to improve relevance is to do something to create the appropriate observational context. Some kind of stimulus might be created that will lead the target person to emit a behavior that is relevant to the behavior that the judge wants or needs to evaluate. This is not as unusual a tactic as might first appear. The simple act of asking someone a question is an example. […] People who are better judges of personality might, to an important degree, be those who know how to ask better questions. A good question, in this sense, is one that elicits relevant data about personality, an informative answer. […] It is also possible to set up social contexts in which more informative behaviors are likely to appear. If a situation is relaxed and informal, for example, people are more likely to be their real selves”

“The improvement of availability […] requires the judge to observe more behaviors in a wider variety of contexts. [again, remember that there’s both a quantity and quality aspect to this and that some settings may be more informative than others] […] there are at least a couple of things that a judge can do to improve detection. First, the judge can simply watch closely. […] This does not come without cost, however, so it should be done judiciously. […] In a similar vein, a distracted judge will garner less information […] Perhaps the most important thing a judge can do to improve the detection stage is to learn what is important to detect. […] Unfortunately, our knowledge of the cues that are […] informative about personality, though beginning to develop, is still far too thin. […] Even if psychologists were to gear up an intensive program for teaching people how to judge personality more accurately, on surveying the research literature they would find they still have surprisingly little of use to teach.”

“The utilization stage of accurate judgment involves thinking. The relevant and available information has been detected, and now the judge must do some interpretational work to figure out what it all means. […] Research indicates that this work is best done alone. When people get together to talk about their judgments before rendering them, apparently factors of group dynamics rather than valid inferential reasoning take control of the judgmental output. People discussing their judgments become concerned about self-presentation, saving face, politeness, making friends, achieving dominance, and a host of other issues that are irrelevant to accuracy. As a result, personality judgments are more accurate when made by individuals working alone than by those who have discussed their judgments with others first […] To optimize accuracy, these independently formulated judgments can then be combined arithmetically into an average that is much more reliable than any one of them would be.”
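The statistical point in that last sentence – that an arithmetic average of independently formed judgments is more reliable than any single one of them – is easy to demonstrate with a toy simulation. The noise level and the number of judges below are purely illustrative assumptions of mine, not anything from the book:

```python
import random
random.seed(0)

# Each "judge" sees the target's true trait value plus independent noise.
def judge(truth, noise_sd=1.0):
    return truth + random.gauss(0, noise_sd)

def corr(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

truths = [random.gauss(0, 1) for _ in range(500)]                 # 500 targets
single = [judge(t) for t in truths]                               # one judge each
averaged = [sum(judge(t) for _ in range(5)) / 5 for t in truths]  # five judges each

# Averaging the independent judgments tracks the 'truth' noticeably better:
print(round(corr(truths, single), 2), round(corr(truths, averaged), 2))
```

Averaging works here because the judges’ errors are independent and so partially cancel, while the shared signal (the true trait value) does not.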

“[One way] to improve the intuitive judgment of personality is for anyone who would judge his or her peers to acquire as much practice and feedback as possible. Get out more, be an extravert […] The same advice applies to those who would improve their self-knowledge […] Mix with many different people in a wide range of social settings. Travel. Meet the kind of people you do not ordinarily meet. Most important, be sure to seek feedback. The lack of good feedback is the missing link in much ordinary social experience (Hammond, 1996) and may be the reason many of us are not as good judges of personality as we should be. If we give up on a new acquaintance because we think we will not like him or her over time, we lose the chance to learn whether this prediction was right. […] if we fail to let our acquaintances feel free to express themselves, perhaps because we interrupt, are easily offended, or just fail to show interest, we will be cutting ourselves off from potentially useful knowledge about what they, and people like them, are really like. Unless the people we encounter feel free to be themselves, we will never be in a position to learn about what they are really like. By the same token, if we would know ourselves, we should encourage and be open to feedback from others concerning the nature of our own personalities. […] the general perspective of RAM implies not one, but two general prescriptions for improving the accuracy of personality judgment. […] the judge needs to use the available information better, but also needs for better information to be available.”

March 20, 2014 Posted by | Books, Psychology | Leave a comment

Open Thread

This is where new readers come out of the woodwork and say ‘hi!’ And it’s where regular readers tell me about interesting stuff they’ve come across since the last Open Thread.

I had social obligations this weekend and so I haven’t done a lot of blogging-relevant stuff over the last few days. I’ve read Ishiguro’s The Remains of the Day, and although I won’t blog it here I will note that it was an awesome book.

A few links:

i. I recently watched this lecture, but I decided against embedding it here because I was very far from impressed by it. If you decide to give it a shot you should at the very least do yourself a favour and skip the first 5 minutes. You should also note that quite a bit of work has been done in related areas such as search and matching theory since the Gale–Shapley algorithm was developed.
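For readers who haven’t encountered the Gale–Shapley (deferred acceptance) algorithm mentioned above, a minimal sketch is below – the preference lists are made up for illustration, not taken from the lecture:

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Deferred acceptance: proposers propose in preference order;
    acceptors tentatively hold the best offer received so far.
    Returns a stable matching as {proposer: acceptor}."""
    # rank[a][p]: how acceptor a ranks proposer p (lower = preferred)
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)             # proposers without a match
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                            # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])         # displaced proposer is free again
            engaged[a] = p
        else:
            free.append(p)                  # rejected; will try next choice
    return {p: a for a, p in engaged.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(men, women))  # everyone matched, no blocking pair
```

Both men prefer w1, but w1 prefers m2, so m1 ends up with w2 – and no man/woman pair would both rather be with each other than with their assigned partners, which is the stability guarantee the algorithm provides.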

ii. Sexual Behavior, Sexual Attraction, and Sexual Identity in the United States: Data From the 2006–2008 National Survey of Family Growth.

Some results and data from the link:

“Among adults aged 25–44, about 98% of women and 97% of men ever had vaginal intercourse, 89% of women and 90% of men ever had oral sex with an opposite-sex partner, and 36% of women and 44% of men ever had anal sex with an opposite-sex partner. Twice as many women aged 25–44 (12%) reported any same-sex contact in their lifetimes compared with men (5.8%). Among teenagers aged 15–19, 7% of females and 9% of males have had oral sex with an opposite-sex partner, but no vaginal intercourse.”

“About one-half of all STIs occur among persons aged 15–24”

“Although current HIV medications have substantially increased life expectancy (7), the medical costs are substantial, averaging approximately $20,000 per year for each person in care”

“Among women aged 15–44 in the 2006–2008 NSFG, 11% had never had any form of sexual activity with a male partner in their lives, 6.1% had sex in their lifetime but had no opposite-sex sexual activity in the past 12 months, and 69% had one male partner in the past 12 months. Nearly 8% had two partners in the past year, and about 5% had three or more partners in the past year. […] Among women aged 25–44, 1.6% never had any form of sexual activity with a male partner, 6.6% have had sex with a male but not in the past year, and 82% had one partner in the past year. Having one partner in the past 12 months was more common at older ages, presumably because more of these women are married. Having one partner in the past year was significantly more common among married (97%) or cohabiting (86%) women than those in other groups […] women aged 22–44 with less than a high school diploma were nearly twice as likely (13%) to have had two or more partners in the past 12 months as women with a bachelor’s degree or higher (7%).”

“Among women aged 15–44, the median number of male partners is 3.2 and in 2002 it was essentially the same at 3.3. For men aged 15–44, the median number of female partners was 5.6 in 2002 and remained similar at 5.1 in 2006–2008. As in 2002 when 23% of men and 9% of women reported 15 or more partners in their lifetimes, men were more likely than women to report 15 or more partners in 2006–2008 (21% of men and 8% of women). […] These results are consistent with prior findings from surveys in the United States and other countries, which all show that men on average report higher numbers of opposite-sex sexual partners than do women of the same age range […] While 11%–12% of women with lower levels of education reported 15 or more partners, 6.8% with bachelor’s degrees or higher reported 15 or more partners. For men […], the disparity by college education was smaller”

iii. The FIDE Candidates Tournament (the tournament deciding who’s to play against Magnus Carlsen in the next World Chess Championship match) has begun and a few rounds have been played. Some interesting chess so far. The official site is here. I haven’t followed the live commentary, but I’ve noted that the main commentators seem to be Danish Grandmaster Peter Heine Nielsen and his wife Viktorija Čmilytė (currently the 12th strongest female player in the world). Without having followed the coverage I can’t of course say how well they’ve done, but I’d say that picking someone like Nielsen to provide commentary seems to me like a very good idea; aside from being a ‘pretty strong player’ who’s been in the world top 100 for a decade or something like that, he’s also been one of Anand’s seconds for years – he’s currently Magnus Carlsen’s second – and if you want someone able to talk about the specific details of the various openings likely to be employed in games like these, it would probably be very hard to find someone significantly better than him.

March 17, 2014 Posted by | Open Thread | 5 Comments

Personality Judgment: A Realistic Approach to Person Perception

“This is a book about accuracy in personality judgment. It presents theory and research concerning the circumstances under which and processes by which one person might make an accurate appraisal of the psychological characteristics of another person, or even of oneself.
Accuracy is a practical topic. Its improvement would have clear advantages for organizations, for clinical psychology, and for the lives of individuals. With accurate personality judgment, organizations would become more likely to hire the right people and place them in appropriate positions. Clinical psychologists would make more accurate judgments of their clients and so serve them better. Moreover, a tendency to misinterpret the interpersonal world is an important part of some psychological disorders. If we knew more about accurate interpersonal judgment, this knowledge might help people to correct the kinds of misjudgments that can cause problems. Most important of all, if individuals made more accurate judgments of personality they might do better at choosing friends, avoiding people who cannot be trusted, and understanding their interpersonal worlds (Nowicki & Mitchell, 1998). […] This is a book about how people make judgments of what each other is like, the degree to which these judgments achieve accuracy, and the factors that make accuracy in personality judgment more and less likely.”

I’m currently reading this book by David Funder. It’s quite interesting. Much of the book so far – and I’ve read about half of it at this point – has been dealing with how the different schools of research in this field historically have approached these matters, and the various ways they’ve tried to conceptualize central issues of interest (e.g. questions such as, ‘how do we establish when people are accurate? Which criteria do we apply?’). There’s been a good deal of focus on methodological issues and how to interpret results in various contexts, and less specific focus on ‘the actual results’ (though of course some of these have been reported as well). This emphasis on methodology probably means that some people may find this book a bit boring. I’m reasonably sure Funder will proceed to the more ‘meaty’ parts in the second half and I look forward to reading the rest of the book. I think I’m currently hovering around a 4 star evaluation on goodreads. There’s a lot of good stuff in here, including incidentally some observations that made it easier for me to realize why I disliked the CBT handbook as much as I did (it pointed out some specific problems with the approaches applied in that book (/line of research) that I had not been fully aware of).

I’ve added some more observations from the book below. A lot of good stuff didn’t make it into this post. If you want to know if, say, a policeman is more likely to figure out if someone is lying or not than some random person on the street, at least judging from the material covered so far this is not the book for you (though I know there are studies covering this type of stuff which you can find on google scholar). But it’s very interesting and I’m really liking it. One of the few problems with this book is that for a research book it’s rather old (1999), but given the topics covered so far this actually matters much less than you’d think. Incidentally if my comments at the top of this paragraph made you curious about these things, you may want to see this post covering the results of a recent rather large review of studies dealing with humans’ ability to spot liars – I’ve covered this stuff before here on the blog.

Observations from the book:

“many doors in life are opened or closed to you as a function of how your personality is perceived. Someone who thinks you are cold will not date you, someone who thinks you are uncooperative will not hire you, and someone who thinks you are dishonest will not lend you money. This will be the case regardless of how warm, cooperative, or honest you might really be. […] a long tradition of research on expectancy effects shows that to a small but important degree, people have a way of living up, or down, to the impressions others have of them. Children expected to improve their academic performance to some degree will do just that […], and young women expected to be warm and friendly tend to become so […] There is another important reason to care about what others think of us: They might be right. […] The people in your social world have observed your behavior and drawn conclusions about your personality and behavior, and they can therefore be an important source of feedback about the nature of your own personality and abilities. […] looking to the natural experts in our social world is a rational way to learn more about what we are really like.”

“There are vastly more active social than personality psychologists now doing research, more social psychology training programs, and more grant money for social psychology research. […] Perhaps the most obvious difference between modern social and personality psychology is that the former is based almost exclusively on experiments, whereas the latter is usually based on correlational studies. […] In summary, over the past 50 years social psychology has concentrated on the perceptual and cognitive processes of person perceivers, with scant attention to the persons being perceived. Personality psychology has had the reverse orientation, closely examining self-reports of individuals for indications of their personality traits, but rarely examining how these people actually come off in social interaction. […] individuals trained in either social or personality psychology are often more ignorant of the other field than they should be. Personality psychologists sometimes reveal an imperfect understanding of the concerns and methods of their social psychological brethren, and they in particular fail to comprehend the way in which so much of the self-report data they gather fails to overcome the skepticism of those trained in other methods. For their part, social psychologists are often unfamiliar with basic findings and concepts of personality psychology, misunderstand common statistics such as correlation coefficients and other measures of effect size, and are sometimes breathtakingly ignorant of basic psychometric principles. This is revealed, for example, when social psychologists, assuring themselves that they would not deign to measure any entity so fictitious as a trait, proceed to construct their own self-report scales to measure individual difference constructs called schemas or strategies or construals (never a trait). 
But they often fail to perform the most elementary analyses to confirm the internal consistency or the convergent and discriminant validity of their new measures, probably because they do not know that they should. […] an astonishing number of research articles currently published in major journals demonstrate a complete innocence of psychometric principles. Social psychologists and cognitive behaviorists who overtly eschew any sympathy with the dreaded concept of ‘‘trait’’ freely report the use of self-report assessment instruments of completely unknown and unexamined reliability, convergent validity, or discriminant validity. It is almost as if they believe that as long as the individual difference construct is called a ‘‘strategy,’’ ‘‘schema,’’ or ‘‘implicit theory,’’ then none of these concepts is relevant. But I suspect the real cause of the omission is that many investigators are unfamiliar with these basic concepts, because through no fault of their own they were never taught them.”

“Many studies over a period of several decades have shown that the impressions others have of your personality agree to an impressive extent both with each other and with your impression of yourself. […] recent research using sophisticated data analyses has shown that the consistent effect of the person is by far the largest factor in determining behavior, overwhelming more transient influences of situational variables or person-by-situation interactions […] correlations between personality and behavior are particularly high when the predictive target is aggregates or averages of behavior rather than single instances […] In everyday life what we usually wish to predict on the basis of our personality judgments are not single acts but aggregate trends. Will the person we are trying to judge make an agreeable friend, a reliable employee, or an affectionate spouse? Each of these important outcomes is defined not by a single act at a single time, but by an average of many behaviors over a diverse range of contexts. The classic Spearman-Brown formula shows how even seemingly small correlations with single acts can compound into high correlations with the average of many acts. For example, Mischel and Peake (1982) found that inter-item correlations among the single behaviors they measured were in the range of .14 to .21, but that the coefficient alpha for the average of the behaviors they measured was .74. That is, a similar aggregate of behaviors would be expected to correlate .74 with that one. 
In the same vein, Epstein and O’Brien (1985) reanalyzed several classical studies in the field of personality and found in each case that although behavior seemed situationally specific at the single-item level, it was quite consistent at the level of behavioral aggregates.” [I’m familiar with this stuff at this point, but I can’t remember to which extent I included stuff like this in my coverage of Leary & Hoyle so I decided to include these observations here; there are a lot of pages in L&H about these and related matters because this kind of stuff is really important in terms of how to measure variables and interpret coefficients in these fields.]
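The Spearman–Brown logic in the quote – small inter-item correlations compounding into high aggregate reliability – can be checked directly. The .17 below is simply my midpoint of Mischel and Peake’s .14–.21 range, and the item counts are assumptions chosen to show where alpha lands relative to their .74:

```python
def spearman_brown_alpha(k, r):
    """Expected reliability (coefficient alpha) of an aggregate of k
    items whose average inter-item correlation is r."""
    return k * r / (1 + (k - 1) * r)

# Single behaviors inter-correlate only weakly (~.14-.21 in Mischel &
# Peake's data); taking .17 as an average, aggregating a dozen or so
# behaviors already yields a reliability in the neighborhood of .74:
for k in (1, 5, 14, 25):
    print(k, round(spearman_brown_alpha(k, 0.17), 2))  # 0.17, 0.51, 0.74, 0.84
```

This is the quantitative reason why aggregate trends (‘will this person make a reliable employee?’) are far more predictable from personality than any single act is.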

“To evaluate the degree to which a behavior is affected by a personality variable, the routine practice is to correlate a measure of behavior with a measure of personality. But how does one evaluate the degree to which behavior is affected by a situational variable? […] this question has received surprisingly little attention over the years. Where it has been addressed, the usual practice is rather strange: The power of situations is determined by subtraction. […] Of course, this is not a legitimate practice […] the two sides of the person-situation debate have in an important way been talking past each other for a couple of decades. For the cognitive behaviorists, significant differences in behavior across conditions has been taken as conclusive proof that behavior is situationally determined and otherwise inconsistent. For personality psychologists, the maintenance of individual differences in behavior across situations demonstrates the importance of stable aspects of personality for determining what people do. It turns out that these two conclusions are not in the least incompatible. […] Behavior in general changes with the situation, and the behavior of individuals is impressively consistent across situations. These statements are not incompatible; they are both true […] [and] some behaviors are more dependent on the situation than are others.”

“[One] aim of the heuristics-and-biases approach was to compile a vast catalog of the many different ways in which human judgment is faulty (Lopes, 1991). Surprisingly often, authors slid easily from describing heuristics as useful and even necessary components of human judgment under heavy cognitive load to characterizing them as woeful ways in which otherwise rational thinking too often goes astray. This is an important change of emphasis. […] The emphasis on mistakes […] had a deep and pervasive influence throughout psychology and even beyond. Over the 20-year reign of the error paradigm, a conventional wisdom became established that people were—not to put too fine a point on it—stupid. […] However, it might not necessarily be helpful to make one’s judgments while afflicted by the kind of self-doubt a reading of some researchers on error would inflict. […] Furthermore, some writers have noted that the heuristics-and-biases approach, as typically employed, has a direct and powerful implication that seems to be quite false. The implication is that if we could eliminate all heuristics, biases, and errors from our judgment, our judgments would become more accurate. In fact, the reverse seems to be the case. Researchers on artificial intelligence find they must build heuristics and biases into their programs to allow them to function at all in environments that have any degree of complexity or unpredictability—environments, in other words, like the real world […] For example, successful elimination of the ‘‘halo’’ effect has been shown to make judgments of real individuals less accurate […] This is probably because socially desirable traits really do tend to co-occur, making the inference of one such trait from the observation of another—the halo effect—a practice that ordinarily enhances accuracy […] Other heuristics have also been found to enhance accuracy”

“When the mythic age finally arrives when research has answered all our questions, it still might turn out that overattribution to the person is more common than overattribution to the situation. But already it is clear that both kinds of error exist, and both are important. Calling just one of them ‘‘fundamental’’ is probably unwise.” [he included a lot of stuff about this one, but I decided against covering all of that here]

“the first, most obvious, and perhaps most daunting difficulty in accuracy research is the criterion problem. To study the moderators and processes of accurate judgment, a researcher needs some sort of criterion for determining the degree to which a given judgment is right or wrong. […] methodological issues concern the techniques a researcher should use to assess and statistically analyze the two criteria for accuracy that are available. To make a long story (temporarily) short, these criteria are interjudge agreement and behavioral prediction. […] Error researchers employ what Hammond (1996) called ‘‘coherence’’ criteria. These criteria include the degree to which a judgment follows the prescriptions of one or another normative model of judgment […] Accuracy researchers employ ‘‘correspondence’’ criteria. Correspondence criteria include the degree to which a judgment matches or corresponds with one or more independent indicators of reality. […] Both criteria can be and sometimes are applied to the same judgment. For example, the process by which a weather forecaster makes his or her judgments might be compared to the inferential rules that were taught in meteorology school. If the process followed by the forecaster makes logical sense and follows the rules he or she was taught, the judgment passes the coherence criterion. Alternatively, if his or her judgment is that it will rain tomorrow, one can also wait and see if it actually rains. If it does, then the judgment passes the correspondence criterion. The difference between these criteria is interesting and important because a judgment deemed correct by one criterion may be incorrect according to the other. […] In an ideal world, researchers interested in accuracy would use both. […] At present, however, the two criteria are employed by areas of research that are quite separate.”

“To interact successfully with someone you really need to know accurately only about those aspects of the person that are relevant to his or her behaviors in the environments you share […] a ‘‘circumscribed accuracy,’’ […] This approach is useful, but its implications are limited […] research seems to show, perhaps surprisingly, that circumscribed accuracy is no better and is sometimes worse than generalized accuracy […] For example, people are better at judging another person’s general degree of talkativeness than at judging how talkative he or she will be specifically with them”

“different judges of the same person tend to agree in their judgments, even after fairly brief acquaintance […] And two judges who rate each other generally do not describe each other as similar to themselves […] One of Kenny’s most important empirically based conclusions is that people agree with others about what they are like (self-other agreement) because both the target and the observer base their impressions on the same information, which is the target’s behavior. That is, you see what I do, and I also see what I do, and this is why we agree about what I generally do and therefore what I am like. […] judges use stereotypes as an important basis for their judgment only when they have little information about the target. In this situation, it appears, judges fill in the missing information with general stereotypes or even […] their own self-description (which itself is a sort of stereotype if applied to the judgment of others; Hoch, 1987). When you know someone well you can base your judgments on what you have seen. When you have little information, you fall back on stereotypes and self-knowledge.”

March 14, 2014 Posted by | Books, Psychology | Leave a comment

Handbook of Cognitive-Behavioral Therapies (2)

I’ve finished the book.

I almost didn’t. A few of the chapters were quite awful. Here’s what I wrote on goodreads:

“Much closer to one star than three – I was very close to giving it one star.

It starts out not terrible, but then gets worse and worse as it moves on. Some chapters are almost hilariously bad. Often all that’ll be worth knowing about a given method is one or two key ideas; you quickly realize that most of the rest is just crap and/or speculation. Some chapters have more than this, but not many, and frankly a lot of this stuff is pure bullshit.

Many of the chapters are written by partisans who don’t even try to pretend to be impartial.

I was very disappointed by this book.”

I wrote this review after having just read the last two chapters, the first of which was far from great and the latter of which was simply spectacularly bad. So I might have been a bit harder on the book than I should have been. The ‘it progressively gets worse’ model is also on second thought perhaps not completely fair. Either way I don’t think you should read this book. At least not all of it. Some chapters were not bad, it’s just that others broke the scale and tempted me to give the book a negative goodreads score. This makes it somewhat hard for me to say good things about it in general.

Actually I have been tempted while reading this handbook to add another star to Leary & Hoyle‘s work. If that’s the kind of stuff which is out there, perhaps I was too hard on those guys.

In some of the chapters of this book you’ll have to look really hard to find any formal tests of whether it even makes sense to think about the problems in the manner proposed (you’ll see lots of words spilled, but words are cheap); some of the therapy approaches have assumptions underlying them which are not flexible at all and are simply taken for granted – in some cases they’re arguably not even testable in theory. To take an example, it’s always theoretically possible for you to blame your parents for your problems, and it’s not hard to come up with a therapy approach which helps you feel better by enabling you to evade responsibility for your problems by blaming your parents for them. Parent-blaming is a frequently encountered component in so-called schema-therapy approaches, though of course they don’t call it that in the book. Other times the therapists get even more creative; for example, did you know that marital discord may be partly due to (societal) racism? I didn’t, I’m glad they included this important variable in their coverage. In one case I considered the proposed theoretical framework underlying/justifying the therapeutic approach frankly inconsistent – it simply makes no sense to me even in theory. Some approaches have proposed mechanisms of action which remain either completely unexplored or at the very least seriously underexamined – the proponents seem to feel fine justifying the treatment approach solely by reference to various outcome variables arguably completely orthogonal to the methodology applied. They sometimes have no clue why it works, when it works (…if it works?). Ideas like selection bias and selective attrition naturally spring to mind, as do (as always) underpowered studies of questionable validity and publication bias, but they pretty much don’t talk about stuff like that at all.
It should be noted that problems such as these are quite important to address if you want to argue, as some indeed implicitly do, that ‘regardless of whether it works ‘the way it’s supposed to’ or not, if it does work then that’s the important part.’ If the methodology is questionable it gets a lot harder to ‘just accept that it works’ because that conclusion might be wrong – and if you as a contributor to a handbook like this just choose to pretend specific problems don’t exist by not talking about them, that does not make you look good. There are a lot of problems meta-analyses do not solve.

In the coverage below I’ve tried to stay away from the low quality material and focus only on the stuff I can justify sharing here – don’t take the passages below to be representative of the book in general.

“The case formulation is an element of a hypothesis-testing empirical mode of clinical work […] The therapist begins the process by carrying out an assessment to collect information that is used to develop an initial formulation of the case. The case formulation is a hypothesis about the psychological mechanisms and other factors that cause and maintain a particular patient’s disorders and problems. The formulation is used to develop a treatment plan and to assist in obtaining the patient’s informed consent to it. After obtaining informed consent, the therapist moves forward with treatment. At every step in the treatment process […] the therapist returns repeatedly to the assessment phase; that is, the therapist collects data to monitor the process and progress of the therapy and uses those data to test the hypotheses (formulations) that underpin the intervention plan and to revise them as needed. Thus, the four elements of case formulation-driven cognitive-behavioral therapy (CBT) are (1) assessment to obtain a diagnosis and case formulation; (2) treatment planning and obtaining the patient’s informed consent to the treatment plan; (3) treatment; and (4) continuous monitoring and hypothesis-testing. […] A case formulation is important, because interventions flow from it […] a complete case formulation describes all of the patient’s symptoms, disorders, and problems, and proposes hypotheses about the mechanisms causing the disorders and problems, the precipitants of the disorders and problems, and the origins of the mechanisms. […] To understand the case fully, the therapist must know all of the problems. […] the therapist who simply focuses on the obvious problems or those on which the patient wishes to focus may miss important problems. Patients frequently wish to ignore problems such as substance abuse, self-harming behaviors, or others that can interfere with the successful treatment of the problems on which the patient does want to focus”

“numerous studies have now shown that CT (Cognitive Therapy) is associated with reductions of negative cognitions […] Garratt, Ingram, Rand, and Sawalani (2007) concluded in their review that the empirical literature is generally consistent with the hypothesis that CT results in cognitive changes that in turn predict reductions in depressive symptom severity. […] although the research designs and statistical techniques employed in most of these studies are appropriate for testing whether reductions in depressive symptoms and negative cognitions covary during CT, they do not allow for rigorous tests of the causal relations between symptoms and cognitions […] Notably, relatively few studies have included multiple assessments of both symptoms and plausible mediators […] In summary, given the research designs and data-analytic strategies employed in the majority of studies to date, only tentative conclusions can be drawn from the literature regarding the role of cognition in mediating therapeutic improvement in CT. […] Even though CT is somewhat more expensive than antidepressant medications in the short run, cost–benefit analyses to date have indicated that it pays for itself within a short time following treatment termination considering its potential to confer resistance to relapse and recurrence (Antonuccio, Thomas, & Danton, 1997; Dobson et al., 2008; Hollon et al., 2005).”

“Much of what distinguishes CT from other cognitive-behavioral therapies lies in the role assumed by the therapist and the role that he or she recommends to the patient. In the relationship, which is meant to be collaborative, the therapist and patient assume an equal share of the responsibility for solving the patient’s problems. The patient is assumed to be the expert on his or her own experience and on the meanings he or she attaches to events […] cognitive therapists do not assume to know why a certain thought was upsetting; they ask the patient.” [Other approaches don’t.]

“The purpose of scheduling activities in CT is twofold: (1) to increase the probability that the patient will engage in activities that he or she has been avoiding unwisely, and (2) to remove decision making as an obstacle in the initiation of an activity. Since the decision has been made in the therapist’s office, or in advance by the patient him- or herself, the patient need only carry out what he or she has agreed (or decided) to do. […] Since tasks that have been avoided by the patient are often exactly those that have been difficult to do, modifying the structure of these tasks is often appropriate. Large tasks […] are explicitly broken down into their smaller units […] to make them more concrete and less overwhelming. This intervention has been termed “chunking.” “Graded tasks” can also be constructed, such that easier tasks or simpler aspects of larger tasks are set out as the first to be attempted. […] Though chunking and graded task assignments may seem simplistic, it is often surprising to both patient and therapist how these simple alterations in the structure of a task change the patient’s view of the task and, subsequently, the likelihood of its being accomplished.”

“Problem-solving therapy (PST) is a positive approach to clinical intervention that focuses on training in constructive problem-solving attitudes and skills. […] Problem solving should be distinguished from solution implementation. These two processes are conceptually different and require different sets of skills. “Problem solving” refers to the process of discovering solutions to specific problems, whereas “solution implementation” refers to the process of carrying out those solutions in the actual problematic situations. […] Problem-solving skills and solution implementation skills are not always correlated; some individuals might possess poor problem-solving skills but good solution implementation skills, or vice versa.”

“A major assumption underlying the use of PST is that symptoms of psychopathology can often be understood and effectively prevented or treated if they are viewed as ineffective, maladaptive, and self-defeating coping behaviors that in turn have negative psychological and social consequences […] The most important concept in the relational/problem-solving model is “problem-solving coping,” a process that integrates all cognitive appraisal and coping activities within a general social problem-solving framework. A person who applies the problem-solving coping strategy effectively (1) perceives a stressful life event as a challenge or “problem to be solved,” (2) believes that he or she is capable of solving the problem successfully, (3) carefully defines the problem and sets a realistic goal, (4) generates a variety of alternative “solutions” or coping options, (5) chooses the “best” or most effective solution, (6) implements the solution effectively, and (7) carefully observes and evaluates the outcome. […] When the situation is appraised as changeable or controllable, then problem-focused goals are emphasized […] On the other hand, if the situation is appraised as largely unchangeable, then emotion-focused goals are emphasized (e.g., acceptance, making something good come from the problem).”

“a number of studies have suggested that an accumulation of unresolved daily problems may have a greater negative impact on well-being than the number of major negative events”

“Problem-solving ability has been found to be positively related to adaptive situational coping strategies, behavioral competence (e.g., social skills, academic performance, job performance), and positive psychological functioning (e.g., positive affectivity, self-esteem, a sense of mastery and control, life satisfaction). In addition, problem-solving deficits have been found to be associated with general psychological distress, depression, suicidal ideation, anxiety, substance abuse and addictions, offending behavior (e.g., aggression, criminal behavior), severe psychopathology (e.g., schizophrenia), health- related distress, and health-compromising behaviors. These results have been found using different measures of social problem-solving ability in a wide range of participants”

“compared to happy couples, distressed couples are characterized by a high frequency of reciprocal negative or punishing exchanges between partners, a relative scarcity of positive outcomes that each partner provides for the other, and deficits in communication and problem-solving skills […] Research has also demonstrated that partners in distressed relationships are more likely to notice selectively or “track” each other’s negative behavior […], make negative attributions about the determinants of such behavior […], hold unrealistic beliefs about intimate relationships […], and be dissatisfied with the ways that their personal standards for the relationship (e.g., regarding the amount of time and effort that they should put into their relationship) are met […] [However] some studies have indicated that increases in partners’ exchanges of positive behavior and improved communication skills have had limited impact on relationship satisfaction […] the degree of improvement in communication is not correlated with level of improvement in relationship adjustment”

“distressed couples commonly exhibit a pattern in which one partner pursues the other for interaction, while the other partner withdraws […] Females are more likely to be in the demanding role, whereas males more often withdraw”

“individuals often have strong standards for how partners should behave toward each other in a variety of domains. If these standards are not met, the individual is likely to become upset and behave negatively toward the partner. Likewise, one person’s level of satisfaction with the other’s behavior can be influenced by the attributions that person makes about the reasons for the partner’s actions. Thus, a husband might clean the house before his wife arrives at home, but whether she interprets this as a positive or negative behavior is likely to be influenced by her attribution or explanation for his behavior. If she concludes that he is attempting to be thoughtful and loving, she might experience his efforts to provide a clean house as positive. However, if she believes that he wishes to buy a new computer and is attempting to bribe her by cleaning the house, she might feel manipulated and experience the same behavior as negative. In essence, partners’ behaviors in intimate relationships carry great meaning, and not considering these cognitive factors can limit the effectiveness of treatment. We have described a variety of cognitive variables that are important in understanding couples’ relationships […], including the following:

Selective attention—what each person notices about the partner and the relationship.
Attributions—causal and responsibility explanations about marital events.
Expectancies—predictions of what will occur in the relationship in the future.
Assumptions—how each person believes people and relationships actually function.
Standards—how each person believes people and relationships should function.

These cognitions help to shape how each individual experiences the relationship. […] therapy at times will not focus on behavioral change but will help the partners reassess their cognitions about behaviors, so that they can be viewed in a more reasonable and balanced fashion.”

March 12, 2014 Posted by | Books, Psychology

Four hours of my life

Here’s the link. I was black. I’m currently on the top-100 tactics list on playchess (#68 right now), but you can’t tell that from this game.

Note that the result displayed is of course wrong – the game was a dead draw and a draw was agreed. It also was not a 1 minute game (see the post title) – it was a regular tournament game with FIDE rules against a ~1750 Elo opponent. I shared the game using playchess’ game sharing option, because it involves very little work and doesn’t require people who want to view games to have stuff like Java, but unfortunately I had to ‘superimpose’ the game on top of a bullet game in order to share the game that way.

Because you often run into the Four Knights Game when playing the Petroff – which, as mentioned before, I often do, though these days mostly against stronger players, where a draw would be acceptable – and because I hadn’t ever looked seriously at that stuff because I figured it wasn’t anything to be afraid of, I watched this very nice instructional video on the afternoon before the game. Of course all of that analysis was completely useless because my opponent played 1.d4.

As far as I can tell from a very brief computer analysis I did not make any major inaccuracies during this game. Of course a more careful analysis might tell a different story, but I’m not going to spend more time on it than I already have. Unfortunately my opponent did not make any major inaccuracies either. The computer evaluation is around equal – if anything black has a slight edge – at move 18, and it doesn’t change a great deal throughout the rest of the game. Incidentally, in case you were wondering, the computer agrees with my assessment that it was stronger (~0.35 pawns or so stronger, actually a quite significant difference given the variation in evaluation that this game was subject to overall) to take with the a-pawn on b6 than with the Queen, and that this capture overall improves my position. It’s the sort of move that may confuse people who know a little bit about chess but not very much, because they’ve heard about doubled pawns being weaknesses and so on – in this case the ‘weakness’ can’t really be exploited, I get a half-open file, and the former a-pawn can theoretically end up being exchanged for a c-pawn eyeing the center – a really good trade. Also, in general you want the Queen to help control the light squares in a position like this, and taking the knight distracts it from that role and loses time.

The position on its own does not tell the whole story; my opponent got into serious time trouble and I was certainly the only one playing for a win after the knight exchange. My opponent had only 3 minutes left for the last 10 moves before the time control (i.e. at move 30), whereas I had half an hour, and already at move 35 he had only one minute left in a position which was certainly far from completely clear. From a positional point of view I also had no problems justifying playing on as his pawns are fixed on dark squares and my king might (somehow?) be able to invade and get to b3. So I pressed, but ended up having to accept the draw. This was not a surprising outcome as the London system is in general a very solid opening which is quite hard to break (on the other hand it’s also quite difficult to argue that white has any sort of advantage out of this opening, and the unambitious nature of these setups is presumably part of the reason why they are uncommon in top level chess).

Slightly boring games like these do hold some important lessons, but most of what one learns from such games one learns from the mistakes made, and there unfortunately weren’t a lot of those here. I guess you can use it as an example of the level of play you need to master in order to draw an average tournament player (who chooses an unambitious, if solid, opening) with the black pieces.

March 11, 2014 Posted by | Chess

The Cambridge Economic History of Modern Europe: Volume 1, 1700-1870 (2)

Here’s my first post about the book. I have now finished it, and I ended up giving it three stars on goodreads. It has a lot of good stuff – I’m much closer to four stars than two.

Back when I read Kenwood and Lougheed, the first economic history text I read devoted to such topics, the realization of how much the world and the conditions of the humans inhabiting it had changed during the last 200 years really hit me. Reading this book was a different experience because I knew some stuff already, but it added quite a bit to the narrative and I’m glad I did read it. If you haven’t read an economic history book which tells the story of how we got from the low-growth state to the high-income situation in which we find ourselves today, I think you should seriously consider doing so. It’s a bit like reading a book like Scarre et al. – it has the potential to seriously alter the way you view the world, and not just the past, but the present as well. Particularly interesting is the way information in books like these tends to ‘replace’ ‘information’/mental models you used to have; when people know nothing about a topic they’ll often still have ‘an idea’ about what they think about it, and most of the time that idea is wrong – people usually make assumptions based on what they know, and when things about which they make assumptions are radically different from anything they know, they will make wrong assumptions and get a lot of things seriously wrong. To take an example, in recent times human capital has been argued to play a very important role in determining economic growth differentials, and so an economist who’s not read economic history might think human capital played a very important role in the Industrial Revolution as well. Some economic historians thought along similar lines, but it turns out that what they found did not really support such ideas:

“Although human capital has been seen as crucial to economic growth in recent times, it has rarely featured as a major factor in accounts of the Industrial Revolution. One problem is that the machinery of the Industrial Revolution is usually characterized as de-skilling, substituting relatively unskilled labor for skilled artisans, and leading to a decline in apprenticeship […] A second problem is that the widespread use of child labor raised the opportunity cost of schooling (Mitch, 1993, p. 276).”

I mentioned in the previous post how literacy rates didn’t change much during this period, which is also a serious problem with human-capital driven Industrial Revolution growth models. Here’s some stuff on how industrialization affected the health of the population:

“A large body of evidence indicates that average heights of males born in different parts of western and northern Europe began to decline, beginning with those born after 1760 for a period lasting until 1800. After a recovery, average heights resumed their decline for males born after 1830, the decline lasting this time until about 1860. The total reduction in average heights of English soldiers, for example, reached 2 cm during this period. Similar declines were found elsewhere […] in the case of England, it is clear that the decline in the average height of males born after 1830 occurred at a time when real wages were rising […] in the period 1820–70, the greatest improvement in life expectancy at birth occurred not in Great Britain but in other western and northwest European countries, such as France, Germany, the Netherlands, and especially Sweden […] Even in industrializing northern England [infant mortality] only began to register progress after the middle of the nineteenth century – before the 1850s, infant mortality still went up […] It is clear that economic growth accelerated during the 1700–1870 period – in northwestern Europe earlier and more strongly than in the rest of the continent; that real wages tended to lag behind (and again, were higher in the northwest than elsewhere); and that real improvements in other indicators of the standard of living – height, infant mortality, literacy – were often (and in particular for the British case) even more delayed. The fruits of the Industrial Revolution were spread very unevenly over the continent”

A marginally related observation which I could not help myself from adding here is this one: “three out of ten babies died before age 1 in Germany in the 1860s”. The world used to be a very different place.

Most people probably have some idea that physical things such as roads, railways, canals, steam engines, etc. made a big difference, but how they made that difference may not be completely clear. For a person who can without problems go down to the local grocery store and buy bananas for a small fraction of the hourly average wage rate, it may be difficult to understand how much things have changed. The idea that spoilage during transport was a problem to such an extent that many goods were simply not available to people at all may be foreign to many people, and I doubt many people living today have given much thought to how they would deal with the problems associated with transporting stuff upstream on rivers before canals took off. Here’s a relevant quote:

“The difficulties of going upstream always presented problems in the narrow confines of rivers. Using poles and oars for propulsion meant large crews and undermined the advantages of moving goods by water. Canals solved the problem with vessels pulled by draught animals walking along towpaths alongside the waterways.”

Roads were very important as well:

“Roads and bridges, long neglected, got new attention from governments and private investors in the first half of the eighteenth century. […] Over long hauls – distances of about 300 km – improved roads could lead to at least a doubling of productivity in land transport by the 1760s and a tripling by the 1830s. There were significant gains from a shift to using wagons in place of pack animals, something made possible by better roads. […] Pavement was created or improved, increasing speed, especially in poor weather. In the Austrian Netherlands, for example, new brick or stone roads replaced mud tracks, the Habsburg monarchs increasing the road network from 200 km in 1700 to nearly 2,850 km by 1793”

As were railroads:

“As early as 1801 an English engineer took a steam carriage from his home in Cornwall to London. […] In 1825 in northern England a railroad more than 38 km long went into operation. By 1829 engines capable of speeds of almost 60 kilometers an hour could serve as effective people carriers, in addition to their typical original function as vehicles for moving coal. In England in 1830 about 100km of railways were open to traffic; by 1846 the distance was over 1,500 km. The following year construction soared, and by 1860 there were more than 15,000 km of tracks.”

What did growth numbers look like in the past? The numbers used to be very low:

“Economic historians agree that increases in per capita GDP remained limited across Europe during the eighteenth century and even during the early decades of the nineteenth century. In the period before 1820, the highest rates of economic growth were experienced in Great Britain. Recent estimates suggest that per capita GDP increased at an annual rate of 0.3 percent per annum in England or by a total of 45 percent during the period 1700–1820 […] In other countries and regions of Europe, increases in per capita GDP were much more limited – at or below 0.1 percent per annum or less than 20 percent for 1700–1820 as a whole. As a result, at some time in the second half of the eighteenth century per capita incomes in England (but not the United Kingdom) began to exceed those in the Netherlands, the country with the highest per capita incomes until that date. The gap between the Netherlands and Great Britain on the one hand, and the rest of the continent on the other, was already significant around 1820. Italian, Spanish, Polish, Turkish, or southeastern European levels of income per capita were less than half of those occurring around the North Sea […] From the 1830s and especially the 1840s onwards, the pace of economic growth accelerated significantly. Whereas in the eighteenth century England, with a growth rate of 0.3 percent per annum, had been the most dynamic, from the 1830s onwards all European countries realized growth rates that were unheard of during the preceding century. Between 1830 and 1870 the growth of GDP per capita in the United Kingdom accelerated to more than 1.5 percent per year; the Belgian economy was even more successful, with 1.7 percent per year, but countries on the periphery, such as Poland, Turkey, and Russia, also registered annual rates of growth of 0.5 percent or more […] Parts of the continent then tended to catch up, with rates of growth exceeding 1 percent per annum after 1870. 
Catch-up or convergence applied especially to France, Germany, Austria, and the Scandinavian countries. […] in 1870 all Europeans enjoyed an average income that was 50 to 200 percent higher than in the eighteenth century”
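The compound-growth arithmetic in the passage above is easy to check for yourself. Here’s a quick sketch in Python (the helper function is mine, not anything from the book):

```python
# Sanity-checking the book's compound-growth figures.
# total_growth is my own illustrative helper, not from the book.

def total_growth(annual_rate, years):
    """Cumulative percentage growth from compounding an annual rate."""
    return ((1 + annual_rate) ** years - 1) * 100

# England, 1700-1820: 0.3 percent per annum over 120 years
print(round(total_growth(0.003, 120)))  # prints 43

# Most of the rest of Europe: at or below 0.1 percent per annum
print(round(total_growth(0.001, 120)))  # prints 13
```

Compounding 0.3% over 120 years gives roughly 43%, in the neighborhood of the book’s “45 percent” (the authors presumably use slightly different endpoints or a slightly higher rate), while 0.1% per year indeed comes out “less than 20 percent” over the whole period.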

To have growth you need food:

“In 1700, all economies were based very largely on agricultural production. The agricultural sector employed most of the workforce, consumed most of the capital inputs and provided most of the outputs in the economy […] at the onset of the Industrial Revolution in England , around 1770, food accounted for approximately 60 percent of the household budget, compared with just 10 percent in 2001 (Feinstein, 1998). But it is important to realise that agriculture additionally provided most of the raw materials for industrial production: fibres for cloth, animal skins for leather, and wood for building houses and ships and making the charcoal used in metal smelting. There was scarcely an economic activity that was not ultimately dependent on agricultural production – even down to the quill pens and ink used by clerks in the service industries. […] substantial food imports were unavailable to any country in the eighteenth century because no country was producing a sufficient agricultural surplus to be able to supply the food demanded by another. Therefore any transfer of labor resources from agriculture to industry required high output per worker in domestic agriculture, because each agricultural worker had to produce enough to feed both himself and some fraction of an industrial worker. This is crucial, because the transfer of labor resources out of agriculture and into industry has come to be seen as the defining feature of early industrialization. Alternative paradigms of industrial revolution – such as significant increases in the rate of productivity growth, or a marked superiority of industrial productivity over that of agriculture – have not been supported by the empirical evidence.”

“Much, though not all, of the increase in [agricultural] output between 1700 and 1870 is attributable to an increase in the intensity of rotations and the switch to new crops […] Many of the fertilization techniques (such as liming and marling) that came into fashion in the eighteenth century in England and the Netherlands had been known for many years (even in Roman times), and farmers had merely chosen to reintroduce them because relative prices had shifted in such a way as to make it profitable once again. The same may also be true of some aspects of crop rotation, such as the increasing use of clover in England. […] O’Brien and Keyder […] have suggested that English farmers had perhaps two-thirds more animal power than their French counterparts in 1800, helping to explain the differences in labor productivity.[2] The role of horsepower was crucial to increasing output both on and off the farm […] [Also] by 1871 an estimated 25 percent of wheat in England and Wales was harvested by mechanical reapers, considerably more than in Germany (3.6 percent in 1882) or France (6.9 percent in 1882)”

“It is no coincidence that those places where agricultural productivity improved first were also the first to industrialize. For industrialization to occur, it had to be possible to produce more food with fewer people. England was able to do this because markets tended to be more efficient, and incentives for farmers to increase output were strong […] When new techniques, crop rotations, or the reorganization of land ownership were rejected, it was not necessarily because economic agents were averse to change, but because the traditional systems were considered more profitable by those with vested interests. Agricultural productivity in southern and eastern Europe may have been low, but the large landowners were often exceedingly rich, and were successful in maintaining policies which favored the current production systems.”

I think I talked about urbanization in the previous post as well, but I had to include these numbers because it’s yet another way to think about the changes that took place during the Industrial Revolution:

“On the whole, European urban patterns [in the mid-eighteenth century] were not very different from those of the late Middle Ages (i.e. between the tenth and the fourteenth centuries). The only difference was the rise of urbanization north of Flanders, especially in the Netherlands and England. […] In Europe, in the early modern age, fewer than 10 percent of the population lived in urban centers with more than 10,000 inhabitants. At the end of the twentieth century, this had increased to about 70 percent.[7] In 1800 the population of the world was 900 million, of which about 50 million (5.5 percent) lived in urban centers of more than 10,000 inhabitants: the number of such centers was between 1,500 and 1,700, and the number of cities with more than 5,000 inhabitants was more than 4,000.[8] At this time Europe was one of the most urbanized areas in the world […], with about one third of the world’s cities being located in Europe […] In the nineteenth century urban populations rose in Europe by 27 million […] (by 22.5 million in 1800–70) and the number of cities with over 5,000 inhabitants grew from 1,600 in 1800 to 3,419 in 1870. On the whole, in today’s developed regions, urbanization rates tripled in the nineteenth century, from 10 to 30 percent […] With regard to [European] centers with over 5,000 inhabitants, their number was 86 percent higher in 1800 than in 1700, and this figure increased fourfold by 1870. […] Between 1700 and 1800 centers with more than 10,000 inhabitants doubled. […] On the world scale, urbanization was about 5 percent in 1800, 15–20 percent in 1900, and 40 percent in 2000”

There’s a lot more interesting stuff in the book, but I had to draw a line somewhere. As I pointed out in the beginning, if you haven’t read a book dealing with this topic you might want to consider doing it at some point.

March 8, 2014 Posted by | Books, Data, economic history, Economics

Feynman lectures: Quantum electrodynamics

At some point I should probably read his lectures, but I don’t see that happening anytime soon. In the meantime lectures like the ones posted below are good, if imperfect, substitutes: they are very enjoyable to watch. He repeats himself quite a bit; I assume that part of the reason is that this stuff is from before internet lectures became a thing, and there would have been no way for people to learn what he’d said in previous lectures, making it a reasonable strategy for the lecturer to repeat the main points of previous lectures so that newcomers would not be completely lost.

The sound is really awful in the beginning of the second lecture especially, but a lot of the stuff covered there is review and the sound problem gets fixed around 17 minutes in. More generally the sound quality varies somewhat and it isn’t that great. Neither is the image quality – it’s quite grainy most of the time and this sometimes makes it hard to see what he’s written/drawn on the blackboard. The last lecture in particular would presumably have been much easier to follow if you could actually tell the differences among the various colours of chalk he’s using. There are also problems in all videos with the image freezing up around the one-hour mark (the sound keeps working, so he’ll talk without you being able to see what he’s doing), but this problem fortunately lasts only a very short while (30 seconds or so). In my opinion minor technical issues such as these really should not keep you from watching these lectures – these are lectures given before I was even born, by a Nobel Prize winning physicist – the fact that you can watch them at all is quite remarkable.

I had fun watching these lectures. Here’s one neat quote from the third lecture: “Now in order to describe both the space and the time pictures, I’m going to make a kind of graph which we call… – which is very handy – if I call it by its name you’ll be frightened so I’m not going to call it by its name.” I couldn’t hold back a brief laugh at that point – I’m sure some of you understand why. Here’s another nice one, related to Eddington‘s work on the coupling constant: “The first idea was by Eddington, and experiments were very crude in those days and the number looked very close to 136, so he proved by pure logic that it had to be 136. Then it turned out that them experiments showed that that was a little wrong, that it was closer to 137, so he found a slight error in the logic and proved [loud laughter in the background] with pure logic that it had to be exactly the integer 137.” There are a lot more of these in the lectures and incidentally if you manage to watch these lectures without at any point feeling a desire to laugh, your sense of humour is most likely very different from mine. I’m sure you’ll have a lot more fun watching these lectures than you’ll have reading articles like this one.

I will emphasize that these lectures are meant for the general public. Knowledge of stuff like vector algebra, modular arithmetic and complex numbers is not required, even though he implicitly covers such material in the lectures. He tries very hard to keep things as simple as possible while still dealing with the main ideas; if you’re the least bit curious, don’t miss out due to some faulty assumption that this stuff is somehow beyond you. Either way you’ll probably have fun watching these lectures, whether or not you understand everything he covers.

Oh right, the lectures:

(This is the one I talked about with really bad sound in the beginning. The issue is as mentioned resolved approximately 17 minutes in.)

If you like these lectures and haven’t seen his lectures on the character of physical law (which I’ve blogged before), you’ll probably like those as well – you can start here.

March 5, 2014 Posted by | Lectures, Physics | Leave a comment

Handbook of Cognitive-Behavioral Therapies

I started reading this book yesterday. I’m not super impressed, but it’s not horrible either.

One chapter in the book, chapter 2, deals specifically with ‘The Evidence Base for Cognitive-Behavioral Therapy’, and although this would normally be the sort of thing I’d be very interested in, I actually thought it was a rather weak chapter despite its preferential reliance on RCTs and reviews/meta-analyses – mostly because the authors seem to only care about whether or not there’s an effect, not how large it is; effect sizes are rarely reported. To make matters worse, in the one case where they do report effect sizes as well as answering the ‘does this stuff work better than doing nothing?’-question (…and is that actually the question these articles answer? More on this below…) – the treatment effects of cognitive-behavioral therapy (-CBT) on obsessive-compulsive disorder (-OCD) – you suddenly realize that a lot of patients will not benefit at all from this stuff. A review article from the chapter notes that “one-third of those who complete a course of therapy, and nearly one-half of those who begin but do not complete treatment, will not make expected gains” – but despite this they conclude towards the end of the chapter, when summing up, that “The absolute efficacy of CBT for OCD is positive and well-supported.” It makes you wonder how many of the other conditions they discuss may technically ‘have an effect’ or ‘be well-supported’, yet lead to zero improvement for large groups of patients. A more thorough coverage of the treatment effects for a smaller number of conditions would probably have been advisable. There are other problems with this review – for example, the coverage of CBT treatment effects for substance dependence/-abuse relies on material not reporting long-term results, making the results meaningless or worse; the authors note that long-term results are not reported, but the natural conclusion to draw from this problem is not drawn, and it really should have been.
For more on this topic see this post and Scott Alexander’s post to which I link in that post. Yet another problem is that some of the studies comparing the outcomes of CBT and pharmacological treatment options were undertaken so long ago (the 1980s) that they presumably no longer have much validity today, because they were comparing CBT to previous generations of pharmacotherapy. The problems with this chapter are part of why I don’t post much on this topic below despite being quite interested in it: Frankly I don’t really trust the authors’ conclusions, and I find the coverage severely lacking in detail. I should note that although chapter 2 wasn’t great, chapter 3, on ‘Cognitive Science and the Conceptual Foundations of Cognitive-Behavioral Therapy’, was significantly worse, and I actually decided against including anything from that chapter in the coverage below.

Some observations from the first third of the book below:

“At their core, CBTs share three fundamental propositions:
1. Cognitive activity affects behavior.
2. Cognitive activity may be monitored and altered.
3. Desired behavior change may be effected through cognitive change.”

“Three major classes of CBTs have been recognized, as each has a slightly different class of change goals […] These classes are coping skills therapies, problem-solving therapies, and cognitive restructuring methods. […] the different classes of therapy orient themselves toward different degrees of cognitive versus behavioral change. […] Therapies included under the heading of “cognitive restructuring” assume that emotional distress is the consequence of maladaptive thoughts. Thus, the goal of these clinical interventions is to examine and challenge maladaptive thought patterns, and to establish more adaptive thought patterns. In contrast, “coping skills therapies” focus on the development of a repertoire of skills designed to assist the client in coping with a variety of stressful situations. The “problem-solving therapies” may be characterized as a combination of cognitive restructuring techniques and coping skills training procedures.”

“Briefly stated, the “mediational position” is that cognitive activity mediates the responses the individual has to his or her environment, and to some extent dictates the degree of adjustment or maladjustment of the individual. As a direct result of the mediational assumption, the CBTs share a belief that therapeutic change can be effected through an alteration of idiosyncratic, dysfunctional modes of thinking. Additionally, due to the behavioral heritage, many of the cognitive-behavioral methods draw upon behavioral principles and techniques in the conduct of therapy, and many of the cognitive-behavioral models rely to some extent upon behavioral assessment of change to document therapeutic progress. […] one commonality among the various CBTs is their time-limited nature. In clear distinction from longer-term psychoanalytic therapy, CBTs attempt to effect change rapidly, and often with specific, preset lengths of therapeutic contact. Many of the treatment manuals written for CBTs recommend treatment in the range of 12–16 sessions […] Related to the time-limited nature of CBT is the fact almost all applications of this general therapeutic approach are to specific problems. […] A third commonality among cognitive-behavioral approaches is the belief that clients are, in a sense, the architects of their own misfortune, and that they therefore have control over their thoughts and actions […] many CBTs are by nature either explicitly or implicitly educative.”

“Other criticisms pertain to research methodology. It has been argued that amalgamating placebo and waiting-list controls into a composite control condition confounds results (Parker, Roy, & Eyers, 2003). Specifically, Parker et al. asserted that participants assigned to a placebo condition are hopeful, because they assume that they are being treated, whereas participants assigned to a waiting-list control condition are discouraged, because they are not undergoing any treatment. They recommended that future research compare active treatments to different control conditions to disentangle potentially differing results. […] In addition to limitations to the research base on the efficacy of CBT, there are limitations to efficacy research in general. Although RCTs are highly utilized and respected in efficacy research, the reelvance [sic] of their results to routine clinical practice has been questioned (Leichsenring et al., 2006). For example, the restrictive exclusion criteria of many RCTs may undermine the representativeness of the participants to the general population of people with the disorder. Also, comorbidities are common among disorders but are controlled for in RCTs through exclusionary criteria, or are simply not addressed. Also, researcher allegiance, or the tendency of the authors of a comparative treatment study to prefer one treatment over another, may introduce bias into the study design that results in findings supportive of the preferred treatment (Butler et al., 2006).”

“Most psychotherapists accept, at least in principle, the value of scientific inquiry, even while they differ widely in what they consider to be acceptable scientific methods. Despite this development, however, there has been a decided lag in the acceptance of scientific findings as the basis for setting new directions or for deciding what is factual among practicing therapists. Indeed for many practitioners, the true test of a given psychotherapy rests in both its theoretical logic and evidence from clinicians’ observations rather than data from sound scientific methods, even when the latter are available […] What practitioners accept as valid hinges on both the methods used to derive results and the strength of their opinions. Practitioners prefer naturalistic research over randomized clinical trials, N = 1 or single-case studies over group designs, and individualized over group measures of outcome […] They also tend to believe research favoring the brand that they practice over research that supports alternative psychotherapy approaches or equivalency among approaches. Since most psychotherapy research fails to comply with these values, psychotherapists often are quick to reject scientific findings that disagree with their own theoretical systems. Thus, while the reasons given for rejecting scientific evidence may be more sophisticated today than in the past, it may be no less likely to occur.”

“CT [cognitive therapy] is a specific form of the more general CBTs […] Cognitive theory has been empirically based since its inception, in that it used findings from formal research to establish its theoretical principles. […] CT may best be defined as the application of cognitive theory to a certain disorder and the use of techniques to modify the dysfunctional beliefs and maladaptive information-processing systems that are characteristic of the disorder […] CT does not depend on the validity of insights into the nature of psychopathology for effectiveness in the therapeutic arena. First and foremost, cognitive theory emphasizes reliable observation and measurement in the assessment of the effects of treatment.”

“the efficacy of CT is differentially influenced by a variety of qualities characteristic of the patient and problem. Qualities such as patient coping styles, reactance levels, and complexity and severity of problems, among others, may influence the way that CT is applied. […] One patient characteristic that has proven to predict patients’ response to CT is “coping style,” the method that an individual adopts when confronted with anxiety-provoking situations, and that typically is viewed as a trait-like pattern. CT has been found to be most effective among patients who exhibit an extroverted, undercontrolled, externalizing coping style […] Internalization and externalization represent opposite poles on the traitlike dimension of coping style. Both coping styles may be used to reduce uncomfortable experience (i.e., provide escape or avoidance). Some patients cope by activating externalizing behaviors that allow either direct escape or avoidance of the feared environment. Alternatively, other patients may prefer behaviors (i.e., self-blame, compartmentalization, sensitization) that control internal experiences such as anxiety. Internalizing patients are typically characterized by low impulsivity and overcontrol of impulses, whereas externalizers generally exhibit highly impulsive or exaggerated behaviors. Additionally, internalizers tend to be more insightful and self-reflective. Internalizers typically inhibit feelings, tolerate emotional distress better than externalizers, and frequently attribute difficulties they encounter to themselves. On the other hand, externalizers tend to deny personal responsibility for either the cause or the solution of their problems, experience negative emotions as intolerable, and seek external stimulation. […] Although the principles of treatment are the same as those for externalizers, the treatment of internalizing individuals is more complex.”

“The major impetus for psychotherapy integration comes from the evidence that no single school of psychotherapy has demonstrated consistent superiority over the others. Rather, psychotherapy research for specific problems, such as drug abuse or depression, has largely led to the conclusion that all approaches produce similar average effects […] Unfortunately, the nonsignificance of treatment main effects often draws more attention than the growing body of research that demonstrates meaningful differences in the types of patients for whom different aspects of treatment are effective […] For example, research indicates that for patients with symptoms of anxiety and depression […] nondirective and paradoxical interventions are more effective than directive treatments in patients with high levels of pretherapy resistance (i.e., “resistance potential”[…]; and (3) therapies that target cognitive and behavior changes through contingency management […] are more effective than insight-oriented therapies in impulsive or externalizing patients, but this effect is reversed in patients with less externalizing coping styles […] The techniques of CT may be used with virtually any patient; however, the greatest benefit is achieved when the strategies or techniques are employed differentially, depending on patient dimensions such as coping style, type of problem, subjective distress, functional and social impairment, and level of resistance.”

“Patient resistance typically bodes poorly for treatment effectiveness, unless it is managed skillfully. It is generally assumed that some patients are more likely than others to resist therapeutic procedures. “Resistance” may be characterized as a dispositional trait and a transitory in-therapy state of oppositional (e.g., angry, irritable, and suspicious) behaviors. It involves both intrapsychic (image of self, safety, and psychological integrity) and interpersonal (loss of interpersonal freedom or power imposed by another) factors […] “Reactance,” an extreme example of resistance, is manifested by oppositional and uncooperative behaviors. […] Resistance is easily identifiable, and differential treatment plans for patients with high and low resistance are easily crafted. The successful implementation of these plans, however, is often quite a different matter. Overcoming patient resistance to the clinician’s efforts is difficult. It requires that the therapist set aside his or her own resistance to recognize that the patient’s oppositional behavior may actually be iatrogenic […] therapists often [react] to patient resistance by becoming angry, critical, and rejecting, which are reactions that tend to reduce the willingness of patients to explore problems.” [This aspect of the treatment dimension was – perhaps not surprisingly – emphasized in Clark as well.]

March 5, 2014 Posted by | Books, Psychology | Leave a comment

A few lectures

I love Crawford’s lectures, and this one is great as usual. Much of this will presumably be review if you’ve explored wikipedia a bit (lots of good astronomy stuff there), but there’ll probably be some new stuff as well and her delivery is really good.

I’m very skeptical about some of the numbers presented in this lecture, and this kind of thing – insufficiently sourced (or unsourced) numbers which are hard to look up, partly because other information is constantly being added to the mix – is an aspect of lectures I really don’t like. Not a great lecture in my opinion, but I figured I might as well post it anyway.

I’ve linked to e.g. this article before, so some of the stuff covered in this lecture should be well known to those readers who’ve read along for a long time and follow all my links… (Ha!)

As usual it’s annoying that you can’t see where the lecturer is pointing when talking about stuff on a given slide, but the lecture has some interesting stuff and it’s worth watching it despite this problem.

March 2, 2014 Posted by | Astronomy, Infectious disease, Lectures, Mathematics, Medicine, Microbiology, Physics | Leave a comment

Random stuff

i. Effects of Academic Acceleration on the Social-Emotional Status of Gifted Students.

I’ve never really thought of myself as ‘gifted’, but during a conversation with a friend not too long ago I was reminded that my parents at one point early on discussed with my teachers whether it would be better for me to skip a grade. This was probably in the third grade or so. I was asked, and I seem to remember not wanting to – during my conversation with the friend I brought up some reasons I had (…may have had?) for not wanting to, but I’m not sure if I remember the context correctly, so perhaps it’s better to just say that I can’t recall precisely why I was against the idea, only that I was. Neither of my parents was all that keen on the idea anyway. Incidentally the question of grade-skipping was asked in a Mensa survey answered by a sizeable proportion of all Danish members last year; I’m not allowed to cover that data here (or I would have already), but I don’t think I’ll get in trouble by saying that grade-skipping was quite rare even in this group of people – this surprised me a bit.

Anyway, a snippet from the article:

“There are widespread myths about the psychological vulnerability of gifted students and therefore fears that acceleration will lead to an increase in disturbances such as anxiety, depression, delinquent behavior, and lowered self-esteem. In fact, a comprehensive survey of the research on this topic finds no evidence that gifted students are any more psychologically vulnerable than other students, although boredom, underachievement, perfectionism, and succumbing to the effects of peer pressure are predictable when needs for academic advancement and compatible peers are unmet (Neihart, Reis, Robinson, & Moon, 2002). Questions remain, however, as to whether acceleration may place some students more at risk than others.”

Note incidentally that relative age effects (how the grades and other academic outcomes of individual i are affected by the age difference between individual i and his/her classmates) vary across countries, but are usually not insignificant; in most places the older students in the classroom do better than their younger classmates, all else equal. It’s worth keeping both such effects and the cross-country heterogeneities (and the mechanisms behind them) in mind when considering the potential impact of acceleration on academic performance – given the differences across countries, there’s no good reason why ‘acceleration effects’ should be homogeneous across countries either. Relative age effects are sizeable in most countries – see e.g. this. I read a very nice study a while back investigating the impact of relative age on the tracking options of German students and their later life outcomes (the effects were quite large), but I’m too lazy to go look for it now – I may add it to this post later (but I probably won’t).

ii. Publishers withdraw more than 120 gibberish papers. (…still a lot of papers to go – do remember that at this point it’s only a small minority of all published gibberish papers which are computer-generated…)

iii. Parental Binge Alcohol Abuse Alters F1 Generation Hypothalamic Gene Expression in the Absence of Direct Fetal Alcohol Exposure.

Nope, this is not another article about how drinking during pregnancy is bad for the fetus (for stuff on that, see instead e.g. this post – link i.); this one is about how alcohol exposure before conception may harm the child:

“It has been well documented that maternal alcohol exposure during fetal development can have devastating neurological consequences. However, less is known about the consequences of maternal and/or paternal alcohol exposure outside of the gestational time frame. Here, we exposed adolescent male and female rats to a repeated binge EtOH exposure paradigm and then mated them in adulthood. Hypothalamic samples were taken from the offspring of these animals at postnatal day (PND) 7 and subjected to a genome-wide microarray analysis followed by qRT-PCR for selected genes. Importantly, the parents were not intoxicated at the time of mating and were not exposed to EtOH at any time during gestation therefore the offspring were never directly exposed to EtOH. Our results showed that the offspring of alcohol-exposed parents had significant differences compared to offspring from alcohol-naïve parents. Specifically, major differences were observed in the expression of genes that mediate neurogenesis and synaptic plasticity during neurodevelopment, genes important for directing chromatin remodeling, posttranslational modifications or transcription regulation, as well as genes involved in regulation of obesity and reproductive function. These data demonstrate that repeated binge alcohol exposure during pubertal development can potentially have detrimental effects on future offspring even in the absence of direct fetal alcohol exposure.”

I haven’t read all of it but I thought I should post it anyway. It is a study on rats who partied a lot early on in their lives and then mated later on after they’d been sober for a while, so I have no idea about the external validity (…I’m sure some people will say the study design is unrealistic – on account of the rats not also being drunk while having sex…) – but good luck setting up a similar prospective study on humans. I think it’ll be hard to do much more than just gather survey data (with a whole host of potential problems) and perhaps combine this kind of stuff with studies comparing outcomes (which?) across different geographical areas using things like legal drinking age reforms or something like that as early alcohol exposure instruments. I’d say that even if such effects are there they’ll be very hard to measure/identify and they’ll probably get lost in the noise.

iv. The relationship between obesity and type 2 diabetes is complicated. I’ve seen it reported elsewhere that this study ‘proved’ that there’s no link between obesity and diabetes, or something like that – apparently you need headlines like that to sell ads. Such headlines make me very tired.

v. Scientific Freud. On a related note I have been considering reading the Handbook of Cognitive Behavioral Therapy, but I haven’t gotten around to that yet.

vi. If people from the future write an encyclopedic article about your head, does that mean you did well in life? How you answer that question may depend on what they focus on when writing about the head in question. Interestingly this guy didn’t get an article like that.

March 1, 2014 Posted by | alcohol, Diabetes, Genetics, Personal, Psychology, Studies, Wikipedia | 2 Comments