I had trouble following this, but I thought it was an interesting lecture anyway. The sound drops out a couple of times for brief periods (a few seconds, but still irritating), and at a few other points it’s a bit difficult to tell what he’s saying because he speaks very fast. The guy who controls the camera occasionally forgets to follow him around, which is annoying. But aside from these small problems it’s a good lecture. Here are some links I found helpful along the way (some were more helpful than others…) while watching the lecture: Duality (projective geometry), Euler characteristic, Big O notation, configuration (geometry), cubic curve, algebraic geometry of projective spaces, Cayley–Bacharach theorem.
I liked most of the lecture, but I agree with Razib Khan’s assessment that: “there may not have been a gene which made humanity, but a subtle complex of numerous genetic and cultural changes which transitioned at a critical point”. Based on his comments towards the end of the lecture, it seems that Pääbo thinks along different lines. It seems to me that the story about the origin and evolution of culture(s) is complex and multidimensional, and to tell the story of how humans got from flint axes to airplanes you need a lot more than to identify a few SNPs. I’d be very surprised if we can ‘narrow it down’ as much as Pääbo seems to assume we might be able to.
This lecture is much less technical than the first two – it’s a rather light and data-poor lecture, but I did find it worth watching.
(Some of the stuff below started out as comments made during a skype conversation with a friend. I added some other unrelated ideas as well. Most of it deals with the job interview setting, but there’s a little bit of other personal stuff at the bottom as well. I don’t really write posts like these anymore and I was strongly considering not posting this, so if you think the post contains some valuable insights you’d probably be well-advised to save it somewhere else; I can’t guarantee that I won’t change my mind about the post later on and delete it when I realize it’s the sort of crap I shouldn’t blog.)
I consider job-interviewing to be a skill that I, at some point hopefully not too far into the future, will have to try to acquire. As in other areas of life, I’ll probably try to acquire that skill through reading stuff about it – it’s what I do. But it’s probably worth writing down a few observations I’ve already made along the way. It’s my belief that the things that decide whether or not a given person lands a job are often at least somewhat unrelated to the qualifications of said individual, and it probably makes sense to try to optimize along such variables as well. This is hard to do if you’ve not given it some thought. Saying someone got the job because there was good chemistry between the interviewer and the interviewee may be correct, but it is not a very informative statement, and usually some variables go into that equation which can at least be tweaked a little in the right direction.
I’ve occasionally talked to my brother about classic Fermi problems and how to go about answering such questions, which is one angle (some employers do pose such questions during an interview). However, a probably much more important angle is the open-ended question. Any semi-competent interviewer will probably make use of these during an interview, because they have the potential to give you a lot of information. This is because the potential variation in response strategies is much higher here than in other contexts; people may vary a lot in how many words they use (‘Not enough information (to answer)’ vs ‘a 10-minute lecture on how you saved the lives of four kittens on November 13, 1999, and because of this – well, also partly due to Marjorie’s accident of course – decided to help out at the local homeless shelter…’), which words they use, how many variables they include in their response, which aspects they emphasize and which factors they exclude/overlook (e.g. intellectual vs social/emotional aspects), and so on and so forth. Interviewers ask such questions at least in part in order to get people to tell stuff about themselves which they might not otherwise have told them. When answering a question like that, one should probably try to keep in mind both why they ask (the answer to the question as such is not that important – which things they may be interested in learning about you is what’s important) and how you’d prefer to present yourself to them (how honest are you going to be in terms of signalling to them which type of person you are, and which types of variables would it be optimal for you to signal that you’d include in a random decision-making process?). As always in these contexts, the response strategy will to some extent imply a tradeoff between increasing the likelihood of getting the job and the risk of getting a job you don’t want.
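The appeal of Fermi problems is that they reward structured guessing rather than memorized answers. Here’s a minimal sketch of the classic ‘piano tuners in Chicago’ estimate; every number below is an assumption I’ve made up for illustration, and the point is the decomposition, not the figures:

```python
# Illustrative Fermi estimate: how many piano tuners work in a city of ~3 million?
# Every input is a rough, made-up assumption - the method is the point.
population = 3_000_000
people_per_household = 2.5
households_with_piano = 1 / 20   # fraction of households owning a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

pianos = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tuner_capacity = tunings_per_tuner_per_day * working_days_per_year

tuners = tunings_needed / tuner_capacity
print(round(tuners))  # an order-of-magnitude answer, not a precise one
```

What an interviewer looks for in such an answer is usually the chain of explicit assumptions, not whether the final number is right.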
I think a common theme in the approach I at this point assume makes sense in the interview context is that you do not in general want to memorize answers to specific questions. This is certainly not the way to handle Fermi problems, and I don’t think it’s a good approach to many other types of questions either. I don’t think the ‘memory strategy’ makes much sense except in so far as it relates to very specific questions which you know will come up during the conversation, and which you know you’ll need to have a good answer for in order to land the job. However, in general it probably makes more sense to have some idea of which personality traits and behavioural dispositions you’re going to emphasize when talking about yourself (and the sort of work you can do for the employer), and which traits and dispositions it on the other hand would be optimal for you to neglect to tell them about. Along the way you probably also want to give some thought to how perceived social signals from them about what they’re looking for should change your response strategies, if at all. Having given such topics some thought beforehand should make the social interaction more natural and make e.g. various ‘evasive maneuvers’ less obvious. A potentially important note is that response relevance (are you answering exactly the question they’re asking you, or are you perhaps answering a slightly different question which you would prefer to answer?) is not necessarily a variable you should always aim to maximize; the importance of this variable will depend upon the question and upon the preferences of the interviewer, and it is likely that you’ll quickly learn how much leeway you’re given in this respect.
All interviews will from a certain point of view contain a lot of elements which are included at least in part in order to make people slip and indicate that they’re not the right person for the job – if 7 people are interviewed and one person gets hired, the interviewer needs to justify why s/he didn’t hire any of the 6 others. Strategies such as trying to make people relax and feel comfortable are often effective in terms of squeezing relevant information out of the interviewees, because they tend to increase potential behavioural variation among interviewees; people behave more alike in environmentally-induced high-stress situations than they do in relaxed social environments (see e.g. Funder), and if anything an interviewer wants to maximize behavioural variation (the less important the environmental confound is, the more behavioural variation is displayed during each encounter, and the fewer rounds of interviews will be needed to decide upon an optimal candidate). Feeling comfortable during an interview probably should not be considered a state to be avoided as such, as awkward encounters are unlikely to lead anywhere, but it should be kept in mind that there are potential negative behavioural effects associated with feeling ‘too’ comfortable. Extensive knowledge about which sorts of social strategies interviewers apply during the interview should not (if you decide to try to obtain such knowledge in order to increase the likelihood of getting hired) make you more overtly cautious or mistrustful, as these are not traits you will want to display too openly (unless you’re applying for a job where such traits may be considered a plus).
A better idea may be to signal that you’re comfortable and relaxed, whether or not you actually are comfortable and relaxed – this seems in general to be a much smarter move than signalling that ‘you know what they’re trying to do’; the former will, if you do it successfully, both signal confidence and perhaps also make the interviewer believe your behavioural input is more ‘valuable’ (to them) than it may actually be in reality, whereas the latter may put you into a very different box. I have in the past perhaps had a tendency to think of displays of meta-level thinking as a positive factor in these contexts; one example of engaging in this type of behaviour could be to signal that you know some stuff about which traits and behavioural dispositions the employer is likely to consider desirable in an applicant. I’m no longer at all sure such displays are a good idea; there are certainly ways to do these things which are better than others (‘making such comments jokingly and in a light-hearted manner may serve to display both confidence and intelligence’). In general, displaying and drawing attention to the fact that you’re familiar with mechanisms applied by interviewers and that you’re trying to take them explicitly into account when answering questions may be a bad idea, as it may make your responses less trustworthy. Divulging explicit aspects of your response strategy may not be a good idea.
One thing to remember about the information setting is that the interviewers know next to nothing about you (aside from what they may have learned from your job application and a quick google) and that any variable you have not told them about is a variable they will not take into account when deciding whether or not you should get the job. They’ll ask questions designed to figure out all the relevant information, but sometimes identifying the relevant information is not an easy task, and it may be worth keeping in mind that you may need to help them along in that respect. Asking the interviewer questions along the way may be a good idea (if that is ‘permitted’ in the setting in question), in that it may help you get the interviewer to tell you something about herself. Information like that is power because it may help you identify things which you have in common with said individual; the more aspects you have in common, and the more significant these aspects are to the self-perception of the person with whom you’re interacting, the more likely you are to be liked by the interviewer (and the more likely you may be to get the job). Information provided by answers to such questions may also enable you to better gauge which answers they’re looking for, enabling you to potentially switch self-presentation strategies as needed. Even if the setting discourages asking questions, the subtext of e.g. the type of questions the interviewer asks may provide valuable information that can be applied in a similar manner.
Optimized non-verbal behavioural interaction patterns (eye contact, open body language, etc.), as well as formulation of specific behavioural heuristics derived from the above observations to be applied in the interview setting, are things I’ll have to have a look at later. I should probably also try to at least get some idea about just how ‘normal’ I’ll want to appear to a future employer. Self-presentation strategies, reframing techniques, and perhaps even social inputs from others which might be relevant in the interview setting are potential things to look into later as well. Just like in the dating context, the goal of holding and projecting accurate self-perceptions can be problematic in this context, which is something to keep in mind; in this particular context it’s taken as a given that you’ll try to mostly say nice things about yourself and present yourself in the best possible light, and if you don’t do that it may well make you look bad.
I have talked before about the Mensa trip I went on this weekend, so I guess I should add a few remarks about that here – I wrote an account of how it went and how I felt shortly after I’d returned home because I felt a need to do that, but I see no reason to share that stuff here. Instead I’ll keep it brief: It was not very much fun in general, but it wasn’t all bad -> Conclusion: I’m glad I decided to go because of the ‘get outside your comfort zone, try new stuff, learn stuff about yourself’-aspects, but at this point I don’t think I’ll repeat the experience anytime soon. Despite not being all that great it was not a particularly disappointing experience, as I had rather low expectations from the outset. Interestingly I only recently realized that I may have initially ‘underestimated’ the value of some social feedback I got during the event; a couple of people there expressed a desire to interact with me at a later point in time (a later specific point in time – it was not ‘a general notion’ but a specific activity they had in mind). That activity is also incidentally placed well outside my comfort zone, but most social activities are anyway, and the social angle on offer there is certainly very different from the ones to which I currently have access. I am actually seriously considering participating in that activity as well, if for no other reason than because it’s been a very long time since someone has approached me socially in this specific way. I sometimes forget that it’s actually nice to feel that other people have a desire to interact with you socially.
(Smbc). The book is not really a book about economics, but I haven’t come across a similar comic with the words ‘mathematical anthropology’, or something along those lines, at the bottom and I think it’s close enough (besides I really love that cartoon).
I have talked about the book before on more than one occasion, as some of Boyd and Richerson’s results/ideas tend to naturally pop up in a lot of contexts when one is reading e.g. anthropology texts – and despite not having read the book I’ve been familiar with some of the ideas. I’ve considered it to be ‘a book I ought to read at some point’ for quite a while. I think Razib Khan said nice things about it at one point; given that I really liked a few of the other books he’s recommended I think that was what originally made me focus in on the book. I have long believed I would find the topic to be interesting as well as the suggested approach to dealing with the topic sensible; I have also, however, believed for a long time that the book would be a lot of work, which is part of what has kept me back.
I’ve now read enough, I think, to at least have an impression of what it’s about. It is, as expected, a technical book – there are quite a few remarks along these lines in the book:
“While we have not been able to solve (12) analytically, it is easy to solve numerically [...] Because equation (13) is quite complex, we have not been able to derive an analytical expression for these equilibrium frequencies. However, it follows from the symmetry of the model that there is a stable symmetric equilibrium [...] A more rigorous local stability analysis of the complete set of recursions supports the heuristic argument just given. Consider the set of i+1 difference equations where Δpj(j=0,1,…,i; see the Appendix) provides the dynamics of the behavioral traits at each stage. The cooperative equilibrium point [...] is stable under the two distinct conditions …”
Someone ‘like me’ will not need to look up a lot of math-related stuff in order to understand the coverage in this book – the math is not that hard, it’s just that in some of the chapters there’s quite a lot of it. Then again if you’ve never seen a symmetry argument or people talking about deriving numerical solutions to troublesome analytical expressions (like the stuff above) before, and/or if you’ve never heard of eigenvalues or perhaps don’t have a good grasp of concepts like model equilibria or evolutionarily stable strategies, you’ll probably have some trouble along the way. One thing that ‘helps’ quite a bit in this context is that the math never seems superfluous; you get the clear impression that the authors did not add math in order to show how smart they are, but that they rather did it to promote and encourage a more systematic (…methodologically valid?) approach to this area of research. As they argue in the introduction:
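The quoted passage’s move – ‘we can’t solve the recursion analytically, so we iterate it numerically and check stability’ – can be illustrated with a toy model. The recursion below is a standard textbook sketch of conformist transmission, not equation (12) or (13) from the book; the parameter names are my own:

```python
# Toy conformist-transmission recursion (not Boyd & Richerson's actual equations):
#   p' = p + D * p * (1 - p) * (2p - 1)
# where p is the frequency of a cultural variant and D > 0 measures the
# strength of conformist bias. p = 0 and p = 1 are stable equilibria;
# p = 0.5 is an unstable equilibrium (the tipping point).

def step(p, D=0.1):
    """One generation of the difference equation."""
    return p + D * p * (1 - p) * (2 * p - 1)

def iterate_to_equilibrium(p0, D=0.1, tol=1e-9, max_iter=100_000):
    """Iterate the recursion until the change per generation is below tol."""
    p = p0
    for _ in range(max_iter):
        p_next = step(p, D)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# Starting above the unstable point 0.5 the variant sweeps to fixation;
# starting below it, the variant is lost.
print(iterate_to_equilibrium(0.6))  # close to 1.0
print(iterate_to_equilibrium(0.4))  # close to 0.0
```

This is the cheap version of the ‘local stability analysis’ the authors mention: perturb the system near a candidate equilibrium, iterate, and see whether it returns or runs away.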
“We think the way to make cultural explanations “hard” enough to enter into principled debates is to use Darwinian methods to analyze cultural evolution [...] applying the evolutionary biologists’ concepts and methods to the study of culture [...] Cultural evolution is rooted in the psychology of individuals, but it also creates population-level consequences. Keeping these two balls in the air is a job for mathematics; unaided reasoning is completely untrustworthy in such domains.”
I like their approach and I like the book so far. It has a lot of useful angles in terms of how to think about cultural stuff; variables, mechanisms, and tradeoffs.
I really liked the ‘Introduction’ chapter, and before going any further I think I should add a few (additional) remarks from that part of the book:
“People in culturally distinct groups behave differently mostly because they have acquired different beliefs, preferences, and skills, and these differences persist through time because the people of one generation acquire their beliefs and attitudes from those around them. To understand how cultures change, we set up an accounting system that describes how cultural variants are distributed in the population and how various processes, some psychological, others social and ecological, cause some variants to spread and others to decline. The processes that cause such cultural change arise in the everyday lives of individuals as people acquire and use cultural information. Some values are more appealing [...] Some skills are easy to learn [...] Some beliefs cause people to be more likely to be imitated [...] We want to explain how these processes, repeated generation after generation, account for observed patterns of cultural variation.”
“Culture completely changes the way that human evolution works, but not because culture is learned. Rather, the capital fact is that human-style social learning creates a novel evolutionary trade-off. Social learning allows human populations to accumulate reservoirs of adaptive information over many generations, leading to the cumulative cultural evolution of highly adaptive behaviors and technology. Because this process is much faster than genetic evolution, it allows human populations to evolve (culturally) adaptions to local environments – kayaks in the arctic and blowguns in the Amazon [...] To get the benefits of social learning, humans have to be credulous, for the most part accepting the ways that they observe in their society as sensible and proper, but such credulity opens human minds to the spread of maladaptive beliefs. The problem is one of information costs. The advantage of culture is that individuals don’t have to invent everything for themselves. We get adaptions like kayaks and blowguns on the cheap. The trouble is that a greed for such easy adaptive traditions easily leads to perpetuating maladaptions that somehow arise. Even though the capacities that give rise to culture and shape its content must be (or at least have been) adaptive on average, the behavior observed in any particular society at any particular time may reflect evolved maladaptions. Empirical evidence for the predicted maladaptions are not hard to find. [...] The spread of such maladaptive ideas is a predictable by-product of cultural transmission.”
“Selection acting on culture is an ultimate cause of human behavior just like natural selection acting on genes. In several of the chapters in part III we argue that much cultural variation exists at the group level. Different human groups have different norms and values, and the cultural transmission of these traits can cause such differences to persist for long periods. The norms and values that predominate in a group plausibly affect the probability that the group is successful, whether it survives, and whether it expands.”
At the time the authors wrote the book they’d been working on this stuff for 30 years. The book is a collection of articles they’ve written over the years (not always together), so naturally some of the stuff – I don’t know how much as I have not looked for it – is available elsewhere; if you don’t want to read the entire book but would like to know a little more about the topic, you can probably find some of the stuff covered here in the book via google scholar; for example chapter 2 (‘Why Does Culture Increase Human Adaptability?’) in the book is as far as I can tell simply a reprint of this paper (pdf) – go have a look if you want to know what the book is like. Here’s chapter 10 (‘Why People Punish Defectors – Weak Conformist Transmission can Stabilize Costly Enforcement of Norms in Cooperative Dilemmas’ (pdf)). In the first case they put all the math in the back; as illustrated in the second link they don’t always do that. I’d rather link to those papers than cover them in detail here – go have a look if you’re curious.
The coverage in the book is really nice so far, and if the quality of the material does not drop later on I’ll certainly feel tempted to give it five stars.
i. Great Fire of London (featured).
“The Great Fire of London was a major conflagration that swept through the central parts of the English city of London, from Sunday, 2 September to Wednesday, 5 September 1666. The fire gutted the medieval City of London inside the old Roman city wall. It threatened, but did not reach, the aristocratic district of Westminster, Charles II‘s Palace of Whitehall, and most of the suburban slums. It consumed 13,200 houses, 87 parish churches, St. Paul’s Cathedral and most of the buildings of the City authorities. It is estimated to have destroyed the homes of 70,000 of the City’s 80,000 inhabitants.”
Do note that even though this fire was a really big deal, the ‘70,000 out of 80,000’ number can be misleading, as many Londoners didn’t actually live in the City proper:
“By the late 17th century, the City proper—the area bounded by the City wall and the River Thames—was only a part of London, covering some 700.0 acres (2.833 km2; 1.0938 sq mi), and home to about 80,000 people, or one sixth of London’s inhabitants. The City was surrounded by a ring of inner suburbs, where most Londoners lived.”
I thought I should include a few observations related to how well people behaved in this terrible situation – humans are really wonderful sometimes, and of course the people affected by the fire did everything they could to stick together and help each other out:
“Order in the streets broke down as rumours arose of suspicious foreigners setting fires. The fears of the homeless focused on the French and Dutch, England‘s enemies in the ongoing Second Anglo-Dutch War; these substantial immigrant groups became victims of lynchings and street violence.” [...] [no, wait...]
“Suspicion soon arose in the threatened city that the fire was no accident. The swirling winds carried sparks and burning flakes long distances to lodge on thatched roofs and in wooden gutters, causing seemingly unrelated house fires to break out far from their source and giving rise to rumours that fresh fires were being set on purpose. Foreigners were immediately suspects because of the current Second Anglo-Dutch War. As fear and suspicion hardened into certainty on the Monday, reports circulated of imminent invasion, and of foreign undercover agents seen casting “fireballs” into houses, or caught with hand grenades or matches. There was a wave of street violence. William Taswell saw a mob loot the shop of a French painter and level it to the ground, and watched in horror as a blacksmith walked up to a Frenchman in the street and hit him over the head with an iron bar.
The fears of terrorism received an extra boost from the disruption of communications and news as facilities were devoured by the fire. The General Letter Office in Threadneedle Street, through which post for the entire country passed, burned down early on Monday morning. The London Gazette just managed to put out its Monday issue before the printer’s premises went up in flames (this issue contained mainly society gossip, with a small note about a fire that had broken out on Sunday morning and “which continues still with great violence”). The whole nation depended on these communications, and the void they left filled up with rumours. There were also religious alarms of renewed Gunpowder Plots. As suspicions rose to panic and collective paranoia on the Monday, both the Trained Bands and the Coldstream Guards focused less on fire fighting and more on rounding up foreigners, Catholics, and any odd-looking people, and arresting them or rescuing them from mobs, or both together.”
I didn’t really know what to think about this part:
“An example of the urge to identify scapegoats for the fire is the acceptance of the confession of a simple-minded French watchmaker, Robert Hubert, who claimed he was an agent of the Pope and had started the Great Fire in Westminster. He later changed his story to say that he had started the fire at the bakery in Pudding Lane. Hubert was convicted, despite some misgivings about his fitness to plead, and hanged at Tyburn on 28 September 1666. After his death, it became apparent that he had not arrived in London until two days after the fire started.”
Just one year before the fire, London had incidentally been hit by a plague outbreak which “is believed to have killed a sixth of London’s inhabitants, or 80,000 people”. Being a Londoner during the 1660s probably wasn’t a great deal of fun. On the other hand this disaster was actually not that big of a deal when compared to e.g. the 1556 Shaanxi earthquake.
ii. Sea (featured). I was considering reading an oceanography textbook a while back, but I decided against it and I read this article ‘instead’. Some interesting stuff in there. A few observations from the article:
“About 97.2 percent of the Earth’s water is found in the sea, some 1,360,000,000 cubic kilometres (330,000,000 cu mi) of salty water. Of the rest, 2.15 percent is accounted for by ice in glaciers, surface deposits and sea ice, and 0.65 percent by vapour and liquid fresh water in lakes, rivers, the ground and the air.”
“The water in the sea was once thought to come from the Earth’s volcanoes, starting 4 billion years ago, released by degassing from molten rock.(pp24–25) More recent work suggests that much of the Earth’s water may have come from comets.” (This stuff covers 70 percent of the planet and we still are not completely sure how it got to be here. I’m often amazed at how much stuff we know about the world, but very occasionally I also get amazed at the things we don’t know. This seems like the sort of thing we somehow ‘ought to know’..)
“An important characteristic of seawater is that it is salty. Salinity is usually measured in parts per thousand (expressed with the ‰ sign or “per mil”), and the open ocean has about 35 grams (1.2 oz) of solids per litre, a salinity of 35‰ (about 90% of the water in the ocean has between 34‰ and 35‰ salinity). [...] The constituents of table salt, sodium and chloride, make up about 85 percent of the solids in solution. [...] The salinity of a body of water varies with evaporation from its surface (increased by high temperatures, wind and wave motion), precipitation, the freezing or melting of sea ice, the melting of glaciers, the influx of fresh river water, and the mixing of bodies of water of different salinities.”
“Sea temperature depends on the amount of solar radiation falling on its surface. In the tropics, with the sun nearly overhead, the temperature of the surface layers can rise to over 30 °C (86 °F) while near the poles the temperature in equilibrium with the sea ice is about −2 °C (28 °F). There is a continuous circulation of water in the oceans. Warm surface currents cool as they move away from the tropics, and the water becomes denser and sinks. The cold water moves back towards the equator as a deep sea current, driven by changes in the temperature and density of the water, before eventually welling up again towards the surface. Deep seawater has a temperature between −2 °C (28 °F) and 5 °C (41 °F) in all parts of the globe.”
“The amount of light that penetrates the sea depends on the angle of the sun, the weather conditions and the turbidity of the water. Much light gets reflected at the surface, and red light gets absorbed in the top few metres. [...] There is insufficient light for photosynthesis and plant growth beyond a depth of about 200 metres (660 ft).”
“Over most of geologic time, the sea level has been higher than it is today.(p74) The main factor affecting sea level over time is the result of changes in the oceanic crust, with a downward trend expected to continue in the very long term. At the last glacial maximum, some 20,000 years ago, the sea level was 120 metres (390 ft) below its present-day level.” (this of course had some very interesting ecological effects – van der Geer et al. had some interesting observations on that topic)
“On her 68,890-nautical-mile (127,580 km) journey round the globe, HMS Challenger discovered about 4,700 new marine species, and made 492 deep sea soundings, 133 bottom dredges, 151 open water trawls and 263 serial water temperature observations.”
“Seaborne trade carries more than US $4 trillion worth of goods each year.”
“Many substances enter the sea as a result of human activities. Combustion products are transported in the air and deposited into the sea by precipitation. Industrial outflows and sewage contribute heavy metals, pesticides, PCBs, disinfectants, household cleaning products and other synthetic chemicals. These become concentrated in the surface film and in marine sediment, especially estuarine mud. The result of all this contamination is largely unknown because of the large number of substances involved and the lack of information on their biological effects. The heavy metals of greatest concern are copper, lead, mercury, cadmium and zinc which may be bio-accumulated by marine invertebrates. They are cumulative toxins and are passed up the food chain.
Much floating plastic rubbish does not biodegrade, instead disintegrating over time and eventually breaking down to the molecular level. Rigid plastics may float for years. In the centre of the Pacific gyre there is a permanent floating accumulation of mostly plastic waste and there is a similar garbage patch in the Atlantic. [...] Run-off of fertilisers from agricultural land is a major source of pollution in some areas and the discharge of raw sewage has a similar effect. The extra nutrients provided by these sources can cause excessive plant growth. Nitrogen is often the limiting factor in marine systems, and with added nitrogen, algal blooms and red tides can lower the oxygen level of the water and kill marine animals. Such events have created dead zones in the Baltic Sea and the Gulf of Mexico.”
iii. List of chemical compounds with unusual names. Technically this is not an article, but I decided to include it here anyway. A few examples from the list:
“Sonic hedgehog: A protein named after Sonic the Hedgehog.”
iv. Operation Proboi. When trying to make sense of e.g. the reactions of people living in the Baltic countries to Russia’s ‘current activities’ in the Ukraine, it probably helps to know stuff like this. 1949 isn’t that long ago – if my father had been born in Latvia he might have been one of the people in the photo.
v. Schrödinger equation. I recently started reading A. C. Phillips’ Introduction to Quantum Mechanics – chapter 2 deals with this topic. Due to the technical nature of the book I’m incidentally not sure to which extent I’ll cover the book here (or for that matter whether I’ll be able to finish it..) – if I do decide to cover it in some detail I’ll probably include relevant links to wikipedia along the way. The wiki has a lot of stuff on these topics, but textbooks are really helpful in terms of figuring out the order in which you should proceed.
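For reference, the equation chapter 2 of Phillips deals with is the time-dependent Schrödinger equation, which for a single particle of mass m in a potential V reads (this is the standard textbook form, not something specific to Phillips' presentation):

```latex
i\hbar \frac{\partial \Psi(\mathbf{r},t)}{\partial t}
  = -\frac{\hbar^{2}}{2m}\nabla^{2}\Psi(\mathbf{r},t) + V(\mathbf{r},t)\,\Psi(\mathbf{r},t)
```

For a time-independent potential, separation of variables gives the time-independent version, \(\hat{H}\psi = E\psi\), whose solutions are the stationary states most introductory treatments (including the wiki article linked above) spend their time on.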
vi. Happisburgh footprints. ‘A small step for man, …’
“The Happisburgh footprints were a set of fossilized hominin footprints that date to the early Pleistocene. They were discovered in May 2013 in a newly uncovered sediment layer on a beach at Happisburgh [...] in Norfolk, England, and were destroyed by the tide shortly afterwards. Results of research on the footprints were announced on 7 February 2014, and identified them as dating to more than 800,000 years ago, making them the oldest known hominin footprints outside Africa. Before the Happisburgh discovery, the oldest known footprints in Britain were at Uskmouth in South Wales, from the Mesolithic and carbon-dated to 4,600 BC.”
The fact that we found these footprints is awesome. The fact that we can tell that they are as old as they are is awesome. There’s a lot of awesome stuff going on here – Happisburgh also simply seems to be a gift that keeps on giving:
“Happisburgh has produced a number of significant archaeological finds over many years. As the shoreline is subject to severe coastal erosion, new material is constantly being exposed along the cliffs and on the beach. Prehistoric discoveries have been noted since 1820, when fishermen trawling oyster beds offshore found their nets had brought up teeth, bones, horns and antlers from elephants, rhinos, giant deer and other extinct species. [...]
In 2000, a black flint handaxe dating to between 600,000 and 800,000 years ago was found by a man walking on the beach. In 2012, for the television documentary Britain’s Secret Treasures, the handaxe was selected by a panel of experts from the British Museum and the Council for British Archaeology as the most important item on a list of fifty archaeological discoveries made by members of the public. Since its discovery, the palaeolithic history of Happisburgh has been the subject of the Ancient Human Occupation of Britain (AHOB) and Pathways to Ancient Britain (PAB) projects [...] Between 2005 and 2010 eighty palaeolithic flint tools, mostly cores, flakes and flake tools were excavated from the foreshore in sediment dating back to up to 950,000 years ago.”
vii. Keep (‘good article’).
“A keep (from the Middle English kype) is a type of fortified tower built within castles during the Middle Ages by European nobility. Scholars have debated the scope of the word keep, but usually consider it to refer to large towers in castles that were fortified residences, used as a refuge of last resort should the rest of the castle fall to an adversary. The first keeps were made of timber and formed a key part of the motte and bailey castles that emerged in Normandy and Anjou during the 10th century; the design spread to England as a result of the Norman invasion of 1066, and in turn spread into Wales during the second half of the 11th century and into Ireland in the 1170s. The Anglo-Normans and French rulers began to build stone keeps during the 10th and 11th centuries; these included Norman keeps, with a square or rectangular design, and circular shell keeps. Stone keeps carried considerable political as well as military importance and could take up to a decade to build.
During the 12th century new designs began to be introduced – in France, quatrefoil-shaped keeps were introduced, while in England polygonal towers were built. By the end of the century, French and English keep designs began to diverge: Philip II of France built a sequence of circular keeps as part of his bid to stamp his royal authority on his new territories, while in England castles were built that abandoned the use of keeps altogether. In Spain, keeps were increasingly incorporated into both Christian and Islamic castles, although in Germany tall towers called Bergfriede were preferred to keeps in the western fashion. In the second half of the 14th century there was a resurgence in the building of keeps. In France, the keep at Vincennes began a fashion for tall, heavily machicolated designs, a trend adopted in Spain most prominently through the Valladolid school of Spanish castle design. Meanwhile, in England tower keeps became popular amongst the most wealthy nobles: these large keeps, each uniquely designed, formed part of the grandest castles built during the period.
By the 16th century, however, keeps were slowly falling out of fashion as fortifications and residences. Many were destroyed between the 17th and 18th centuries in civil wars, or incorporated into gardens as an alternative to follies. During the 19th century, keeps became fashionable once again and in England and France a number were restored or redesigned by Gothic architects. Despite further damage to many French and Spanish keeps during the wars of the 20th century, keeps now form an important part of the tourist and heritage industry in Europe. [...]
“By the 15th century it was increasingly unusual for a lord to build both a keep and a large gatehouse at the same castle, and by the early 16th century the gatehouse had easily overtaken the keep as the more fashionable feature: indeed, almost no new keeps were built in England after this period. The classical Palladian style began to dominate European architecture during the 17th century, causing a further move away from the use of keeps. [...] From the 17th century onwards, some keeps were deliberately destroyed. In England, many were destroyed after the end of the Second English Civil War in 1649, when Parliament took steps to prevent another royalist uprising by slighting, or damaging, castles so as to prevent them from having any further military utility. Slighting was quite expensive and took considerable effort to carry out, so damage was usually done in the most cost efficient fashion with only selected walls being destroyed. Keeps were singled out for particular attention in this process because of their continuing political and cultural importance, and the prestige they lent their former royalist owners [...] There was some equivalent destruction of keeps in France in the 17th and 18th centuries [...] The Spanish Civil War and First and Second World Wars in the 20th century caused damage to many castle keeps across Europe; in particular, the famous keep at Coucy was destroyed by the German Army in 1917. By the late 20th century, however, the conservation of castle keeps formed part of government policy across France, England, Ireland and Spain. In the 21st century in England, most keeps are ruined and form part of the tourism and heritage industries, rather than being used as functioning buildings – the keep of Windsor Castle being a rare exception.
This is in contrast to the fate of bergfried towers in Germany, large numbers of which were restored as functional buildings in the late 19th and early 20th century, often as government offices or youth hostels, or the modern conversion of tower houses, which in many cases have become modernised domestic homes.”
viii. Battles of Khalkhyn Gol.
“The Battles of Khalkhyn Gol [...] constituted the decisive engagement of the undeclared Soviet–Japanese border conflicts fought among the Soviet Union, Mongolia and the Empire of Japan in 1939. The conflict was named after the river Khalkhyn Gol, which passes through the battlefield. In Japan, the decisive battle of the conflict is known as the Nomonhan Incident [...] after a nearby village on the border between Mongolia and Manchuria. The battles resulted in the defeat of the Japanese Sixth Army. [...]
While this engagement is little-known in the West, it played an important part in subsequent Japanese conduct in World War II. This defeat, together with other factors, moved the Imperial General Staff in Tokyo away from the policy of the North Strike Group favored by the Army, which wanted to seize Siberia as far as Lake Baikal for its resources. [...] Other factors included the signing of the Nazi-Soviet non-aggression pact, which deprived the Army of the basis of its war policy against the USSR. Nomonhan earned the Kwantung Army the displeasure of officials in Tokyo, not so much due to its defeat, but because it was initiated and escalated without direct authorization from the Japanese government. Politically, the defeat also shifted support to the South Strike Group, favored by the Navy, which wanted to seize the resources of Southeast Asia, especially the petroleum and mineral-rich Dutch East Indies. Two days after the Eastern Front of World War II broke out, the Japanese army and navy leaders adopted on 24 June 1941 a resolution “not intervening in German Soviet war for the time being”. In August 1941, Japan and the Soviet Union reaffirmed their neutrality pact. Since the European colonial powers were weakening and suffering early defeats in the war with Germany, coupled with their embargoes on Japan (especially of vital oil) in the second half of 1941, Japan’s focus was ultimately focused on the south, and led to its decision to launch the attack on Pearl Harbor, on 7 December that year.”
Note that there’s some disagreement in the reddit thread as to how important Khalkhin Gol really was – one commenter e.g. argues that: “Khalkhin Gol is overhyped as a factor in the Japanese decision for the southern plan.”
ix. Medical aspects, Hiroshima, Japan, 1946. Technically this is also not a wikipedia article, but multiple wikipedia articles link to it and it is a wikipedia link. The link is to a video featuring multiple people who were harmed by the first nuclear weapon used by humans in warfare. Extensive tissue damage, severe burns, scars – it’s worth having in mind that dying from cancer is not the only concern facing people who survive a nuclear blast. A few related links: a) How did cleanup in Nagasaki and Hiroshima proceed following the atom bombs? b) Minutes of the second meeting of the Target Committee Los Alamos, May 10-11, 1945. c) Keloid. d) Japan in the 1950s (pictures).
(Before I move on to talk about the book, I wanted to add a short unrelated personal note: I have been under a lot of stress over the last few weeks on account of stuff I really didn’t have many realistic ways to deal with (I tried various approaches and I think I was somewhat creative in my attempts, but they were mostly unsuccessful). The main stressor is now gone for the moment, so maybe I’ll blog more in the weeks to come than I have over the last few weeks. However as I’ve decided to participate in a Mensa event this weekend you should not expect me to update this blog between Friday evening and Sunday afternoon, as I assume I’ll not be spending much time near a computer during that time.)
“of all political ideals, that of making the people happy is perhaps the most dangerous one. It leads invariably to the attempt to impose our scale of ‘higher’ values upon others, in order to make them realize what seems to us of greatest importance for their happiness; in order, as it were, to save their souls. It leads to Utopianism and Romanticism. We all feel certain that everybody would be happy in the beautiful, the perfect community of our dreams. [...] the attempt to make heaven on earth invariably produces hell.”
Let’s just say the author of this book has not read Popper.
Here’s what I wrote on goodreads:
“I’m not rating this as it does not make sense to rate it. Some parts of the last few chapters deserve 0 stars. A few of the first chapters deserve three stars.
The first half of the book has a few problems but is generally of a reasonably high quality. I learned some new stuff there. The last chapters of the book are quite poor.
In general I’d probably if hard-pressed give it two stars as a sort of average rating of the material. But 2 stars would imply that I think the book is ‘okay’. And some parts of it really are not okay. However I also cannot justify giving the book one star.”
I wish it were this easy, but unfortunately it isn’t, so I find myself reading this stuff. It did not take much time to read the book, and the first half to two-thirds of it was reasonably interesting. I don’t regret reading the rest – it’s relevant for how to assess the remainder of the coverage, if nothing else, and the book is so short I never got to dwell on the bad stuff much. Popper’s quote is incidentally relevant because the author seems to think people reading the book care about what he thinks about politics and stuff like that. I don’t, and I tend to assume that I’m not the only one; most people reading Springer publications don’t do so because they’re looking for political coverage of the topics of the day. Anyway I see no need to talk about those aspects here. I also don’t want to talk much about some of the specific advice he gives, which I consider to be … (I don’t really have a good word for it). He’s a proponent of embracing religion because it may make you happier, and he’s also a fan of various forms of ‘positive thinking’-type psychological interventions. Dobson et al. covered that kind of stuff and there was also a bit on that kind of stuff in Leary & Hoyle, and I think Grinde is overestimating how large effects can be derived from such cognitive interventions – in an impact-evaluation framework the evidence for much of the advice he gives is simply either poor or non-existent, and adding a reference to one study or something like that to justify an approach is not going to convince me when review chapters on related topics have failed to do the same. The fact that he seems to systematically (deliberately?) overestimate the prevalence of various mental problems throughout the last part of the book, presumably because he assumes that doing this will make the political suggestions he’s heading towards more palatable, certainly does not help; it makes him look untrustworthy.
Which is unfortunate because other parts of the coverage are actually okay.
Enough about the bad stuff. I’d rather talk a little about some of the interesting stuff in the book.
Here’s part of the abstract from the beginning of the book:
“This book presents a model for what happiness is about—based on an evolutionary perspective. Briefly, the primary purpose of nervous systems is to direct an animal either towards opportunities or away from danger in order to help it survive and procreate. Three brain modules are engaged in this task: one for avoidance and two for attraction (seeking and consuming). While behaviour originally was based on reflexes, the brain gradually evolved into a more adaptive and flexible system based on positive and negative affects (good and bad feelings). The human capacity for happiness is presumably due to this whim of evolution—i.e. the advantages of having more flexibility in behavioural response. A variety of submodules have appeared, caring for a long list of pursuits, but recent studies suggest that they converge on shared neural circuits designed to generate positive and negative feelings. The brain functions involved in creating feelings, or affect, may collectively be referred to as mood modules. Happiness can be construed as the net output of these modules. Neural circuits tend to ‘expand’ (gain in strength and influence) upon frequent activation. This suggests the following strategy for improving mental health and enhancing happiness: To avoid excessive stimulation of negative modules, to use cognitive interference to enhance the ‘turn off’ function of these modules, and to exercise modules involved in positive feelings.”
He uses the term happiness in the book in a way such that both hedonic and eudaimonic elements are included. There are quite a few ways to break down what happiness ‘really is all about’ and philosophers and others have written about these things for thousands of years, but Grinde argues that “Whatever divisions are made, it all seems to come down to activation of nerve circuits designed for the purpose of creating positive affect”. It should also be noted that: “Our knowledge in neurobiology is not yet at the level where we can accurately delegate happiness to particular brain structures.” There are some structures we know to be involved and we know that neurotransmitters involved in these processes in humans and other mammals also serve similar functions in more primitive organisms/neural systems, but of course if you’re taking ‘a broad view’ of happiness the way the author does, demanding that we have the full picture is perhaps a bit much. On a related note:
“There has been considerable work aimed at defining the neuroanatomy of mood modules [...] The more ancient, presumably subconscious, neural circuitry involved is situated in the subcortical part of the brain—particularly in the thalamus, hypothalamus, amygdala and hippocampus. The cognitive extension appears to involve circuitry in the orbitofrontal, lateral prefrontal, insular and anterior cingulate parts of the cortex. The subcortical nerve circuits are probably essential for initiating positive and negative feelings, while the cortex enables both the particulars of how they are perceived, and a capacity to modulate their impact. [...] the two reward modules (seeking and liking) and the punishment module presumably evolved from simple neurological structures catering to approach and avoidance reflexes in primitive animals.”
The neurobiology stuff relevant to this discussion is covered in much more detail in Clark & Treisman, although that one of course also only really scratches the surface and very different aspects are emphasized there. As for the reflexes mentioned above, they are very useful in some contexts and can from one point of view (the author’s) be considered a forerunner to more complex emotions. Reflexes don’t however always work that well; in particular they don’t necessarily handle change and complexity very well: if different reactions are optimal in different contexts an organism may benefit from upgrading from reflexes only/mainly to more complex information feedback systems. You don’t need emotions for that, but emotions may be a part of such a complex feedback system. Instead of going from the ‘simple to the complex’, one might also ask why e.g. plants never developed a nervous system. This may add a bit to the understanding of why these things are the way they are – Grinde argues that:
“The reason why plants never obtained anything similar to a nervous system is presumably because they (or at least the more complex versions) are sedentary. They do not need to move around to find food”. Animals generally do, and even if they’re sedentary “their survival requires what we refer to as behaviour [...] which may be defined as movements required for survival and procreation. [...] The nerve system, and the concomitant use of muscles, was the evolutionary response to this requirement. In complex animals like vertebrates, the nervous system infiltrates all parts of the body. It connects with sense organs, to extract information from the environment, and effector organs (muscles), to orchestrate behaviour. The sense organs offer the organism information that is used to decide on an action, and the muscles set the action in motion. Between these two lies a processing capacity, which in advanced animals is referred to as a brain.”
I think it’s interesting in this context that a lot of what most humans probably consider to be ‘different stuff’ is really dealt with by the same brain structures:
“the three mood modules appear to cater to all sorts of pleasures and pains [...] the ups and downs associated with the emotional response to sociopsychological events rely on much the same neural circuitry that underlies the typical pain and pleasures caused by physical stimuli. For example, experiencing envy of another person’s success activates pain-related circuitry, whereas experiencing delight at someone else’s misfortune (what is referred to as schadenfreude), activates reward-related neural circuits [...] Similarly, feeling excluded or being treated unfairly activates pain-related neural regions [...] On the other hand, positive social feelings, such as getting a good reputation, fairness and being cooperative, offers rewards similar to those one gets from desirable food [...] And the same reward-related brain regions are activated when having sex or enjoying music [...] Apparently, the ancient reward and punishment circuits of the brain have simply been co-opted for whatever novel needs that arose in the evolutionary lineage leading toward humans.”
Some parts of the brain are more sensitive to stimuli than others, although we tend to hover around a set point of happiness. The set point is one we may be able to slowly change over time, and for most people it seems to be ‘positive’ in the sense that we tend to be relatively content when negative feelings are not activated – ‘a default state of contentment’, as Grinde terms it. I thought it was interesting that humans seem to be more sensitive to big negative emotional stimuli than to other stimuli (most positive stimuli tend to have relatively short-lived effects), “presumably because a single threat can have a far more drastic effect on genetic fitness (e.g., leading to death), than can a single fortunate event.” As in the case of many other complex traits, there aren’t any major-impact ‘happiness genes’; although genes matter the differences they cause are most likely due to the combined effects of a large number of small-impact genes and their interactions with the environment. This should hardly be surprising.
Thinking about the evolutionary context underlying our emotional responses to various stimuli the way Grinde does in this book of course also leads to questions about whether the environment in which humans live today is well-suited for the task of making us happy and related questions such as how we might best go about trying to optimize our environment in order to live a happy life. When looked at from a certain point of view modern humans live lives which are a bit like the lives of zoo animals; the environment we inhabit is very different from the one in which our ancestors evolved, and zoo animals that are not well taken care of tend to be unhappy and engage in various problematic behaviours. I’m not sure I want to go into that discussion in too much detail, but it’s certainly the case that whereas some aspects of modern life have the potential to increase our happiness, e.g. by dealing with stimuli that tend to make us unhappy (hunger, pain, disease), other aspects probably have the opposite effect (e.g. weaker social bonds). This should not be new to the readers of this blog either as I think I’ve talked about this stuff before; I’ve certainly read stuff which has made me think along similar lines in the past.
There’s some stuff covered in the book which I have not talked about, but I figure I’ll stop here. I really would not recommend the book, but parts of it were actually reasonably interesting.
It is occasionally slightly annoying that you can’t tell what she’s pointing at (a recurring problem in these lectures), but aside from this it’s a nice lecture – and this is a rather minor problem.
Most of this stuff was review to me, but it’s a nice overview lecture in case you have never had a closer look at this topic. There are some sound issues along the way, but otherwise the coverage is quite nice.
This one is technically not a lecture as much as a conversation, but I figured I should cover it somewhere and this may be as good a place as any. If you’re going to watch both this one and the lecture above, you should know that the order I posted them in is not random – the lectures overlap a little (Ed Copeland is one of those “lots of people [who] are playing with that idea” which Crawford mentions towards the end of her lecture) and I think it makes most sense to watch Crawford’s lecture before you watch Brady and Ed Copeland’s discussion if you’re going to watch both.
Incidentally the fact that this is not a lecture does not in my opinion subtract from the coverage provided in the video – if anything I think it may well add. Instead of a lecturer talking to hundreds of people simply following a script without really knowing whether they understand what he’s talking about due to lack of feedback, here you have one expert talking to a very curious individual who asks quite a few questions along the way and makes sure the ideas presented are explained and clarified whenever explanation or clarification is needed. Of course the standard lecture does have its merits as well, but I really like these ‘longer-than-average’ Sixty Symbols conversation videos.
Again I’m not sure I’d categorize this as a lecture, but it’s close enough for me to include it here. Unfortunately if you’re not an at least reasonably strong player who knows some basic concepts I assume some of the stuff covered may well be beyond you – I’ve seen it remarked before in the comments to some of Sielecki’s videos that there are other channels which are better suited for new/weak players – and I’m not sure how many people might find the video interesting, but I figured I might as well include it anyway. If comments like “this move is terrible because black loses control over the f5 square – which means his position is basically lost” (he doesn’t actually say this in the video, but it’s the kind of thing he might say) would be hard for you to understand (‘why would I care about the f5 square?’ ‘Why is it lost? What are you talking about? The position looks fine to me!’ …or perhaps even: ‘the f5 square? What’s that?’), this video may not be for you (in the latter case it most certainly isn’t).
This book is a collection of ’1085 aphorisms and other aphoristically brief writings’, as Hollingdale puts it in his introduction. It’s basically a collection of random observations and remarks made by Lichtenberg over the years. I’d liked some of Lichtenberg’s quotes I’d read in the past, so I figured I’d give the book a try. It was sort of okay, but I actually do not hold this book, or the author, in very high regard; my opinion of the author definitely went down while I was reading this work. I often disagree with Lichtenberg, and from my reading of him I get the sense that I’d have considered him a person who thought much too highly of himself, to the point that he’d simply be the kind of person I’d find completely insufferable – for example there are quite a few quotes in the book about what separates The Genius from Ordinary People, or something along those lines, and one is not for one second in doubt as to which category Lichtenberg considers himself to belong to, despite how trivial and formulaic/simplistic most of these specific quotes/observations are. He has absolutely terrible taste in books: “Most of our writers possess, I do not say insufficient genius, but insufficient sense to write a Robinson Crusoe.” To which I say: ‘Aargh!’ That one is one of the worst books I’ve ever read. What’s even worse in that specific case is of course the fact that he seems to have taken Defoe’s tale to reflect reality – as he puts it in a different quote elsewhere, “Oh if only we could return to the age of the Patriarchs … or go to happy Tahiti, where … there is perfect human equality and you have the right to eat your enemies and to be eaten by them.”
But of course many such objections/problems are arguably just values dissonance-related, and the fact that the author thought it was okay to write some of the things he did the way he did should not make you think he was ‘stupid and wrong’ as much as it should make you think about what such quotes may tell you about the time/setting at which point the quotes were written, if anything. I don’t think I’d like spending time with this guy, to put it mildly, but I never will anyway so that’s hardly relevant. A relatively small number of good quotes keep you reading, but actually I am not sure you need to read the book in order to find most of his ‘quite good’ quotes (some of which I have blogged in the past, in the quotes posts). It should be noted that although some of the various ‘not great’ quotes do add something in that they ‘make you think’ and/or perhaps provide context and increase your understanding of the setting, others really do not add much.
I have added a few quotes from the book below. I have limited my coverage to quotes which I perceive to be of a reasonably high quality and which I have not already blogged – or at least I have tried to avoid repeats. For other Lichtenberg quotes covered here on the blog in the past, follow this link.
“Most propagators of a faith defend their propositions, not because they are convinced of their truth, but because they once asserted that they were true.”
“The greatest things in the world are brought about by other things which we count as nothing: little causes we overlook but which at length accumulate.”
“Reasons are often and for the most part only expositions of pretensions designed to give a coloring of legitimacy and rationality to something we would have done in any case …”
“You can take the first book you lay your hands on and with your eyes closed point to any line and say: A book could be written about this. When you open your eyes you will seldom find you are deceived.”
“Devised with a maximum of erudition and a minimum of common sense.” (I’m saving that one! Another one along the same lines: “It requires no especially great talent to write in such a way that another will be very hard put to it to understand what you have written.”)
“There are people who sometimes boast of how frank and candid they are: they ought to reflect, however, that frankness and candor must proceed from the nature of one’s character, or even those who would otherwise esteem it highly must regard it as a piece of insolence.”
“It makes a great difference by what path we come to a knowledge of certain things. If we begin in our youth with metaphysics and religion we can easily proceed along a series of rational conclusions that will lead us to the immortality of the soul. Not every other path will lead to this, at least not quite so easily.” (‘at least not quite so easily’ was a nice touch – good luck finding a path that’ll lead you there if you think the notion of a ‘soul’ is, well… But the main point stands.)
“I believe [...] that most people know men better than they themselves are aware of, and that they make great use of their knowledge in everyday life …” (On a related note, “More often than we think, people notice things we believe we have artfully concealed from them.” See also this.)
“To make clever people believe we are what we are not is in most instances harder than really to become what we want to seem to be.”
“Honest unaffected distrust of human abilities under all circumstances is the surest sign of strength of mind.”
“Sometimes we know a person better than we can say, or at least than we do say.”
“He who is enamoured of himself will at least have the advantage of being inconvenienced by few rivals.” (this one was funny, considering some of the other quotes in this book.)
“For the loss of those we have loved there is no alleviation but time and carefully and rationally chosen diversions such as will not cause our heart to reproach us.”
“Nothing is more inimical to the progress of science than the belief that we know what we do not yet know.” (Compare with quote xiii here – there he thought the greatest impediment to progress in science was ‘the desire to see it take place too quickly’…)
“Nothing makes one old so quickly as the ever-present thought that one is growing older.”
I’m somewhat conflicted about whether or not to blog fiction here on this blog, but I felt like blogging this one. Stefan recommended Fforde to me (well, sort of) and I figured I’d give him a try. I read this book quite fast and I have already decided to read at the very least the rest of the first Thursday Next series, i.e. three more books. The books are about Thursday Next – yep, that’s her name. The name actually makes a lot more sense than some of the names in this book, as there’s a perfectly reasonable explanation for (at least the first part of) it: “I was born on a Thursday, hence the name.” I gave the book five stars on goodreads. I have tried very hard to avoid spoilers in this post.
The book is all over the place and some would probably categorize this kind of stuff as ‘childish’ or something along those lines. I don’t give a crap, I enjoyed reading this stuff. It’s a combination of alternate history, fantasy, and some other stuff. The book takes place in some sort of alternate-reality 1985 where England is still at war with Russia over Crimea, as the Crimean War never ended (given recent developments, that part was sort of, …). It is also a world where Wales is an independent country, and has been since 1854, when The People’s Republic of Wales declared its independence. A world where people keep pet dodos in their homes, and where the occasional so-called ‘temporal distortions’ cause problems significant enough that an Office for Special Temporal Stability and a ChronoGuard have been established to deal with them. As you might infer from some of the quotes I’ve included in the post below, books matter quite a bit more to the people inhabiting this world than they do to people inhabiting ours.
A few samples from the book:
“A flick through the London telephone directory would yield about four thousand John Miltons, two thousand William Blakes, a thousand or so Samuel Coleridges, five hundred Percy Shelleys, the same of Wordsworth and Keats, and a handful of Drydens. Such mass name-changing could have problems in law enforcement. Following an incident in a pub where the assailant, victim, witness, landlord, arresting officer and judge had all been called Alfred Tennyson, a law had been passed compelling each namesake to carry a registration number tattooed behind the ear. It hadn’t been well received” [Something went wrong when he wrote this paragraph - 'name-changings' don't have problems, people have problems, and 'name-changings' is hardly great English. He should have used the verb 'cause' or a similar verb instead. I liked the rest ('spirit') of the quote enough to include it here, though I figured I should point out that I'm aware his choice of words here was not optimal so that people don't get the impression I'll miss stuff like this. I was considering leaving out 'the offending sentence' from the quote, but I decided against it because it seemed dishonest.]
“Your post was held by Jim Crometty. He was shot dead in the old town during a bookbuy that went wrong.” [Again I find myself questioning his word choice: Is not the expression 'book deal' more natural? But yet again I find myself thinking that this is just a minor detail of little importance, even if I did notice it. It's his first novel after all, and satisfying pedants should hardly be the primary objective of an author trying to get published for the first time...] [...] big business and the huge amounts of cash in the sale and distribution of literary works had attracted a bigger criminal element. I knew of at least four London LiteraTecs who had died in the line of duty.
‘It’s becoming more violent out there. It’s not like it is in the movies. Did you hear about the surrealist riot in Chichester last night?’
‘I certainly did,’ he replied. ‘I can see Swindon involved in similar disturbances before too long. The art college nearly had a riot on its hands last year when the governors dismissed a lecturer who had been secretly encouraging students to embrace abstract expressionism. They wanted him charged under the Interpretation of the Visual Medium Act. He fled to Russia, I think.’”
“‘Imagine Martin Chuzzlewit without Chuzzlewit!’ he exclaimed earnestly, running through all the possibilities. ‘The book would end within a chapter. Can you imagine the other characters sitting around, waiting for a lead character who never appears?” [The book has quite a few of these kinds of obscure references and I love them! Here's another example from the book: "'How long since I died?' he asked abruptly. 'Over a hundred and fifty years.' 'Really? Tell me, how did the revolution in France turn out?' 'It's a little early to tell.'" As mentioned he also has a lot of fun with names. Let's incidentally try not to get into the question of how that person could be having that conversation despite having been dead for more than a hundred and fifty years here - there's a perfectly reasonable in-universe explanation...]
“He lowered the binoculars and sighed. It was a stinking, lousy, lonely job. He had been working in the ChronoGuard for almost forty years Standard Earth Time. In logged work time he was 209. In his own personal physiological time he was barely 28. His children were older than him and his wife was in a nursing home. [...] It wasn’t a difficult job; it just took a long time. He had mended a similar rent in spacetime that had opened up in Weybridge’s municipal park just between the floral clock and the bandstand. The job itself had taken ten minutes; he had simply walked in and stuck a tennis ball across the hole while outside seven months flashed by – seven months on double pay plus privileges, thank you very much.”
“‘Hall and Marston – both Elizabethan satirists – were firmly of the belief that Bacon was the true author of “Venus and Adonis” and “The Rape of Lucrece”. I have a pamphlet here which goes into the matter further. More details are available at our monthly gatherings; we used to meet at the town hall but the radical wing of the “New Marlovians” fire-bombed us last week. I don’t know where we will meet next. But if I can take your name and number, we can be in touch.’ [...] The Baconians were quite mad but for the most part harmless. Their purpose in life was to prove that Francis Bacon and not William Shakespeare had penned the greatest plays in the English language. Bacon, they believed, had not been given the recognition that he rightfully deserved and they campaigned tirelessly to redress this supposed injustice.”
I read this book over the weekend. I gave it three stars on goodreads but seriously considered giving it four stars.
The book is from 2005, which means that some parts of it – particularly, I assume, those related to diagnostics (genotyping etc.) – are presumably a bit dated. Progress in vaccine development may also have occurred in the meantime; I wouldn’t know, but some of the authors assumed such developments were likely in their coverage. Most of the stuff covered is, I think, still as relevant today as it was when it was written.
The book is a Springer publication and contains 10 chapters on various topics related to bioterrorism and specific infectious disease agents which may be used for that purpose. Most chapters deal with specific agents or classes of agents which have the potential to be used in a bioterrorism setting, and only the last two chapters deal with more general topics – the first one of these addresses the bioterrorism setting more generally than do the previous chapters (“When the agent used in a biological attack is known, response to such an attack is considerably simplified. The first eight chapters of this text deal with agent-specific concerns and strategies for dealing with infections due to the intentional release of these agents. A larger problem arises when the identity of an agent is not known. [...] in some cases, an attack may be threatened or suspected, but it may remain unclear as to whether such an attack has actually occurred. Moreover, it may be unclear whether casualties are due to a biological agent, a chemical agent, or even a naturally occurring infectious disease process or toxic exposure [...] This chapter provides a framework for dealing with outbreaks of unknown origin and etiology. Furthermore, it addresses several related concerns and topics not covered elsewhere in this text.”), whereas the last one very briefly addresses ‘The Economics of Planning and Preparing for Bioterrorism’.
An implicit assumption I’d made before reading this book is that in a bioterrorism setting we’d know that bioterrorism was taking place – it would be obvious because of all those sick people. But it is far from clear that this would always be the case. Most of the agents have incubation periods measured in days or weeks, and even after symptoms present it may be difficult to realize what’s going on, because these diseases are not commonly seen in clinical practice and may be confused with other, more common conditions. An aerosolized agent introduced into an environment with a large number of people could infect a lot of people who’d not display symptoms until much later, and it would be difficult to figure out what was going on. A long incubation period incidentally doesn’t necessarily mean the disease isn’t severe; it may well mean that once you develop symptoms severe enough to make you seek medical attention, you’re already screwed. An example:
“Symptoms and physical findings are nonspecific in the beginning of [anthrax] infection. The clinical presentation is usually biphasic. The initial stage begins with the onset of myalgia, malaise, fatigue, nonproductive cough, occasional sensation of retrosternal pressure, and fever. [...] anthrax symptoms insidiously mimic flu-like symptoms in the beginning [...] In some patients, a brief period of apparent recovery follows. Other patients may progress directly to the second, fulminant stage of illness. The second stage develops suddenly with the onset of acute respiratory distress, hypoxemia, and cyanosis. Death sometimes occurs within hours [...] The disease progression from the first manifestation of symptoms until death appears to have a considerable range from a few hours [...] to 11 days”
While reading the book, especially in the beginning, I was a bit surprised that more effort was not put into covering the topics briefly addressed in chapter 9 (the ‘unknown etiology’ chapter above), but actually the coverage that was chosen matches quite well what they state they set out to do. The book is written for health professionals: “this volume will provide health care workers with up-to-date important reviews by world-renowned experts on infectious and biological agents that could be used for bioterrorism”. Mostly the book is about the infectious agents – how people affected by them may present, and what can and should be done in terms of treatment/monitoring/isolation etc. – so it makes sense that this work does not include a lot of material on what might be termed more general risk-management aspects, response modelling, coordination problems and so on; there is a little bit on that stuff in the last chapters, but not much. I’d be very surprised if there are not other books/works published which deal with the risk- and decision-management aspects of this kind of stuff in much more detail (especially given the existence of books like this one).
The fact that the book is written for health professionals (“Emergency physicians, Public Health personnel, Internists, Infectious Disease specialists, Microbiologists, Critical care specialists, and even General practitioners”) means that if you’re not a health professional, some of this stuff will be hard to follow. Patients will not be described as having double vision (they’ll have diplopia), and they won’t be described as ‘sweating a lot’ (they’ll be diaphoretic). The authors assume that when they tell you that the suggested treatment may result in hemolytic anemia you’ll know what that implies, and that you know what G-CSF stands for in the context of adjunctive melioidosis treatment. Usage of abbreviations/acronyms which are never explained is incidentally part of the reason why this book would never get five stars from me; using acronyms without telling you even once what the letters stand for is a capital offence in my book. Even if you don’t know much about medicine, you’ll learn about the exposure routes of the various substances/diseases (is person-to-person transmission something I should be worried about? Is it airborne?), symptoms (to some extent – you’ll understand some of the words without looking up the medical terms), prognosis in case of exposure, the existence (or lack thereof) of a vaccine/treatment, etc. You’ll also learn a little about the history of some of the substances in question; some of them have been used in warfare before, and extensive research was conducted on quite a few of them during the Cold War, when both the US and the Soviet Union worked on weaponizing some of these substances.
The 8 chapters on specific biological agents/diseases deal with anthrax, plague, tularemia, melioidosis and glanders, smallpox, hemorrhagic fever viruses, botulism, and ricin. None of these things are nice, and you can certainly justify covering them in a book like this. The US Centers for Disease Control and Prevention classifies 6 biological agents as ‘Category A’ biowarfare agents, which is the highest risk category and includes agents which “can be easily disseminated or transmitted from person-to-person, can cause high mortality, and have the potential for major public health impact. This category includes agents like smallpox, anthrax, plague, botulinum toxin, and Ebola hemorrhagic fever.” All category A agents are covered in this book, as are a few category B agents. The fact that agents such as ricin (“A dose the size of a few grains of table salt can kill an adult human”) are included in category B, rather than category A, provides a bit of context as to how awful the agents belonging in category A are. Many of the agents are not just terrible because they kill a lot of people; some of them will also cause really severe and prolonged morbidity in people who survive. A few examples:
“Patients who require mechanical ventilation, respectively, need average periods of 58 days (type A) and 26 days (type B) for weaning (Hughes et al., 1981). Recovery may not begin for as long as 100 days (Colerbatch et al., 1989).” (Botulism. You may not be able to breathe on your own for a month or two.)
“Smallpox is disfiguring. Older texts suggested removing mirrors from patients’ rooms (Dixon, 1962).”
“Deafness is a very common and often permanent result of LASV [Lassa virus] infection, occurring in approximately 30% of patients (Cummins et al., 1990a).”
“Following parenteral treatment, prolonged oral antibiotics are needed to prevent relapse [...] The proportion of patients who relapse can be reduced to less than 10%, and probably less than 5%, if appropriate antibiotics are given for 20 weeks.” (They’re talking about melioidosis victims. You may need to treat these people for months to prevent them from relapsing, and some will relapse even if you do. Melioidosis isn’t unique in this respect: “all persons exposed to a bioterrorist incident involving anthrax should be administered one of these [post-exposure prophylaxis] regimens at the earliest possible opportunity. Adherence to the antibiotic prophylaxis program must be strict, as disease can result at any point within 30–60 days after exposure if antibiotics are stopped.”)
Even the class A agents may to some extent be said to belong on a spectrum. Anthrax doesn’t really transmit from person to person, so the total death toll would mostly be limited to people directly exposed to the agent during an attack (‘mostly’ because e.g. people handling the bodies may be exposed to anthrax spores as well). Pneumonic plague is, well, different. Sometimes the very high virulence of an agent may actually be an implicit argument against using it as a biological weapon in some contexts: “F. tularensis is less desirable than other organisms as a weapon because it does not have a stable spore phase and is difficult to handle without infecting those processing and dispersing the pathogen (Cunha, 2002).”
Especially disconcerting in the context of an attack is the idea of widespread panic following the release of one of these agents, causing health services to become overextended and unable to help actual victims – they do address this topic in the book:
“An announced or threatened bioterrorism attack can provoke fear, uncertainty, and anxiety in the population, resulting in overwhelming numbers of patients seeking medical evaluation for unexplained symptoms, and demanding antidotes for feared exposure. Such a scenario could also follow a covert release when the resulting epidemic is characterized as the consequence of a bioterror attack. Symptoms due to anxiety and autonomic arousal, and side effects of postexposure antibiotic prophylaxis may suggest prodromal disease due to biological agent exposure, and pose challenges in differential diagnosis. This “behavioral contagion” is best prevented by risk communication from health and government authorities that includes a realistic assessment of the risk of exposure, information about the resulting disease, and what to do and whom to contact for suspected exposure. Risk communication must be timely, accurate, consistent, and well coordinated.”
One thing I should perhaps note in this context is that anthrax is not the only one of these agents which ‘for practical purposes’ does not transmit from person to person (e.g., “Only two well-documented instances of person-to-person spread are recorded in the [melioidosis] literature”), and that some of those that do transmit actually require quite a bit of exposure to transfer successfully – the ‘everybody who stands next to someone with the Incurable Cough of Death disease and gets coughed at will die horribly within 24 hours and we have no cure’ situation will never happen, because such diseases don’t exist. On a related note, the faster a disease kills/incapacitates you, the less time the infected individual has to actively transfer it to other people; so even severe and fast-acting diseases will often be self-limiting to some extent. Relatedly, “With the exception of smallpox, pneumonic plague, and, to a lesser degree, certain viral hemorrhagic fevers, the agents in the Centers for Disease Control and Prevention’s (CDC’s) categories A and B [...] are not contagious via the respiratory route.”
I could cover this book in a lot of detail, but I decided to limit my coverage to talking about the stuff above and then add a few remarks about smallpox and plague here, because I figure these two sort of deserve to be covered when dealing with a book like this.
First, plague. This is not just a disease of the past:
“Improved sanitation, hygiene, and modern disease control methods have, since the early 20th century, steadily diminished the impact of plague on public health, to the point that an average of 2,500 cases is now reported annually [...] The plague bacillus is, however, entrenched in rodent populations in scattered foci on all inhabited continents except Australia [...] and eliminating these natural transmission cycles is unfeasible. Furthermore, although treatment with antimicrobials has reduced the case fatality ratio of bubonic plague to 10% or less, the fatality ratio for pneumonic plague remains high. A review of 420 reported plague cases in the US in the period 1949–2000 identified a total of 55 cases of plague pneumonia, of which 22 (40.0%) were fatal”
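The fatality figure cited in that last sentence can be re-derived directly from the numbers given in the passage – a quick back-of-the-envelope check, using only the quoted figures:

```python
# Figures from the quoted passage: reported US plague cases, 1949-2000.
total_cases = 420       # all reported plague cases in the period
pneumonic_cases = 55    # cases of plague pneumonia among these
pneumonic_deaths = 22   # fatal pneumonic cases

# Case fatality ratio for plague pneumonia.
cfr = pneumonic_deaths / pneumonic_cases
print(f"Pneumonic plague case fatality ratio: {cfr:.1%}")  # 40.0%, as stated
```

Note how much worse this is than the ≤10% fatality ratio quoted for treated bubonic plague.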
Note that even though the annual number of cases is relatively low, you don’t have to go back to Medieval times to find a rather severe outbreak costing millions of lives:
“The third (Modern) pandemic began in southwestern China in the mid-19th century, struck Hong Kong in 1894, and was soon carried by rat-infested steamships to port cities on all inhabited continents, including several in the United States (US) (Link, 1955; Pollitzer, 1954). By 1930, the third pandemic had caused more than 26 million cases and 12 million deaths.”
This is a terrible disease, so of course people have thought about weaponizing it:
“Biological warfare research programs begun by the Soviet Union (USSR) and the US during the Second World War intensified during the Cold War, and in the 1960s both nations had active programs to “weaponize” Y. pestis. In 1970, a World Health Organization (WHO) expert committee on biological warfare warned of the dangers of plague as a weapon, noting that the causative agent was highly infective, that it could be easily grown in large quantities and stored for later use, and that it could be dispersed in a form relatively resistant to desiccation and other adverse environmental conditions [...] Models developed by this expert committee predicted that the intentional release of 50 kg of aerosolized Y. pestis over a city of 5 million would, in its primary effects, cause 150,000 cases of pneumonic plague and 36,000 deaths. It was further postulated that, without adequate precautions, an initial outbreak of pneumonic plague involving 50% of a population could result in infection of 90% of the rest of the population in 20–30 days and could cause a case fatality ratio of 60–70%. The work of this committee provided a basis for the 1972 international Biological Weapons and Toxins Convention prohibiting biological weapons development and maintenance, and that went into effect in 1975 [...] It is now known that, despite signing this accord, the USSR continued an aggressive clandestine program of research and development that had begun decades earlier, stockpiling battle-ready plague weapons (Alibek, 1999). The Soviets prepared Y. pestis in liquid and dry forms as aerosols to be released by bomblets, and plague was considered by them as one of the most important strategic weapons in their arsenal. [...] It is assumed that a terrorist attack would most likely use a Y. pestis aerosol, possibly resulting in large numbers of severe and fatal primary and secondary pneumonic plague cases. 
Especially given plague’s notoriety, even a limited event would likely cause public panic, create large numbers of the “worried-well,” foster irrational evasive behavior, and quickly place an overwhelming stress on medical and other emergency response elements working to save lives and bring about control of its spread”
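For context, the WHO committee’s headline numbers above imply the following rates (again just re-deriving ratios from the figures quoted in the passage):

```python
# WHO expert committee model (figures from the quoted passage):
# 50 kg of aerosolized Y. pestis released over a city of 5 million.
population = 5_000_000
primary_cases = 150_000   # predicted cases of pneumonic plague
primary_deaths = 36_000   # predicted deaths

attack_rate = primary_cases / population  # fraction of the city infected in the primary wave
cfr = primary_deaths / primary_cases      # implied case fatality ratio

print(f"Primary attack rate: {attack_rate:.1%}")  # 3.0%
print(f"Implied CFR: {cfr:.1%}")                  # 24.0%
```

Note that these are only the model’s primary effects; the committee’s 90%-infection-in-20–30-days scenario concerns the secondary spread that would follow without adequate precautions.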
“Several simulations of a plague attack have been conducted in the US [...] these have involved all levels of government, numerous agencies, and a wide range of first responders [...] Two of these [...] were based on coordinated national and local responses to simulated plague attacks. During these simulations, critical deficiencies in emergency response became obvious, including the following: problems in leadership, authority, and decision-making; difficulties in prioritization and distribution of scarce resources; failures to share information; and overwhelmed health care facilities and staff. The need to formulate in advance sound principles of disease containment, and the administrative and legal authority to carry them out without creating confusing new government procedures were glaringly obvious [...] In the US, several “sniffing devices” to detect aerosolized microbial pathogens have been developed and tested. The Department of Homeland Security and the Environmental Detection Agency have deployed a PCR-based detection system named BioWatch to continuously monitor filtered air in major cities for Y. pestis and other select agents.”
One of the ‘interesting’ aspects is how the effect of such an attack might be magnified by a simultaneous attack with conventional weapons targeting the likely first responders. Imagine the bombing of local hospitals combined with a plague outbreak, widespread panic, and a lack of coordination at the higher decision-making levels – societal collapse combined with pneumonic plague seems like a combination that could really elevate the body count.
Okay, lastly: Smallpox. Before going into the details I have to express my opinion on this matter: If a person works towards releasing smallpox in order to infect other human beings (and so reintroduce the disease), that person is in my book an enemy of the human race who should be shot on sight. No trial, just kill him (or her).
“Smallpox [...] is one of the six pathogens considered a serious threat for biological terrorism [...] Smallpox has several attributes that make it a potential threat. It can be grown in large amounts. It spreads via the respiratory route. It has a 30% mortality rate. [...] In summary, variola has several virologic attributes that make it attractive as a terrorist weapon. It is easy to grow. It can be lyophilized to protect it from heat. It can be aerosolized. Its genome is large and theoretically amenable to modification.”
“The clinical illness and fatality rate roughly parallel the density of the skin lesions. When lesions are sparse, cases are unlikely to die and probably are not efficient transmitters. However, their mobility may allow them to have enough social interaction to result in transmission [...] As lesions become denser and confluent, the fatality rate increases, the amount of virus in the respiratory secretions increases, and patients are more infectious [...] Hemorrhagic smallpox has a fatality rate of nearly 100%, and patients are highly infectious. About 1–5% of unvaccinated patients with V. major get hemorrhagic smallpox [...] They are usually very sick, usually unable to get out of bed and thus may not transmit efficiently. The clinical presentation (from mild to discrete to confluent to hemorrhagic) is a function of the host response, not the virus. The clinical types do not breed true, in that transmission from any patient can give rise to any of the clinical presentations, and the virus is the same.”
“The individual lesions undergo a slow and predictable evolution. [...] By about the 3rd day, the macules become papular, and the papules progress to fluid-filled vesicles by about the 5th day. These vesicles become large, hard, tense pustules by about the seventh or eighth day. [...] The pustules are “in” the skin, not just “on” the skin. They are deep-seated [...] About the 8th or 9th day, the lesions begin to dry up and umbilicate. By about 2 weeks after the onset of the rash, lesions are scabbing. About 3 weeks after onset, the scabs begin to separate, leaving pitted and depigmented scars. The causes of death from smallpox are not well elucidated. Massive viral toxemia probably causes a sepsis cascade. Cardiovascular shock may be part of the agonal syndrome. In hemorrhagic cases, disseminated intravascular coagulation probably occurs. Antibacterial agents are not helpful. Loss of fluid and proteins from the exudative rash probably contribute to death. Modern medical care might reduce the fatality rate, but there is no way to prove that contention [...] There is no proven therapy. No data exist to show whether modern supportive care could reduce the death rate.”
“When smallpox is known to be circulating, the clinical presentation and characteristic rash make diagnosis fairly easy. Diagnosis can be difficult when smallpox is not high on the index of suspicion. Initial cases after a covert bioterrorist attack will probably be missed, at least until the 4th or 5th day of the rash. Transmission may have already taken place by this time. [...] Smallpox does not ordinarily spread rapidly. Transmission requires prolonged face-to-face contact, such as that which occurs among family members or caregivers. Transmission is most efficient when the index patient is less than 6 feet from the recipient, so that the large-droplet respiratory secretions can be inhaled [...] Since virus is not secreted from the respiratory tract until the end of the prodrome, patients are usually bedridden when they become infectious and usually do not transmit the disease widely. [...] No historical evidence exists that smallpox was an effective bioweapon [...] what has been written into historical texts and some medical journals may have been fueled more by fear than plausibility.”
“Smallpox virus currently exists legally in only two laboratories: the CDC in Atlanta and at the State Research Center for Virology and Biotechnology in the Novosibirsk region of Russia. Possession of smallpox virus in any place other than these two laboratories is illegal by international convention. A former Deputy Director of the Soviet Union’s bioweapons program has written that, during the cold war, their laboratories produced smallpox in large amounts, and made efforts to adapt it for loading into intercontinental missiles (Alibek, 1999). Scientists defecting from the former Soviet Union, or leaving Russia seeking work in other nations, may have illegally carried stocks of the virus to “rogue” nations (Alibek, 1999; Gellman, 2002; Mangold et al., 1998; Warrick, 2002). There is no publicly accessible proof that such defectors actually transported smallpox out of Russia, but no way of disproving that they did. [...] Terrorists with access to a modern virus laboratory might genetically modify smallpox in ways similar to the published manipulations of ectromelia [mousepox] [...] Genetically altered strains might pose problems of transmission; alteration of pathogenicity might have unknown effects on the transmissibility of the virus. Experienced intelligence observers feel that terrorists would avoid creating a strain with enhanced virulence. Such strains could devastate developing countries with poor public health systems, and a widespread outbreak would quickly spread to such countries (Johnson et al., 2003). Natural smallpox could similarly boomerang. Terrorists with the ability to manufacture it would realize that an effective attack might cause widespread disease in nations harboring their colleagues. Many such nations have poor public health systems and little vaccine, and would be more devastated than the nation initially attacked”
“The United States stopped routine vaccination in 1972. It could be resumed if the threat of smallpox becomes considerable. Only in a scenario where smallpox becomes widespread would it be wise to resume mass vaccination. [...] The current CDC smallpox response strategy is based on pre-exposure vaccination of carefully screened members of first response teams, epidemiologic response teams, and clinical response teams at designated facilities. [...] Readiness to control an outbreak resulting from an attack entails a high index of suspicion among clinicians, a good network of diagnostic laboratory capabilities, and a plan for use of surveillance and isolation techniques to quickly contain outbreaks. [...] Resumption of widespread vaccination is dangerous and unnecessary.”
Vaccination is dangerous not because it may cause smallpox to reappear, but because the vaccine itself carries certain risks. It’s important to note that Variola major is not the active ingredient in the vaccines used – rather, the vaccinia virus is used, a related virus belonging to the same family. There’s more on this stuff here.
As implied by the goodreads rating, I liked this book.
I finished the book.
I did not have a lot of nice things to say about the second half of it on goodreads. I felt it would be a bad idea to blog the book right after I’d finished it (which I occasionally do), because I was actually feeling angry at the author at that point; I hope that, having now distanced myself a bit from it, I’m better able to evaluate the book.
The author is a classics professor writing about science. I must say that I have by now had some bad experiences with authors with backgrounds in the humanities writing about science and scientific history – reading this book at one point reminded me of the experience I had reading the Engelhardt & Jensen book. It also reminded me of this comic – I briefly had a ‘hmmmmm… – is the reason I have a hard time following some of this stuff the simple one that the author is a fool who doesn’t know what he’s talking about?’-experience. It’s probably not fair to judge the book as harshly as I did in my goodreads review (or to link to that comic), and this guy is a hell of a lot smarter than Engelhardt and Jensen are (which should not surprise you – classicists are smart), but I frankly felt during the second half of this work that the author was wasting my time, and I get angry when people do that. He spends inordinate amounts of time discussing trivial points which to me seem only marginally related to the topic at hand – he’d argue they’re not ‘marginally related’, of course, but I’d argue that that’s at least in part because he’s picked the wrong title for his book (see also the review to which I linked in the previous post). There’s a lot of stuff in the second half about things like historiography and ontology, discussions about the proper truth concept to apply in this setting, and things like that. Somewhat technical stuff, but certainly readable. Even so, he spends lots of words and time on trivial and irrelevant points, and there are a couple of chapters where I’ve basically engaged in extensive fisking in the margin of the book. I don’t really want to cover all that stuff here.
I’ve added some observations from the second half of the book below, as well as some critical remarks. I’ve tried in this post to limit my coverage to the reasonably good stuff in there; if you get a good impression of the book based on the material included in this post I have to caution you that I did not think the book was very good. If you want to read the book because you’re curious to know more about ‘the wisdom of the ancients’, I’ll remind you that on the topic of science at least there simply is no such thing:
“Science is special because there is no ancient wisdom. The ancients were fools, by and large. I mean no disrespect, but if you wish to design a rifle by Aristotelian principles, or treat an illness via the Galenic system, you are a fool, following foolishness.”
Lehoux would, I am sure, disagree somewhat with that assessment (that the ancients were fools), in that he argues throughout the book that the ancients actually often could be argued to be reasonably justified in believing many of the things that they did. I’m not sure to what extent I agree with that assessment, but the argument he makes is not without some merit.
“That magnets attract because of sympathy had long been, and would long continue to be, the standard explanation for their efficacy. That they can be impeded by garlic is brought in to complete the pairing of forces, since strongly sympathetic things are generally also strongly antipathetic with respect to other objects. [...] in both Plutarch and Ptolemy, garlic-magnets are being invoked as a familiar example to fill out the range of the powers of the two forces. Sympathy and antipathy, the author is saying, are common — just look at all the examples [...] goat’s blood as an active substance is another trope of the sympathy-antipathy argument. [...] washing the magnet in goat’s blood, a substance antipathetic to the kind of thing that robs magnets of their power, negates the original antipathetic power of the garlic, and so restores the magnets. [...] we should remember that — even for the eccentric empiricist — the test only becomes necessary under the artificial conditions I have created in this chapter. We know the falsity of garlic-magnets so immediately that no test [feels necessary] [...] We know exactly where the disproof lies — in experience — and we know that so powerfully as to simply leave it at that. The proof that it is false is empirical. It may be a strange kind of empirical argument that never needs to come to the lab, but it is still empirical for all that. On careful analysis we can argue that this empiricism is indirect [...] Our experiences of magnets, and our experiences of garlic, are quietly but very firmly mediated by our understanding of magnets and our understanding of garlic, just as Plutarch’s experiences of those things were mediated by his own understandings. But this is exactly where we hit the big epistemological snag: our argument against the garlic-magnet antipathy is no stronger, and more importantly no more or less empirical, than Plutarch’s argument for it. [...]
None of the experience claims in this chapter are disingenuous. Neither we nor Plutarch are avoiding a crucial test out of fear, credulity, or duplicity. We simply don’t need to get our hands dirty. This is in part because the idea of the test becomes problematized only when we realize that there are conflicting claims resting on identical evidential bases — only then does a crucial test even suggest itself. Otherwise, we simply have an epistemological blind spot. At the same time, we recognize (as Plutarch did) how useful and reliable our classification systems are, and so even as the challenge is raised, we remain pretty confident, deep down, about what would happen to the magnet in our kitchen. The generalized appeal to experience has a lot of force, and it still has the power to trick us into thinking that the so-called “empirically obvious” is more properly empirical than it is just obvious. [...]
An important part of the point of this chapter is methodological. I have taken as my starting point a question put best by Bas van Fraassen: “Is there any rational way I could come to entertain, seriously, the belief that things are some way that I now classify as absurd?” I have then tried to frame a way of understanding how we can deal with the many apparently — or even transparently — ridiculous claims of premodern science, and it is this: We should take them seriously at face value (within their own contexts). Indeed, they have the exact same epistemological foundations as many of our own beliefs about how the world works (within our own context).”
“On the ancient understanding, astrology covers a lot more ground than a modern newspaper horoscope does. It can account for everything from an individual’s personality quirks and dispositions to large-scale political and social events, to racial characteristics, crop yields, plagues, storms, and earthquakes. Its predictive and explanatory ranges include some of what is covered by the modern disciplines of psychology, economics, sociology, medicine, meteorology, biology, epidemiology, seismology, and more. [...] Ancient astrology [...] aspires to be [...] personal, precise, and specific. It often claims that it can tell someone exactly what they are going to do, when they are going to do it, and why. It is a very powerful tool indeed. So powerful, in fact, that astrology may not leave people much room to make what they would see as their own decisions. On a strong reading of the power of the stars over human affairs, it may be the case that individuals do not have what could be considered to be free will. Accordingly, a strict determinism seems to have been associated quite commonly with astrology in antiquity.”
“Seneca [...] cites the multiplicity of astrological causes as leading to uncertainty about the future and inaccuracy of prediction. Where opponents of astrology were fond of parading famous mistaken predictions, Seneca preempts that move by admitting that mistakes not only can be made, but must sometimes be made. However, these are mistakes of interpretation only, and this raises an important point: we may not have complete predictive command of all the myriad effects of the stars and their combinations, but the effects are there nonetheless. Where in Ptolemy and Pliny the effects were moderated by external (i.e., nonastrological) causes, Seneca is saying that the internal effects are all-important, but impossible to control exhaustively. [...] Astrology is, in the ancient discourses, both highly rational and eminently empirical. It is surprising how much evidence there was for it, and how well it sustained itself in the face of objections [...] Defenders of astrology often wielded formidable arguments that need to be taken very seriously if we are to fully understand the roles of astrology in the worlds in which it operates. The fact is that most ancient thinkers who talk about it seem to think that astrology really did work, and this for very good reasons.” [Lehoux goes into a lot of detail about this stuff, but I decided against covering it in too much detail here.]
I did not have a lot of problems with the stuff covered so far, but this point in the coverage is where I start getting annoyed at the author, so I won’t cover much more of it. Here’s an example of the kind of stuff he covers in the later chapters:
“The pessimistic induction has many minor variants in its exact wording, but all accounts are agreed on the basic argument: if you look at the history of the sciences, you find many instances of successful theories that turn out to have been completely wrong. This means that the success of our current scientific theories is no grounds for supposing that those theories are right. [...]
In induction, examples are collected to prove a general point, and in this case we conclude, from the fact that wrong theories have often been successful in the past, that our own successful theories may well be wrong too.”
He talks a lot about this kind of stuff in the book. Stuff like this as well. Not much in those parts about what the Romans knew, aside from reiteration and contextualization of stuff covered earlier on. A problem he’s concerned with – and presumably one of the factors which motivated him to write the book – is how we might convince ourselves that our models of the world are better than those of the ancients, who also thought they had a pretty good idea about what was going on in the world; he argues this is very difficult. He also talks about Kuhn and stuff like that. As mentioned I don’t want to cover the stuff from the book I don’t like in too much detail here, and I added the quotes in the two paragraphs above mostly because they relate to a point (a few points?) that I felt compelled to include in the coverage, because it is important to me to underscore this stuff – not least because the author seems to be completely oblivious to it:
Science should in my opinion be full of people making mistakes and getting things wrong. This is not a condition to be avoided, this is a desirable state of affairs.
This is because scientists should be proven wrong when they are wrong. And it is because scientists should risk being proven wrong. Looking for errors, problems, mistakes – this is part of the job description.
The fact that scientists are proven wrong is not a problem, it is a consequence of the fact that scientific discovery is taking place. When scientists find out that they’ve been wrong about something, this is good news. It means we’ve learned something we didn’t know.
This line of thinking seems from my reading of Lehoux to be unfamiliar to him – the desirability of discovering the ways we’re wrong doesn’t really seem to enter the picture. Somehow Lehoux seems to think that the fact that scientists may be proven wrong later on is an argument which should make us feel less secure about our models of the world. I think this is a very wrongheaded way to think about these things, and I’d actually if anything argue the opposite – precisely because our theories might be proven wrong we have reason to feel secure in our convictions, because theories which can be proven wrong contain more relevant information about the world (‘are better’) than theories which can’t, and because theories which might in principle be proven wrong but have not yet been proven wrong despite our best attempts should be placed pretty high up there in the hierarchy of beliefs. We should feel far less secure in our convictions if there were no risk they might be proven wrong.
Without errors being continually identified and mistakes corrected we’re not learning anything new, and science is all about learning new things about the world. Science shouldn’t be thought of as being about building some big fancy building and protecting it against attacks at all costs, walking around hoping we got everything just right and that there’ll be no problems with water in the basement. Philosophers of science and historians of science in my limited experience seem often to subscribe to a model like that, implicitly, presumably in part due to the methodological differences between philosophy and science – they often seem to want to talk about the risk of getting water in the basement. I think it’s much better to not worry too much about that and instead think about science in terms of unsophisticated cavemen walking around with big clubs or hammers, smashing them repeatedly into the walls of the buildings and observing which parts remain standing, in order to figure out which building materials manage the continual assaults best.
Lastly just to reiterate: Despite being occasionally interesting this book is not worth your time.
“It is not that the Romans knew only a little and were puzzled about a whole lot, [rather] they thought — just as we do — that they had a pretty good idea of what was going on in the world.”
“The main theme of this book [...] is about what it means to understand a world [...] If we look to the Roman sources, we find an exceedingly rich and complex tangle — every bit as rich and complex as our own, but very, very different. Sometimes startlingly so: different entities, different laws, different tools and motivations for studying the natural world. So, too, different ways of organizing knowledge, and sometimes different ways of understanding even the most basic levels of sensory experience. This book is an inquiry into how and why the Romans saw things differently than we do, or to put it more pointedly, how and why they saw different things when they looked at the world.”
Here’s one (brief) review of the book – I disagree with the last sentence and I would not have given it 4 stars based on what I’ve read so far, but aside from these objections I cannot find much in there with which I disagree.
I’ve read half of the book at this point. If not for the fact that I hadn’t updated the blog in a while, I probably would not have covered this book before I’d read it all – I’m not really sure it ‘deserves’ two blogposts. Incidentally this might be a good reminder that what you read here on the blog is only a small fraction of what I read in order to write these posts – this post is based on 130 pages of academic writing, however much of it ends up in my coverage here, and reading 130 pages actually takes a while. If you want to update a book blog frequently you need to either read some pretty interesting stuff, or you need to read a lot (preferably both, presumably).
The book is sort of okay but nothing too special. In my opinion the author uses a lot of words to say not very much, but some of the points he does make are really rather interesting, which is why I’m still reading. The world looked very different to people who lived in Rome around the time of Cicero, and a lot of the ways in which their perceptions of the world differed from ours may well surprise the modern reader, as will surely some of the ways in which specific beliefs about the world were justified – as pointed out in the book, “relatively innocuous-looking assumptions about how phenomena are related, and how those relationships enable possibilities for interaction, can have major effects on how the world itself looks to be put together, and on what kinds of things are possible or impossible, patently obvious or patently ridiculous, in that world.”
The book’s coverage centers around the writings of people such as Cicero, Lucretius, Galen, Ptolemy, and Seneca, and it’s most certainly not a book about what the average guy on the street knew and thought about stuff during Roman times – such a book would be exceedingly hard to write.
Parts of the book are hard to cover here in detail due to what might be termed the contextual nature of the arguments presented, and I’ve actually decided against covering a few things which I’d sort of planned on covering here on account of not wanting to have to bother with explaining terms in the quotes with other quotes, but I have added what I believe to be a few interesting observations from the book below:
“when Cicero finally comes to laying out the details of the specific laws of the ideal state, we find the mapping out of the duties of people to gods as the first order of business. Not just any gods, but public gods, for the public good. Thus at the outset, Cicero establishes not the existence of the gods, for he thinks that is a given, but the parameters and responsibilities of the state religion [...] what emerges repeatedly is an insistence that the maintenance of the official cult is absolutely central not just to the maintenance of the state as it stands, but [...] to the maintenance of justice itself, and of all human society. [...] Only when we come to know nature — perhaps better, Nature — can we fully understand religio, our duty to the gods, and the core of the best possible state. [...] careful observation of higher-order aspects of nature (its beauty, its order) leads inevitably to proper ethical behavior, both between people, and between people and the gods. [...] today, it is often taken as definitional that ancient science begins where ancient theology ends, and many treatments of ancient political philosophy tend to downplay the foundational roles of the gods, even though natural-law theory is saturated with theology for most of its history. [...] the gods are never very far away in ancient science.”
“the big schools of philosophy that had developed in the Hellenistic period were in large part [...] dedicated to ethics as the primary focus of the school’s teaching. Many schools saw their physics and their logic as deeply connected with, and in some cases primarily as instruments in the pursuit of, ethical ends. [...] Looking to Seneca’s works on nature, we find ethics front and center.”
“Ancient optics is not about light, it is about vision. The modern idea that visual information is carried in the first instance by the action and movement of light has become so ingrained for us that it is often difficult to set this assumption aside and to allow some room for the very foreign mechanisms of sight in ancient optics [...] In antiquity light played some very different roles in seeing, and not every account of seeing seems to have even felt the need to invoke or explain the role of light in any detail. Perhaps the oddness of ancient light is seen most clearly in Aristotle, for whom light was nothing more than the actualization of the inherent (but passive) tendency of air to be transparent. That is: air (or water) is potentially, but not always, see-through. At night, the potential transparency is unactivated, and the air is accordingly nontransparent, so we cannot see through it. Light is just the actualization of the air’s potential transparency, which thus allows visual forms to pass.
This is a very foreign idea, indeed.
Turning from physics to mathematical optics, we find virtually universal agreement on a different model. Unlike the modern model, where the eye takes in light and thence information, for ancient mathematical opticians the eye instead sends out some kind of radiative visual force that contacts objects in the world and somehow then passes information back to the eye. The details of this radiation vary from writer to writer, but the basic model is one of extramission out from the eye, rather than intromission into the eye.”
As I’ve now finished the book this will be the last post in the series.
The way I read this book was different from the way I usually read books; most books I read, I’ll read in one go over a relatively brief amount of time. This one I certainly didn’t read in one go, and I had breaks from it lasting quite a significant amount of time. I’m not really sure why I read it that way, but one obvious contributing factor is that this book is hard to read and takes a lot of mental firepower to handle.
I gave the book five stars on goodreads and added it to my list of favourites. Here’s the review I wrote on that site:
“This review got to be rather longer than usual, but I guess I don’t have a hard time justifying that on account of the nature of the book.
To get this over with from the beginning: If you have never read a medical textbook before, don’t bother with this one. You’ll learn nothing and you’ll never finish it. Unless you speak more or less fluent medical textbook you’ll have to either look up a lot of new words, or you’ll read a lot of words you’ll not understand. The fact that the book is somewhat inaccessible was the most important factor pulling me towards 4 stars. I decided to let it have 5 stars anyway in the end – given how many hours I was willing to spend on this stuff I really couldn’t justify giving it any other rating, although there are also a few other small problems which I might have punished in other contexts.
If you know enough to benefit from reading this book it’s a great book, even though I’d prefer if future doctors – which would presumably make up most of the potential readers who ‘know enough to benefit from reading it’ – read a newer version of it. But in order to read it and get something out of it, you need some basic knowledge about stuff like microbiology, histology, immunology, endocrinology, oncology, (/bio-)chemistry, genetics, pharmacology, etc. And I don’t mean basic knowledge like what you’d get from a couple of wikipedia articles – having read textbooks and/or watched medical lectures on some of these topics is a must.
On top of relevant background knowledge you need to be willing to commit at the very least something like 50 hours of spare time to reading this thing. I spent significantly more time than that, and most people probably need to do that as well if they want to actually understand most of this stuff – you certainly do if you want some of it to actually stick.
There probably exist quite a few similar medical textbooks which are more up to date and which may provide slightly better coverage. But I’m not going to read those books. I read this one. And I’m glad I did. Don’t interpret the 5 stars to mean that this is the best book on this topic – I have no way of knowing whether or not it is, though I assume it isn’t. But it is a highly informative and well-written book which covers a lot of ground and from which I learned a lot.”
The ‘covers a lot of ground’ thing can’t be overemphasized – this book has 23 chapters, mainly organized in terms of organ systems. It gives you an overview of how things work in general and some of the ‘classical’ ways in which they may go wrong. It does this very well, and despite being the kind of book where one chapter will cover heart disease and another chapter will cover pulmonary disease, they’re very good at ‘connecting the dots’ – that disorders are often interrelated, e.g. that a failing heart will cause problems with your lungs, is not something they neglect to deal with. Indeed the ‘big-picture view’ the book provides made me aware of multiple connections between ‘human subsystems’ which I’d been completely unaware of, and learning about these kinds of relationships was quite fascinating.
Another fascinating aspect was how much stuff there is to know about these things. It’s quite common for me to read books whose coverage overlaps to some extent with what I’ve read in other books – I’ll often prefer to read such books (though I also take steps to avoid limiting my exposure to new stuff too much) because the information they cover will be easier to relate and connect to other stuff up there in my head. One chapter (or a few pages) in one book may cover material which another book spent hundreds of pages dealing with. While reading this book I very often realized that I’d covered a specific topic somewhere else, which gave me a different perspective; ‘this topic is covered in more detail in Hall‘, ‘see Sperling for much more on this topic’, ‘see also Kolonin et al.’, ‘see also Eckel‘, ‘see Holmes et al.‘, and so on and so forth – I’ve added a lot of those kinds of comments along the way. While reading this book you sort of read the big-picture version, and at various points you’re likely to come across places where you can sort of ‘zoom in’, on account of knowing a lot about that topic. What was most amazing to me in this context was how many places I couldn’t zoom in. There’s such a lot of stuff to know and learn.
I won’t cover the last chapters in much detail. The chapters I’ve read over the last few days covered disorders of the hypothalamus and pituitary gland (chapter 19), thyroid disease (chapter 20), disorders of the adrenal cortex (chapter 21), and disorders of the female (chapter 22) and male (chapter 23) reproductive tracts. A few of these chapters I probably paid a bit more attention to than I would have done if I had not read Sperling (see link above) in one of my ‘breaks’ from this book. One reason for this is that Sperling – or rather ‘Tuomi and Perheentupa’, as they were the ones who wrote that specific chapter in the book – spent some time and effort dealing with various combinations of autoimmune conditions involving type 1 diabetes as one of the components, which makes the chapter on thyroid disease in particular more relevant than it otherwise would have been. Tuomi and Perheentupa covered this stuff because: “Two fundamentally different autoimmune polyendocrine syndromes (APSs) are generally recognized, and type 1 diabetes mellitus is common in both.” One would think the risk of my developing another autoimmune condition on top of my diabetes would be low, and it sort of is (incidentally, it would most likely be significantly higher if I were female); but a key observation here is that other autoimmune conditions usually show up later in life than does the diabetes, so the higher risk I face of developing e.g. Graves’ disease or Hashimoto’s disease (both are covered in chapter 20 of the Pathophysiology text) is not yet really accounted for, and the fact that I haven’t developed any of them yet is not very relevant to my risk of developing these conditions later in life (what is relevant is that I developed diabetes very early in my life – this actually makes it less likely that other organ systems will get hit as well, though it does not make the risk go away).
I’ll include a quote from the relevant chapter from Sperling below as I’m aware this was some of the stuff I did not cover when I read that book and so people may be completely in the dark about what I’m talking about:
“All combinations of adrenocortical insufficiency, thyroid disease (Graves’ disease, goitrous or atrophic thyroiditis), type 1 diabetes, celiac disease, hypogonadism, pernicious anemia (vitamin B12 malabsorption), vitiligo, alopecia, myasthenia gravis, and the collagen vascular diseases, which include at least one of the said endocrine diseases but exclude hypoparathyroidism and mucocutaneous candidiasis, are collectively called APS type 2. The co-occurrence of these diseases is presumably the result of a common genetic background. No exact incidence or prevalence figures are available, and they would probably vary with the population concerned. APS-2 is more common than APS-1, with a general prevalence of at least 1 per 10,000. Females are affected two to four times more often than men. The highest incidence of the components is in the third to the fifth decade of life, but a substantial number of patients develop the first component disease, usually type 1 diabetes, already in the first and second decade”
Note that the uncertain, yet seemingly low, prevalence estimate is easy to misunderstand. I haven’t looked at these numbers recently and I’m not going to go look for them now, but say type 1 diabetes (T1DM) affects 1 out of 300 people. Now combine the ‘at least 1 in 10,000’ estimate with that one and observe that if roughly 2 out of 3 patients with APS-2 have T1DM, the risk that a type 1 diabetic will develop another autoimmune condition is already measured in percent. These numbers incidentally downplay the actual risk – I decided to include a few examples from Sperling to illustrate. It makes sense to start with Graves’ disease as I already mentioned that one: “Graves’ disease has been reported in 9.3% of patients with type 1 diabetes (76).” Also, “Hypothyroid or hyperthyroid AITD [AutoImmune Thyroid Disease] has been observed in 10–24% of patients with type 1 diabetes” – uncertain figures with big error bars, but not risks of no import. Especially not when considering that: “In addition, between 5% and 25% of type 1 diabetic patients without clinical thyroid disease have antibodies to thyroid microsomal antigens (TMAb) or thyroid peroxidase (TPOAb)”. Although combination forms with multiple autoimmune disorders are quite rare, they’re not actually that rare (‘not rare enough…’) when you take into account that T1DM is also, well, rare.
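To make the arithmetic above explicit – keep in mind that the 1-in-300 T1DM prevalence is my own illustrative assumption, while the other two numbers come from the Sperling quote:

```python
# Back-of-the-envelope calculation of P(APS-2 | T1DM), using the quoted
# 'at least 1 in 10,000' APS-2 prevalence, the quoted 'roughly 2 out of 3
# APS-2 patients have T1DM' figure, and an assumed (illustrative)
# 1-in-300 T1DM prevalence.
p_aps2 = 1 / 10_000        # APS-2 prevalence in the general population
p_t1dm_given_aps2 = 2 / 3  # share of APS-2 patients who have T1DM
p_t1dm = 1 / 300           # assumed T1DM prevalence (illustrative)

# Bayes: P(APS-2 | T1DM) = P(APS-2 and T1DM) / P(T1DM)
p_aps2_given_t1dm = (p_aps2 * p_t1dm_given_aps2) / p_t1dm
print(f"{p_aps2_given_t1dm:.1%}")  # → 2.0%
```

So even with the conservative ‘at least 1 in 10,000’ figure, roughly 2% of type 1 diabetics would be expected to develop APS-2 – already ‘measured in percent’, as noted above.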
The stuff above was mostly just an aside explaining why I perhaps cared a bit more about the stuff covered in these last chapters than I otherwise would have, but hopefully it was an informative aside. I should note that not all of the ‘more interesting’ stuff was more interesting on account of dealing with some elevated risk of ugly things happening to me; other parts of the last chapters were ‘particularly relevant’ because of other stuff, like the role cortisol plays in circadian variation in insulin resistance and the role ACTH excretion plays in hypoglycemia. But I think it would take too much time and effort to go into the details of these things in this post, so I’ll cut it short here.
I’ve finished the book. I ended up at three stars on goodreads; the book has less to say about ‘the really interesting stuff’ than I’d have liked, and although part of the reason for this is that the research simply didn’t exist at the time of publication, it was still a little disappointing. Funder provides some ideas in the second half about where to go looking for interesting questions and their answers in this area, but actual answers were few in number when he published the book. I have been wondering along the way how much of this stuff has been looked at since he wrote the book – I don’t know, but I’m getting a bit curious and I may have a closer look at a later point in time.
So anyway I ended up liking the book overall somewhat less than I thought I would while reading the first chapters. It is interesting, but many of the answers people reading a book like this are looking for in all likelihood aren’t in there. Much of the book, especially the second half, is centered around a simple signalling model used to conceptualize the various elements of the personality judgment process. The model (he calls it RAM – the Realistic Accuracy Model) is quite similar to standard signalling models known from e.g. microeconomics; you have a sender and a receiver, and you have noise as well as various variables (relevance, availability, detection, and utilization) that impact the information exchange process. It should be noted that the question being asked is not whether information gets from A to B, but whether a correct inference about the sender is made by the receiver, and one might also observe that it is not critical that the sender deliberately supplies the information in question to the receiver; we often send signals about our behavioural patterns and traits that other people might use to get a better understanding of us without being aware that we’re doing it (‘extroverts talk in a louder voice than introverts’ – yep, in case you didn’t know, they seem to do that…). He talks a lot about the model and tries to frame relevant questions so that they fit somewhere into it, but he doesn’t do any actual work with the model; it’s just a way to present his way of thinking about these things (i.e. there are no derivations of equilibria under given conditions or stuff like that).
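Since Funder never formalizes the model, here’s my own toy sketch of how the four RAM stages might be wired together. Treating the stages as independent multiplicative filters is my reading, not something derived in the book, and all the names and numbers below are hypothetical:

```python
# Toy sketch of Funder's Realistic Accuracy Model (RAM), treating each
# stage as an independent filter that attenuates the probability of an
# accurate trait judgment. The multiplicative structure and all numbers
# here are hypothetical illustrations, not taken from the book.

def p_accurate_judgment(p_relevant: float, p_available: float,
                        p_detected: float, p_utilized: float) -> float:
    """Probability that a behavioural cue survives all four RAM stages:
    the behaviour is relevant to the trait, available to the judge,
    detected by the judge, and utilized (interpreted) correctly."""
    return p_relevant * p_available * p_detected * p_utilized

# A close friend sees far more trait-relevant behaviour than the grocery
# store clerk (higher availability), so the same cue supports a better
# judgment overall:
friend = p_accurate_judgment(0.8, 0.9, 0.7, 0.6)
clerk = p_accurate_judgment(0.8, 0.1, 0.7, 0.6)
print(f"friend: {friend:.3f}, clerk: {clerk:.3f}")
```

The point of the multiplicative structure is simply that a failure at any single stage (a distracted judge, an ambiguous behaviour, a misinterpretation) drags the whole chain down, which matches the way the variables are described in the book.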
A few words should perhaps be included here about the variables mentioned above. Relevance relates to whether or not behaviour is relevant to personality perception. Some behaviours are more relevant to specific trait judgments than are others; you learn more about someone’s courage by observing whether or not he enters a burning building to save a child than you do by observing how he behaves in the grocery store. Situational factors play a key role here. Availability relates to whether or not the information provided becomes available to the observer. If the observer/judge is not around when trait-relevant behaviour takes place, he or she cannot use that information. On a related note, different people have different relationships with other people, and so have access to different types of information. A close friend for instance has more (relevant) information available to judge you from than does the local grocery store clerk. In general more information is available to people who have known a person for a longer amount of time and have observed the individual in a wider variety of social contexts; there’s both a quantity and a quality aspect to familiarity. As for the next variable, not all available information gets picked up on by the receiver, and so this is where the detection stage becomes relevant. Even though a friend you’ve known for a while has seen you in a lot of different contexts, that doesn’t mean much if the friend, say, didn’t pay attention. 
Traits we possess ourselves (or at least believe ourselves to possess) are incidentally often easier for us to detect in others; a person who prides himself on his intelligence may be more likely to look for cues of intelligence provided by the sender during a social interaction than may the person who doesn’t think of himself as being particularly intelligent, but rather prides himself on his conscientiousness (I think he mentions this in the book, but stuff like this was certainly covered in Leary & Hoyle. Note that the reverse is true as well: “Research has shown that traits that are central to a person’s self-concept or are seen by the individual as ‘‘personally relevant’’ tend to be easier for others to detect”). The last of the variables, utilization, relates to the receiver’s interpretation of the sender’s signal/observed behaviour; people often have relevant information available to them which they detect, yet misinterpret. Two major problems people encounter when trying to utilize the information provided to them which Funder mentions in this context are that the relevance of a given behaviour depends on the situational context (the exact same behaviour may in one situation be highly relevant to a given trait and in another situation be completely irrelevant), and that any given behaviour may be affected by/motivated by more than one trait at the same time. Something that doesn’t help is that personality traits vary in how easy they are for others to observe (“traits like extraversion and agreeableness are the ones most likely to become visible in overt social behavior” – this dimension is rather important when it comes to the effects related to getting to know people better: “As Paunonen (1989) showed, even less visible traits become more judgable when the judge and the target are closely acquainted. To know somebody longer is not necessarily to learn more and more about how extraverted they are. 
With longer acquaintance, more and more subtle aspects of personality slowly become visible.”). Naturally an implication of the model is that “any efforts to improve accuracy, to be effective, must have an effect on relevance, availability, detection, or utilization.”
Having talked about the general model, Funder then proceeds to talk about moderator variables – variables that affect accuracy. These can be subdivided into four classes: accuracy is affected by properties of the judge, properties of the target (the person who’s sending information), properties of the trait being judged, and properties of the information supplied. As for the judge, three variables are brought up: “The capacity to detect and to utilize available cues correctly can be divided into three components: knowledge, ability, and motivation.” Other new variables are introduced when talking about the other moderator variables. Various forms of variable interactions are also covered later in the book (to take one example, people are generally poor at judging people they don’t like – this relates to the judge-target interaction term). Much of the discussion is somewhat theoretical because the research had yet to be performed when Funder wrote the book, but the discussion is helpful even so.
I’ve added a few more observations from the book below.
“Social psychologists have frequently observed that female friends spend much of their time discussing emotions and relationships, whereas male friends are more likely to engage in work or play activities or to discuss less personal matters such as sports or politics [...] If this observation is combined with Andersen’s (1984) findings, that conversations that reveal more personal information yield better information on which to base personality judgments, the following prediction can be derived: Well-acquainted women ought to judge each other with more accuracy than do well-acquainted men. Data relevant to this prediction are surprisingly rare, but a sex difference in the predicted direction has [been] reported by Harackiewicz and DePaulo (1982) as well as in a recent study by Vogt and Colvin (1998). The general (albeit small) superiority of women over men in the detection of emotional states is a long-standing staple of the literature”
“At a very basic level, there is a particularly powerful reason to expect one’s own personality to be particularly difficult to see: It is always there. Kolar, Funder, and Colvin (1996) dubbed this the ‘‘fish and water effect,’’ after the cliché that fish do not know that they are wet because they are always surrounded by water. In a similar fashion, the same personality traits that are most obvious to others might become nearly invisible to ourselves, except under the most unusual circumstances. [...] In their experimental study, Kolar et al. obtained personality judgments from subjects’ close acquaintances as well as from the subjects themselves. In nearly every comparison, the acquaintances’ judgments manifested better predictive validity than did the self-judgments. For example, acquaintances’ judgments of assertiveness correlated more highly with assertive behavior measured later in the laboratory than did self-judgments of assertiveness. Although the differences were sometimes quite small, the same finding appeared for talkativeness, initiation of humor, physical attractiveness, feelings of being cheated and victimized by life, and several other traits of personality and behavior. A further study by Spain (1994) showed that the degree of difference in accuracy between the self and others depends on the criterion used. When the criterion for accuracy was the ability to predict overt, social behavior, this latter study found, self-judgments held no advantage over judgments by others (no advantage for the others was found in this study). But when the criterion was on-line reports of emotional experience, self-judgments of personality afforded better predictions than did peers’ judgments.
The bottom line seems to be this: Notwithstanding the obvious advantages of self-observation, in some ways it may be surprisingly difficult. [...] Other people have a view of your social behavior that is as good as and sometimes even superior to the view you have of yourself.”
“The tendency to view different situations as similar causes a person to respond to them in a like manner, and the patterns of behavior that result are the overt manifestations of traits. The interpretation of a trait as a subjective, situational-equivalence class offers an idea about phenomenology—about what it feels like to have a trait, to the person who has it [...] The answer is that ordinarily it doesn’t really feel like anything. The only subjective manifestation of a trait within a person will be his or her tendency to react and feel similarly across the situations to which the trait is relevant. [...] A sociable person does not ordinarily say to him- or herself, ‘‘I am a sociable person; therefore, I shall now act in a sociable fashion.’’ Rather, he or she responds positively to the presence of others in a natural, automatic, unselfconscious way. An unsociable person, who perceives the presence of others differently, accordingly also responds differently. And a highly emotional person is too busy experiencing strong emotions to notice that his or her very emotional responsiveness may be one of his or her strongest, most characteristic and (to others) most obvious personality traits.”
“The improvement of relevance can be attempted in two ways. [...] First, the judge can take care to observe the person being judged in the contexts that are most informative for the trait in question [...] To judge social traits, one must observe the target person’s behavior in interpersonal situations. To judge occupational competencies, one must observe the target person’s job behavior. This seemingly obvious point is often neglected. People too often infer traits from the observation of behavior in contexts where no relevant information could be expected to occur. [...] A second way to improve relevance is to do something to create the appropriate observational context. Some kind of stimulus might be created that will lead the target person to emit a behavior that is relevant to the behavior that the judge wants or needs to evaluate. This is not as unusual a tactic as might first appear. The simple act of asking someone a question is an example. [...] People who are better judges of personality might, to an important degree, be those who know how to ask better questions. A good question, in this sense, is one that elicits relevant data about personality, an informative answer. [...] It is also possible to set up social contexts in which more informative behaviors are likely to appear. If a situation is relaxed and informal, for example, people are more likely to be their real selves”
“The improvement of availability [...] requires the judge to observe more behaviors in a wider variety of contexts. [again, remember that there's both a quantity and quality aspect to this and that some settings may be more informative than others] [...] there are at least a couple of things that a judge can do to improve detection. First, the judge can simply watch closely. [...] This does not come without cost, however, so it should be done judiciously. [...] In a similar vein, a distracted judge will garner less information [...] Perhaps the most important thing a judge can do to improve the detection stage is to learn what is important to detect. [...] Unfortunately, our knowledge of the cues that are [...] informative about personality, though beginning to develop, is still far too thin. [...] Even if psychologists were to gear up an intensive program for teaching people how to judge personality more accurately, on surveying the research literature they would find they still have surprisingly little of use to teach.”
“The utilization stage of accurate judgment involves thinking. The relevant and available information has been detected, and now the judge must do some interpretational work to figure out what it all means. [...] Research indicates that this work is best done alone. When people get together to talk about their judgments before rendering them, apparently factors of group dynamics rather than valid inferential reasoning take control of the judgmental output. People discussing their judgments become concerned about self-presentation, saving face, politeness, making friends, achieving dominance, and a host of other issues that are irrelevant to accuracy. As a result, personality judgments are more accurate when made by individuals working alone than by those who have discussed their judgments with others first [...] To optimize accuracy, these independently formulated judgments can then be combined arithmetically into an average that is much more reliable than any one of them would be.”
“[One way] to improve the intuitive judgment of personality is for anyone who would judge his or her peers to acquire as much practice and feedback as possible. Get out more, be an extravert [...] The same advice applies to those who would improve their self-knowledge [...] Mix with many different people in a wide range of social settings. Travel. Meet the kind of people you do not ordinarily meet. Most important, be sure to seek feedback. The lack of good feedback is the missing link in much ordinary social experience (Hammond, 1996) and may be the reason many of us are not as good judges of personality as we should be. If we give up on a new acquaintance because we think we will not like him or her over time, we lose the chance to learn whether this prediction was right. [...] if we fail to let our acquaintances feel free to express themselves, perhaps because we interrupt, are easily offended, or just fail to show interest, we will be cutting ourselves off from potentially useful knowledge about what they, and people like them, are really like. Unless the people we encounter feel free to be themselves, we will never be in a position to learn about what they are really like. By the same token, if we would know ourselves, we should encourage and be open to feedback from others concerning the nature of our own personalities. [...] the general perspective of RAM implies not one, but two general prescriptions for improving the accuracy of personality judgment. [...] the judge needs to use the available information better, but also needs for better information to be available.”
This is where new readers come out of the woodwork and say ‘hi!’ And it’s where regular readers tell me about interesting stuff they’ve come across since the last Open Thread.
I had social obligations this weekend and so I haven’t done a lot of blogging-relevant stuff over the last few days. I’ve read Ishiguro’s The Remains of the Day, and although I won’t blog it here I will note that it was an awesome book.
A few links:
i. I recently watched this lecture, but I decided against embedding it here because I was very far from impressed by it. If you decide to give it a shot you should at the very least do yourself a favour and skip the first 5 minutes. You should also note that quite a bit of work has been done in related areas such as search and matching theory since the Gale–Shapley algorithm was developed.
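For readers unfamiliar with the Gale–Shapley (deferred-acceptance) algorithm mentioned above, here is a minimal sketch of the standard stable-matching procedure; the agent names and preference lists are made up for illustration:

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Deferred-acceptance (Gale-Shapley) stable matching.
    Both arguments map each agent to an ordered preference list
    over the agents on the other side (best first)."""
    # rank[a][p] = how acceptor a ranks proposer p (lower = better)
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)            # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                             # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]  # best not-yet-tried option
        next_choice[p] += 1
        if a not in match:
            match[a] = p                   # acceptor tentatively accepts
        elif rank[a][p] < rank[a][match[a]]:
            free.append(match[a])          # acceptor trades up
            match[a] = p
        else:
            free.append(p)                 # rejected; will propose again
    return {p: a for a, p in match.items()}

# Toy instance: both m1 and m2 prefer w1; both women prefer m2.
prefs_m = {'m1': ['w1', 'w2'], 'm2': ['w1', 'w2']}
prefs_w = {'w1': ['m2', 'm1'], 'w2': ['m2', 'm1']}
print(gale_shapley(prefs_m, prefs_w))  # m2 gets w1, m1 gets w2
```

One well-known property: the proposing side gets its best achievable stable partner, so which side proposes matters.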
Some results and data from the link:
“Among adults aged 25–44, about 98% of women and 97% of men ever had vaginal intercourse, 89% of women and 90% of men ever had oral sex with an opposite-sex partner, and 36% of women and 44% of men ever had anal sex with an opposite-sex partner. Twice as many women aged 25–44 (12%) reported any same-sex contact in their lifetimes compared with men (5.8%). Among teenagers aged 15–19, 7% of females and 9% of males have had oral sex with an opposite-sex partner, but no vaginal intercourse.”
“About one-half of all STIs occur among persons aged 15–24”
“Although current HIV medications have substantially increased life expectancy (7), the medical costs are substantial, averaging approximately $20,000 per year for each person in care”
“Among women aged 15–44 in the 2006–2008 NSFG, 11% had never had any form of sexual activity with a male partner in their lives, 6.1% had sex in their lifetime but had no opposite-sex sexual activity in the past 12 months, and 69% had one male partner in the past 12 months. Nearly 8% had two partners in the past year, and about 5% had three or more partners in the past year. [...] Among women aged 25–44, 1.6% never had any form of sexual activity with a male partner, 6.6% have had sex with a male but not in the past year, and 82% had one partner in the past year. Having one partner in the past 12 months was more common at older ages, presumably because more of these women are married. Having one partner in the past year was significantly more common among married (97%) or cohabiting (86%) women than those in other groups [...] women aged 22–44 with less than a high school diploma were nearly twice as likely (13%) to have had two or more partners in the past 12 months as women with a bachelor’s degree or higher (7%).”
“Among women aged 15–44, the median number of male partners is 3.2 and in 2002 it was essentially the same at 3.3. For men aged 15–44, the median number of female partners was 5.6 in 2002 and remained similar at 5.1 in 2006–2008. As in 2002 when 23% of men and 9% of women reported 15 or more partners in their lifetimes, men were more likely than women to report 15 or more partners in 2006–2008 (21% of men and 8% of women). [...] These results are consistent with prior findings from surveys in the United States and other countries, which all show that men on average report higher numbers of opposite-sex sexual partners than do women of the same age range [...] While 11%–12% of women with lower levels of education reported 15 or more partners, 6.8% with bachelor’s degrees or higher reported 15 or more partners. For men [...], the disparity by college education was smaller”
iii. The FIDE Candidates Tournament (the tournament deciding who’s to play against Magnus Carlsen in the next World Chess Championship match) has begun and a few rounds have been played. Some interesting chess so far. The official site is here. I haven’t followed the live commentary, but I’ve noted that the main commentators seem to be Danish Grandmaster Peter Heine Nielsen and his wife Viktorija Čmilytė (currently the 12th strongest female player in the world). Without having followed the coverage I can’t, of course, say how well they’ve done, but picking someone like Nielsen to provide commentary seems to me like a very good idea; aside from being a ‘pretty strong player’ who’s been in the world top 100 for a decade or so, he’s also been one of Anand’s seconds for years – he’s currently Magnus Carlsen’s second – and if you want someone able to talk about the specific details of the various openings likely to be employed in games like these, it would probably be very hard to find someone significantly better than him.
“This is a book about accuracy in personality judgment. It presents theory and research concerning the circumstances under which and processes by which one person might make an accurate appraisal of the psychological characteristics of another person, or even of oneself.
Accuracy is a practical topic. Its improvement would have clear advantages for organizations, for clinical psychology, and for the lives of individuals. With accurate personality judgment, organizations would become more likely to hire the right people and place them in appropriate positions. Clinical psychologists would make more accurate judgments of their clients and so serve them better. Moreover, a tendency to misinterpret the interpersonal world is an important part of some psychological disorders. If we knew more about accurate interpersonal judgment, this knowledge might help people to correct the kinds of misjudgments that can cause problems. Most important of all, if individuals made more accurate judgments of personality they might do better at choosing friends, avoiding people who cannot be trusted, and understanding their interpersonal worlds (Nowicki & Mitchell, 1998). [...] This is a book about how people make judgments of what each other is like, the degree to which these judgments achieve accuracy, and the factors that make accuracy in personality judgment more and less likely.”
I’m currently reading this book by David Funder. It’s quite interesting. Much of the book so far – and I’ve read about half of it at this point – has dealt with how the different schools of research in this field have historically approached these matters, and the various ways they’ve tried to conceptualize central issues of interest (e.g. questions such as: how do we establish when people are accurate? Which criteria do we apply?). There’s been a good deal of focus on methodological issues and how to interpret results in various contexts, and less specific focus on ‘the actual results’ (though of course some of these have been reported as well). This emphasis on methodology probably means that some people may find the book a bit boring. I’m reasonably sure Funder will proceed to the more ‘meaty’ parts in the second half and I look forward to reading the rest of the book. I think I’m currently hovering around a 4-star evaluation on goodreads. There’s a lot of good stuff in here, including incidentally some observations that made it easier for me to realize why I disliked the cognitive behaviour handbook as much as I did (it pointed out some specific problems with the approaches applied in that book (/line of research) that I had not been fully aware of).
I’ve added some more observations from the book below. A lot of good stuff didn’t make it into this post. If you want to know if, say, a policeman is more likely to figure out if someone is lying or not than some random person on the street, at least judging from the material covered so far this is not the book for you (though I know there are studies covering this type of stuff which you can find on google scholar). But it’s very interesting and I’m really liking it. One of the few problems with this book is that for a research book it’s rather old (1999), but given the topics covered so far this actually matters much less than you’d think. Incidentally if my comments at the top of this paragraph made you curious about these things, you may want to see this post covering the results of a recent rather large review of studies dealing with humans’ ability to spot liars – I’ve covered this stuff before here on the blog.
Observations from the book:
“many doors in life are opened or closed to you as a function of how your personality is perceived. Someone who thinks you are cold will not date you, someone who thinks you are uncooperative will not hire you, and someone who thinks you are dishonest will not lend you money. This will be the case regardless of how warm, cooperative, or honest you might really be. [...] a long tradition of research on expectancy effects shows that to a small but important degree, people have a way of living up, or down, to the impressions others have of them. Children expected to improve their academic performance to some degree will do just that [...], and young women expected to be warm and friendly tend to become so [...] There is another important reason to care about what others think of us: They might be right. [...] The people in your social world have observed your behavior and drawn conclusions about your personality and behavior, and they can therefore be an important source of feedback about the nature of your own personality and abilities. [...] looking to the natural experts in our social world is a rational way to learn more about what we are really like.”
“There are vastly more active social than personality psychologists now doing research, more social psychology training programs, and more grant money for social psychology research. [...] Perhaps the most obvious difference between modern social and personality psychology is that the former is based almost exclusively on experiments, whereas the latter is usually based on correlational studies. [...] In summary, over the past 50 years social psychology has concentrated on the perceptual and cognitive processes of person perceivers, with scant attention to the persons being perceived. Personality psychology has had the reverse orientation, closely examining self-reports of individuals for indications of their personality traits, but rarely examining how these people actually come off in social interaction. [...] individuals trained in either social or personality psychology are often more ignorant of the other field than they should be. Personality psychologists sometimes reveal an imperfect understanding of the concerns and methods of their social psychological brethren, and they in particular fail to comprehend the way in which so much of the self-report data they gather fails to overcome the skepticism of those trained in other methods. For their part, social psychologists are often unfamiliar with basic findings and concepts of personality psychology, misunderstand common statistics such as correlation coefficients and other measures of effect size, and are sometimes breathtakingly ignorant of basic psychometric principles. This is revealed, for example, when social psychologists, assuring themselves that they would not deign to measure any entity so fictitious as a trait, proceed to construct their own self-report scales to measure individual difference constructs called schemas or strategies or construals (never a trait). 
But they often fail to perform the most elementary analyses to confirm the internal consistency or the convergent and discriminant validity of their new measures, probably because they do not know that they should. [...] an astonishing number of research articles currently published in major journals demonstrate a complete innocence of psychometric principles. Social psychologists and cognitive behaviorists who overtly eschew any sympathy with the dreaded concept of ‘‘trait’’ freely report the use of self-report assessment instruments of completely unknown and unexamined reliability, convergent validity, or discriminant validity. It is almost as if they believe that as long as the individual difference construct is called a ‘‘strategy,’’ ‘‘schema,’’ or ‘‘implicit theory,’’ then none of these concepts is relevant. But I suspect the real cause of the omission is that many investigators are unfamiliar with these basic concepts, because through no fault of their own they were never taught them.”
“Many studies over a period of several decades have shown that the impressions others have of your personality agree to an impressive extent both with each other and with your impression of yourself. [...] recent research using sophisticated data analyses has shown that the consistent effect of the person is by far the largest factor in determining behavior, overwhelming more transient influences of situational variables or person-by-situation interactions [...] correlations between personality and behavior are particularly high when the predictive target is aggregates or averages of behavior rather than single instances [...] In everyday life what we usually wish to predict on the basis of our personality judgments are not single acts but aggregate trends. Will the person we are trying to judge make an agreeable friend, a reliable employee, or an affectionate spouse? Each of these important outcomes is defined not by a single act at a single time, but by an average of many behaviors over a diverse range of contexts. The classic Spearman-Brown formula shows how even seemingly small correlations with single acts can compound into high correlations with the average of many acts. For example, Mischel and Peake (1982) found that inter-item correlations among the single behaviors they measured were in the range of .14 to .21, but that the coefficient alpha for the average of the behaviors they measured was .74. That is, a similar aggregate of behaviors would be expected to correlate .74 with that one. 
In the same vein, Epstein and O’Brien (1985) reanalyzed several classical studies in the field of personality and found in each case that although behavior seemed situationally specific at the single-item level, it was quite consistent at the level of behavioral aggregates.” [I'm familiar with this stuff at this point, but I can't remember to which extent I included stuff like this in my coverage of Leary & Hoyle so I decided to include these observations here; there are a lot of pages in L&H about these and related matters because this kind of stuff is really important in terms of how to measure variables and interpret coefficients in these fields.]
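The Spearman-Brown logic in the quote above is easy to make concrete. A minimal sketch (the .175 figure is simply my assumed midpoint of the .14–.21 inter-item range quoted, and the number of behaviors aggregated is illustrative):

```python
def spearman_brown_alpha(mean_r, k):
    """Predicted reliability (coefficient alpha) of the average of k
    behaviors whose mean inter-item correlation is mean_r."""
    return k * mean_r / (1 + (k - 1) * mean_r)

# With a mean single-behavior correlation of .175, the reliability of
# the aggregate climbs quickly as more behaviors are averaged; at
# k = 14 it is already about .75, close to the .74 Funder cites.
for k in (1, 5, 14, 25):
    print(k, spearman_brown_alpha(0.175, k))
```

This is why aggregate trends (‘will this person make a reliable employee?’) are so much more predictable from trait measures than any single act is.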
“To evaluate the degree to which a behavior is affected by a personality variable, the routine practice is to correlate a measure of behavior with a measure of personality. But how does one evaluate the degree to which behavior is affected by a situational variable? [...] this question has received surprisingly little attention over the years. Where it has been addressed, the usual practice is rather strange: The power of situations is determined by subtraction. [...] Of course, this is not a legitimate practice [...] the two sides of the person-situation debate have in an important way been talking past each other for a couple of decades. For the cognitive behaviorists, significant differences in behavior across conditions has been taken as conclusive proof that behavior is situationally determined and otherwise inconsistent. For personality psychologists, the maintenance of individual differences in behavior across situations demonstrates the importance of stable aspects of personality for determining what people do. It turns out that these two conclusions are not in the least incompatible. [...] Behavior in general changes with the situation, and the behavior of individuals is impressively consistent across situations. These statements are not incompatible; they are both true [...] [and] some behaviors are more dependent on the situation than are others.”
“[One] aim of the heuristics-and-biases approach was to compile a vast catalog of the many different ways in which human judgment is faulty (Lopes, 1991). Surprisingly often, authors slid easily from describing heuristics as useful and even necessary components of human judgment under heavy cognitive load to characterizing them as woeful ways in which otherwise rational thinking too often goes astray. This is an important change of emphasis. [...] The emphasis on mistakes [...] had a deep and pervasive influence throughout psychology and even beyond. Over the 20-year reign of the error paradigm, a conventional wisdom became established that people were—not to put too fine a point on it—stupid. [...] However, it might not necessarily be helpful to make one’s judgments while afflicted by the kind of self-doubt a reading of some researchers on error would inflict. [...] Furthermore, some writers have noted that the heuristics-and-biases approach, as typically employed, has a direct and powerful implication that seems to be quite false. The implication is that if we could eliminate all heuristics, biases, and errors from our judgment, our judgments would become more accurate. In fact, the reverse seems to be the case. Researchers on artificial intelligence find they must build heuristics and biases into their programs to allow them to function at all in environments that have any degree of complexity or unpredictability—environments, in other words, like the real world [...] For example, successful elimination of the ‘‘halo’’ effect has been shown to make judgments of real individuals less accurate [...] This is probably because socially desirable traits really do tend to co-occur, making the inference of one such trait from the observation of another—the halo effect—a practice that ordinarily enhances accuracy [...] Other heuristics have also been found to enhance accuracy”
“When the mythic age finally arrives when research has answered all our questions, it still might turn out that overattribution to the person is more common than overattribution to the situation. But already it is clear that both kinds of error exist, and both are important. Calling just one of them ‘‘fundamental’’ is probably unwise.” [he included a lot of stuff about this one, but I decided against covering all of that here]
“the first, most obvious, and perhaps most daunting difficulty in accuracy research is the criterion problem. To study the moderators and processes of accurate judgment, a researcher needs some sort of criterion for determining the degree to which a given judgment is right or wrong. [...] methodological issues concern the techniques a researcher should use to assess and statistically analyze the two criteria for accuracy that are available. To make a long story (temporarily) short, these criteria are interjudge agreement and behavioral prediction. [...] Error researchers employ what Hammond (1996) called ‘‘coherence’’ criteria. These criteria include the degree to which a judgment follows the prescriptions of one or another normative model of judgment [...] Accuracy researchers employ ‘‘correspondence’’ criteria. Correspondence criteria include the degree to which a judgment matches or corresponds with one or more independent indicators of reality. [...] Both criteria can be and sometimes are applied to the same judgment. For example, the process by which a weather forecaster makes his or her judgments might be compared to the inferential rules that were taught in meteorology school. If the process followed by the forecaster makes logical sense and follows the rules he or she was taught, the judgment passes the coherence criterion. Alternatively, if his or her judgment is that it will rain tomorrow, one can also wait and see if it actually rains. If it does, then the judgment passes the correspondence criterion. The difference between these criteria is interesting and important because a judgment deemed correct by one criterion may be incorrect according to the other. [...] In an ideal world, researchers interested in accuracy would use both. [...] At present, however, the two criteria are employed by areas of research that are quite separate.”
“To interact successfully with someone you really need to know accurately only about those aspects of the person that are relevant to his or her behaviors in the environments you share [...] a ‘‘circumscribed accuracy,’’ [...] This approach is useful, but its implications are limited [...] research seems to show, perhaps surprisingly, that circumscribed accuracy is no better and is sometimes worse than generalized accuracy [...] For example, people are better at judging another person’s general degree of talkativeness than at judging how talkative he or she will be specifically with them”
“different judges of the same person tend to agree in their judgments, even after fairly brief acquaintance [...] And two judges who rate each other generally do not describe each other as similar to themselves [...] One of Kenny’s most important empirically based conclusions is that people agree with others about what they are like (self-other agreement) because both the target and the observer base their impressions on the same information, which is the target’s behavior. That is, you see what I do, and I also see what I do, and this is why we agree about what I generally do and therefore what I am like. [...] judges use stereotypes as an important basis for their judgment only when they have little information about the target. In this situation, it appears, judges fill in the missing information with general stereotypes or even [...] their own self-description (which itself is a sort of stereotype if applied to the judgment of others; Hoch, 1987). When you know someone well you can base your judgments on what you have seen. When you have little information, you fall back on stereotypes and self-knowledge.”
I’ve finished the book.
I almost didn’t. A few of the chapters were quite awful. Here’s what I wrote on goodreads:
“Much closer to one star than three – I was very close to giving it one star.
It starts out not terrible, but then gets worse and worse as it moves on. Some chapters are almost hilariously bad. Often all that’ll be worth knowing about a given method is one or two key ideas; you quickly realize that most of the rest is just crap and/or speculation. Some chapters have more than this, but not many, and frankly a lot of this stuff is pure bullshit.
Many of the chapters are written by partisans who don’t even try to pretend to be impartial.
I was very disappointed by this book.”
I wrote this review after having just read the last two chapters, the first of which was far from great and the latter of which was spectacularly bad. So I might have been harder on the book than I should have been, and on second thought the ‘it progressively gets worse’ model is perhaps not completely fair either. Either way I don’t think you should read this book – at least not all of it. Some chapters were not bad; it’s just that others broke the scale and tempted me to give the book a negative goodreads score, which makes it somewhat hard for me to say good things about it in general.
Actually, while reading this handbook I have been tempted to add another star to Leary & Hoyle’s work. If that’s the kind of stuff which is out there, perhaps I was too hard on those guys.
In some of the chapters of this book you’ll have to look really hard to find any formal tests of whether it even makes sense to think about the problems in the manner proposed (you’ll see lots of words spilled, but words are cheap); some of the therapy approaches rest on assumptions which are not flexible at all and are simply taken for granted – in some cases they’re arguably not even testable in theory. To take an example, it’s always theoretically possible for you to blame your parents for your problems, and it’s not hard to come up with a therapy approach which helps you feel better by enabling you to evade responsibility for your problems by blaming your parents for them. Parent-blaming is a frequently encountered component of so-called schema-therapy approaches, though of course they don’t call it that in the book. Other times the therapists get even more creative; for example, did you know that marital discord may be partly due to (societal) racism? I didn’t – I’m glad they included this important variable in their coverage. In one case I considered the proposed theoretical framework underlying/justifying the therapeutic approach frankly inconsistent – it simply makes no sense to me, even in theory. Some approaches have proposed mechanisms of action which remain either completely unexplored or at the very least seriously underexamined – the proponents seem to feel fine justifying the treatment approach solely by reference to various outcome variables arguably completely orthogonal to the methodology applied. They sometimes have no clue why it works, or when it works (…if it works?). Ideas like selection bias and selective attrition naturally spring to mind, as do (as always) underpowered studies of questionable validity and publication bias, but the book pretty much doesn’t talk about stuff like that at all.
It should be noted that problems such as these are quite important to address if you want to argue, as some contributors indeed implicitly do, that regardless of whether a treatment works ‘the way it’s supposed to’ or not, if it does work then that’s the important part. If the methodology is questionable it gets a lot harder to ‘just accept that it works’, because that conclusion might be wrong – and if you, as a contributor to a handbook like this, choose to pretend specific problems don’t exist by not talking about them, that does not make you look good. There are a lot of problems meta-analyses do not solve.
In the coverage below I’ve tried to stay away from the low quality material and focus only on the stuff I can justify sharing here – don’t take the passages below to be representative of the book in general.
“The case formulation is an element of a hypothesis-testing empirical mode of clinical work [...] The therapist begins the process by carrying out an assessment to collect information that is used to develop an initial formulation of the case. The case formulation is a hypothesis about the psychological mechanisms and other factors that cause and maintain a particular patient’s disorders and problems. The formulation is used to develop a treatment plan and to assist in obtaining the patient’s informed consent to it. After obtaining informed consent, the therapist moves forward with treatment. At every step in the treatment process [...] the therapist returns repeatedly to the assessment phase; that is, the therapist collects data to monitor the process and progress of the therapy and uses those data to test the hypotheses (formulations) that underpin the intervention plan and to revise them as needed. Thus, the four elements of case formulation-driven cognitive-behavioral therapy (CBT) are (1) assessment to obtain a diagnosis and case formulation; (2) treatment planning and obtaining the patient’s informed consent to the treatment plan; (3) treatment; and (4) continuous monitoring and hypothesis-testing. [...] A case formulation is important, because interventions flow from it [...] a complete case formulation describes all of the patient’s symptoms, disorders, and problems, and proposes hypotheses about the mechanisms causing the disorders and problems, the precipitants of the disorders and problems, and the origins of the mechanisms. [...] To understand the case fully, the therapist must know all of the problems. [...] the therapist who simply focuses on the obvious problems or those on which the patient wishes to focus may miss important problems. Patients frequently wish to ignore problems such as substance abuse, self-harming behaviors, or others that can interfere with the successful treatment of the problems on which the patient does want to focus”
“numerous studies have now shown that CT (Cognitive Therapy) is associated with reductions of negative cognitions [...] Garratt, Ingram, Rand, and Sawalani (2007) concluded in their review that the empirical literature is generally consistent with the hypothesis that CT results in cognitive changes that in turn predict reductions in depressive symptom severity. [...] although the research designs and statistical techniques employed in most of these studies are appropriate for testing whether reductions in depressive symptoms and negative cognitions covary during CT, they do not allow for rigorous tests of the causal relations between symptoms and cognitions [...] Notably, relatively few studies have included multiple assessments of both symptoms and plausible mediators [...] In summary, given the research designs and data-analytic strategies employed in the majority of studies to date, only tentative conclusions can be drawn from the literature regarding the role of cognition in mediating therapeutic improvement in CT. [...] Even though CT is somewhat more expensive than antidepressant medications in the short run, cost–benefit analyses to date have indicated that it pays for itself within a short time following treatment termination considering its potential to confer resistance to relapse and recurrence (Antonuccio, Thomas, & Danton, 1997; Dobson et al., 2008; Hollon et al., 2005).”
“Much of what distinguishes CT from other cognitive-behavioral therapies lies in the role assumed by the therapist and the role that he or she recommends to the patient. In the relationship, which is meant to be collaborative, the therapist and patient assume an equal share of the responsibility for solving the patient’s problems. The patient is assumed to be the expert on his or her own experience and on the meanings he or she attaches to events [...] cognitive therapists do not assume to know why a certain thought was upsetting; they ask the patient.” [Other approaches don't.]
“The purpose of scheduling activities in CT is twofold: (1) to increase the probability that the patient will engage in activities that he or she has been avoiding unwisely, and (2) to remove decision making as an obstacle in the initiation of an activity. Since the decision has been made in the therapist’s office, or in advance by the patient him- or herself, the patient need only carry out what he or she has agreed (or decided) to do. [...] Since tasks that have been avoided by the patient are often exactly those that have been difficult to do, modifying the structure of these tasks is often appropriate. Large tasks [...] are explicitly broken down into their smaller units [...] to make them more concrete and less overwhelming. This intervention has been termed “chunking.” “Graded tasks” can also be constructed, such that easier tasks or simpler aspects of larger tasks are set out as the first to be attempted. [...] Though chunking and graded task assignments may seem simplistic, it is often surprising to both patient and therapist how these simple alterations in the structure of a task change the patient’s view of the task and, subsequently, the likelihood of its being accomplished.”
“Problem-solving therapy (PST) is a positive approach to clinical intervention that focuses on training in constructive problem-solving attitudes and skills. [...] Problem solving should be distinguished from solution implementation. These two processes are conceptually different and require different sets of skills. “Problem solving” refers to the process of discovering solutions to specific problems, whereas “solution implementation” refers to the process of carrying out those solutions in the actual problematic situations. [...] Problem-solving skills and solution implementation skills are not always correlated; some individuals might possess poor problem-solving skills but good solution implementation skills, or vice versa.”
“A major assumption underlying the use of PST is that symptoms of psychopathology can often be understood and effectively prevented or treated if they are viewed as ineffective, maladaptive, and self-defeating coping behaviors that in turn have negative psychological and social consequences [...] The most important concept in the relational/problem-solving model is “problem-solving coping,” a process that integrates all cognitive appraisal and coping activities within a general social problem-solving framework. A person who applies the problem-solving coping strategy effectively (1) perceives a stressful life event as a challenge or “problem to be solved,” (2) believes that he or she is capable of solving the problem successfully, (3) carefully defines the problem and sets a realistic goal, (4) generates a variety of alternative “solutions” or coping options, (5) chooses the “best” or most effective solution, (6) implements the solution effectively, and (7) carefully observes and evaluates the outcome. [...] When the situation is appraised as changeable or controllable, then problem-focused goals are emphasized [...] On the other hand, if the situation is appraised as largely unchangeable, then emotion-focused goals are emphasized (e.g., acceptance, making something good come from the problem).”
“a number of studies have suggested that an accumulation of unresolved daily problems may have a greater negative impact on well-being than the number of major negative events”
“Problem-solving ability has been found to be positively related to adaptive situational coping strategies, behavioral competence (e.g., social skills, academic performance, job performance), and positive psychological functioning (e.g., positive affectivity, self-esteem, a sense of mastery and control, life satisfaction). In addition, problem-solving deficits have been found to be associated with general psychological distress, depression, suicidal ideation, anxiety, substance abuse and addictions, offending behavior (e.g., aggression, criminal behavior), severe psychopathology (e.g., schizophrenia), health- related distress, and health-compromising behaviors. These results have been found using different measures of social problem-solving ability in a wide range of participants”
“compared to happy couples, distressed couples are characterized by a high frequency of reciprocal negative or punishing exchanges between partners, a relative scarcity of positive outcomes that each partner provides for the other, and deficits in communication and problem-solving skills [...] Research has also demonstrated that partners in distressed relationships are more likely to notice selectively or “track” each other’s negative behavior [...], make negative attributions about the determinants of such behavior [...], hold unrealistic beliefs about intimate relationships [...], and be dissatisfied with the ways that their personal standards for the relationship (e.g., regarding the amount of time and effort that they should put into their relationship) are met [...] [However] some studies have indicated that increases in partners’ exchanges of positive behavior and improved communication skills have had limited impact on relationship satisfaction [...] the degree of improvement in communication is not correlated with level of improvement in relationship adjustment”
“distressed couples commonly exhibit a pattern in which one partner pursues the other for interaction, while the other partner withdraws [...] Females are more likely to be in the demanding role, whereas males more often withdraw”
“individuals often have strong standards for how partners should behave toward each other in a variety of domains. If these standards are not met, the individual is likely to become upset and behave negatively toward the partner. Likewise, one person’s level of satisfaction with the other’s behavior can be influenced by the attributions that person makes about the reasons for the partner’s actions. Thus, a husband might clean the house before his wife arrives at home, but whether she interprets this as a positive or negative behavior is likely to be influenced by her attribution or explanation for his behavior. If she concludes that he is attempting to be thoughtful and loving, she might experience his efforts to provide a clean house as positive. However, if she believes that he wishes to buy a new computer and is attempting to bribe her by cleaning the house, she might feel manipulated and experience the same behavior as negative. In essence, partners’ behaviors in intimate relationships carry great meaning, and not considering these cognitive factors can limit the effectiveness of treatment. We have described a variety of cognitive variables that are important in understanding couples’ relationships [...], including the following:
Selective attention—what each person notices about the partner and the relationship.
Attributions—causal and responsibility explanations about marital events.
Expectancies—predictions of what will occur in the relationship in the future.
Assumptions—how each person believes people and relationships actually function.
Standards—how each person believes people and relationships should function.
These cognitions help to shape how each individual experiences the relationship. [...] therapy at times will not focus on behavioral change but will help the partners reassess their cognitions about behaviors, so that they can be viewed in a more reasonable and balanced fashion.”
Here’s the link. I was playing Black. I’m currently on the top-100 tactics list on playchess (#68 right now), but you can’t tell that from this game.
Note that the displayed result is of course wrong – the game was a dead draw, and a draw was agreed. It also was not a 1-minute game (see the post title) – it was a regular tournament game under FIDE rules against a ~1750 Elo opponent. I shared the game using playchess’ game-sharing option because it involves very little work and doesn’t require people who want to view games to have stuff like Java, but unfortunately I had to ‘superimpose’ the game on top of a bullet game in order to share it that way.
Because you often run into the Four Knights Game when playing the Petroff – which, as mentioned before, I often do, though these days mostly against stronger players, where a draw would be an acceptable result – and because I hadn’t ever actually looked seriously at that stuff (I figured it wasn’t anything to be afraid of), I watched this very nice instructional video on the afternoon before the game. Of course all of that analysis was completely useless, because my opponent played 1.d4.
As far as I can tell from a very brief computer analysis, I did not make any major inaccuracies during this game. A more careful analysis might of course tell a different story, but I’m not going to spend more time on it than I already have. Unfortunately my opponent did not make any major inaccuracies either. The computer evaluation is around equal – if anything, black has a slight edge – at move 18, and it doesn’t change a great deal throughout the rest of the game. Incidentally, in case you were wondering, the computer agrees with my assessment that it was stronger (by ~0.35 pawns or so, actually quite a significant difference given the variation in evaluation this game was subject to overall) to take on b6 with the a-pawn rather than with the queen, and that this capture overall improves my position. It’s the sort of move that may confuse people who know a little bit about chess but not very much, because they’ve heard about doubled pawns being weaknesses and so on – but in this case the ‘weakness’ can’t really be exploited, I get a half-open file, and the former a-pawn can theoretically end up being exchanged for a c-pawn eyeing the center – a really good trade. Also, in general you want the queen to help control the light squares in a position like this, and taking the knight distracts it from that role and loses time.
The position on its own does not tell the whole story; my opponent got into serious time trouble, and I was certainly the only one playing for a win after the knight exchange. My opponent had only 3 minutes left for the last 10 moves before the time control (i.e. at move 30), whereas I had half an hour, and already at move 35 he had only one minute left in a position which was certainly far from completely clear. From a positional point of view I also had no problems justifying playing on, as his pawns are fixed on dark squares and my king might (somehow?) be able to invade and get to b3. So I pressed, but ended up having to accept the draw. This was not a surprising outcome, as the London System is in general a very solid opening which is quite hard to break (on the other hand, it’s also quite difficult to argue that white gets any sort of advantage out of this opening, and the unambitious nature of these setups is presumably part of the reason why they are uncommon in top-level chess).
Slightly boring games like these do hold some important lessons, but most of what one learns from such games one learns from the mistakes made, and there unfortunately weren’t a lot of those here. I guess you can use it as an example of the level of play you need to master in order to draw an average tournament player (who chooses an unambitious, if solid, opening) with the black pieces.
Back when I read Kenwood and Lougheed, the first economic history text I’d read devoted to such topics, the realization of how much the world and the conditions of the humans inhabiting it had changed during the last 200 years really hit me. Reading this book was a different experience because I already knew some stuff, but it added quite a bit to the narrative and I’m glad I read it. If you haven’t read an economic history book which tells the story of how we got from the low-growth state of the past to the high-income situation in which we find ourselves today, I think you should seriously consider doing so. A bit like reading a book like Scarre et al., it has the potential to seriously alter the way you view the world – and not just the past, but the present as well. Particularly interesting is the way information in books like these tends to ‘replace’ the ‘information’/mental models you used to have; when people know nothing about a topic they’ll often still have ‘an idea’ about what they think about it, and most of the time that idea is wrong – people usually make assumptions based on what they know, and when the things about which they make assumptions are radically different from anything they know, they will make wrong assumptions and get a lot of things seriously wrong. To take an example, in recent times human capital has been argued to play a very important role in determining economic growth differentials, and so an economist who’s not read economic history might think human capital played a very important role in the Industrial Revolution as well. Some economic historians thought along similar lines, but it turns out that what they found did not really support such ideas:
“Although human capital has been seen as crucial to economic growth in recent times, it has rarely featured as a major factor in accounts of the Industrial Revolution. One problem is that the machinery of the Industrial Revolution is usually characterized as de-skilling, substituting relatively unskilled labor for skilled artisans, and leading to a decline in apprenticeship [...] A second problem is that the widespread use of child labor raised the opportunity cost of schooling (Mitch, 1993, p. 276).”
I mentioned in the previous post how literacy rates didn’t change much during this period, which is also a serious problem for human-capital-driven models of Industrial Revolution growth. Here’s some stuff on how industrialization affected the health of the population:
“A large body of evidence indicates that average heights of males born in different parts of western and northern Europe began to decline, beginning with those born after 1760 for a period lasting until 1800. After a recovery, average heights resumed their decline for males born after 1830, the decline lasting this time until about 1860. The total reduction in average heights of English soldiers, for example, reached 2 cm during this period. Similar declines were found elsewhere [...] in the case of England, it is clear that the decline in the average height of males born after 1830 occurred at a time when real wages were rising [...] in the period 1820–70, the greatest improvement in life expectancy at birth occurred not in Great Britain but in other western and northwest European countries, such as France, Germany, the Netherlands, and especially Sweden [...] Even in industrializing northern England [infant mortality] only began to register progress after the middle of the nineteenth century – before the 1850s, infant mortality still went up [...] It is clear that economic growth accelerated during the 1700–1870 period – in northwestern Europe earlier and more strongly than in the rest of the continent; that real wages tended to lag behind (and again, were higher in the northwest than elsewhere); and that real improvements in other indicators of the standard of living – height, infant mortality, literacy – were often (and in particular for the British case) even more delayed. The fruits of the Industrial Revolution were spread very unevenly over the continent”
A marginally related observation which I couldn’t help adding here is this one: “three out of ten babies died before age 1 in Germany in the 1860s”. The world used to be a very different place.
Most people probably have some idea that physical things such as roads, railways, canals, steam engines, etc. made a big difference, but how they made that difference may not be completely clear. For a person who can, without problems, go down to the local grocery store and buy bananas for a small fraction of the hourly average wage, it may be difficult to understand how much things have changed. The idea that spoilage during transport was a problem to such an extent that many goods were simply not available to people at all may be foreign to many, and I doubt many people living today have given much thought to how they would deal with the problems associated with transporting stuff upstream on rivers before canals took off. Here’s a relevant quote:
“The difficulties of going upstream always presented problems in the narrow confines of rivers. Using poles and oars for propulsion meant large crews and undermined the advantages of moving goods by water. Canals solved the problem with vessels pulled by draught animals walking along towpaths alongside the waterways.”
Roads were very important as well:
“Roads and bridges, long neglected, got new attention from governments and private investors in the first half of the eighteenth century. [...] Over long hauls – distances of about 300 km – improved roads could lead to at least a doubling of productivity in land transport by the 1760s and a tripling by the 1830s. There were significant gains from a shift to using wagons in place of pack animals, something made possible by better roads. [...] Pavement was created or improved, increasing speed, especially in poor weather. In the Austrian Netherlands, for example, new brick or stone roads replaced mud tracks, the Habsburg monarchs increasing the road network from 200 km in 1700 to nearly 2,850 km by 1793”
As were railroads:
“As early as 1801 an English engineer took a steam carriage from his home in Cornwall to London. [...] In 1825 in northern England a railroad more than 38 km long went into operation. By 1829 engines capable of speeds of almost 60 kilometers an hour could serve as effective people carriers, in addition to their typical original function as vehicles for moving coal. In England in 1830 about 100km of railways were open to traffic; by 1846 the distance was over 1,500 km. The following year construction soared, and by 1860 there were more than 15,000 km of tracks.”
What did growth numbers look like in the past? The numbers used to be very low:
“Economic historians agree that increases in per capita GDP remained limited across Europe during the eighteenth century and even during the early decades of the nineteenth century. In the period before 1820, the highest rates of economic growth were experienced in Great Britain. Recent estimates suggest that per capita GDP increased at an annual rate of 0.3 percent per annum in England or by a total of 45 percent during the period 1700–1820 [...] In other countries and regions of Europe, increases in per capita GDP were much more limited – at or below 0.1 percent per annum or less than 20 percent for 1700–1820 as a whole. As a result, at some time in the second half of the eighteenth century per capita incomes in England (but not the United Kingdom) began to exceed those in the Netherlands, the country with the highest per capita incomes until that date. The gap between the Netherlands and Great Britain on the one hand, and the rest of the continent on the other, was already significant around 1820. Italian, Spanish, Polish, Turkish, or southeastern European levels of income per capita were less than half of those occurring around the North Sea [...] From the 1830s and especially the 1840s onwards, the pace of economic growth accelerated significantly. Whereas in the eighteenth century England, with a growth rate of 0.3 percent per annum, had been the most dynamic, from the 1830s onwards all European countries realized growth rates that were unheard of during the preceding century. Between 1830 and 1870 the growth of GDP per capita in the United Kingdom accelerated to more than 1.5 percent per year; the Belgian economy was even more successful, with 1.7 percent per year, but countries on the periphery, such as Poland, Turkey, and Russia, also registered annual rates of growth of 0.5 percent or more [...] Parts of the continent then tended to catch up, with rates of growth exceeding 1 percent per annum after 1870. 
Catch-up or convergence applied especially to France, Germany, Austria, and the Scandinavian countries. [...] in 1870 all Europeans enjoyed an average income that was 50 to 200 percent higher than in the eighteenth century”
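Since small-sounding annual rates compound into large differences over these time spans, a quick back-of-the-envelope check of the quoted figures may be useful. This is just a minimal sketch of the compound-growth arithmetic; the rates and periods are taken from the quote above, and the computed totals are approximate:

```python
# Rough check of how small annual growth rates compound over long periods,
# using the rates and periods quoted above. Results are approximate.

def total_growth(rate, years):
    """Total percentage increase from compounding `rate` for `years` years."""
    return ((1 + rate) ** years - 1) * 100

# England, 1700-1820: ~0.3% per year for 120 years
print(round(total_growth(0.003, 120)))  # ~43, in the ballpark of the quoted 45 percent

# United Kingdom, 1830-1870: ~1.5% per year for 40 years
print(round(total_growth(0.015, 40)))   # ~81
```

The comparison makes the acceleration vivid: 120 years at the eighteenth-century English rate yields less than half the total growth that just 40 years at the post-1830 rate does.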
To have growth you need food:
“In 1700, all economies were based very largely on agricultural production. The agricultural sector employed most of the workforce, consumed most of the capital inputs and provided most of the outputs in the economy [...] at the onset of the Industrial Revolution in England, around 1770, food accounted for approximately 60 percent of the household budget, compared with just 10 percent in 2001 (Feinstein, 1998). But it is important to realise that agriculture additionally provided most of the raw materials for industrial production: fibres for cloth, animal skins for leather, and wood for building houses and ships and making the charcoal used in metal smelting. There was scarcely an economic activity that was not ultimately dependent on agricultural production – even down to the quill pens and ink used by clerks in the service industries. [...] substantial food imports were unavailable to any country in the eighteenth century because no country was producing a sufficient agricultural surplus to be able to supply the food demanded by another. Therefore any transfer of labor resources from agriculture to industry required high output per worker in domestic agriculture, because each agricultural worker had to produce enough to feed both himself and some fraction of an industrial worker. This is crucial, because the transfer of labor resources out of agriculture and into industry has come to be seen as the defining feature of early industrialization. Alternative paradigms of industrial revolution – such as significant increases in the rate of productivity growth, or a marked superiority of industrial productivity over that of agriculture – have not been supported by the empirical evidence.”
“Much, though not all, of the increase in [agricultural] output between 1700 and 1870 is attributable to an increase in the intensity of rotations and the switch to new crops [...] Many of the fertilization techniques (such as liming and marling) that came into fashion in the eighteenth century in England and the Netherlands had been known for many years (even in Roman times), and farmers had merely chosen to reintroduce them because relative prices had shifted in such a way as to make it profitable once again. The same may also be true of some aspects of crop rotation, such as the increasing use of clover in England. [...] O’Brien and Keyder [...] have suggested that English farmers had perhaps two-thirds more animal power than their French counterparts in 1800, helping to explain the differences in labor productivity. The role of horsepower was crucial to increasing output both on and off the farm [...] [Also] by 1871 an estimated 25 percent of wheat in England and Wales was harvested by mechanical reapers, considerably more than in Germany (3.6 percent in 1882) or France (6.9 percent in 1882)”
“It is no coincidence that those places where agricultural productivity improved first were also the first to industrialize. For industrialization to occur, it had to be possible to produce more food with fewer people. England was able to do this because markets tended to be more efficient, and incentives for farmers to increase output were strong [...] When new techniques, crop rotations, or the reorganization of land ownership were rejected, it was not necessarily because economic agents were averse to change, but because the traditional systems were considered more profitable by those with vested interests. Agricultural productivity in southern and eastern Europe may have been low, but the large landowners were often exceedingly rich, and were successful in maintaining policies which favored the current production systems.”
I think I talked about urbanization in the previous post as well, but I had to include these numbers because it’s yet another way to think about the changes that took place during the Industrial Revolution:
“On the whole, European urban patterns [in the mid-eighteenth century] were not very different from those of the late Middle Ages (i.e. between the tenth and the fourteenth centuries). The only difference was the rise of urbanization north of Flanders, especially in the Netherlands and England. [...] In Europe, in the early modern age, fewer than 10 percent of the population lived in urban centers with more than 10,000 inhabitants. At the end of the twentieth century, this had increased to about 70 percent. In 1800 the population of the world was 900 million, of which about 50 million (5.5 percent) lived in urban centers of more than 10,000 inhabitants: the number of such centers was between 1,500 and 1,700, and the number of cities with more than 5,000 inhabitants was more than 4,000. At this time Europe was one of the most urbanized areas in the world [...], with about one third of the world’s cities being located in Europe [...] In the nineteenth century urban populations rose in Europe by 27 million [...] (by 22.5 million in 1800–70) and the number of cities with over 5,000 inhabitants grew from 1,600 in 1800 to 3,419 in 1870. On the whole, in today’s developed regions, urbanization rates tripled in the nineteenth century, from 10 to 30 percent [...] With regard to [European] centers with over 5,000 inhabitants, their number was 86 percent higher in 1800 than in 1700, and this figure increased fourfold by 1870. [...] Between 1700 and 1800 centers with more than 10,000 inhabitants doubled. [...] On the world scale, urbanization was about 5 percent in 1800, 15–20 percent in 1900, and 40 percent in 2000”
There’s a lot more interesting stuff in the book, but I had to draw a line somewhere. As I pointed out in the beginning, if you haven’t read a book dealing with this topic you might want to consider doing so at some point.
At some point I should probably read his lectures, but I don’t see that happening anytime soon. In the meantime, lectures like the ones posted below are good, if imperfect, substitutes, and they are very enjoyable to watch. He repeats himself quite a bit; I assume part of the reason is that this material predates internet lectures, so there would have been no way for people to learn what he’d said in previous lectures, making it a reasonable strategy for the lecturer to repeat the main points of previous lectures so that newcomers would not be completely lost.
The sound is really awful in the beginning of the second lecture especially, but a lot of the stuff covered there is review, and the sound problem gets fixed around 17 minutes in. More generally the sound quality varies somewhat and it isn’t that great. Neither is the image quality – it’s quite grainy most of the time, and this sometimes makes it hard to see what he’s written/drawn on the blackboard. The last lecture in particular would presumably have been much easier to follow if you could actually tell the differences among the various colours of chalk he’s using. In all the videos the image also freezes up around the one-hour mark (the sound keeps working, so he’ll talk without you being able to see what he’s doing), but this problem fortunately lasts only a very short while (30 seconds or so). In my opinion minor technical issues such as these really should not keep you from watching these lectures – these are lectures given before I was even born, by a Nobel Prize-winning physicist – the fact that you can watch them at all is quite remarkable.
I had fun watching these lectures. Here’s one neat quote from the third lecture: “Now in order to describe both the space and the time pictures, I’m going to make a kind of graph which we call… – which is very handy – if I call it by its name you’ll be frightened so I’m not going to call it by its name.” I couldn’t hold back a brief laugh at that point – I’m sure some of you understand why. Here’s another nice one, related to Eddington’s work on the coupling constant: “The first idea was by Eddington, and experiments were very crude in those days and the number looked very close to 136, so he proved by pure logic that it had to be 136. Then it turned out that them experiments showed that that was a little wrong, that it was closer to 137, so he found a slight error in the logic and proved [loud laughter in the background] with pure logic that it had to be exactly the integer 137.” There are a lot more of these in the lectures, and incidentally if you manage to watch these lectures without at any point feeling a desire to laugh, your sense of humour is most likely very different from mine. I’m sure you’ll have a lot more fun watching these lectures than you’ll have reading articles like this one.
I will emphasize that these lectures are meant for the general public. Knowledge of topics like vector algebra, modular arithmetic, and complex numbers is not required, even though he implicitly covers such material in the lectures. He tries very hard to keep things as simple as possible while still dealing with the main ideas; if you’re the least bit curious, don’t miss out on these lectures due to some faulty assumption that the material is somehow beyond you. Either way you’ll probably have fun watching them, whether or not you understand everything he covers.
Oh right, the lectures:
(This is the one I talked about with really bad sound in the beginning. The issue is as mentioned resolved approximately 17 minutes in.)
I started reading this book yesterday. I’m not super impressed, but it’s not horrible either.
One chapter in the book, chapter 2, deals specifically with ‘The Evidence Base for Cognitive-Behavioral Therapy’, and although this would normally be the sort of thing I’d be very interested in, I actually thought it was a rather weak chapter despite its preferential reliance on RCTs and reviews/meta-analyses – mostly because the authors seem to care only about whether or not there’s an effect, not how large it is; effect sizes are rarely reported. To make matters worse, in one case where they do report effect sizes as well as answering the ‘does this stuff work better than doing nothing?’ question (…and is that actually the question these articles answer? More on this below…) – the treatment effects of cognitive-behavioral therapy (CBT) on obsessive-compulsive disorder (OCD) – you suddenly realize that a lot of patients will not benefit at all from this stuff. A review article cited in the chapter notes that “one-third of those who complete a course of therapy, and nearly one-half of those who begin but do not complete treatment, will not make expected gains” – yet despite this the authors conclude towards the end of the chapter, when summing up, that “The absolute efficacy of CBT for OCD is positive and well-supported.” It makes you wonder which of the other conditions they talk about may technically ‘have an effect’ or ‘be well-supported’, yet lead to zero improvement for large groups of patients. A more thorough coverage of the treatment effects for a smaller number of conditions would probably have been advisable. There are other problems with this chapter – for example, the coverage of CBT treatment effects for substance dependence/abuse relies on material not reporting long-term results, making those results meaningless or worse; the authors note that long-term results are not reported, but the natural conclusion to draw from this problem is not drawn, and it really should have been.
For more on this topic see this post and Scott Alexander’s post to which I link in that post. Yet another problem is that some of the studies comparing the outcomes of CBT and pharmacological treatment options were undertaken so long ago (the 1980s) that they presumably no longer have much validity today, because they were comparing CBT to previous generations of pharmacotherapy. The problems with this chapter are part of why I don’t post much on this topic below despite being quite interested in it: frankly, I don’t really trust the authors’ conclusions, and I find the coverage severely lacking in detail. I should note that although chapter 2 wasn’t great, chapter 3, on ‘Cognitive Science and the Conceptual Foundations of Cognitive-Behavioral Therapy’, was significantly worse, and I actually decided against including anything from that chapter in the coverage below.
Some observations from the first third of the book below:
“At their core, CBTs share three fundamental propositions:
1. Cognitive activity affects behavior.
2. Cognitive activity may be monitored and altered.
3. Desired behavior change may be effected through cognitive change.”
“Three major classes of CBTs have been recognized, as each has a slightly different class of change goals [...] These classes are coping skills therapies, problem-solving therapies, and cognitive restructuring methods. [...] the different classes of therapy orient themselves toward different degrees of cognitive versus behavioral change. [...] Therapies included under the heading of “cognitive restructuring” assume that emotional distress is the consequence of maladaptive thoughts. Thus, the goal of these clinical interventions is to examine and challenge maladaptive thought patterns, and to establish more adaptive thought patterns. In contrast, “coping skills therapies” focus on the development of a repertoire of skills designed to assist the client in coping with a variety of stressful situations. The “problem-solving therapies” may be characterized as a combination of cognitive restructuring techniques and coping skills training procedures.”
“Briefly stated, the “mediational position” is that cognitive activity mediates the responses the individual has to his or her environment, and to some extent dictates the degree of adjustment or maladjustment of the individual. As a direct result of the mediational assumption, the CBTs share a belief that therapeutic change can be effected through an alteration of idiosyncratic, dysfunctional modes of thinking. Additionally, due to the behavioral heritage, many of the cognitive-behavioral methods draw upon behavioral principles and techniques in the conduct of therapy, and many of the cognitive-behavioral models rely to some extent upon behavioral assessment of change to document therapeutic progress. [...] one commonality among the various CBTs is their time-limited nature. In clear distinction from longer-term psychoanalytic therapy, CBTs attempt to effect change rapidly, and often with specific, preset lengths of therapeutic contact. Many of the treatment manuals written for CBTs recommend treatment in the range of 12–16 sessions [...] Related to the time-limited nature of CBT is the fact almost all applications of this general therapeutic approach are to specific problems. [...] A third commonality among cognitive-behavioral approaches is the belief that clients are, in a sense, the architects of their own misfortune, and that they therefore have control over their thoughts and actions [...] many CBTs are by nature either explicitly or implicitly educative.”
“Other criticisms pertain to research methodology. It has been argued that amalgamating placebo and waiting-list controls into a composite control condition confounds results (Parker, Roy, & Eyers, 2003). Specifically, Parker et al. asserted that participants assigned to a placebo condition are hopeful, because they assume that they are being treated, whereas participants assigned to a waiting-list control condition are discouraged, because they are not undergoing any treatment. They recommended that future research compare active treatments to different control conditions to disentangle potentially differing results. [...] In addition to limitations to the research base on the efficacy of CBT, there are limitations to efficacy research in general. Although RCTs are highly utilized and respected in efficacy research, the reelvance [sic] of their results to routine clinical practice has been questioned (Leichsenring et al., 2006). For example, the restrictive exclusion criteria of many RCTs may undermine the representativeness of the participants to the general population of people with the disorder. Also, comorbidities are common among disorders but are controlled for in RCTs through exclusionary criteria, or are simply not addressed. Also, researcher allegiance, or the tendency of the authors of a comparative treatment study to prefer one treatment over another, may introduce bias into the study design that results in findings supportive of the preferred treatment (Butler et al., 2006).”
“Most psychotherapists accept, at least in principle, the value of scientific inquiry, even while they differ widely in what they consider to be acceptable scientific methods. Despite this development, however, there has been a decided lag in the acceptance of scientific findings as the basis for setting new directions or for deciding what is factual among practicing therapists. Indeed for many practitioners, the true test of a given psychotherapy rests in both its theoretical logic and evidence from clinicians’ observations rather than data from sound scientific methods, even when the latter are available [...] What practitioners accept as valid hinges on both the methods used to derive results and the strength of their opinions. Practitioners prefer naturalistic research over randomized clinical trials, N = 1 or single-case studies over group designs, and individualized over group measures of outcome [...] They also tend to believe research favoring the brand that they practice over research that supports alternative psychotherapy approaches or equivalency among approaches. Since most psychotherapy research fails to comply with these values, psychotherapists often are quick to reject scientific findings that disagree with their own theoretical systems. Thus, while the reasons given for rejecting scientific evidence may be more sophisticated today than in the past, it may be no less likely to occur.”
“CT [cognitive therapy] is a specific form of the more general CBTs [...] Cognitive theory has been empirically based since its inception, in that it used findings from formal research to establish its theoretical principles. [...] CT may best be defined as the application of cognitive theory to a certain disorder and the use of techniques to modify the dysfunctional beliefs and maladaptive information-processing systems that are characteristic of the disorder [...] CT does not depend on the validity of insights into the nature of psychopathology for effectiveness in the therapeutic arena. First and foremost, cognitive theory emphasizes reliable observation and measurement in the assessment of the effects of treatment.”
“the efficacy of CT is differentially influenced by a variety of qualities characteristic of the patient and problem. Qualities such as patient coping styles, reactance levels, and complexity and severity of problems, among others, may influence the way that CT is applied. [...] One patient characteristic that has proven to predict patients’ response to CT is “coping style,” the method that an individual adopts when confronted with anxiety-provoking situations, and that typically is viewed as a trait-like pattern. CT has been found to be most effective among patients who exhibit an extroverted, undercontrolled, externalizing coping style [...] Internalization and externalization represent opposite poles on the traitlike dimension of coping style. Both coping styles may be used to reduce uncomfortable experience (i.e., provide escape or avoidance). Some patients cope by activating externalizing behaviors that allow either direct escape or avoidance of the feared environment. Alternatively, other patients may prefer behaviors (i.e., self-blame, compartmentalization, sensitization) that control internal experiences such as anxiety. Internalizing patients are typically characterized by low impulsivity and overcontrol of impulses, whereas externalizers generally exhibit highly impulsive or exaggerated behaviors. Additionally, internalizers tend to be more insightful and self-reflective. Internalizers typically inhibit feelings, tolerate emotional distress better than externalizers, and frequently attribute difficulties they encounter to themselves. On the other hand, externalizers tend to deny personal responsibility for either the cause or the solution of their problems, experience negative emotions as intolerable, and seek external stimulation. [...] Although the principles of treatment are the same as those for externalizers, the treatment of internalizing individuals is more complex.”
“The major impetus for psychotherapy integration comes from the evidence that no single school of psychotherapy has demonstrated consistent superiority over the others. Rather, psychotherapy research for specific problems, such as drug abuse or depression, has largely led to the conclusion that all approaches produce similar average effects [...] Unfortunately, the nonsignificance of treatment main effects often draws more attention than the growing body of research that demonstrates meaningful differences in the types of patients for whom different aspects of treatment are effective [...] For example, research indicates that for patients with symptoms of anxiety and depression [...] nondirective and paradoxical interventions are more effective than directive treatments in patients with high levels of pretherapy resistance (i.e., “resistance potential”[...]; and (3) therapies that target cognitive and behavior changes through contingency management [...] are more effective than insight-oriented therapies in impulsive or externalizing patients, but this effect is reversed in patients with less externalizing coping styles [...] The techniques of CT may be used with virtually any patient; however, the greatest benefit is achieved when the strategies or techniques are employed differentially, depending on patient dimensions such as coping style, type of problem, subjective distress, functional and social impairment, and level of resistance.”
“Patient resistance typically bodes poorly for treatment effectiveness, unless it is managed skillfully. It is generally assumed that some patients are more likely than others to resist therapeutic procedures. “Resistance” may be characterized as a dispositional trait and a transitory in-therapy state of oppositional (e.g., angry, irritable, and suspicious) behaviors. It involves both intrapsychic (image of self, safety, and psychological integrity) and interpersonal (loss of interpersonal freedom or power imposed by another) factors [...] “Reactance,” an extreme example of resistance, is manifested by oppositional and uncooperative behaviors. [...] Resistance is easily identifiable, and differential treatment plans for patients with high and low resistance are easily crafted. The successful implementation of these plans, however, is often quite a different matter. Overcoming patient resistance to the clinician’s efforts is difficult. It requires that the therapist set aside his or her own resistance to recognize that the patient’s oppositional behavior may actually be iatrogenic [...] therapists often [react] to patient resistance by becoming angry, critical, and rejecting, which are reactions that tend to reduce the willingness of patients to explore problems.” [This aspect of the treatment dimension was - perhaps not surprisingly - emphasized in Clark as well.]