Here’s the link. I played Black. I’m currently on the top-100 tactics list on playchess (#68 right now), but you can’t tell that from this game.
Note that the result displayed is of course wrong – the game was a dead draw and a draw was agreed. It also was not a 1-minute game (despite the post title) – it was a regular tournament game with FIDE rules against a ~1750 Elo opponent. I shared the game using playchess’ game sharing option because it involves very little work and doesn’t require people who want to view games to have stuff like Java installed, but unfortunately I had to ‘superimpose’ the game on top of a bullet game in order to share it that way.
Because you often run into the Four Knights Game when playing the Petroff – which, as mentioned before, I often do, though these days mostly against stronger players where a draw would be acceptable – and because I had never actually looked seriously at that stuff, figuring there wasn’t anything to be afraid of, I watched this very nice instructional video on the afternoon before the game. Of course all of that analysis was completely useless because my opponent played 1.d4.
As far as I can tell from a very brief computer analysis I did not make any major inaccuracies during this game. Of course a more careful analysis might tell a different story, but I’m not going to spend more time on it than I already have. Unfortunately my opponent did not make any major inaccuracies either. The computer evaluation is around equal – if anything Black has a slight edge – at move 18, and it doesn’t change a great deal throughout the rest of the game. Incidentally, in case you were wondering, the computer agrees with my assessment that it was stronger (by ~0.35 pawns or so, actually quite a significant difference given the variation in evaluation this game was subject to overall) to take on b6 with the a-pawn rather than with the queen, and that this capture overall improves my position. It’s the sort of move that may confuse people who know a little about chess but not very much, because they’ve heard about doubled pawns being weaknesses and so on – but in this case the ‘weakness’ can’t really be exploited, I get a half-open file, and the former a-pawn can theoretically end up being exchanged for a c-pawn eyeing the center – a really good trade. Also, in general you want the queen helping to control the light squares in a position like this, and taking the knight distracts it from that role and loses time.
The position on its own does not tell the whole story; my opponent got into serious time trouble, and I was certainly the only one playing for a win after the knight exchange. At move 30 my opponent had only 3 minutes left for the last 10 moves before the time control, whereas I had half an hour, and already at move 35 he had only one minute left in a position which was certainly far from completely clear. From a positional point of view I also had no problem justifying playing on, as his pawns were fixed on dark squares and my king might (somehow?) be able to invade and get to b3. So I pressed, but ended up having to accept the draw. This was not a surprising outcome, as the London System is in general a very solid opening which is quite hard to break (on the other hand it’s also quite difficult to argue that White gets any sort of advantage out of this opening, and the unambitious nature of these setups is presumably part of the reason why they are uncommon in top-level chess).
Slightly boring games like these do hold some important lessons, but most of what one learns from such games comes from the mistakes made, and there unfortunately weren’t a lot of those here. I guess you can use it as an example of the level of play you need to master in order to draw, with the black pieces, against an average tournament player (who chooses an unambitious, if solid, opening).
Back when I read Kenwood and Lougheed, the first economic history text devoted to such topics that I had read, the realization of how much the world and the conditions of the humans inhabiting it had changed during the last 200 years really hit me. Reading this book was a different experience because I already knew some of the material, but it added quite a bit to the narrative and I’m glad I read it. If you haven’t read an economic history book which tells the story of how we got from the low-growth state of the past to the high-income situation in which we find ourselves today, I think you should seriously consider doing so. A bit like reading a book like Scarre et al., it has the potential to seriously alter the way you view the world – and not just the past, but the present as well. Particularly interesting is the way information in books like these tends to ‘replace’ the ‘information’/mental models you used to have; when people know nothing about a topic they’ll often still have ‘an idea’ about what they think about it, and most of the time that idea is wrong – people usually make assumptions based on what they know, and when the things about which they make assumptions are radically different from anything they know, they will make wrong assumptions and get a lot of things seriously wrong. To take an example, human capital has in recent times been argued to play a very important role in determining economic growth differentials, and so an economist who has not read economic history might think human capital played a very important role in the Industrial Revolution as well. Some economic historians thought along similar lines, but it turns out that what they found did not really support such ideas:
“Although human capital has been seen as crucial to economic growth in recent times, it has rarely featured as a major factor in accounts of the Industrial Revolution. One problem is that the machinery of the Industrial Revolution is usually characterized as de-skilling, substituting relatively unskilled labor for skilled artisans, and leading to a decline in apprenticeship [...] A second problem is that the widespread use of child labor raised the opportunity cost of schooling (Mitch, 1993, p. 276).”
I mentioned in the previous post that literacy rates didn’t change much during this period, which is also a serious problem for human-capital-driven models of Industrial Revolution growth. Here’s some stuff on how industrialization affected the health of the population:
“A large body of evidence indicates that average heights of males born in different parts of western and northern Europe began to decline, beginning with those born after 1760 for a period lasting until 1800. After a recovery, average heights resumed their decline for males born after 1830, the decline lasting this time until about 1860. The total reduction in average heights of English soldiers, for example, reached 2 cm during this period. Similar declines were found elsewhere [...] in the case of England, it is clear that the decline in the average height of males born after 1830 occurred at a time when real wages were rising [...] in the period 1820–70, the greatest improvement in life expectancy at birth occurred not in Great Britain but in other western and northwest European countries, such as France, Germany, the Netherlands, and especially Sweden [...] Even in industrializing northern England [infant mortality] only began to register progress after the middle of the nineteenth century – before the 1850s, infant mortality still went up [...] It is clear that economic growth accelerated during the 1700–1870 period – in northwestern Europe earlier and more strongly than in the rest of the continent; that real wages tended to lag behind (and again, were higher in the northwest than elsewhere); and that real improvements in other indicators of the standard of living – height, infant mortality, literacy – were often (and in particular for the British case) even more delayed. The fruits of the Industrial Revolution were spread very unevenly over the continent”
A marginally related observation which I could not help myself from adding here is this one: “three out of ten babies died before age 1 in Germany in the 1860s”. The world used to be a very different place.
Most people probably have some idea that physical things such as roads, railways, canals, and steam engines made a big difference, but how they made that difference may not be completely clear. For a person who can without problems go down to the local grocery store and buy bananas for a small fraction of the average hourly wage, it may be difficult to understand how much things have changed. The idea that spoilage during transport was a problem to such an extent that many goods were simply not available to people at all may be foreign to many, and I doubt many people living today have given much thought to how they would deal with the problems of transporting stuff upstream on rivers before canals took off. Here’s a relevant quote:
“The difficulties of going upstream always presented problems in the narrow confines of rivers. Using poles and oars for propulsion meant large crews and undermined the advantages of moving goods by water. Canals solved the problem with vessels pulled by draught animals walking along towpaths alongside the waterways.”
Roads were very important as well:
“Roads and bridges, long neglected, got new attention from governments and private investors in the first half of the eighteenth century. [...] Over long hauls – distances of about 300 km – improved roads could lead to at least a doubling of productivity in land transport by the 1760s and a tripling by the 1830s. There were significant gains from a shift to using wagons in place of pack animals, something made possible by better roads. [...] Pavement was created or improved, increasing speed, especially in poor weather. In the Austrian Netherlands, for example, new brick or stone roads replaced mud tracks, the Habsburg monarchs increasing the road network from 200 km in 1700 to nearly 2,850 km by 1793″
As were railroads:
“As early as 1801 an English engineer took a steam carriage from his home in Cornwall to London. [...] In 1825 in northern England a railroad more than 38 km long went into operation. By 1829 engines capable of speeds of almost 60 kilometers an hour could serve as effective people carriers, in addition to their typical original function as vehicles for moving coal. In England in 1830 about 100km of railways were open to traffic; by 1846 the distance was over 1,500 km. The following year construction soared, and by 1860 there were more than 15,000 km of tracks.”
What did growth numbers look like in the past? They used to be very low:
“Economic historians agree that increases in per capita GDP remained limited across Europe during the eighteenth century and even during the early decades of the nineteenth century. In the period before 1820, the highest rates of economic growth were experienced in Great Britain. Recent estimates suggest that per capita GDP increased at an annual rate of 0.3 percent per annum in England or by a total of 45 percent during the period 1700–1820 [...] In other countries and regions of Europe, increases in per capita GDP were much more limited – at or below 0.1 percent per annum or less than 20 percent for 1700–1820 as a whole. As a result, at some time in the second half of the eighteenth century per capita incomes in England (but not the United Kingdom) began to exceed those in the Netherlands, the country with the highest per capita incomes until that date. The gap between the Netherlands and Great Britain on the one hand, and the rest of the continent on the other, was already significant around 1820. Italian, Spanish, Polish, Turkish, or southeastern European levels of income per capita were less than half of those occurring around the North Sea [...] From the 1830s and especially the 1840s onwards, the pace of economic growth accelerated significantly. Whereas in the eighteenth century England, with a growth rate of 0.3 percent per annum, had been the most dynamic, from the 1830s onwards all European countries realized growth rates that were unheard of during the preceding century. Between 1830 and 1870 the growth of GDP per capita in the United Kingdom accelerated to more than 1.5 percent per year; the Belgian economy was even more successful, with 1.7 percent per year, but countries on the periphery, such as Poland, Turkey, and Russia, also registered annual rates of growth of 0.5 percent or more [...] Parts of the continent then tended to catch up, with rates of growth exceeding 1 percent per annum after 1870. 
Catch-up or convergence applied especially to France, Germany, Austria, and the Scandinavian countries. [...] in 1870 all Europeans enjoyed an average income that was 50 to 200 percent higher than in the eighteenth century”
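The compounding behind these figures is easy to check for yourself; here’s a minimal sketch (the rates and periods are taken from the quote above; the function name is my own, and the small gap between the computed ~43 percent and the quoted 45 percent presumably reflects rounding of the 0.3 percent rate):

```python
def total_growth(annual_rate, years):
    """Total fractional increase from compounding an annual growth rate."""
    return (1 + annual_rate) ** years - 1

# England, 1700-1820: 0.3 percent per annum over 120 years
print(f"{total_growth(0.003, 120):.0%}")  # 43%, close to the quoted 45 percent

# United Kingdom, 1830-1870: more than 1.5 percent per annum over 40 years
print(f"{total_growth(0.015, 40):.0%}")  # 81%
```

The second number makes it easy to see how a seemingly modest jump in the annual rate, from 0.3 to 1.5 percent, produces income gains over 40 years that dwarf what the preceding 120 years had achieved.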
To have growth you need food:
“In 1700, all economies were based very largely on agricultural production. The agricultural sector employed most of the workforce, consumed most of the capital inputs and provided most of the outputs in the economy [...] at the onset of the Industrial Revolution in England, around 1770, food accounted for approximately 60 percent of the household budget, compared with just 10 percent in 2001 (Feinstein, 1998). But it is important to realise that agriculture additionally provided most of the raw materials for industrial production: fibres for cloth, animal skins for leather, and wood for building houses and ships and making the charcoal used in metal smelting. There was scarcely an economic activity that was not ultimately dependent on agricultural production – even down to the quill pens and ink used by clerks in the service industries. [...] substantial food imports were unavailable to any country in the eighteenth century because no country was producing a sufficient agricultural surplus to be able to supply the food demanded by another. Therefore any transfer of labor resources from agriculture to industry required high output per worker in domestic agriculture, because each agricultural worker had to produce enough to feed both himself and some fraction of an industrial worker. This is crucial, because the transfer of labor resources out of agriculture and into industry has come to be seen as the defining feature of early industrialization. Alternative paradigms of industrial revolution – such as significant increases in the rate of productivity growth, or a marked superiority of industrial productivity over that of agriculture – have not been supported by the empirical evidence.”
“Much, though not all, of the increase in [agricultural] output between 1700 and 1870 is attributable to an increase in the intensity of rotations and the switch to new crops [...] Many of the fertilization techniques (such as liming and marling) that came into fashion in the eighteenth century in England and the Netherlands had been known for many years (even in Roman times), and farmers had merely chosen to reintroduce them because relative prices had shifted in such a way as to make it profitable once again. The same may also be true of some aspects of crop rotation, such as the increasing use of clover in England. [...] O’Brien and Keyder [...] have suggested that English farmers had perhaps two-thirds more animal power than their French counterparts in 1800, helping to explain the differences in labor productivity. The role of horsepower was crucial to increasing output both on and off the farm [...] [Also] by 1871 an estimated 25 percent of wheat in England and Wales was harvested by mechanical reapers, considerably more than in Germany (3.6 percent in 1882) or France (6.9 percent in 1882)”
“It is no coincidence that those places where agricultural productivity improved first were also the first to industrialize. For industrialization to occur, it had to be possible to produce more food with fewer people. England was able to do this because markets tended to be more efficient, and incentives for farmers to increase output were strong [...] When new techniques, crop rotations, or the reorganization of land ownership were rejected, it was not necessarily because economic agents were averse to change, but because the traditional systems were considered more profitable by those with vested interests. Agricultural productivity in southern and eastern Europe may have been low, but the large landowners were often exceedingly rich, and were successful in maintaining policies which favored the current production systems.”
I think I talked about urbanization in the previous post as well, but I had to include these numbers because they offer yet another way to think about the changes that took place during the Industrial Revolution:
“On the whole, European urban patterns [in the mid-eighteenth century] were not very different from those of the late Middle Ages (i.e. between the tenth and the fourteenth centuries). The only difference was the rise of urbanization north of Flanders, especially in the Netherlands and England. [...] In Europe, in the early modern age, fewer than 10 percent of the population lived in urban centers with more than 10,000 inhabitants. At the end of the twentieth century, this had increased to about 70 percent. In 1800 the population of the world was 900 million, of which about 50 million (5.5 percent) lived in urban centers of more than 10,000 inhabitants: the number of such centers was between 1,500 and 1,700, and the number of cities with more than 5,000 inhabitants was more than 4,000. At this time Europe was one of the most urbanized areas in the world [...], with about one third of the world’s cities being located in Europe [...] In the nineteenth century urban populations rose in Europe by 27 million [...] (by 22.5 million in 1800–70) and the number of cities with over 5,000 inhabitants grew from 1,600 in 1800 to 3,419 in 1870. On the whole, in today’s developed regions, urbanization rates tripled in the nineteenth century, from 10 to 30 percent [...] With regard to [European] centers with over 5,000 inhabitants, their number was 86 percent higher in 1800 than in 1700, and this figure increased fourfold by 1870. [...] Between 1700 and 1800 centers with more than 10,000 inhabitants doubled. [...] On the world scale, urbanization was about 5 percent in 1800, 15–20 percent in 1900, and 40 percent in 2000″
There’s a lot more interesting stuff in the book, but I had to draw a line somewhere. As I pointed out in the beginning, if you haven’t read a book dealing with this topic you might want to consider doing so at some point.
At some point I should probably read his lectures, but I don’t see that happening anytime soon. In the meantime lectures like the ones posted below are good, if imperfect, substitutes: they are very enjoyable to watch. He repeats himself quite a bit; I assume part of the reason is that this material dates from before internet lectures became a thing, so there would have been no way for people to learn what he’d said in previous lectures, making it a reasonable strategy for the lecturer to repeat the main points of previous lectures so that newcomers would not be completely lost.
The sound is really awful in the beginning of the second lecture especially, but a lot of the stuff covered there is review, and the sound problem gets fixed around 17 minutes in. More generally the sound quality varies somewhat and isn’t that great. Neither is the image quality – it’s quite grainy most of the time, which sometimes makes it hard to see what he’s written/drawn on the blackboard. The last lecture in particular would presumably have been much easier to follow if you could actually tell the differences among the various colours of chalk he’s using. There are also problems in all videos with the image freezing up around the one-hour mark (the sound keeps working, so he’ll be talking without you being able to see what he’s doing), but this problem fortunately lasts only a very short while (30 seconds or so). In my opinion minor technical issues such as these really should not keep you from watching these lectures – these are lectures given before I was even born, by a Nobel Prize-winning physicist – the fact that you can watch them at all is quite remarkable.
I had fun watching these lectures. Here’s one neat quote from the third lecture: “Now in order to describe both the space and the time pictures, I’m going to make a kind of graph which we call… – which is very handy – if I call it by its name you’ll be frightened so I’m not going to call it by its name.” I couldn’t hold back a brief laugh at that point – I’m sure some of you understand why. Here’s another nice one, related to Eddington‘s work on the coupling constant: “The first idea was by Eddington, and experiments were very crude in those days and the number looked very close to 136, so he proved by pure logic that it had to be 136. Then it turned out that them experiments showed that that was a little wrong, that it was closer to 137, so he found a slight error in the logic and proved [loud laughter in the background] with pure logic that it had to be exactly the integer 137.” There are a lot more of these in the lectures and incidentally if you manage to watch these lectures without at any point feeling a desire to laugh, your sense of humour is most likely very different from mine. I’m sure you’ll have a lot more fun watching these lectures than you’ll have reading articles like this one.
I will emphasize that these lectures are meant for the general public. Knowledge about stuff like vector algebra, modular arithmetic and complex numbers is not required, even though he implicitly covers this kind of stuff in the lectures. He tries very hard to keep things as simple as possible while still dealing with the main ideas; if you’re the least bit curious don’t miss out on this stuff due to some faulty assumption that this stuff is somehow beyond you. Either way you’ll probably have fun watching these lectures, whether or not you understand all of the stuff he covers.
Oh right, the lectures:
(This is the one I talked about with really bad sound in the beginning. The issue is as mentioned resolved approximately 17 minutes in.)
I started reading this book yesterday. I’m not super impressed, but it’s not horrible either.
One chapter in the book, chapter 2, deals specifically with ‘The Evidence Base for Cognitive-Behavioral Therapy’, and although this would normally be the sort of thing I’d be very interested in, I actually thought it was a rather weak chapter despite its preferential reliance on RCTs and reviews/meta-analyses – mostly because the authors seem to care only about whether or not there’s an effect, not how large it is; effect sizes are rarely reported. To make matters worse, in one case where they do report effect sizes as well as answering the ‘does this stuff work better than doing nothing?’ question (…and is that actually the question these articles answer? More on this below…) – namely the treatment effects of cognitive-behavioral therapy (CBT) on obsessive-compulsive disorder (OCD) – you suddenly realize that a lot of patients will not benefit at all from this stuff. A review article from the chapter notes that “one-third of those who complete a course of therapy, and nearly one-half of those who begin but do not complete treatment, will not make expected gains” – but despite this they conclude towards the end of the chapter, when summing up, that “The absolute efficacy of CBT for OCD is positive and well-supported.” It makes you wonder which other conditions they talk about may technically ‘have an effect’ or ‘be well-supported’, yet lead to zero improvement for large groups of patients. A more thorough coverage of the treatment effects of a smaller number of conditions would probably have been advisable. There are other problems in this chapter – for example the coverage of CBT treatment effects on substance dependence/abuse relies on material not reporting long-term results, making the results meaningless or worse; the authors note that long-term results are not reported, but the natural conclusion to draw from this problem is not drawn, and it really should have been.
For more on this topic see this post and Scott Alexander’s post to which I link in that post. Yet another problem is that in some cases the studies comparing the outcomes of CBT and pharmacological treatment options were undertaken so long ago (the 1980s) that they presumably no longer have much validity today, because they were comparing CBT to previous generations of pharmacotherapy. The problems with this chapter are part of why I don’t post much on this topic below despite being quite interested in it: frankly I don’t really trust the authors’ conclusions, and I find the coverage severely lacking in detail. I should note that although chapter 2 wasn’t great, chapter 3 on ‘Cognitive Science and the Conceptual Foundations of Cognitive-Behavioral Therapy’ was significantly worse, and I actually decided against including anything from that chapter in the coverage below.
Some observations from the first third of the book below:
“At their core, CBTs share three fundamental propositions:
1. Cognitive activity affects behavior.
2. Cognitive activity may be monitored and altered.
3. Desired behavior change may be effected through cognitive change.”
“Three major classes of CBTs have been recognized, as each has a slightly different class of change goals [...] These classes are coping skills therapies, problem-solving therapies, and cognitive restructuring methods. [...] the different classes of therapy orient themselves toward different degrees of cognitive versus behavioral change. [...] Therapies included under the heading of “cognitive restructuring” assume that emotional distress is the consequence of maladaptive thoughts. Thus, the goal of these clinical interventions is to examine and challenge maladaptive thought patterns, and to establish more adaptive thought patterns. In contrast, “coping skills therapies” focus on the development of a repertoire of skills designed to assist the client in coping with a variety of stressful situations. The “problem-solving therapies” may be characterized as a combination of cognitive restructuring techniques and coping skills training procedures.”
“Briefly stated, the “mediational position” is that cognitive activity mediates the responses the individual has to his or her environment, and to some extent dictates the degree of adjustment or maladjustment of the individual. As a direct result of the mediational assumption, the CBTs share a belief that therapeutic change can be effected through an alteration of idiosyncratic, dysfunctional modes of thinking. Additionally, due to the behavioral heritage, many of the cognitive-behavioral methods draw upon behavioral principles and techniques in the conduct of therapy, and many of the cognitive-behavioral models rely to some extent upon behavioral assessment of change to document therapeutic progress. [...] one commonality among the various CBTs is their time-limited nature. In clear distinction from longer-term psychoanalytic therapy, CBTs attempt to effect change rapidly, and often with specific, preset lengths of therapeutic contact. Many of the treatment manuals written for CBTs recommend treatment in the range of 12–16 sessions [...] Related to the time-limited nature of CBT is the fact almost all applications of this general therapeutic approach are to specific problems. [...] A third commonality among cognitive-behavioral approaches is the belief that clients are, in a sense, the architects of their own misfortune, and that they therefore have control over their thoughts and actions [...] many CBTs are by nature either explicitly or implicitly educative.”
“Other criticisms pertain to research methodology. It has been argued that amalgamating placebo and waiting-list controls into a composite control condition confounds results (Parker, Roy, & Eyers, 2003). Specifically, Parker et al. asserted that participants assigned to a placebo condition are hopeful, because they assume that they are being treated, whereas participants assigned to a waiting-list control condition are discouraged, because they are not undergoing any treatment. They recommended that future research compare active treatments to different control conditions to disentangle potentially differing results. [...] In addition to limitations to the research base on the efficacy of CBT, there are limitations to efficacy research in general. Although RCTs are highly utilized and respected in efficacy research, the reelvance [sic] of their results to routine clinical practice has been questioned (Leichsenring et al., 2006). For example, the restrictive exclusion criteria of many RCTs may undermine the representativeness of the participants to the general population of people with the disorder. Also, comorbidities are common among disorders but are controlled for in RCTs through exclusionary criteria, or are simply not addressed. Also, researcher allegiance, or the tendency of the authors of a comparative treatment study to prefer one treatment over another, may introduce bias into the study design that results in findings supportive of the preferred treatment (Butler et al., 2006).”
“Most psychotherapists accept, at least in principle, the value of scientific inquiry, even while they differ widely in what they consider to be acceptable scientific methods. Despite this development, however, there has been a decided lag in the acceptance of scientific findings as the basis for setting new directions or for deciding what is factual among practicing therapists. Indeed for many practitioners, the true test of a given psychotherapy rests in both its theoretical logic and evidence from clinicians’ observations rather than data from sound scientific methods, even when the latter are available [...] What practitioners accept as valid hinges on both the methods used to derive results and the strength of their opinions. Practitioners prefer naturalistic research over randomized clinical trials, N = 1 or single-case studies over group designs, and individualized over group measures of outcome [...] They also tend to believe research favoring the brand that they practice over research that supports alternative psychotherapy approaches or equivalency among approaches. Since most psychotherapy research fails to comply with these values, psychotherapists often are quick to reject scientific findings that disagree with their own theoretical systems. Thus, while the reasons given for rejecting scientific evidence may be more sophisticated today than in the past, it may be no less likely to occur.”
“CT [cognitive therapy] is a specific form of the more general CBTs [...] Cognitive theory has been empirically based since its inception, in that it used findings from formal research to establish its theoretical principles. [...] CT may best be defined as the application of cognitive theory to a certain disorder and the use of techniques to modify the dysfunctional beliefs and maladaptive information-processing systems that are characteristic of the disorder [...] CT does not depend on the validity of insights into the nature of psychopathology for effectiveness in the therapeutic arena. First and foremost, cognitive theory emphasizes reliable observation and measurement in the assessment of the effects of treatment.”
“the efficacy of CT is differentially influenced by a variety of qualities characteristic of the patient and problem. Qualities such as patient coping styles, reactance levels, and complexity and severity of problems, among others, may influence the way that CT is applied. [...] One patient characteristic that has proven to predict patients’ response to CT is “coping style,” the method that an individual adopts when confronted with anxiety-provoking situations, and that typically is viewed as a trait-like pattern. CT has been found to be most effective among patients who exhibit an extroverted, undercontrolled, externalizing coping style [...] Internalization and externalization represent opposite poles on the traitlike dimension of coping style. Both coping styles may be used to reduce uncomfortable experience (i.e., provide escape or avoidance). Some patients cope by activating externalizing behaviors that allow either direct escape or avoidance of the feared environment. Alternatively, other patients may prefer behaviors (i.e., self-blame, compartmentalization, sensitization) that control internal experiences such as anxiety. Internalizing patients are typically characterized by low impulsivity and overcontrol of impulses, whereas externalizers generally exhibit highly impulsive or exaggerated behaviors. Additionally, internalizers tend to be more insightful and self-reflective. Internalizers typically inhibit feelings, tolerate emotional distress better than externalizers, and frequently attribute difficulties they encounter to themselves. On the other hand, externalizers tend to deny personal responsibility for either the cause or the solution of their problems, experience negative emotions as intolerable, and seek external stimulation. [...] Although the principles of treatment are the same as those for externalizers, the treatment of internalizing individuals is more complex.”
“The major impetus for psychotherapy integration comes from the evidence that no single school of psychotherapy has demonstrated consistent superiority over the others. Rather, psychotherapy research for specific problems, such as drug abuse or depression, has largely led to the conclusion that all approaches produce similar average effects [...] Unfortunately, the nonsignificance of treatment main effects often draws more attention than the growing body of research that demonstrates meaningful differences in the types of patients for whom different aspects of treatment are effective [...] For example, research indicates that for patients with symptoms of anxiety and depression [...] nondirective and paradoxical interventions are more effective than directive treatments in patients with high levels of pretherapy resistance (i.e., “resistance potential”[...]; and (3) therapies that target cognitive and behavior changes through contingency management [...] are more effective than insight-oriented therapies in impulsive or externalizing patients, but this effect is reversed in patients with less externalizing coping styles [...] The techniques of CT may be used with virtually any patient; however, the greatest benefit is achieved when the strategies or techniques are employed differentially, depending on patient dimensions such as coping style, type of problem, subjective distress, functional and social impairment, and level of resistance.”
“Patient resistance typically bodes poorly for treatment effectiveness, unless it is managed skillfully. It is generally assumed that some patients are more likely than others to resist therapeutic procedures. “Resistance” may be characterized as a dispositional trait and a transitory in-therapy state of oppositional (e.g., angry, irritable, and suspicious) behaviors. It involves both intrapsychic (image of self, safety, and psychological integrity) and interpersonal (loss of interpersonal freedom or power imposed by another) factors [...] “Reactance,” an extreme example of resistance, is manifested by oppositional and uncooperative behaviors. [...] Resistance is easily identifiable, and differential treatment plans for patients with high and low resistance are easily crafted. The successful implementation of these plans, however, is often quite a different matter. Overcoming patient resistance to the clinician’s efforts is difficult. It requires that the therapist set aside his or her own resistance to recognize that the patient’s oppositional behavior may actually be iatrogenic [...] therapists often [react] to patient resistance by becoming angry, critical, and rejecting, which are reactions that tend to reduce the willingness of patients to explore problems.” [This aspect of the treatment dimension was - perhaps not surprisingly - emphasized in Clark as well.]
I love Crawford’s lectures, and this one is great as usual. Much of this will presumably be review if you’ve explored wikipedia a bit (lots of good astronomy stuff there), but there’ll probably be some new stuff as well and her delivery is really good.
I’m very skeptical about some of the numbers presented in this lecture, and this kind of stuff – insufficiently sourced (/unsourced) numbers which are hard to look up, also on account of other information being constantly added to the mix – is an aspect of lectures which I really don’t like. Not a great lecture in my opinion, but I figured I might as well post it anyway.
As usual it’s annoying that you can’t see where the lecturer is pointing when talking about stuff on a given slide, but the lecture has some interesting stuff and it’s worth watching it despite this problem.
I’ve never really thought about myself as ‘gifted’, but during a conversation with a friend not too long ago I was reminded that my parents discussed with my teachers at one point early on if it would be better for me to skip a grade or not. This was probably in the third grade or so. I was asked, and I seem to remember not wanting to – during my conversation with the friend I brought up some motives I had (…may have had?) for not wanting to, but I’m not sure if I remember the context correctly and so perhaps it’s better to just say that I can’t recall precisely why I was against this idea, but that I was. Neither of my parents were all that keen on the idea anyway. Incidentally the question of grade-skipping was asked in a Mensa survey answered by a sizeable proportion of all Danish members last year; I’m not allowed to cover that data here (or I would have already), but I don’t think I’ll get in trouble by saying that grade-skipping was quite rare even in this group of people – this surprised me a bit.
Anyway, a snippet from the article:
“There are widespread myths about the psychological vulnerability of gifted students and therefore fears that acceleration will lead to an increase in disturbances such as anxiety, depression, delinquent behavior, and lowered self-esteem. In fact, a comprehensive survey of the research on this topic finds no evidence that gifted students are any more psychologically vulnerable than other students, although boredom, underachievement, perfectionism, and succumbing to the effects of peer pressure are predictable when needs for academic advancement and compatible peers are unmet (Neihart, Reis, Robinson, & Moon, 2002). Questions remain, however, as to whether acceleration may place some students more at risk than others.”
Note incidentally that relative age effects (how are the grades and other academic outcomes of individual i impacted by the age difference between individual i and his/her classmates?) vary across countries, but are usually not insignificant; most places you look the older students in the classroom do better than their younger classmates, all else equal. It’s worth having both such effects as well as the cross-country heterogeneities (and the mechanisms behind them) in mind when considering the potential impact of acceleration on academic performance – given differences across countries there’s no good reason why ‘acceleration effects’ should be homogeneous across countries either. Relative age effects are sizeable in most countries – see e.g. this. I read a very nice study a while back investigating the impact of relative age on tracking options of German students and later life outcomes (the effects were quite large), but I’m too lazy to go look for it now – I may add it to this post later (but I probably won’t).
ii. Publishers withdraw more than 120 gibberish papers. (…still a lot of papers to go – do remember that at this point it’s only a small minority of all published gibberish papers which are computer-generated…)
Nope, this is not another article about how drinking during pregnancy is bad for the fetus (for stuff on that, see instead e.g. this post – link i.); this one is about how alcohol exposure before conception may harm the child:
“It has been well documented that maternal alcohol exposure during fetal development can have devastating neurological consequences. However, less is known about the consequences of maternal and/or paternal alcohol exposure outside of the gestational time frame. Here, we exposed adolescent male and female rats to a repeated binge EtOH exposure paradigm and then mated them in adulthood. Hypothalamic samples were taken from the offspring of these animals at postnatal day (PND) 7 and subjected to a genome-wide microarray analysis followed by qRT-PCR for selected genes. Importantly, the parents were not intoxicated at the time of mating and were not exposed to EtOH at any time during gestation therefore the offspring were never directly exposed to EtOH. Our results showed that the offspring of alcohol-exposed parents had significant differences compared to offspring from alcohol-naïve parents. Specifically, major differences were observed in the expression of genes that mediate neurogenesis and synaptic plasticity during neurodevelopment, genes important for directing chromatin remodeling, posttranslational modifications or transcription regulation, as well as genes involved in regulation of obesity and reproductive function. These data demonstrate that repeated binge alcohol exposure during pubertal development can potentially have detrimental effects on future offspring even in the absence of direct fetal alcohol exposure.”
I haven’t read all of it but I thought I should post it anyway. It is a study on rats who partied a lot early on in their lives and then mated later on after they’d been sober for a while, so I have no idea about the external validity (…I’m sure some people will say the study design is unrealistic – on account of the rats not also being drunk while having sex…) – but good luck setting up a similar prospective study on humans. I think it’ll be hard to do much more than just gather survey data (with a whole host of potential problems) and perhaps combine this kind of stuff with studies comparing outcomes (which?) across different geographical areas using things like legal drinking age reforms or something like that as early alcohol exposure instruments. I’d say that even if such effects are there they’ll be very hard to measure/identify and they’ll probably get lost in the noise.
iv. The relationship between obesity and type 2 diabetes is complicated. I’ve seen it reported elsewhere that this study ‘proved’ that there’s no link between obesity and diabetes or something like that – apparently you need headlines like that to sell ads. Such headlines make me very tired.
v. Scientific Freud. Seemed like a relevant link to post now – I have an appointment with a shrink next week. I have been considering reading the Handbook of Cognitive Behavioral Therapy beforehand, but I haven’t gotten around to it.
vi. If people from the future write an encyclopedic article about your head, does that mean you did well in life? How you answer that question may depend on what they focus on when writing about the head in question. Interestingly this guy didn’t get an article like that.
I’m currently reading this book.
This is not the first economic history text I’ve read on ‘this’ topic; a while back I read the Kenwood and Lougheed text. However as that book ‘only’ covers the time period from 1820-2000 and does not limit the coverage to Europe I’ve felt that I’ve had some gaps in my knowledge base, and reading this book was one way for me to try to fill the gaps. The book also partly bridges the gap between Whittock (coverage ends around 1550) and K&L. K&L is a good text, and although this book is also okay so far I’m far from certain I’ll read the second volume as it seems unnecessary – part of the justification for reading this book was precisely that the time period covered does not perfectly overlap with K&L. Interestingly, without really having had any intention to do so I have actually over the last few years covered a very large chunk of British history (Britain was the biggest player in the game during the Industrial Revolution, so naturally the book spends quite a few pages on her); I’ve also in the past dealt with the Roman invasion of Britain, Harding had relevant stuff about Bronze Age developments, Heather had stuff about both the period under Roman rule and about later Viking Age developments, and of course then there’s Whittock. Include WW1 and WW2 book reading and offbeat books like Bryson’s At Home as well as stuff like Wikipedia’s great (featured) portal about the British Empire, which I’ve also been browsing from time to time, and it starts to add up – thinking about it, I’m probably at the point where I’ve read more (/much more?) British history than I have Danish history…
Anyway, back to the book. It has a lot of data, and I love that. Unfortunately it also spends some pages talking about macro models which have been used to try to make sense of that data (or was that actually what they were meant to do? Sometimes you wonder…), and I don’t like that very much. Most models assume things about the world which are blatantly false (which makes it easy for me to dismiss them and hard for me to take them seriously), a fact which the authors fortunately mention during the coverage (“the ‘Industrial Revolution’ in most growth models shares few similarities with the economic events unfolding in England in the 18th century”) – and I consider many of these and similar models to be, well, to a great extent a load of crap. An especially infuriating combination is the one where economic theorists have combined the macro modelling approach and historicism and have tried to identify ‘historical laws’. Mokyr and Voth argue in the first chapter that:
“A closer collaboration between those who want to discern general laws and those who have studied the historical facts and data closely may have a high payoff.”
To which I say: The facts/data guys should stay the hell away from those ‘other people’ (this was where I ended up – I called them different things in earlier drafts of this post). The views of people who’re working on trying to identify general Historical Laws should be disregarded altogether – they’re wasting their time and the time of the people who read their stuff. Those readers should read Popper instead.
The data included in the book is nice, and the book has quite a few tables and figures which I had to omit from the coverage. I’d say most people should be able to read the book and get a lot out of it, but people who’re considering reading it should keep in mind that it’s an economic history textbook and not ‘just’ a history text – “The approach is quantitative and makes explicit use of economic analysis, but in a manner that is accessible to undergraduates” – so if you’ve never heard about, say, the Heckscher–Ohlin model, there’ll be some material you won’t understand without looking things up along the way. But I think most people should be able to take a lot away from the book even so. I may be biased/wrong.
Below are some observations from the first three chapters; I’ve tried to emphasize key points for readers who don’t want to read it all:
“the transition to modern economic growth was a long-drawn-out process. Even in the lead country, the United Kingdom, the annual growth rate of per capita income remained less than 0.5 percent until well into the nineteenth century. Only after 1820 were rates of growth above 1 percent per annum seen, and then only in a handful of countries.” [a 'growth argument' was incidentally, if I remember correctly, part of the reason why K&L decided to limit their coverage to 1820 and later.]
“The population–idea nexus [the idea that larger populations -> more ideas -> higher growth] is key in many unified growth models. How does this square with the historical record? As Crafts (1995) has pointed out, the implications for the cross-section of growth in Europe and around the world are simply not borne out by the facts – bigger countries did not grow faster. Modern data reinforce this conclusion: country size is either negatively related to GDP per capita, or has no effect at all. The negative finding seems plausible, as one of the most reliable correlates of economic growth, the rule of law (Hansson and Olsson, 2006), declines with country size. [...] the European experience after 1700 [also] does not suggest that the absolute size of economies is a good predictor of the timing of industrialization.”
“Most “constraints on the executive” took the form of rent-seeking groups ensuring that their share of the pie remained constant. Unsurprisingly, large parts of Europe’s early modern history read like one long tale of gridlock, with interest groups from local lords and merchant lobbies to the Church and the guilds squabbling over the distribution of output. [...] None of the groups that offered resistance to the centralizing agendas of rulers in France, Spain, Russia, Sweden, and elsewhere were interested in growth. Where they won, they did not push through sensible, longterm policies. They often replaced arbitrary taxation by the ruler with arbitrary exactions by local monopolies. [...] Economically successful but compact units were frequently destroyed by superior military forces or by the costs of having to maintain an army disproportionate to their tax base. The only two areas that escaped this fate enjoyed unusual geographical advantages for repelling foreign invasions – Britain and the northern Netherlands. Even these economies were burdened by high taxation [...] A fundamental trade-off [existed]: a powerful central government was more effective in protecting an economy from foreign marauders, but at the same time the least amenable to internal checks and balances.”
“In many models of long-run growth, the transition to self-sustaining growth is almost synonymous with rising returns to education, and a rapid acceleration in skill formation. [...] Developments during the Industrial Revolution in Britain appear largely at variance with these predictions. Most evidence is still based on the ability to sign one’s name, arguably a low standard of literacy (Schofield, 1973). British literacy rates during the Industrial Revolution were relatively low and largely stagnant [...] School enrollment rates did not increase much before the 1870s [...] A recent literature survey, focusing on the ability to sign one’s name in and around 1800, rates this proportion at about 60 percent for British males and 40 percent for females, more or less at a par with Belgium, slightly better than France, but worse than the Netherlands and Germany [...] The main conclusion appears to be that, while human-capital-based approaches hold some attractions for the period after 1850, few growth models have much to say about the first escape from low growth.”
“The average population growth rate in Europe in 1700–50 was 3.1 percent, ranging between 0.3 percent in the Netherlands and 8.9 percent in Russia [...] Figure 2.1 [...] shows two measures of fertility for England, 1540–2000. The first is the gross reproduction rate (GRR), the average number of daughters born per woman who lived through the full reproductive span, by decade. Such a woman would have given birth to nearly five children (daughters plus sons), all the way from the 1540s to the 1890s. Since in England 10–15 percent of each female cohort remained celibate, for married women the average number of births was nearly six. The demographic transition to modern fertility rates began only in the 1870s in England, as in most of Europe, but then progressed rapidly. [...] population growth [after 1750] occurred everywhere in Europe. Annual rates of growth were between 0.4 percent and 1.3 percent, except for France and Ireland. Europe’s population more than doubled in 1800–1900, compared with increases of 32 percent in 1500–1600, 13 percent in 1600–1700, and 56 percent in 1700–1800 [...] population growth was, at best, weakly associated with economic development [...] [From] 1800–1900, France grew by 65 percent, from 29 million to 41 million. In the same period England and Wales grew from under 9 million to over 30 million, and Germany grew from about 25 million to 56 million.”
“Mortality, especially for infants, remained extremely high in eastern Europe. Blum and Troitskaja (1996) estimate that life expectancy at birth in the Moscow region at mid-century [~1850] was about twenty-four years, compared with life expectancies of around forty years in western Europe. Birth rates in eastern Europe were also much higher than in the west.”
“The population of Europe in 1815 was 223 million. By 1913, 40 million people had emigrated to the New World. [...] By 1900, more than a million people a year were emigrating to the United States, the primary destination for most Europeans. [...] More than half of some nationalities returned to Europe from the United States [...] Internally there was substantial migration of population from country to city as incomes rose. From 1815 to 1913 the rural population [in Europe] grew from 197 to 319 million. But the urban population expanded from 26 million in 1815 to about 162 million in 1913 (Bairoch, 1997).” [26 million out of 223 million is roughly 12 percent of Europe's population living in urban areas at that time; a number on the order of 10 percent is very small - it corresponds to the proportion of the English population living in towns around the year 1000 AD... (link).]
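For the numerically inclined, here’s a minimal sketch (plain Python, using only the Bairoch figures quoted above) of the urbanization shares implied by those numbers:

```python
# Urbanization shares implied by the Bairoch (1997) figures quoted above.
# All figures are in millions of people.
rural_1815, urban_1815 = 197, 26
rural_1913, urban_1913 = 319, 162

share_1815 = urban_1815 / (rural_1815 + urban_1815)  # 26/223
share_1913 = urban_1913 / (rural_1913 + urban_1913)  # 162/481

print(f"Urban share of Europe's population, 1815: {share_1815:.1%}")  # ~11.7%
print(f"Urban share of Europe's population, 1913: {share_1913:.1%}")  # ~33.7%
```

So on these numbers the urban share roughly tripled over the century, from a bit under 12 percent to about a third – which puts the smallness of the 1815 figure in perspective.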
“This positive correlation of fertility and income [they talk a little about that stuff in the text but I won't cover it here - see Bobbi Low's coverage here if you're interested, the Swedish pattern is also observed elsewhere] became negative in Europe in the period of the demographic transition after 1870, and there seems to be no association between income and fertility in high-income–low-fertility societies today. The numbers of children present in the households of married women aged 30–42 in both 1980 and 2000 were largely uncorrelated with income in Canada, Finland, Germany, Sweden, the United Kingdom, and the United States [...] This suggests that the income–fertility relationship within societies changed dramatically over time.”
“Between 1665 and 1800 total revenue in England rose from 3.4 percent of GDP to at least 12.9 percent. In France, meanwhile, taxes slipped from 9.4 percent in the early eighteenth century to only 6.8 percent in 1788 [...] In 1870 central government typically raised only between 20 and 40 percent of their revenue through taxes on wealth or income. The remainder came from customs and, especially after the liberalization of trade in the 1850s and 1860s, excise duties [...] In most countries the tax burden was often no higher in 1870 than it had been a century earlier. Most central governments’ taxes still amounted to less than 10 percent of GDP.”
“by 1870 institutions were more different across Europe than they had been in 1700. Suffrage where it existed in 1700 was generally quite restricted. By 1870 there were democracies with universal male suffrage, while other polities had no representation whatsoever. In 1700 public finance was an arcane art and taxation an opaque process nearly everywhere. By 1870 the western half of Europe had adopted many modern principles of taxation, while in the east reforms were very slow.”
I’ve read some articles etc. about this stuff before, but I’ve never read ‘the textbook’. I have now. Well, I’ve read a textbook anyway. I am not super impressed by the book, and I decided to give it two stars on goodreads. Maybe it deserves three; it’s in that neighbourhood.
So what’s the book about? Here’s what they write in the introduction:
“In this book the fundamental approach is to describe the classification of diabetes, risk factors for diabetic retinopathy and lesions of diabetic retinopathy, and explain the significance of these lesions in terms of progression of the disease, recommended treatment and consequences for vision. Methods of screening for diabetic retinopathy and other retinal conditions that are more frequent in diabetes or have similar appearances to diabetic retinopathy are also discussed.”
They deal with the main concepts and they provide a lot of examples and case histories along the way. As is always the case in books like these, many of the case histories are really quite depressing – I was considering skipping them altogether at one point after a particularly ‘bad one’, but I decided to read those parts anyway; they make up a substantial part of the book.
As you might have inferred from the remarks above, diabetic retinopathy is diabetes-related eye disease. How many diabetics are impacted by this? A rather large number, it turns out (well, I already knew that and I’ve talked about it before, but…):
“Diabetic retinopathy is a leading cause of adult blindness in the US, reported by Fong et al. in 2004 to result in blindness for over 10,000 people with diabetes per year. Moss reported the 10-year incidence of blindness in the Wisconsin Epidemiological study of Diabetic Retinopathy to be 1.8%, 4.0% and 4.8% in the younger-onset, older-onset taking insulin, and older-onset not taking insulin groups, respectively. Respective 10-year rates of visual impairment were 9.4%, 37.2% and 23.9%. [...] In the Wisconsin study, proliferative retinopathy occurred in 67% of people with type 1 diabetes for 35 or more years. One would therefore expect that two-thirds of people with type 1 diabetes would need laser treatment for proliferative diabetic retinopathy during their lifetime. [...] In patients with type 2 diabetes, the rate of proliferative diabetic retinopathy is not as high but it is estimated that 1 in 3 patients with type 2 diabetes will develop sight-threatening diabetic retinopathy requiring laser during their lifetime. [...] Despite major advances in treatment and early detection of diabetic eye disease, the ageing demographic and increased incidence of diabetes is resulting in greater numbers of diabetic visually impaired people in the population.” [my emphasis. Numbers differ across countries and there are a lot more numbers in the book, but these estimates provide some context; this is a complication that affects a huge number of diabetics.]
In the book they talk a lot about how you can use tiny (with sizes measured in microns!) and very short-lasting laser pulses to treat the damaged blood vessels in the eyes, and that stuff’s quite interesting. Equally interesting is the fact that clinicians seem to be treating patients without really knowing exactly why the treatment works:
“The effectiveness of focal laser treatment may be due, in part, to the closure of leaky microaneurysms, but the specific mechanisms by which focal photocoagulation reduces macular oedema is not known. Studies have shown histopathological changes and biochemical changes,[19,20] which have been suggested as mechanisms for improvement in macular oedema although some investigators have suggested alternative mechanisms for clearance of the oedema such as the application of Starling’s law and improved oxygenation. [...] the mechanism by which laser treatment improves the prognosis of sight-threatening diabetic retinopathy is ill-understood.”
A lot has happened when it comes to treatment over the last decades, as patients in the pre-laser era would often simply lose their vision because no good treatment options existed. A lot of people still do lose their vision to diabetes as mentioned above, but with the advent of laser treatments the prognosis has improved a lot. There are some adverse effects associated with these treatments, e.g. in the form of laser scars or scotomas and (paradoxical?) development of macular oedema afterwards (“McDonald showed that 43% of the treated eyes in his study developed increased macular oedema 6–10 weeks following laser treatment.”), and it doesn’t always work (“if there is ischaemia that involves the central fovea, laser treatment in isolation is unlikely to improve the vision.” “It is not uncommon to successfully treat one area of leakage and subsequently find leakage appearing in a completely different area around the fovea of the same eye.”). But it’s still a big step in the right direction. Laser therapy is however surgical management of tissue damage, and some people are of course hoping to develop pharmacological treatment options as well. In that context I should note that in a way it was fun to read a medical textbook written by people who know less about some aspects of the stuff I’m reading about than do people I’ve met personally (people like Toke Bek). Latanoprost is being evaluated in a clinical trial right now as a drug which might be used to slow the progression of diabetic retinopathy in diabetics, but they don’t talk about that at all in the ‘Future advances in the management of diabetic retinopathy’-chapter (on the other hand you can’t really blame them for not including this stuff, as that idea postdates the book…).
It should be noted – and they do this repeatedly throughout the book – that the damage to the small blood vessels in the eyes and the subsequent retinal ischaemia/bleeding etc. leading to vision loss in diabetics is strongly linked to factors such as glycemic control and (systemic) blood pressure. This means that improvements in glycemic control and blood pressure management will, if they can be achieved, also translate into better outcomes along these dimensions over time. A factor pulling in the other direction (‘more blind people’) is the high number of current and future undiagnosed type 2 diabetics who’ll incur extensive tissue damage without knowing it before getting their diagnoses:
“In the UKPDS study it was observed that up to 50% of [type 2] patients had some detectable form of tissue damage at diagnosis, the majority of this being background diabetic retinopathy. [...] Retinopathy is the commonest finding, with about 30% of all subjects newly diagnosed having detectable retinal lesions.”
This patient population also poses problems because these people will by definition not be included in national screening programs. A related point they do not touch upon in the book is of course that non-compliant patients, the ones most likely to benefit from participation, would also be expected to be less likely than other patient groups to participate in screening programs; so even in places where you have national screening programs and so on you’ll likely still have some ‘theoretically preventable’/‘excess’ diabetes-related blindness in the future. Perhaps I talk about screening programs as if I think they’re a good idea, and if that’s the case it’s because some forms of them are almost certainly pretty much a no-brainer – see e.g. this post. The book also spends a chapter on that stuff, unsurprisingly coming to the conclusion that screening is probably a good idea (there’s also consensus about which method of screening is best: “There is widespread agreement that digital photography is the best method of screening for sight-threatening DR.”). It’s worth noting in the context of the complication rates that it’s easier to spot eye damage than other types of tissue damage, and that this may provide part of the explanation for why this complication is so often found at diagnosis compared to other types of complications – here’s a relevant passage from the book:
“Retinopathy is often the easiest complication to detect because the smallest of lesions (microaneurysms) can be visualized long before any change to the subjective function of the eye would be apparent. Retinopathy tracks closely with nephropathy, and so careful screening of renal function needs to be carried out in those who have retinopathy and vice versa.”
The book has a lot more stuff, but I know that most readers probably aren’t too interested in this topic so I figured a rather limited coverage of the book would be preferable. One of multiple reasons why I did not give it a higher rating is that the authors repeat themselves quite a few times, covering the same stuff in multiple chapters. Unless you’re a diabetic there’s also no good reason to read the book, as it is quite technical – and most diabetics will probably find it hard to read.
This book actually probably didn’t really merit two posts, but given that I wrote a part one earlier on I felt I had to write a part two as well now that I’ve finished it. I’ve also recently read Josephine Tey’s The Daughter of Time and Agatha Christie’s Cards on the Table, but as mentioned earlier I’ve been thinking about getting rid of the fiction coverage here and so I won’t cover those here in any detail – all I’ll say is that Cards on the Table was awesome (5 stars on goodreads), and Tey was an enjoyable read (4 stars … do recall if you read it that it’s a work of fiction).
Goerling’s book is a neat little book – I liked it. It’s not super comprehensive, but it’s the kind of book that can be read without problems by both patients and their caregivers as well as doctors and other health care professionals. Many people will/would probably benefit from reading this book. Occasional talk about things like ‘Mindfulness-Based Stress Reduction’ subtracted a star or two along the way, but most of the stuff is actually quite interesting. I’ve added a few observations from the second half of the book below:
“With the favorable trend regarding survival of cancer in the Western world, there is an increasing focus among patients, clinicians, researchers, and politicians regarding cancer survivors’ health and well-being. Their number is rapidly growing and more than 3 % of the adult populations in Western countries have survived cancer for 5 years or more. Cancer survivors are at increased risk for a variety of late effects after treatment, some life-threatening such as secondary cancer and cardiac diseases, others might negatively impact on their daily functioning and quality of life. The latter might include fatigue, anxiety disorders and difficulties returning to work while depression does not seem to be more common among survivors than in the general population. [...] Today, the relative 5-year survival is 60–65 % for patients diagnosed with cancer (American Cancer Society 2012, Verdecchia et al. 2007). In Norway, cancer survivors alive ≥5 years from diagnosis represent 3.3 % of the total population (The Cancer Registry of Norway 2010). For some cancer types such as testicular cancer, breast cancer, and Hodgkin’s lymphoma, the 5-year relative survival exceeds 90 %. According to cancer types the most common survivor groups are survivors of female breast, prostate, colorectal, and gynecologic cancer (American Cancer Society 2012).”
“Treatment-related solid second cancers are usually diagnosed at a latency of 10–30 years after radiotherapy, and their development is related to the radiation dose within the target field, but also to scattered irradiation beyond the field borders. [...] During the last two decades increasing documentation has emerged that cytotoxic drugs in a dose-dependent manner are carcinogenic leading to an increased risk of leukemia [...], but also of solid tumors [...] Dependent of their previous treatment long-term cancer survivors may develop asymptomatic or symptomatic left ventricle dysfunction, heart failure, premature coronary atherosclerosis, arrhythmia, or sudden cardiac death, most often due to myocardial infarction (Lenihan et al. 2013). Mediastinal radiotherapy and treatment with certain cytotoxic drugs (antracyclines, trastuzumab) represent well-known cardiotoxic risk factors, with clear dose–effect associations to cardiac dysfunction.” [treatment for cancer can be really bad for you, but often the alternative isn't great either...]
“For the cancer survivor to be able to make the optimal decisions regarding own present and future health, they need information regarding the long-term health risks they face and how to best handle them. The literature indicates that today’s cancer survivors are not aware of their risks for later adverse health events [...] These findings might not only relate to lacking information per se. We must also assume that the survivors have an ambivalent wish for information about future health risks.”
“CBT strives to be evidence based and much effort has been put in scientific research, including large randomized controlled studies. In patients suffering from cancer, CBT has been demonstrated to improve anxiety and depressive symptoms, self-esteem, immune functions, quality of life, optimism, self-efficacy, compliance, coping effectiveness and satisfaction, and to decrease cancer-related fatigue, cortisol levels, pain, and distress (Andersen et al. 2007; Daniels and Kissane 2008; Greer et al. 1992; Hopko et al. 2005; Lee et al. 2006; Manne et al. 2007; Mefford et al. 2007; Moorey et al. 1998; Osborn et al. 2006; Penedo et al. 2007; Tatrow 2006; Witek-Janusek et al. 2008; Wojtyna et al. 2007).”
“psycho-oncological interventions seem to influence treatment adherence, but its relevance for survival is controversial (Chow et al. 2004; Smedslund and Ringdal 2004; Spiegel et al. 1989). A systematic Cochrane review examining the effectiveness of psychosocial interventions in breast cancer patients on survival outcome showed insufficient evidence for such an effect (Edwards et al. 2008).”
“In a very impressive paper Laurie Lyckholm (2001) reports on handling stress, burnout, and grief in the practice of oncology. Causes of stress are seen in insufficient personal or vacation time, a sense of failure, unrealistic expectations, anger, frustration, as well as feelings of inadequacy or self preservations, reimbursement, and other issues related to managed care and third party payers, and last but not least grieving. Burnout can manifest itself in substance abuse, marital conflict, overeating and substantial weight gain, higher frequency of mistakes in clinical care, inappropriate emotional outbursts, interaction problems, depression and anxiety disorders, and even suicide. Lack of or inadequate training of communication and management skills are also considered causes of burnout (Ramirez et al. 1996). In a survey of 7,288 physicians in the United States, 45.8 % reported at least one of the following symptoms of burnout: loss of enthusiasm for work, feelings of cynicism (depersonalisation), and low sense of personal accomplishment (Shanafelt et al. 2012).”
Parts of this book hit relatively close to home and I should probably have read something along these lines some years ago, rather than now. Anyway.
Some critical remarks first. The book is not super great and parts of it are just beyond horrible, so I don’t recommend it. I gave it two stars, but this one was closer to one star than to three. I wasn’t that impressed with Juth and Munthe (see also this post), but that book handles the screening stuff much better than this one does. Most of the authors of this book seem convinced that implementing some form of screening mechanism for depression in diabetics may be a good idea, but I’m far from convinced it can actually be justified. Cost aspects are somewhat neglected in the coverage, and cost-effectiveness is a key parameter in the justification process of screening initiatives; despite what one author would have us believe, there’s almost zero chance such a scheme will save money in the long run – preventative medicine almost never does (Glied & Smith included a somewhat comprehensive review of these things in their coverage) – and assuming otherwise is borderline arguing in bad faith. Especially problematic in this context is that although many of the authors seem to agree that a screening procedure on its own, without follow-up mechanisms in place to deal with patients after the identification phase, probably is not justified, while a scheme with such mechanisms in place may be (as they put it in the introduction: “Screening for emotional problems without a comprehensive management plan has not proven to be efficacious in reducing depression and emotional problems in people with diabetes”), they don’t really talk a great deal about how this requirement of implementing proper follow-up impacts the cost-effectiveness variable.
Another problem is that the literature seems to find that psychiatric interventions impact quality-of-life metrics a lot more than they do HbA1c (in this context you can think of the latter as a variable determining to a significant extent the likelihood of developing expensive diabetes complications in the future); some authors mention this, but they are not completely clear on how this affects the cost-benefit side of the equation. The basic idea here is that if depression leads to poorer self-care behaviours among diabetics (this is not really an assumption; it’s clear that this is the case), part of this depression-mediated behavioural change may relate to lower adherence to the treatment regimen, and if so one might think that psychiatric interventions might improve both quality-of-life measures and medical adherence/glycemic control measures. As mentioned, it’s not clear that there’s much of an effect on glycemic control – some studies have found statistically significant effects, but their clinical relevance is questionable. Quality-of-life improvements are nice, don’t get me wrong, but without associated improvements in glycemic control it gets harder to justify screening – you save a lot more money by preventing a person from going blind than you do by making the guy feel better.
Some more personal comments of a less critical nature are probably in order as well. I should note that one of the most important observations made in this book – and part of why I actually didn’t really like giving it such a low rating, because it’s a very neat insight – is that it made me aware of how I may have been thinking the wrong way about depression, depressiveness and related stuff. In the past, I’ve mostly thought about depression as a dichotomous variable; either you are suffering from (major) depression or you’re not – if you are, there are specific symptom complexes which should be expected/observed (long-term sleep disturbances, changes in appetite, and so on and so forth), and if you’re not, whatever is wrong, if anything, probably isn’t a big deal. I have been thinking this way about this stuff because that’s how the DSM-IV (and -V, if I’m not mistaken) approaches the topic – focused on symptoms, with specific and well-defined cut-offs. The conclusion drawn on my part was that I don’t suffer from depression, because it seemed I did not meet the criteria.
If you let go of the dichotomy and start thinking about depressiveness as a continuous variable, things change. For one thing they probably get somewhat iffier in terms of empirical stuff. Mood states can change a lot over short amounts of time, and ‘objective criteria’ like weight gain may be better than unobservable self-report measures – this is presumably all part of why current criteria are the way they are. However a potential problem is that you may miss out on a lot of relevant variation by upholding a strict dichotomy, because mood states are not distributed that way in the real world (they can take on more than two values). In some patient subpopulations upholding a strict demarcation may be a lot more problematic than in others, on account of different distributions of realized mood states within subpopulations. Diabetics are probably one of the groups where it makes a lot of sense to at least think a little about how to approach people who don’t quite make the formal cut-offs (given observations made in the psycho-oncology textbook I’m currently reading, cancer patients would be another relevant patient group – and no, these two diseases are not actually that different in terms of some of the associated emotional responses to the disease; when measuring fear of progression scores based on the Fear of Progression Questionnaire, Berg et al. (2011) e.g. found quite similar scores for diabetics and cancer patients (see Goerling, page 14)). Here are some relevant remarks from the book on this topic:
“Subclinical depression is a term used when an individual presents with depressive symptoms but does not meet the criteria for a diagnosis of clinical depression. Recent reports note that approximately one-third of people with type 1 diabetes and 37–43% of people with type 2 diabetes report symptoms of depression [56, 57]. These rates were far higher than the proportion of people who had been given an actual diagnosis of clinical depression. Rather than receiving treatment for depression, however, such individuals often have to cope with their symptoms alone. The impact on family, social life, and overall quality of life remains unknown to a large extent and is an area where further research is clearly needed. [...] The natural course of depression is to worsen [...]”
The group of individuals with subclinical depression is likely highly heterogeneous, and there are some complications when dealing with this group which matter when it comes to how to approach screening mechanisms. One problem is whether the psychological distress is directly diabetes-related or not (there are measures one can use to separate non-directly-diabetes-related psychological distress from other forms of psychological distress) – this matters because different intervention types are optimal for different patient subpopulations. Another problem is that poorly regulated diabetes may actually cause physiological symptoms which mimic symptoms of depression, and not all available screening tools which might be applied to the patient group take this into account.
With all that out of the way, a few observations from the book:
“In recent years, most research studying emotional problems in people with diabetes has focused on depression or elevated depressive symptoms. This has meant that depression in diabetes is the best understood emotional problem in people with diabetes. Depression rates in people with diabetes are roughly doubled compared to the general population. A meta-analysis of 42 studies demonstrated that clinical or major depression [...] occurred in 11.4% of people with diabetes, whereas the prevalence in nondiabetic people was 5%. People with diabetes also reported more intense depressive symptoms, without fulfilling the criteria for clinical or major depression. Elevated depressive symptoms were reported by 31% of diabetic patients, whereas only 14% of nondiabetic subjects reported elevated depressive symptoms. The doubling of depression rates in people with diabetes compared to nondiabetic people has been confirmed by a more recent meta-analysis.”
“The negative impact of the comorbidity of diabetes and depression on quality of life is greater than the sum of diabetes and depression alone, indicating an exponential detrimental effect of depression on quality of life in people with diabetes. Although depression is a rather common condition in chronic diseases, a WHO World Health Survey on quality of life in different chronic diseases (arthritis, asthma, angina, and diabetes) showed that quality of life was most impaired in diabetic patients with depression.”
“In a prospective study with 7-year follow-up, Black and colleagues demonstrated that the risk for macrovascular complications was more than three times higher if depressive symptoms were present in diabetic patients at the start of the study. The risk of developing microvascular complications or functional disabilities in diabetic patients with minor depression is increased by a factor of 8.6 or 6.9, respectively. Interestingly, the risk difference for late complications between those with mild and more severe depression was rather small. Thus, it seems that even milder forms of depression have to be taken seriously. [...] the experience of depressive symptoms that would not meet the diagnostic threshold for MDD is a risk factor for negative health outcomes in patients living with diabetes [...] data clearly demonstrate an incremental relationship between symptoms of depression and negative health outcomes in diabetes, a relationship observed even at subclinical levels of depression severity. [This] challenge[s] the model of MDD in diabetes, which conceptualizes the problem of depression as a categorical construct that is either present or not.”
“Until recently, there has been a paucity of evidence about the treatment of depression in people with diabetes, and consequently there has been uncertainty about the most effective and safe way to do so [...] The effectiveness of psychological interventions in people with diabetes has [however now] been demonstrated in a systematic review of 25 randomized controlled trials of psychological therapies, mostly CBT. Both psychological distress and glycemic control were improved in people receiving active psychological interventions. A further systemic review of 29 trials and meta-analysis of 21 trials by the same group showed that psychological interventions improved glycated hemoglobin by approximately 0.5% (5 mmol/mol) in children but not in adults. [...] recent reviews by David-Ferdon and Kaslow and prior work by Kazdin and Weisz highlight the following components as primary targets of CBT: (1) increase participation in pleasant activities (that enhance mood), (2) increase and improve social interactions, (3) improve conflict resolution and social problem-solving skills, (4) reduce physiological tension or excessive affective arousal, and (5) identify and modify depressive thoughts and attributions.”
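Incidentally, the “0.5% (5 mmol/mol)” figure in that quote lines up with the standard NGSP-to-IFCC conversion of HbA1c units (absolute values convert via an intercept, differences scale linearly). A minimal sketch of the arithmetic (function names are mine):

```python
# NGSP (%) <-> IFCC (mmol/mol) HbA1c conversion, using the standard
# master equation: IFCC = 10.929 * (NGSP - 2.15).

def ngsp_to_ifcc(ngsp_percent: float) -> float:
    """Convert an absolute HbA1c value from NGSP % to IFCC mmol/mol."""
    return 10.929 * (ngsp_percent - 2.15)

def delta_ngsp_to_ifcc(delta_percent: float) -> float:
    """A *difference* in HbA1c scales linearly; the intercept drops out."""
    return 10.929 * delta_percent
```

So an improvement of 0.5 percentage points corresponds to roughly 5.5 mmol/mol, which the book rounds to 5.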
“Diabetes management in older patients presents unique challenges. Clinical (e.g., comorbidity, complications) and functional (e.g., impairment, disability) heterogeneity in the older population require special attention. Most diabetes patients have at least one comorbid condition and as many as 40% have three or more distinct conditions.”
“Diagnosis and treatment of comorbid depression in older patients is a considerable challenge in routine diabetes care. Depression is frequently under-recognized and under-treated [51–54], with less than 25% of diabetes patients’ depression successfully identified and treated in clinical practice.”
“The risk of incident foot ulcers has been found to be increased twofold in individuals with comorbid depression compared to diabetic patients who are not depressed. Depressed patients with diabetic neuropathy are more prone to developing first foot ulcers than nondepressed individuals, independently of biological risk factors and foot care. [...] There is also strong evidence of an inverse association between diabetes complications and depression. Patients burdened by diabetes complications are more likely to develop depression than are those without complications, especially in the case of nephropathy and neuropathy. [...] Depression is common in patients with erectile dysfunction, which reflects a continuous interplay between diabetes-related and psychological factors. [...] There is substantial heterogeneity between type 1 and type 2 diabetes comorbidity with depression, which is partly explained by their different etiologies.”
“Overall, findings derived from reviews and individual studies suggest that more research-based evidence is needed to support the case for the widespread introduction of screening for depression in people with diabetes in primary care, or indeed in other settings. A recurrent message is that screening alone is unlikely to have a strong impact on patient outcomes unless case-finding is linked to other aspects of patient management. [...] it remains to be shown that formal pro-active screening has benefits over improved methods of incorporating recognition and management of depression into routine models of care of people with diabetes.”
I finished Lloyd et al., but I’ll cover that one later – I’m a bit behind on the book blogging as there are a few books I haven’t covered, but I don’t really give a crap about that right now. I might get to those books or I might not. On a related note I’ve been thinking about dropping the fiction book blogging altogether and just limiting coverage of those books to whatever I can be arsed to write on goodreads.
I usually don’t find it hard to justify spending time reading a specific book – I don’t have many non-inferior ways to spend my time – but in the grand scheme of things this one was/is particularly easy to justify reading. I consider it not particularly likely that I’ll get cancer (other causes of death are statistically much more likely, and many of them can be expected to kill me before, say, those prostate basal cells start acting up enough for me to get a cancer diagnosis), but assuming I’m still alive in a decade or so there’s a high likelihood someone in my family and/or social circle will have gotten cancer in the meantime. When that happens, it’s probably a good idea to have read a book like this. At least it can’t hurt. I should note that although I did not know this when I started out, some of the observations in the book are quite relevant to areas outside the cancer setting. For instance I behaved like a jerk towards a good friend last week, and I’d have at least to some extent decreased the likelihood of behaving in such a manner if I’d read and taken to heart the remarks on how to optimize communication strategies included in chapter 3 of this book before that specific social exchange took place.
On a related note, “Those who suffer from depression tend to withdraw from friendships and relationships, causing loneliness and isolation. Maintaining networks of family and friends may prevent this from happening.” This quote is actually from Lloyd et al. but I figured I should include it here; as that book also makes clear, the mental health profile of people with chronic diseases like DM is somewhat complicated and I’m not sure how to categorize my current state of mind, but there are some depressive thoughts there and I’m really trying to remind myself of stuff like this these days. Yesterday I went to the chess club despite having absolutely no desire to go at all, and today I went to a Mensa meeting for the first time in a few months – not because I wanted to, but because my social interaction patterns over the last few weeks in particular have been deeply problematic (i.e., I have had pretty much no social interaction).
Anyway, enough blather – below some observations from the first half of the book:
“Several meta-analyses and large multi-centre studies have shown that, during the time of cancer diagnosis, about 30 % of the patients suffer from a mental health condition (Mitchell et al. 2011; Singer et al. 2010, 2013a; Vehling et al. 2012). Less is known however about the course of those conditions during the cancer trajectory. Available evidence suggests that their frequency does not decrease considerably over time (Bringmann et al. 2008).
Known risk factors for mental disorders in cancer patients are pain, high symptom burden, fatigue, mental health problems in the past and disability [...] There are no consistent correlates of depression in cancer patients (Mitchell et al. 2011).”
“Vocational rehabilitation of cancer patients differs remarkably between countries. For example, while in Scandinavia about 63 % of all patients returned to work after a total laryngectomy (Natvig 1983) and 50 % did so in France (Schraub et al. 1995) only 11 % could return in Spain (Herranz et al. 1999). Predictors of successful return to work are flexible working arrangements, counseling, training and rehabilitation services, younger age, educational attainment, male gender, less physical symptoms and continuity of care (Mehnert 2011).”
“Although the data reviewing sex and/or gender as a primary variable in cancer is quite limited there is a body of literature that is highly informative and is worth a brief review. As it relates to psychological distress, women report more psychological distress overall than do men. This information has been confirmed by many international studies using a wide variety of screening instruments and in diverse cancer populations [...] In terms of willingness to report vulnerabilities based on gender, women do report more requests for help (Merckaert et al. 2010) and accept more help (Curry et al. 2002)”
“Pistrang and Barker found that male partner support (high empathy and low withdrawal) plays a pivotal role in the woman’s adaptation and psychological well-being (Pistrang and Barker 1995). [...] in a large study of caregivers, Kim et al. (2006) reported that female cancer patients felt that their male partners were very supportive when it came to practical tasks but that they did not provide the emotional support that was so important to them. In essence, men were much more comfortable with demanding and ongoing practical and physical tasks than with the emotional components of the experience. This misalignment has significant implications not only for couples but whenever men and women try to support and connect with each other during times of stress or crisis. [...]”
“One of the well-documented gender differences found in the literature is the stress response. When under stress, women have been shown to reach out to others and to ‘‘tend and befriend,’’ (Taylor et al. 2000) as an initial response to control their sense of danger and fear. Women feel secure in reaching out to others when trying to manage the stress associated to their vulnerability and do not experience any diminution of self-esteem by asking for help. [...] Unlike women, men may experience a sense of diminished self-esteem by sharing their vulnerabilities with others. Although women are adept at prospectively sharing their emotional concerns to reduce their immediate sense of threat, it is only in retrospect that men are generally comfortable sharing their fears and concerns with others, once the sense of threat is reduced to manageable levels. The ways in which many women and men manage their vulnerabilities (women seeking emotional connection and men seeking space and time to think) have significant implications within the context of caregiving. [...] when people are under stress they are more likely to revert into their habitual behavioral patterns. In essence, they become more like caricatures of themselves. There are some common behaviors that men and women produce in different frequencies that are generalizations (to be at least considered but never assumed) in the clinical setting.”
“although women are still the primary caregivers for seriously ill family members, men are increasingly taking on the primary caregiving role, from 25 % in 1987 to 39 % in 2004 (Kim et al. 2006, 2007).”
“There is growing evidence that early integration of palliative care—several months prior to death—not only reduces distress and improves quality of life, but also decreases health care utilization and lastly costs (Temel et al. 2010, 2011; Zhang et al. 2009). Evidence seems to be sufficient for the American Society for Clinical Oncology (ASCO) to recommend early palliative care as best practice in some cancer diagnoses (Smith et al. 2012).”
“Anxiety [...] plays an important if not dominant role in symptom perception and expression especially in pain. It is well known from multiple studies in neuropsychology and -physiology that uncertainty and pain are directly linked (Brown et al. 2008; Yoshida et al. 2013).”
“The lack of a common metric makes it difficult to precisely assess the extent of psychological impairment among cancer family caregivers, and the subgroup of caregivers who are at greatest risk; however, it is noteworthy that, across almost all metrics, caregivers consistently have anxiety, depression, and psychological distress rates two or more times that of the general population (Kurtz et al. 2004; Grov et al. 2005; Grunfeld et al. 2004; Northouse et al. 2001; Williams et al. 2013). The lack of precision in the research literature around caregiver psychological impairment in no way obscures what is undoubtedly a major burden for cancer family caregivers. Several studies which concurrently measured psychological impairment in patients and family caregivers, found the family caregivers had higher rates of impairment than the patients with cancer (Braun et al. 2007; Kim et al. 2005; Matthews 2003; Mellon et al. 2006).”
Here’s what I wrote on goodreads:
“The book is well sourced and actually does a good job of covering much of the material. But the editor has done a poor job, and as a result the book seems very sloppy compared to similar scientific publications. There are multiple spelling errors and typos along the way, and it frankly seems as if the book was ‘published too fast’, before all the errors could be corrected. At first I punished this severely when I rated it by only giving the book 2 stars, but I realized this was too harsh. There’s a lot of interesting stuff included in the book.”
Here’s the kind of thing I’m talking about:
“Numerous cardiovascular abnormalities may be encountered in obese subjects (Table 6.4) it is not written properly in the PDF files that I have but this version seems correct. Health service usage and medical costs associated with obesity …”
That comment was one of a kind (fortunately), but there are a lot of errors and typos. At one point they talk about a marginally insignificant finding with an associated P-value of 0.52. This kind of stuff makes you look sloppy. The book is a Wiley-Blackwell publication and you kind of expect a bit more from books like these.
I’ve dealt with many of the topics covered in the book before (e.g. here, here and here, Khan Academy, etc.). I got the book in part to have a book in which I knew I could easily find a reference if/when I needed one, so that I wouldn’t have to look around a lot, and I think it’ll serve that purpose reasonably well. I gave the book 3 stars on goodreads. The book deals with many of the things you’d expect a book like this to cover: lipid and lipoprotein metabolism, insulin resistance and its role in cardiovascular disease, the obesity epidemic, hypertension, type 2 diabetes and the metabolic syndrome, tobacco use and cardiovascular disease, and the role of physical exercise and nutrition, among other things. There was some interesting stuff in the book, but not a lot which was all that surprising. I really liked parts of chapter 11 on diabetes management and cardiovascular risk reduction; the chapter went over some reviews and a few major studies well known to people who’re interested in these things (ACCORD, ADVANCE), and the interpretation of the data by the author was somewhat different from interpretations I’ve seen in the past. One main point in the chapter is that lowering of HbA1c may be more effective in preventing cardiovascular events/disease progression among patients without overt cardiovascular disease; the argument being that lowering of blood glucose may protect vessels from getting damaged, but once they’re damaged lowering of HbA1c may not make much of a difference because it’s basically too late (in part because glycemic control may play a greater relative role in the early course of the disease process, compared to other factors, than it does in the later stages, where other mechanisms may conceivably take over to a greater extent – he doesn’t spell this out explicitly but I’d be surprised if he has not been thinking along those lines).
In terms of previous trials looking at the link between glycemic control and cardiovascular disease (CVD), researchers have usually looked disproportionately at diabetics with manifest CVD; this is understandable as these patients are high risk. But such applied selection mechanisms in the past may mean (among other things) that these studies may have been underpowered to find the effects they were looking for. This is an interesting line of argument I have not seen before. If you’re wondering why this is important, it’s important because whereas the link between small-vessel disease and glycemic control is incontrovertible and has been for a long time, the link between macrovascular complications (CVD, etc.) and glycemic control has long been questionable, with a lot of mixed findings. Study selection designs and similar mechanisms may help partially explain why previous studies have not been able to establish a clear relationship. There are of course other complicating factors as well. As I think I’ve said before, until it’s perfectly clear to me that glycemic control and macrovascular disease are unrelated (or at least until we know in more detail how they are related), I’ll pretend that better glycemic control may have a protective effect on both small and large blood vessels. Note that the reason why this is important is also that diabetics make up a huge proportion of all heart disease patients; in Denmark the Danish Endocrine Society noted in a report published a few years ago (I can no longer find it online, unfortunately) that roughly half of all Danish patients with chronic ischaemic heart disease, AMI or heart failure have diabetes (of course a lot of them didn’t know that they did, but that’s a different discussion).
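The power argument can be made concrete with a back-of-the-envelope calculation. Below is a minimal sketch of my own (not from the book): an approximate power calculation for a two-arm trial comparing event rates, using the standard normal approximation for a two-sided two-proportion test. The illustrative numbers are assumptions picked only to show the shape of the problem: if the true effect of glycemic control is small in patients with manifest CVD, a trial sized for a larger effect will be badly underpowered.

```python
from math import sqrt, erf

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_arm_power(p_control: float, p_treated: float,
                  n_per_arm: int, z_alpha: float = 1.96) -> float:
    """Approximate power of a two-sided two-proportion z-test,
    normal approximation, n_per_arm patients in each arm."""
    p_bar = (p_control + p_treated) / 2.0
    se_null = sqrt(2.0 * p_bar * (1.0 - p_bar) / n_per_arm)
    se_alt = sqrt(p_control * (1.0 - p_control) / n_per_arm
                  + p_treated * (1.0 - p_treated) / n_per_arm)
    z = (abs(p_control - p_treated) - z_alpha * se_null) / se_alt
    return normal_cdf(z)

# Illustrative (assumed) numbers: with 2000 patients per arm, cutting
# a 20% event rate to 16% is detected most of the time, while cutting
# it only to 19% is detected rarely.
big_effect = two_arm_power(0.20, 0.16, n_per_arm=2000)
small_effect = two_arm_power(0.20, 0.19, n_per_arm=2000)
```

Same sample size, same significance level; the only thing that changed is the size of the true effect, and the power collapses.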
I’ve added some observations from the book below as well as a few comments:
“a general rule is that CVD risk approximately doubles for each 20mmHg increment of systolic BP and 10mmHg increment of diastolic BP above 115/75mmHg [...] a substantial excess risk of stroke death among those who are overweight or obese may be largely accounted for by a higher blood pressure.”
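The quoted rule of thumb can be written out as a small back-of-envelope calculation. This is just my own sketch of the doubling rule as I read it (the function name and parameters are mine, and this is obviously an approximation, not a clinical risk model):

```python
# Sketch of the quoted rule of thumb: CVD risk roughly doubles for each
# 20 mmHg of systolic BP above a 115 mmHg baseline (equivalently, each
# 10 mmHg of diastolic BP above 75 mmHg).
def relative_cvd_risk(systolic_bp, baseline=115.0, doubling_step=20.0):
    """Relative risk versus the 115 mmHg systolic baseline."""
    return 2.0 ** ((systolic_bp - baseline) / doubling_step)

print(relative_cvd_risk(135))  # one 20 mmHg step above baseline -> 2.0
print(relative_cvd_risk(155))  # two steps -> 4.0
```

So by this rule a systolic pressure of 155 mmHg corresponds, very roughly, to a quadrupling of risk relative to 115 mmHg.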
“Despite the fact that obesity has been shown to be an independent risk factor for CVD, many studies have reported that obese patients with established CVD have a better prognosis than do patients with ideal bodyweight; the so-called “obesity paradox.” [...] The improved survival of obese individuals is paradoxical principally because of the assumption that excessive weight is always and invariably injurious. As a matter of fact, among patients with congestive heart failure, subjects with higher BMI are at decreased risk for death and hospitalization compared with patients with a “healthy” BMI. Further, obesity was associated, in a prospective cohort study, with lower all-cause and cardiovascular mortality after unstable angina/non-ST-segment elevation myocardial infarction treated with early revascularization. The obesity paradox may reflect the lack of discriminatory power of BMI to adequately reflect body fat distribution [20,87,90]. Since BMI measures total body mass, i.e. both fat and lean mass, it may better represent the protective effect of lean body mass on mortality. This negative confounding may have been under-appreciated in prior studies that did not adjust for measures of abdominal obesity. It is possible that the favorable prognosis implications associated with mildly elevated BMI might actually reflect intrinsic limitations of BMI to differentiate adipose tissue from lean mass. The lack of specificity of BMI could dilute the adverse effects of excess fat with the beneficial effects of preserved or increased lean mass. [...] Another issue to consider is that normal-weight patients may have a significantly higher percentage of high-risk coronary anatomy compared with obese patients. [...] Another limitation in most studies reporting an obesity paradox in patients with CVD is that non-intentional weight loss, which would be associated with a poor prognosis, is not assessed as BMI is measured only at the beginning of the study.
Patients who have decompensated heart failure may lose weight because of extensive caloric demands associated with the increased work of breathing [...] the excess health risk associated with a higher BMI declines with increasing age. An explanation for the lack of a positive association between BMI and mortality at older ages is that, in older persons, higher BMI is a poor measure of body fat and may simply represent a measure of increased physical activity with preserved lean mass. Sarcopenic obesity, which is defined as excess fat with loss of lean body mass, is a highly prevalent problem in the older individual. [...] in view of the importance of body fat distribution, one could argue that, instead of targeting bodyweight per se, one should pay more attention to the WC [waist circumference] and conservation of lean mass as a critical goal in intervention programs.”
“Self-reported diabetes mellitus is often used in studies, but that approach underestimates the true prevalence of diabetes mellitus, and may misclassify a sizable fraction of the participants. [...] it has been estimated that the lifetime risk of T2DM for persons born in the USA in 2000 is approximately 33% for men and 39% for women.”
“Summary analyses have reported that about 65% of deaths among diabetic patients are from vascular or heart disease, 13% are from diabetes itself, 13% are from neoplasms, and the rest are from other causes. Most data concerning diabetes and death in adults are concerned with T2DM, and the limited data on mortality associated with type 1 diabetes mellitus have suggested that approximately one-third are from diabetes itself, one-third are from kidney disease, and one-third are from cardiovascular disease [15,16].” [I should note that some of these numbers sound wrong to me, but for now I'll just report the numbers. I may have a closer look at the studies later. Note that 'deaths from diabetes' is a variable which is incredibly hard to get right in general; everybody dies, but diabetics die faster - deaths incontrovertibly 'directly attributable' to diabetes like DKA or hypoglycemic coma don't make up all the 'excess deaths'.] “Researchers have investigated the effect of diabetes on life expectancy. An Iowa study showed that estimated life expectancy was 59.7 years at birth for diabetic men and 69.8 years in diabetic women, and it was estimated that diabetes reduced the lifespan by 9.1 years in diabetic men and 6.7 years in diabetic women. From US national survey data it has been estimated that men known to have diabetes at age 40 years will lose 11.6 life-years and similarly affected women will lose 14.3 life-years.” [Again, for now I'll just report the numbers...]
“The Centers for Disease Control reported that there were 8 million diabetic American adults with CVD in 1997 and the number increased to more than 11 million in 2007 [...] reports suggest that diabetic patients continue to experience CVD at a high rate and are surviving, which has resulted in an increased prevalence of diabetic patients with CVD. [...] Fewer diabetes complications such as mortality, renal failure, and neuropathy have been observed for adult T1DM patients in the Pittsburgh Epidemiology of Diabetes Complications Study over recent years. On the other hand, risk of proliferative retinopathy, overt nephropathy, and clinical CAD have not declined over the long-term follow-up interval of 30 years. [...] Overall 1-, 2-, and 5-year survival after myocardial infarction in a population-based Swedish cohort was 94%, 92%, and 82%, respectively, in non-diabetic patients and 82%, 78%, and 58%, respectively, in diabetic patients.” [I.e., the proportion of diabetics who can expect to survive one year after an MI corresponds to the proportion of non-diabetics who can expect to survive five years.]
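The bracketed comparison above can be spelled out explicitly. The data layout below is my own; the numbers are the book's:

```python
# Post-MI survival in the quoted Swedish cohort, as fractions,
# keyed by years after myocardial infarction:
survival = {
    "non-diabetic": {1: 0.94, 2: 0.92, 5: 0.82},
    "diabetic":     {1: 0.82, 2: 0.78, 5: 0.58},
}

# One-year survival among diabetics (82%) equals five-year survival
# among non-diabetics (82%) - the point made in the bracketed comment:
print(survival["diabetic"][1] == survival["non-diabetic"][5])  # True
```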
“In the mid-1990s there was considerable interest in the potential benefit of antioxidant nutrients and CVD risk reduction [100–103]. Since that time a series of randomized controlled intervention trials have failed to demonstrate a benefit of vitamin E or other antioxidant vitamin supplementation on CVD risk [104, 105]. The most recent work focusing on vitamins C and E confirms these earlier trials. At this time the data do not support a recommendation to use antioxidant vitamins for the prevention or management of CVD. [...] The three major dietary omega-3 polyunsaturated fatty acids (PUFAs) are alpha-linolenic acid (ALA, 18:3n-3), eicosapentaenoic acid (EPA, 20:5n-3), and docosahexaenoic acid (DHA, 22:6n-3). The latter two fatty acids are sometimes referred to as very-long-chain n-3 fatty acids. [...] a number of studies have reported an inverse association between dietary n-3 fatty acids, CVD and stroke risk. Intervention data have demonstrated that EPA and DHA, but not ALA, benefit cardiovascular outcomes in primary and secondary prevention studies [...] Of note, the relationship between arrhythmia and EPA and DHA has recently been questioned. The major source of ALA in the diet is soybean and canola oils, whereas the major source of EPA and DHA is marine oils found in fish.”
“The lipoproteins are defined by their density, for example, very low density (VLDL), low-density (LDL), and high-density (HDL). In this instance, “density” is mostly related to the triglyceride and cholesterol content; the more lipids in a lipoprotein the lower its density, as measured by how readily it floats toward the top of a tube during ultracentrifugation. TG-rich lipoproteins transport an energy source, triglyceride, to muscle and adipose tissue for use and storage. TG-rich lipoproteins also contain cholesterol, and can deliver the cholesterol to peripheral tissues and the arterial wall. LDL is a transporter of primarily cholesterol from the liver to peripheral tissues. HDL also functions to transport cholesterol but in the reverse direction as VLDL and LDL, from peripheral tissues to the liver. Lipoproteins also are required to transport fat-soluble vitamins.”
“Relatively consistent evidence indicates that increasing the carbohydrate content of the diet at the expense of fat results in dyslipidemia [7–9]. The majority of the evidence suggests that carbohydrate-induced hypertriglyceridemia results from an increased rate of hepatic fatty acid synthesis [10,11] and subsequent production of hepatic triglyceride-rich particles, very-low-density lipoprotein (VLDL) [...] Within the context of a stable bodyweight, replacement of dietary fat with carbohydrate results in higher triglyceride and VLDL cholesterol concentrations, lower HDL cholesterol concentrations and a higher (less favorable) total cholesterol to HDL cholesterol ratio [16–21]. [...] Sedentary individuals characterized by visceral adiposity are at particularly high risk for carbohydrate-induced hypertriglyceridemia. [...] Studies performed in the mid 1960s demonstrated that changes in dietary fatty acid profiles altered plasma total cholesterol concentrations in most individuals [...] Many studies have since confirmed these early observations using a variety of different experimental designs. When carbohydrate is displaced by saturated fatty acids, LDL cholesterol concentrations increase, whereas when carbohydrate is displaced by unsaturated fatty acids LDL cholesterol concentrations decrease, with the effect of polyunsaturated fatty acids greater than monounsaturated fatty acids [...] When carbohydrate is displaced by saturated, monounsaturated or polyunsaturated fatty acids, HDL cholesterol concentrations are increased, with saturated fatty acids having the greatest effect and polyunsaturated fatty acids having the least effect.”
“Some agents affect HDL and TG in the same direction. Drinking alcoholic beverages and postmenopausal estrogen treatment raise HDL and TG. Testosterone lowers HDL and TG. Since we do not have a way as yet to evaluate the function of HDL in reverse cholesterol transport [one of the chapters spends a significant amount of time on that one - there's a lot more to be said about that stuff than what's in the wiki article], we cannot be confident that these or any changes in HDL concentration affect atherosclerosis in the direction expected from the relation of HDL concentrations and CHD risk [59,65]. There is also no clear relation between genetic variants in enzymes or transporters in HDL metabolism that cause either very low or high HDL cholesterol concentrations and CHD.” [HDL is usually termed 'good cholesterol', but in reality it's much more complicated than that. We are very sure by now that high 'anything which is not HDL' is bad for you, though - in fact:] “The combination of VLDL cholesterol and LDL cholesterol, named “non-HDL cholesterol”, or perhaps better “atherogenic cholesterol,” is a measurement that generally predicts CVD better than LDL-C [LDL-Cholesterol].”
I decided to write another post about this book (first post here). I’m almost finished with Metabolic Risk for Cardiovascular disease which I’ve been reading over the last few days, but I figure coverage of that one can wait a little (it’s not that great).
I have tried to pick out passages in the coverage below which should not be too hard for the ‘uninitiated’ to understand, and I hope that I have been successful. I’ve added links here and there to help make the post easier to read. I don’t really have a lot of new stuff to say about the book, so I’ll get right to it.
“Badlands occur worldwide and are especially common in the Northern Great Plains of North America. [...] Badlands worldwide are formed by the forces of gravity and running water, especially by the process of slopewash erosion, which reaches its maximum potential under the combination of: (1) steep local topography; (2) weakly cemented, poorly indurated, readily eroded bedrock; and (3) a semiarid continental climate that supports a sparse vegetation cover, yet delivers precipitation in relatively high-magnitude, short-lived convective storms. This combination ensures that hillslopes are highly vulnerable to erosion, and erosive forces of running water reach their maximum expression on Earth. [...]
Erosion has been the primary landscape-forming process [in the Great Plains badlands] over the past 5 million years during Plio-Pleistocene time (Bluemle 2000). Naturally, this erosion was not uniform. Well-indurated, freshwater limestone (lake) deposits and coarse-caliber, gravel (stream) deposits served as resistant caprocks on low-lying parts of the Miocene landscape. As the overall landscape was eroded, these former low spots (lakes and rivers) were more resistant to erosion and became isolated as topographic high spots – modern-day buttes and mesas [...] This process of creating high topographic points from formerly low points is known as topographic inversion.”
“Conceptually, the formation of Grand Canyon should be very simple to explain. The Colorado River floods annually in the spring from snowmelt in the Rocky Mountains. These floods exert large tractive (erosional) forces against the bed of the river. Over many millions of years the river cut the magnificent canyon into the adjacent plateaus. In actuality, the processes are far more complicated and have not been completely explained.”
“The evolution of the quartzite landscape of the Gran Sabana has been a very long-term process. The rocks are very ancient, the region geologically stable, and the highest planation surfaces, forming the tepui summits and the plains of the surrounding Gran Sabana have been exposed for more than 70 million years, probably since the mid-Mesozoic (Jurassic?) [...] It is this aspect of very, very, long periods of time for weathering, longer than most places on Earth, which is probably critical for the development of the striking landscapes of the Gran Sabana.” (Below a picture of what it looks like, from the wiki:)
“The Iguazu Falls are one of the most beautiful in the world because of the combination of a high and wide structural step across a fluvial system with large water discharge and the tropical environmental location that sustains an exuberant forest and high biodiversity. The geology of the area consists of three layers of basalts that give a staircase-type shape to the falls. The Iguazu River is about 1,500 m wide above the falls and forms many rapids between rock outcrops and small islands. The falls have a sinuous arch-like head 2.7 km long, and part of water volume enters a canyon 80–90 m wide and 70–80 m deep, forming the spectacular “Garganta do Diabo” (Devil’s Gorge). Part of river water enters the canyon by its left side and generates a front with 160–200 individual falls that form a unique wall of water during floods. Although no absolute ages exist on the evolution of the fluvial system, it has been suggested that the falls have been continuously wandering upstream to its present position by progressive headwater erosion at a rate of 1.4–2.1 cm/year in the last 1.5–2.0 million years.”
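As a quick plausibility check on the quoted retreat figures (my arithmetic, not the book's):

```python
# Total upstream retreat of the falls implied by the quoted headward-
# erosion rates: 1.4-2.1 cm/year sustained over 1.5-2.0 million years.
low_m = 0.014 * 1.5e6   # slowest rate over the shorter interval, in metres
high_m = 0.021 * 2.0e6  # fastest rate over the longer interval, in metres
print(f"{low_m / 1000:.0f}-{high_m / 1000:.0f} km of total retreat")  # 21-42 km
```

Tens of kilometres of retreat from a rate measured in centimetres per year, which is rather the point of the chapter.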
“South America is drained by huge and complex drainage systems, especially in tropical areas, and [...] the largest South American rivers contribute to 28% of the total fresh water to the oceans [...] The Paraná River basin started to develop concurrently with the rifting processes, related to the opening of the South Atlantic, when Gondwanaland was dismembered. River valley formation surely started after the prevailing desert conditions during part of the Cretaceous age, when a big sand sea spread along a large part of southeastern Brazil (Caiua Group). During this time the development of endorheic drainage under arid to semi-arid conditions could be the first evidence of a fluvial system being present in the former Paraná Basin after the immense basaltic extrusion event linked to the Gondwana partition. Despite its long history, the Paraná fluvial system is not well-understood [...] there are strong controversies about the age of the present Paraná River system.”
“The Dry Valleys comprise an ice-free part of the Transantarctic Mountains in Antarctica, bounded by the East Antarctic Ice Sheet on the landward side and a coastal ice dome at the coast. Weathering rates are among the lowest on Earth and reflect the persistent hyper-arid, cold polar desert climate. Some buried ice has survived for over 8 million years. The main escarpments and valleys were created on a passive continental margin as Antarctica split from Australia some 55 million years ago. [...] Geomorphological evidence of weathering rates suggests that the climate of the last 13.6 million years has been stable and remained dry and cold throughout. The important implication is that the ice sheet that controls the climate has also been present throughout. [...] Intriguingly, the climate and processes of the higher parts of the Dry Valleys overlap with conditions on Mars.”
“If there is a single landform that might typify the landscape of Africa, then this would be an isolated bare rock hill or mountain rising from the vast plains. Hills of this sort struck an early German explorer of East Africa, Walter Bornhardt, so much that he invented a special term and called them inselbergs, literally meaning “island hills” [...] Inselbergs vary in terms of lithology, dimensions, height, and shape, but have a few characteristics in common. First, as the name implies, they stand in isolation and are surrounded by a flat or gently rolling topography. Second, the topographic boundary between the hillslope and the plain around is fairly abrupt. Third, inselbergs are residual landforms due to wearing down of the surrounding terrain. Hence, volcanic cones and up-faulted blocks are traditionally not regarded as members of the inselberg family. The vast majority of inselbergs is built by strong and resistant rock, and very often this rock is granite [...]
Spitzkoppe is one of the tallest, if not the tallest inselberg on Earth. Its summit rises to 1,728 m a.s.l. and overlooks the adjacent plains by 600 m. [...] The evolution of the granite landscape of the Spitzkoppe inselbergs has been a complex and long-lasting process. The granite belongs to the family of Early Cretaceous (137–124 Ma ago) intrusions [according to the wiki, 'The granite is more than 700 million years old' - having read the chapter about it in this book, I'd say that claim merits a 'citation needed'] [...] The intrusions were emplaced at depths of several km below the ground surface that existed at the time. When the granites of Spitzkoppe were exposed to daylight is not known with precision [...] The mean denudation rate was high in the late Cretaceous/early Tertiary, and perhaps several kilometers of rock were lost, but then surface lowering proceeded at a lower rate, decreasing further since the Miocene. Cosmogenic isotope dating suggests that the mean denudation rate in the last 10 million years has been of the order of 5 m/1 million years. Hence, by simple extrapolation and assuming (unrealistically!) no denudation at the site of the future inselberg, 120 million years would be required to produce a 600 m high residual hill. Allowing for an increasing denudation rate prior to 10 million years ago, these rates indicate that the tops of the inselbergs may have been exposed as early as in the late Cretaceous, ~80–70 million years ago. Over time, their height increased as the surrounding terrain built of less resistant rock was worn down. Evidently [...] inselbergs have a very long history”
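The book's extrapolation is easy to verify with the figures quoted above (my arithmetic):

```python
# At the cosmogenic-dated rate of ~5 m per million years, lowering the
# surrounding plains by the inselberg's 600 m of relief - assuming,
# as the book does (unrealistically), zero denudation of the inselberg
# itself - would take:
relief_m = 600
denudation_rate = 5  # metres per million years
print(relief_m / denudation_rate, "million years")  # 120.0 million years
```

Which is the 120 million years mentioned in the quote, and why allowing for faster earlier denudation pushes the exposure of the summits back to roughly the late Cretaceous rather than the Early Cretaceous emplacement age.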
“The Afar Triangle is a barren lowland bounded by the Red Sea and the two blocks of Ethiopian Highlands [...] its terrain is a casebook of tectonic geomorphology. Plate divergence is at its most obvious where the Red Sea has opened, and is still opening, between the Arabian and African plates. The African plate is breaking apart along the well-known East African Rifts, separating the Somalian plate from the main continental block (often known as the Nubian plate in the north). These three divergent boundaries have a triple junction at the Afar. The Triangle is the one place where the coastlines and plateau margins cannot be fitted neatly back into their pre-divergent entity – because the locally excessive constructive process of basalt generation has created anew the youthful lowland that is the Afar. [...] ‘Hostile environment’ is a term tailor-made for the awful, hot, barren desert of the Afar [...] Just one river enters it, and none leaves it. A few salt lakes contain almost the only water not yet lost to solar evaporation. Daily temperatures are 30–40°C in the cool of winter; summer regularly sees shade temperatures of 50°C on the floor of the Danakil Depression – and there is no shade.”
“The largest single feature of the Afar is the Danakil Depression, which descends to 126 m below sea level over the line of current plate divergence, and would be larger except that half its floor is occupied by shield volcanoes. [...] Of the 34 volcanoes listed within the Afar Triangle, five have recorded activity within historical time. The largest and most frequently eruptive is Erta Ale, rising from the floor of the Ethiopian sector of the Danakil Depression. It is a classic shield volcano [...] Its perimeter is more than 100 m below sea level within the depression, and its summit rises to 613 m above sea level [...] Erta Ale is unique in that it has contained lava lakes that, between them, have been persistently active for at least 100 years [...] The currently active lake lies within the central vent, which is a spectacular pit crater, developed by collapse when magma pressure declined beneath it. Only 60 m across when first recorded in 1968, it is now 150 m across, and about 80 m deep. A lava lake normally covers all or part of its floor, and has periodically overflowed. The lava has a temperature of about 1,200°C, while the rafts of chilled crust that cover most of its lake surface are at about 500°C. [...] The continuing survival of Erta Ale’s lava lake relies on a substantial heat supply to match its thermal loss into the atmosphere [...] This heat supply is from rising magma that is feeding a zone of active emplacement of dykes and sills. Within the immediate vicinity of the volcano, these intrusions currently fill new fissures to keep pace with the plate divergence, so that they largely prevent further fault-related subsidence in the Erta Ale sector of the main graben.”
I finished the book today – I liked it and gave it 3 stars on goodreads.
The Earth has a lot of interesting and beautiful places. Having a closer look at some of these wonderful places and trying to understand in some detail how those places came to be, how they formed and evolved, what they’re made of, etc. is what this book is all about. From the Mackenzie Delta in Canada through various awesome parts of the US, on to the Cockpit Country of Jamaica and the Southern Patagonian Andes to the McMurdo Dry Valleys of Antarctica, crossing Africa while talking about among other things the Namib Sand Sea, Spitzkoppe and the Afar triangle (considering the fact that you have lava lakes(!) here and that this area in general is just crazy – it includes a rift valley half-filled with volcanos… there’s incredibly little material about this place on e.g. wikipedia), having a look at Europe (the Dolomites, Dorset (this article is better than most of the above, I’d say from a brief skim), the Norwegian Fjords, amongst other things), moving on to Asia (Western Ghats, Pokhara Valley, some talk about Fenglin karst, Mt. Fuji, Mulu – among other things) and to Australia (Uluru, Bungle Bungle) and New Zealand. The book was written in order to help you to understand which physical processes have caused beautiful places all over the world to look the way they do today. I think such knowledge makes the beauty of all these places easier to appreciate – it always adds, I don’t understand how it subtracts. As for some of the places it seems obvious to me that you can only truly appreciate the beauty of those places if you also understand some of the underlying processes. It’s a bit like comparing the looks of a potential partner about whom you know nothing with the looks of the same potential partner after he or she has just told you his/her deeply fascinating life story (at least I think this is a good analogy; I’m not sure as I don’t have a lot of experience with these sorts of things).
There’s a snag, though. The book will certainly help you understand some of these things better if you’ve read a geology textbook or two before you start out – a book like this one. I should point out that one textbook may not be enough – despite having read Press and Siever I was often struggling to make sense of the material, having to look up stuff frequently, but that may be related to the fact that I read that book a while ago and so have forgotten a lot of stuff in the meantime. However if you have no background at all in physical geography and haven’t read anything about this kind of stuff, you’ll get very little out of this book. You’ll probably have a ‘…yes, I understand a few of those words…’-experience instead (and oftentimes the words you’ll understand will not help you understand what the authors are trying to tell you…). Of course you can read the words and look up all the ones you don’t understand, but most normal people are not going to read 350 pages that way if there are a lot of words in the second category. And although there are some (sometimes amazing, breathtaking..!) pictures in there, that’s probably not going to be enough…
“In basic terms, the geology at the western part of the entrance of the Guanabara Bay, including the Sugar Loaf and its vicinity, is represented by Meso-Neoproterozoic high grade metasedimentary rocks intruded by Neoproterozoic syn- to post-tectonic granitoid rocks and thin Cretaceous diabase dikes (Silva and Ramos 2002; Valeriano et al. 2003).”
In basic terms indeed – it would be interesting to see their contributions to a simple-wikipedia article. To be fair, you can say quite a bit more than that and they spend some pages talking about these things. Here’s the abstract to a chapter (all chapters have abstracts in the beginning, like all Springer publications) from the middle of the book, chapter 18 (I picked this abstract because it was in the middle of the book – there are 37 chapters altogether; I don’t think it’s a ‘special’ abstract as such):
“The landform history of the sandstone scarpland, inselberg and plains landscape of the greater Djado region, part of the presently hyper-arid central Sahara, in north-eastern Niger, begins with all-encompassing Paleogene etchplanation under humid-tropical conditions, followed by the almost complete stripping of the original soil cover and silcrete induration of the uniform etchplain during the Oligocene. Still under very humid conditions, deep-reaching karstification then penetrates silcrete, contemporaneous ferricrete and, above all, the saprolitic sandstones, also creating numerous poljes. Gradually decreasing humidity up to the end of the Neogene, resulting in increasingly restricted etchplanation, leads to the formation of scarplands, inselbergs, intra-plateau basins, pediments, and still sandstone-karst-related scarpfoot depressions. During a final relapse to quite humid conditions, landslide fringes form along all heterolithic escarpments at the onset of the Pleistocene, later on merely subject to fluvial dissection in the context of at least three phases of Quaternary pluvial river aggradation and terrace formation. A thorough reshaping of most of the region under Quaternary arid conditions is effected by more than one phase of aeolian corrasion, as part of the largest wind-corrasion landscape on Earth.”
It makes more sense in context (well, maybe it makes perfect sense to you already – if so you’ll probably have no problems with the book), but I have actually punished the authors for the technical nature of the publication this time; if you’re a geologist this book is probably a four or five star publication, but some of these guys have made the book really hard to read for people outside the field, and this is part of the reason why I only gave it three stars. It’s a bit too much trouble to read and understand the book, and that’s a damn shame because it’s interesting stuff they cover. Having said this I should add though that I’ll probably try to remember in the future that I have this book, in case I suddenly become very rich and feel a great desire to travel the world. You’ll want to know stuff like the stuff in this book if you’re ever going to visit those places – if you don’t know this kind of stuff your experience will be partly spoiled (compared to the alternative) on account of your ignorance. And there’s a lot of stuff in this book you’ll not be able to find covered online unless you really know where to look, and are willing to spend some time looking.
The book will tell you about how some waterfalls have ‘travelled’/retreated many kilometres over time because steep slopes and huge amounts of water lead to powerful erosive forces impacting the edges of the waterfalls in particular. It’ll also tell you about how some prominent features in various landscapes are really only prominent because they’re all that’s left standing – everything else has been eroded away over millions of years. Mountains don’t always rise from the ground in the way that we usually conceptualize it – see also these images (no good wiki article about these, unfortunately). Most people probably think of cinder cone volcanos as majestic structures which have been around for a very long time. Some of them are a bit like this, but even if they are they probably have a complicated history; lava used to come out in other places than you’d think, maybe the volcano has migrated over time. They sometimes do this. But then there are other kinds of volcanos. A couple of years before my father was born Parícutin was an unexceptional bit of dirt and ground on a Mexican cornfield. Then something interesting happened. A fissure formed in the ground on the 20th of February 1943 – within the first day the cone that popped up shortly after had grown to a height of 6 metres. On the 23rd of February it was 60 metres high. Half a year later, in October, the cone had reached a height of 300 metres (I should note that the numbers in the wikipedia article are somewhat different, but different numbers are provided by different sources and although the numbers differ a bit they tell a rather similar story – this volcano became very big very fast, and it basically grew out of nothing).
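The height figures just quoted imply some striking average growth rates (my arithmetic; as noted, the sources disagree somewhat on the exact heights, and the interval lengths below are approximate):

```python
# Average growth rates of the Paricutin cone implied by the heights
# quoted above (dates and durations approximate):
first_day = 6 / 1           # ~6 m/day over the first day
first_three_days = 60 / 3   # ~20 m/day averaged over the first three days
to_october = 300 / 240      # ~1.25 m/day over roughly eight months (Feb-Oct)
print(first_day, first_three_days, round(to_october, 2))
```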
Lava flows started running on the 23rd of February, and these ended up covering approximately 25 km2 of the surrounding areas, impacting five nearby villages (as the book notes, other effects had much wider consequences: “Ash-fall impact was much higher: the 1 m isopach covered an area of 61 km2, while the 25 cm isopach had an envelope of 233 km2, and the 1 mm isopach up to 60,000 km2.”). The volcano is no longer active – it was ‘only’ active for roughly 10 years. Nobody died. People living on Iceland were not so lucky when their island was hit by a major eruption roughly 200 years ago; that eruption killed roughly one in five people living on the island.
But talking about these things is perhaps problematic in that a big point is easy to miss; the history of the Earth is long. It’s incredibly long. This book deals with a smallish volcano showing up almost out of thin air within the lifetime of some of the readers, because that’s actually pretty amazing. But there are lots of different types of ‘amazing’ out there. A lot of interesting places have taken a long time to get to where they are now. Geological processes have been going on for a long time. A really long time. Things used to look very different, in ways which are hard to even imagine now. Continents used to be located in very different places from the places they’re located today, and the reasons why they no longer are located where they used to be located may sometimes explain why they look the way they do. Some places which are parts of various continents now used to be at the bottom of the ocean, and this includes some pretty prominent mountain ranges. It’s not perfectly clear in detail when ice became a really big deal in Antarctica, but it’s safe to say that until roughly 35 million years ago it certainly wasn’t anything much to write home about, and that near-complete ice cover didn’t happen until much later. One should also not forget that both Europeans and Americans actually had plenty of the stuff just a few tens of thousands of years ago – remember those Norwegian Fjords we were talking about?
The world is an interesting place, and learning more about it usually makes it more interesting. Although many of the things covered in the book are not well covered on Wikipedia, I thought I should at least add some links to relevant articles covering related material (I bookmarked some of the articles I looked up while reading the book in order to make it easier to cover on the blog; quite early on I was questioning whether it would make sense to quote extensively from the book here), so I’ve done this below:
I probably haven’t covered this book in the amount of detail it deserves. It’s a good book – and some chapters were really fascinating – it’s just a bit hard to read. If I don’t get a lot of reading done within the next days I may decide to add some more detailed coverage of this book here on the blog.
“Out-of-pocket payments play a dominant role in LMICs [low- and middle-income countries] where they cover about 50 percent of health care expenditures. [They] are less important in the high-income countries [but] there seems to be a tendency toward an increase of patient cost-sharing in countries where it traditionally has played a minor role [...] This is not only explained by a concern to fight moral hazard and overconsumption, but it also reflects the increasing pressure on the public financing part of the system.” [In 'low-income-countries' out-of-pocket expenditures in 2008 made up on average 67.4 % of total expenditures in health, whereas the corresponding numbers for 'lower middle income countries', 'upper middle income countries' and 'high income countries' were 46.8%, 30.2% and 14.4% respectively. The global average was 22.5%. Note that total out-of-pocket expenditures incurred in high-income countries (in e.g. dollars) may make up a much larger share of the total global out-of-pocket expenditures than you might believe from those numbers alone - recall that high income countries spend approximately 100 times as much money on health per person as do low-income countries (low income countries spent on average $23 on health per capita in 2008, whereas high income countries spent $2414 per capita).]
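To make that bracketed point concrete, here's a quick back-of-the-envelope calculation using only the per-capita figures quoted above (the arithmetic and the 'typical country' framing are mine, not the book's):

```python
# Per-capita out-of-pocket (OOP) health spending, 2008 figures quoted above.
low_income_total = 23      # $ health spending per capita, low-income countries
high_income_total = 2414   # $ health spending per capita, high-income countries

low_income_oop_share = 0.674   # 67.4% of spending is out-of-pocket
high_income_oop_share = 0.144  # 14.4% of spending is out-of-pocket

low_income_oop = low_income_total * low_income_oop_share     # ≈ $15.5 per capita
high_income_oop = high_income_total * high_income_oop_share  # ≈ $347.6 per capita

# Despite the much lower OOP *share*, the absolute OOP amount per person
# is roughly 20 times higher in the high-income countries.
print(round(low_income_oop, 1), round(high_income_oop, 1))
```

So the share numbers alone really can be misleading about where the out-of-pocket dollars are actually spent.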
“User charges do have a negative effect on health care consumption. The evidence is overwhelming for co-payments in the developed insurance systems. [...] The evidence is almost equally strong for the effects of user charges in LMICs. Introducing or increasing user fees has almost always and everywhere led to a decrease of utilization [...] both in developed health insurance systems and in LMICs, the evidence suggests that the decrease in utilization may have negative effects on the quality of care [...] Most studies find that cost sharing leads to a decrease in the utilization of essential medication, defined as medication that is necessary to maintain or improve health. Often adherence to a regimen of maintenance medication goes down with patients skipping doses or stretching out refills. With a few exceptions [...], higher cost-sharing for, and therefore lower utilization of, prescription drugs, has led to greater use of inpatient and emergency medical services by chronically ill patients [effects like these, I should point out, may well make cost-sharing a less than ideal cost-saving mechanism; emergency services are incredibly expensive compared to 'routine management'] [...] cross-price effects are also significant. Again, the evidence for the developed countries and the LMICs goes in the same direction. Two- or three-tier plans for prescription drugs in the US, introducing differentiated cost sharing for different categories of drugs, have clear effects on the pattern of drug use. [...] user charges are a strongly regressive component in the health care financing structure of developed countries [...] A large majority of studies suggest that user charges lead to a stronger reduction in utilization among the poor than among the rich (James et al. 2006).”
Insurance and the demand for medical care:
“two main empirical findings from research to date are these: (1) the aggregate or average consumer demand curve, whether Marshallian (uncompensated) or Hicksian (compensated), slopes downward and to the right. (2) Demand curves are significantly price responsive at all consumer income levels. These conclusions are at variance with common perceptions of medical care demand by non-economists, who traditionally have asserted that non-poor consumers only use medical care when they have to do so because they are sick or are ordered to do so by their physician, and that only lower income households would restrain their demand for needed care because of cost sharing.”
“When the consumer has price-sensitive demand for care, the influence of deductibles on spending is complex because a deductible in effect faces the consumer with a two part block tariff: full price up to a certain level of spending, and then low or zero marginal price. Since the marginal price is different depending on whether the deductible is covered or not, the consumer has to consider the distribution of expected expenses [...] While the actual analytics of demand responsiveness are complicated by a deductible [...] the main intuitive finding is obvious: the lower the deductible the higher the demand for care, other things equal.”
“A traditional discussion about choosing the “right” (desired) hospital output is associated with the role of quality and the trade-off between lower costs and higher quality. This trade-off is based on the assumption that higher quality implies more costs. This is likely to be so in efficient hospitals. However, inefficient hospitals may have room to improve simultaneously in both dimensions.”
“Whenever hospitals are funded by case payments they prefer to receive more patients for treatment while hospitals funded by capitation (to treat people in a defined catchment area) will invest more in keeping patients treated at primary care level when clinically feasible. [...] several health systems make referral by a GP a necessary condition to visit a specialist. [...] Gatekeeping determines to a considerable extent the demand faced by the hospital. Moreover, referrals to the hospital depend on both the incentives faced by GPs and on the formal relationship between primary care and hospitals.”
“One of the main issues in measuring economies of scale (productivity, in general) in hospitals is the role of quality. More efficient hospitals are more likely to have a lower marginal cost of providing quality, and accordingly they may supply a higher quality level in equilibrium (which is likely to raise costs and mask their efficiency advantage). [...] Under regulated prices, quality is the main “competitive tool” of hospitals and it is used intensively. Whenever both price and quality are available instruments to the hospital, the effort to attract patients is spread over both of them. [...] response to an unexpected demand surge for hospital services is more likely to be met by early discharges to free up capacity than rationing admissions.”
The economics of the biopharmaceutical industry:
“The US research-based biopharmaceutical industry invests 15-17 percent of sales in R&D, and the R&D cost of bringing a new compound to market is estimated at over $1bn. [...] The cost of developing an approved new medical entity (NME), measured as a discounted present value at launch, [was] $138 million in the 1970s [...] the global nature of pharmaceutical R&D raises issues of appropriate cross-national price differentials and cost sharing. National regulators have incentives to free-ride, driving domestic prices to country-specific marginal cost, leaving others to pay for the joint costs of R&D. The long R&D lead times – on average roughly twelve years from drug discovery to product approval – make the incentives for short run free-riding by individual countries particularly acute because negative effects will be delayed for years and hard to attribute. [...] In practice, the ability of pharmaceutical firms to price-discriminate is undermined by government policies [...] the design of each country’s price regulatory system affects not only its prices and availability of drugs but also availability in other countries through price spillovers in the short run, and through R&D incentives in the long run. [...] North America accounted for 45.9 percent of global pharmaceutical sales in 2007, compared to 31.1 percent for Europe”
“The theoretically optimal insurance/reimbursement contract for drugs must deter both insurance-induced over-use by patients and excessive prices by manufacturers, while paying prices sufficiently to reward appropriate R&D, taking into account the global scope of pharmaceutical sales. [...] An important conclusion is that patient cost sharing alone cannot simultaneously provide optimal incentives for efficient use of drugs, control of patient moral hazard and optimal provider incentives for R&D. In addition, given the global nature of pharmaceutical utilization, creating optimal R&D incentives require appropriate price differentials across countries [...] generally, regulatory systems that induce price convergence across countries are likely to reduce social welfare. [...] Overall, countries that use direct price controls do not consistently have lower prices than countries that use other indirect means to constrain prices”
“In the US, generics now account for almost seventy percent of all prescriptions but only about 16 percent of sales, due to their low prices. Although US prices for on-patent drugs are on average 20-40 percent higher in the US than in other industrialized countries, US generic prices are lower [...] many middle and low income countries have relatively high generic prices [...] and uncertain generic quality. [...] Empirical studies of generic entry has shown, not surprisingly, that generic prices are inversely related to number of generic competitors [...]; generic entry is more likely for compounds with large markets [...], [and in] chronic disease markets”
“the FDA is required by statute to consider risks and benefits to patients. Costs [...] is beyond the FDA’s purview. [...] Currently the US lags other countries in the use of comparative and/or cost-effectiveness as an input to reimbursement decisions.”
A little bit of stuff from around the web:
i. “Most people lack the motivation and self-discipline to teach themselves entirely new subjects while sitting alone at their computers. “Will I ever need this? Will I ever be asked to prove I know this? Will anyone ever be impressed by my knowing this? Will anyone I care about care if I don’t know it?” If the answer to any of these questions ever seems like it might be “no”, boom, window closed, game over. Or rather game on. Time for some XBox!”
Quote from the comment section of this post, where the main topic is MOOCs. I wrote some comments related to that quote, but later decided not to post the stuff I’d written – but I figured I might as well post the original quote and include the link here.
ii. “To date, research on the disturbed experience of body size in Anorexia Nervosa (AN) mainly focused on the conscious perceptual level (i.e. body image). Here we investigated whether these disturbances extend to body schema: an unconscious, action-related representation of the body. AN patients (n = 19) and healthy controls (HC; n = 20) were compared on body-scaled action. Participants walked through door-like openings varying in width while performing a diversion task. AN patients and HC differed in the largest opening width for which they started rotating their shoulders to fit through. AN patients started rotating for openings 40% wider than their own shoulders, while HC started rotating for apertures only 25% wider than their shoulders. The results imply abnormalities in AN even at the level of the unconscious, action oriented body schema. Body representation disturbances in AN are thus more pervasive than previously assumed: They do not only affect (conscious) cognition and perception, but (unconscious) actions as well.”
Much more at the link.
iii. I posted Zach Weiner’s video earlier, but this one is pretty good too:
iv. A chess game I played recently. In related news (?), Magnus Carlsen has started a youtube channel:
Sorry for the infrequent blogging – I don’t have a good excuse, so you can just ascribe it to lack of motivation and self-discipline (see above). On a more serious note, I have been working a lot and I have not been feeling particularly great. This comic for a short while actually made me seriously consider whether it would be worth it to deliberately gain 20-30 kg just in order to cut a decade or two off my life expectancy; the associated costs seemed a lot lower to me than they usually are for other people, and there would be benefits as well – although it’s hardly a good coping device, food is certainly better than quite a few of the alternatives (e.g. alcohol, hard drugs).
I should note that if people don’t at least occasionally add links/comments/reading suggestions/questions etc. to these posts, I’ll consider retiring the Open Threads again. On a related matter I like it when people provide feedback via the rating system, but people rarely use this feature.
Here’s the first post about the book. If you want to know what I’ve been doing over the last few days, look at the red thingy. I’ve read roughly 700 pages so far. Book-blogging takes time, so I’ve been emphasizing reading over blogging.
This book covers a lot of stuff – a lot more than I can justify covering here. That said, in a way I also feel it’s necessary to note how little the book actually covers: in more than a few chapters I’ve added remarks such as ‘this topic is covered in much more detail in Juth and Munthe‘, or ‘for a much more comprehensive review, see Goldstein’s book‘. The book also covers material covered in greater detail here, here and here; as mentioned before I know a bit about these things already, though I haven’t felt like I’ve really had a great overview of the material. Having read books like this one, this one or perhaps this one may help you understand the issues presented in specific chapters better, but you don’t really need to have done so – in most cases the chapters can stand on their own. I should mention that in one specific chapter (about addiction) I at one point basically wrote in the margin that the authors didn’t seem to know what they were talking about, and that they should familiarize themselves with the medical and neuroscientific research on addiction before writing any more on that specific subtopic. It wasn’t a big part of the chapter though, and it only happened once; most chapters are great, and none are what I’d really term ‘weak’ – I’m currently at either four or five stars on goodreads, probably a little closer to five than four.
I should note that I have had similar ‘these guys don’t seem to know a lot about what non-economists have found out about this stuff’ experiences a few times before, when covering labour economics topics during my coursework. Sometimes it seems to me that economists who’re very fond of their models (and the models of their predecessors) don’t really have a clue what’s going on, because they refuse to learn what people in other fields have already found out – perhaps because they assume that related work will not help them in their model-building efforts (if so, I think they’re wrong). It always bothers me.
Anyway, some observations from the book below:
“From an economist’s perspective, infectious diseases are distinguished from many other health issues by the central role played by externalities.1 Control of infectious diseases yields both positive externalities (prevention and treatment can delay or reduce spread of infection to uninfected individuals) and negative externalities (overuse of treatment can lead to drug resistance, which has global consequences for treatment effectiveness). [...] vaccination, an important tool in the prevention of infectious diseases, presents a classic public goods problem. Society gains from individual vaccination because of herd immunity, but this value is not recognized by individuals, who have an incentive to free-ride on vaccination by other individuals. [...] disease reporting and eradication efforts are also public goods. [...] a country’s incentives to control a freely moving disease like malaria are determined as much by its ability to stop the inflow of infected individuals as by the ability to control the disease within its own borders. Reducing malaria in a country could have transboundary benefits by incentivizing infection control in its neighboring countries as well. This principle also applies more generally to the challenge of global disease eradication. [...] Eradication is a binary public good: the maximum benefits are achieved when the disease is completely gone.”
“Together, all infectious diseases account for more than 25 percent of premature death globally.”
“In sum, obtaining accurate information about potential epidemics is as much about reporting incentives as it is about detection technology.”
“from an economic perspective, disease burden may be a poor criterion to use for allocating treatment resources.”
“(OECD) nations commonly spend between 5 percent and 14 percent of their health dollar on mental health care [...] this implies that OECD countries devote between 0.3 percent and 1.1 percent of their national incomes to treatment of mental disorders.2 [...] It is important to note that the patterns of spending on mental health care are different from those observed in international comparisons of health care spending. [...] there is more variation in mental health spending levels across nations than there is for health care. [...] The commitment by OECD countries to promote community-based treatment and inclusion of people with mental disorders into the mainstream of society while also accepting the responsibility for public protection creates a policy tension that [...] shapes public mental health spending. [...] there have been notable reductions in the inpatient psychiatric capacity in virtually all OECD countries [since the 1960s]. [...] [There is] growing variation in how each society sees the function of the psychiatric hospital. [...] in France and the United States, two countries that spend similar shares of GDP on mental health care, France allocates roughly 80 percent of mental health spending on inpatient care (Verdoux 2007) and the United States about 36 percent (Mark et al. 2007). [...] Mental health spending in the US as a share of total health spending has declined from nearly 11 percent in the 1970s to 6.2 percent in 2003”
“Cost-effectiveness evaluations of evidence-based treatments for depression suggest that they produce gains in Quality Adjusted Life Years (QALYs) at levels comparable to other medical treatments [...] rates of treatment for the mental disorders, with some of the strongest effectiveness of care evidence, such as depression and anxiety disorders, are quite low [...] mental health services are frequently funded and/or supplied by several bureaucratic departments all operating under fixed budgets. [...] There may therefore exist opportunities for cost shifting. That is, strict rationing of mental health services may be seen as an opportunity to expand monies available for general medical care while allowing people with mental disorders to obtain care from the social care sector. [...] recently the creation of combined trusts (mental health and social care) has tried to use organizational design to blunt incentives to cost shift created by fragmentation in financing.”
Public sector health care financing:
“In general, it can be shown to be efficient for the consumer’s cost-share to be lower when he or she incurs large health care costs, but higher with relatively low costs. This can be accomplished via a plan with an initial deductible (under which consumers are responsible for 100 percent of their health care costs in a given period of time, up to the limit of the deductible), followed by one or more intervals of partial cost sharing, perhaps up to some maximum (a “stop-loss” provision) beyond which the plan pays 100 percent of any additional costs. [...a related observation from another chapter: "In the pure theory of insurance, Arrow (1963) showed that, with proportional administrative loading, optimal coverage is full coverage above a deductible" - this result is called 'Arrow's theorem of the deductible' and lots of people have written stuff about that one...] [...] The theoretical analysis of the efficient degree of consumer cost-sharing has focused on the trade-off between the gain from more complete insurance against the associated inefficiency of over-utilization, but in practice, the appropriate degree of cost-sharing should also depend on certain other factors, in particular, on the relative costs of administering plans with different degrees of cost-sharing. [...] Patient cost-sharing as a means of controlling health services utilization and aggregate health care costs is an example of what in the health economics literature is called “demand-side incentives” (that is, incentives that affect the patients who use health services). A prominent theme in the health economics literature in recent years has been that services utilization and total health care spending in a given population also depend strongly on the incentives of the providers of health services who treat the patients and advise them on what services they should utilize (“supply-side incentives”). 
If utilization can be effectively controlled through supply-side incentives, the case for high user fees is less strong”
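The schedule described in the quote above (an initial deductible, then an interval of partial cost-sharing, then a stop-loss beyond which the plan pays everything) is easy to write down explicitly. Here's a minimal sketch; the specific dollar amounts and the 20 percent coinsurance rate are illustrative numbers of my own, not taken from the book:

```python
def out_of_pocket(total_cost, deductible=1000, coinsurance=0.20, stop_loss=3000):
    """Consumer's out-of-pocket payment under a deductible/coinsurance/stop-loss plan.

    - Below the deductible the consumer pays 100 percent.
    - Between the deductible and the stop-loss she pays the coinsurance rate.
    - Beyond the stop-loss the plan pays 100 percent of additional costs.
    """
    if total_cost <= deductible:
        oop = total_cost
    else:
        oop = deductible + coinsurance * (total_cost - deductible)
    return min(oop, stop_loss)

# The marginal price faced by the consumer falls from 1.0 (below the
# deductible) to 0.2 (coinsurance range) to 0.0 (past the stop-loss) -
# which is why demand depends on the distribution of expected expenses.
for cost in (500, 2000, 50000):
    print(cost, out_of_pocket(cost))  # 500 -> 500, 2000 -> 1200, 50000 -> 3000
```

This also makes the 'two part block tariff' point from the earlier quote concrete: a consumer who expects to blow well past the deductible effectively faces the low marginal price, not the full price.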
“In comparing the equity and efficiency properties of the social insurance model of funding health care with the general-revenue financing model, the first point that should be made is that, for those populations for which membership in the public plan is compulsory (which may be the entire population), the contributions that the insured are required to pay toward funding the plan [...] are equivalent to a tax. [...] This equivalence has two important consequences. First, it means that the equity and efficiency properties of the social insurance system can only meaningfully be analyzed as part of the overall system of raising government revenue for all purposes: As previously argued, it is not meaningful to separately analyze the equity and efficiency properties of the revenue raised for some particular purpose. In this sense, therefore, social insurance funding of health care involves the same issues as those arising when funding is from general revenue. Second, once it is recognized that the contributions paid into the social insurance system is only one of many sources of government revenue, it becomes clear that it is not in general efficient to match the revenues raised from this source with a particular kind of spending (health care). If one wants to explain why many countries still try, at least to some extent, to match health care expenditures under their public plans to specific types of revenue (such as social insurance contributions), one must appeal to other factors [...], not economic efficiency or equity.”
Health care cost growth:
“The dominant factor contributing to rising spending is the development and diffusion of new medical technology [...] The conclusion that technology is a primary driver of cost growth is based on a wide body of literature [...] Alarm over health care cost growth is typically centered on the rise in health care expenditures at the population level. Expenditures reflect both unit costs (prices) and utilization patterns (quantities). Some interventions may reduce unit prices, but, because of the utilization response, may not reduce expenditures. [...] This helps explain why innovative technology often raises expenditures in the health care sector, even though it is perceived to lower cost in other industries. For example, as technology reduced unit cost in the information technology sector, spending growth in the overall sector increased 26 percent annually from 1982 to 1996 (Haimowitz 1997). Expenditures are also not limited to any particular disease. Individuals cured of one disease inevitably get another. It is possible that reductions in expenditures on one disease may increase overall spending if competing conditions are more expensive. Finally, cost growth at the population level may not reflect trends in cost growth for particular services. Efforts to constrain spending in one area may simply generate greater spending in other areas. For example, in the United States, as inpatient spending growth slowed following implementation of prospective payment systems (PPS), outpatient spending soared (Miller and Sulvetta 1992).”
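The price-versus-expenditure point in that quote can be illustrated with a toy calculation (the numbers below are made up for illustration; only the price × quantity logic comes from the passage):

```python
# Expenditure = unit price * utilization. A technology that cuts the unit
# price can still raise total expenditure if utilization responds strongly.
old_price, old_quantity = 100, 1000
new_price = old_price * (1 - 0.30)        # unit price falls 30%
new_quantity = old_quantity * (1 + 0.60)  # utilization rises 60%

old_spending = old_price * old_quantity  # 100,000
new_spending = new_price * new_quantity  # 70 * 1,600 = 112,000

print(new_spending > old_spending)  # True: cheaper per unit, costlier overall
```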
“In assessing cost containment strategies it is crucial to distinguish between those interventions that affect the trajectory of cost growth versus those that affect the level. [...] This distinction is important in assessing the ability of systems which are more conservative in their adoption of new technology to control cost growth. A system that adopts new technology more slowly than another system may have the same rate of cost growth if the baseline level of costs is lower. For example, if a given country has a base spending rate that is 20 percent below that of another country, it will experience the same cost growth if it utilizes a new technology 20 percent less frequently.”
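The level-versus-trajectory distinction can be spelled out numerically as well (a sketch with invented numbers; the 20 percent gap mirrors the example in the passage):

```python
# Two countries with the same annual growth rate but different baseline levels.
growth = 0.05                  # both grow 5% per year
base_a, base_b = 100.0, 80.0   # country B starts 20% below country A

years = 10
level_a = base_a * (1 + growth) ** years
level_b = base_b * (1 + growth) ** years

# The *levels* stay 20% apart at every horizon, but the *growth rates* are
# identical - so a lower level of technology use need not mean slower cost growth.
print(round(level_b / level_a, 2))  # 0.8, regardless of the horizon
```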
“decreased utilization associated with cost sharing does not disproportionately impact necessary care, as proponents of cost sharing would hope and standard economic theory would predict. Patients apparently reduce use of appropriate and inappropriate care in similar proportions [...] Consistent with this view, many recent studies suggest patients reduce use of prescription drugs when faced with modestly higher copayments [...] cost sharing has been demonstrated to have disproportionately negative effects on the quality and delivery of health care among low-income populations [...] Adverse events, lower adherence, and decreased management of illness are associated with increased patient cost sharing [...] the longer term consequences on health associated with lower utilization of high value services have yet to be fully evaluated. [...] Because cost sharing is associated with lower costs, many health care payers view cost sharing as a means to reduce growth in health care (Chernew 2004). Yet there is virtually no evidence examining the impact of cost sharing on cost growth. It is possible higher cost sharing lowers spending, but does not alter the trajectory of spending growth. [...] Although the debate about the relationship between physician and hospital supply and spending and costs will continue, it is important to note that much of this literature is related to the level of costs, not the trajectory. The limited evidence on cost growth suggests that even in the most successful settings [...] the share of GDP devoted to health care still rises, albeit at a somewhat slower rate than in other markets.”
“Many observers have noted that the health care expenditures of individuals with chronic disease are much greater than expenditures of individuals without such disease [...] The share of obese Medicare beneficiaries increased from 9.4 percent in 1987 to 22.5 percent in 2002 [...] For this reason, some believe that initiatives aimed at improving health will save money. [...] [However] most preventive services are not cost saving from a societal perspective. [...] In general [...] evidence of [...] savings associated with disease management and pay for performance is weak. [...] it is likely too optimistic to assume that better health will substantially lower the trajectory of health care spending. Health care costs were growing rapidly well before the epidemic of obesity and health care cost growth among the healthy persists. [...] Because healthier beneficiaries live longer, and may demand a range of quality of life improving services, it would not be prudent to assume that better health, as desirable as it is, will substantially slow cost growth.”
I don’t know in how much detail I’ll cover this book in the week to come but there’s some interesting stuff in there and I figure I might as well start ‘work-blogging’ a bit again. It is my goal to read at least 100 pages/day in this book over the next week’s time, meaning I should be finished at the end of next week (the book has 937 pages, excluding the index at the end – I’ve read 165 pages so far). 100 pages per day isn’t actually all that much and I may decide to work more than that, but the book is occasionally a little technical and I set that goal also because I knew it would be achievable (plus it’s not the only work I’ll be doing). I’ll probably cover plenty of stuff here which will end up not being directly relevant to my work, but I don’t think any of you will object strongly to this – almost none of the stuff I cover here on the blog has anything to do with my university activities anyway. I’m in the very rare situation that stuff which I might have read anyway because I find it interesting actually turns out to be relevant to my work/studies.
I don’t know how difficult the material covered in this book will be to understand for people without a background in economics; such things are much easier to judge correctly when the material is technically not within ‘your field’ either (e.g. psychology textbooks). Fortunately, unlike Mas-Colell this is not a (not very well…) disguised math textbook, so I think it actually makes sense to cover some of the stuff from the book here on the blog – it made pretty much no sense to cover the stuff in Mas-Colell, which was a big part of why I didn’t do it. I don’t know to what extent it’ll be necessary to add links and comments etc. along the way to aid understanding, and I will probably underestimate the need – if you don’t get what they’re talking about even though I seem to be assuming that you ought to, just ask questions in the comments and I’ll try to clarify. I have done work on some of the topics covered in the book before, so it’s not like I don’t know anything about these things. Only one chapter so far has been what I’d term ‘model-heavy’, and although I probably won’t talk much about that one, there should still be plenty of stuff in the book which it makes sense to cover here. I’m very aware that time spent covering the book is time not spent reading, and although I may derive a small benefit from covering the material here I won’t let blogging interfere with the reading goal. So I don’t know how much I’ll blog in the week to come – we’ll see.
Below some observations from the first 7 chapters/160 pages:
“Health care spending has been capturing a growing share of the GDP in all OECD countries between 1970 and 2008. In the median OECD country, health care spending was 5.1 percent of GDP in 1970 and increased to 9.1 percent by 2008 [...] A persistent outlier in health care spending has been the US, where health care spending increased from 7.0 percent of the GDP in 1970 to 16.0 percent in 2008. [...] Denmark was also an early outlier, actually spending a greater percentage of its GDP on health than the US (7.9%) in 1970. By 2008, Denmark had similar health care spending levels as the median OECD country, a result of a very slow rate of growth in health care spending between 1970 and 2008.” [Actually one might argue that this last part shouldn't be news to 'regular readers' as I've covered the Danish numbers before here on the blog in a post in Danish. Note that the data in that post seem to indicate that there may have been a structural shift around the year 2000 or so - the cost share has risen relatively fast since that time, compared to earlier. Denmark is above 11% of GDP now.]
“Growth rates in health care spending are [...] compared using the average annual growth rates of health care spending adjusted for inflation and population growth. Using this measure, the average annual rate of health care spending growth was 3.9 percent in the median OECD country from 1970 to 2008 [...] The rate of health care spending growth was higher than inflation in every OECD country during the overall time period. [...] In 2008, approximately three-quarters of the health spending was from public funds in the median OECD country, while the remaining quarter was from private funds. [...] Of the private health care dollars in the median OECD country, 72 percent were out-of-pocket expenses in 2008. Private health insurance is responsible for a small proportion of health spending in most OECD countries. [...] Across the OECD, about 15 percent of all tax revenue is devoted to health care - a proportion that is steadily increasing.”
“Three sectors of health care represents over half of the total health care spending in most OECD countries: inpatient hospital care, outpatient medical services, and pharmaceuticals. [...] Inpatient hospital spending declined rapidly during the period from a median of 48.5 percent in 1970 to 32.3 percent in 2008. [...] During the same time period, the length of stay for inpatient care has fallen by approximately 58 percent. [...] The median OECD country has seen a slight increase in the share of pharmaceutical expenditures from 17.5 percent [...] to 13.8 percent [sic; I'm sure those numbers were mixed up during editing.] [...] Real GDP per capita increased by 120 percent from 1970 to 2008 in the median country of the OECD. [...] health spending per capita increased by 314 percent [...] after controlling for inflation and population growth [...] Health spending grew an average of 1.8 percent per year faster than GDP in the median OECD country [...] The percent of the population over the age 65 [increased] 40 percent in the median OECD country. [...] Life expectancy increased by 8.8 years [...] while fertility rates declined by 31 percent. [...] Every OECD country had an increase in the number of physicians per 1000 capita between 1970 and 2008 [...] The median OECD country had a 223 percent increase.” [...but note also that: "on the whole, healthcare wages does not drive health spending growth"]
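The quoted growth figures can be sanity-checked with a bit of compound-growth arithmetic. The 120%/314% increases and the ‘1.8 percent per year faster’ figure are from the book; the conversion to annual rates below is my own back-of-the-envelope calculation, not the book’s method:

```python
# Rough consistency check of the quoted 1970-2008 growth figures.
# Inputs are the book's numbers; the annualization is my own arithmetic.
years = 2008 - 1970  # 38 years

gdp_factor = 1 + 1.20     # real GDP per capita up 120%
health_factor = 1 + 3.14  # real health spending per capita up 314%

# Convert total growth factors to average annual growth rates.
gdp_annual = gdp_factor ** (1 / years) - 1
health_annual = health_factor ** (1 / years) - 1

print(f"GDP growth:    {gdp_annual:.2%} per year")                  # ~2.10%
print(f"Health growth: {health_annual:.2%} per year")               # ~3.81%
print(f"Differential:  {health_annual - gdp_annual:.2%} per year")  # ~1.71%
```

The differential comes out at roughly 1.7 percentage points per year, close to the quoted 1.8 – exact agreement isn’t expected, since the book’s figure is presumably the median of the country-level differentials rather than the differential of the medians.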
“Chronic diseases is creating a growing burden on health care spending. [...] within the US, 85 percent of health spending was attributed to people with chronic diseases in 2006 (Anderson 2007).”
“Low and middle-income countries (LMICs) [...] account for 84 percent of the world’s population, 90 percent of the world’s disease burden [...], 24 percent of the world’s GDP, and only 13 percent of global health expenditure. [...] The lower the country income level, the higher tends to be the share of out-of-pocket payments [...] and the lower the share of revenues (e.g. tax, insurance premiums) which flows through financing agents. [...] Even in a country like India, 83 percent of total expenditures on health is from private sources, and of this 94 percent is from out-of-pocket payments. [...] [there is] only 0.3 physicians and one nurse per 1000 people in low-income countries. SSA has the lowest density of physicians (one doctor for every 5,000 people), and South Asia of nurses (one nurse for every 1,430 people).” [I was very surprised when looking at the data in this chapter; everybody knows that Sub-Saharan Africa sucks, but South Asia does almost as badly on a lot of health metrics - and on some of them it does even worse than SSA.] [...] “Within overall low coverage levels, there are considerable within-country inequalities by socioeconomic group [...] In low-income countries, children from the highest wealth quintile have double the measles immunization coverage of the lowest wealth quintile, and there is a seven-fold difference between highest and lowest wealth quintiles in presence of a skilled birth attendant at birth” [note that this latter difference likely translates into a lot of dead babies; infant mortality rates are high in these places.]
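The quoted provider densities are easier to compare once converted to a common per-1,000 basis; the densities are the book’s figures, but the unit conversion below is my own arithmetic:

```python
# Convert "one provider per N people" to "providers per 1,000 people".
# The densities (1 doctor per 5,000 in SSA; 1 nurse per 1,430 in South Asia)
# are the book's figures; the conversion itself is mine.
def per_1000(people_per_provider: float) -> float:
    return 1000 / people_per_provider

print(f"SSA physicians:    {per_1000(5000):.2f} per 1,000 people")  # 0.20
print(f"South Asia nurses: {per_1000(1430):.2f} per 1,000 people")  # ~0.70
```

This is consistent with the quoted low-income-country averages of 0.3 physicians and one nurse per 1,000 people: SSA and South Asia, the lowest-density regions on each measure, sit below those averages.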
“There is some evidence from developing countries to suggest that while the public share of revenue may increase as countries grow richer, the public share of [health care] provision shrinks.”
“In low-income countries [...] resource limitations make it difficult to provide universally even a limited package of high priority interventions [...] The commonly recommended solution is to target resources to the poorest, but there is little evidence so far that such targeting can be done effectively, or that it is cost-effective relative to broader approaches”
“In sum, education is strongly related to health, with both reverse causality and direct effects. However, the extent to which the correlation between education and health reflects direct causality, reverse causality, or omitted factors is not known. Although mechanisms by which health affects educational attainment are well-understood, how education affects health is not. [...] it seems unlikely that any one mechanism alone can explain the effect of education on health.” [I wrote a review paper on this topic a while back, coming to some roughly similar conclusions. I won't cover this stuff in detail here because it'd just be review, but if you want to know more you're welcome to ask. I incidentally think I may have written about this topic on the blog previously, but I'm not certain how detailed my coverage was.]
“Both income and wealth have strong independent correlations with health, net of education and other measures of SES. Assessing causality is difficult, however. Income and wealth improve access to health inputs (such as medical care and food), but health improves one’s ability to participate in the labour market and earn a decent wage. Illness also raises health care spending, thus reducing wealth [...do note here that: "onset of a new illness reduces household wealth by far more than the household's out-of-pocket health expenditures [...] A large share of this reduction in wealth is attributable to a decline in labor earnings."]. Additionally, “third factors” – such as education – may determine both financial resources and health status. Despite these caveats, many public health researchers have attributed the health-income gradient to a causal effect running from income to health. Some have even gone as far as labeling income “one of the most profound influences on mortality” (Wilkinson 1990: 412). Initial research seemed to support this view – in one such study, McDonough et al. (1997) estimated that a move from a household income of $20,000-$30,000 to a household income greater than $70,000 (in 1993 dollars) was associated with a halving of the odds of adult mortality. It was difficult to fathom that an association so large could be entirely due to omitted variables or reverse causality. However, more recent studies suggest that the direction of causality is far from clear and, furthermore, that it varies considerably by age. Among adults, the negative impact of poor health on income and wealth appears to account for a sizeable part of the correlation between financial resources and health. [...] Careful studies that look for the effect of income on health find little evidence to support this causal link in samples of older individuals in developed countries. [...]
a preponderance of evidence suggests that in developed countries today, income does not have a large causal effect on adult health, whereas adult health has a large effect on adult income. [...] In the last two decades, economists’ most substantial contributions to this literature have involved untangling causal mechanisms.” [This is not news to me as I've done some work on this subject during my studies. But it seems like this often surprises people who don't know what (at least some) economists/econometricians do.]
I gave it three stars on goodreads, which is a sort of ‘compromise rating’; the variance in the quality of the material included is, in my opinion, very large. Some chapters are really good, and a few of them are just awful, with a lot of chapters somewhere in between. The annoying thing is that the awful chapters are still full of ‘science-y language’ and technical remarks which may make some readers believe the stuff covered isn’t just a load of ¤#£$; some authors are obviously trying in their allotted chapters to defend (unsuccessfully) their own areas of research, even though those areas are, well… The really infuriating part is that some of the distinctions made in the bad chapters may actually be relevant to some extent (i.e., ‘it’s not all crud’) and some of the observations may not be completely bogus, but the quality of the research is just too poor to figure out what’s going on, and the conclusions drawn from the research are ridiculously out of line with what you can say with any degree of certainty (this aspect was really noticeable in one of the last chapters, which is probably why I focus on it as I’m writing this). Anyway, I should repeat that the good chapters make up for much of the other stuff, and in general I found the book a very interesting read. Naturally I did, some might say; this is a 600+ page handbook, and if I hadn’t found it interesting there’s no way in hell I’d have completed it. Speaking of handbooks, on a related note I’ve recently started reading The Oxford Handbook of Health Economics. However, that book is at least to some extent a ‘work-related’ book (…to what extent is still questionable, as I’m reading it partly in order to figure out how ‘work-related’ I might be able to make it).
In my last post, I mentioned that I’d cover ‘the need for cognition’ and ‘the need for cognitive closure’ in my next post about the book – and I do this below. I’ll cover a few other topics as well. Naturally I cannot possibly cover more than a very small part of the stuff I’ve read here – there’s material for at least 10 other posts in there, which is something I’ve known for a while and which has certainly made me more hesitant along the way to provide detailed coverage of the stuff I’ve read (I’d never finish, and I’d get tired of covering this stuff – the book has already taken a lot of hours to read). As for ‘general remarks relating to the coverage and the overall topic’, I’ll not go into more detail in this post than I already have – I’ll restrict myself to coverage of specific topics below; see the previous posts for more general remarks.
The topics covered in the last parts of the book which I’ve yet to mention/talk about here are: ‘Conscientiousness’, ‘achievement motivation’, ‘belonging motivation’, ‘affiliation motivation’, ‘power motivation’, ‘social desirability’, ‘sensation seeking’, ‘rejection sensitivity’, ‘psychological defensiveness: Repression, blunting, and defensive pessimism’ (all these belong to part V, on ‘motivational dispositions’), and: ‘Private and public self-consciousness’, ‘independent, relational, and collective-interdependent self-construals’, ‘self-esteem’, ‘narcissism’, ‘self-compassion’, and ‘self-monitoring’ (these chapters belong to part VI, on ‘self-related dispositions’). As you can probably tell from the coverage below, there are a lot of topics I have chosen not to cover here. Lack of coverage of a given topic should not be taken to imply that the chapter in question was bad.
Okay, on to the more specific coverage. In the quotes below I decided to bold some passages, in part to allow people who’ll only skim the post to still get at least something out of it (it’s a long post).
“As conceptualized by Cacioppo and Petty (1982), the need for cognition (NC) refers to the tendency for people to vary in the extent to which they engage in and enjoy effortful cognitive activities. Some individuals have relatively little motivation for cognitively effortful tasks, whereas other individuals consistently engage in and enjoy cognitively challenging activities. Of course, people can fall at any point in the distribution. For people high in NC, thinking satisfies a desire and is enjoyable. For people low in NC, thinking can be a chore that is engaged in mostly when some incentive or reason is present.“
Someone at some point pointed out that he or she disliked reading long italicized passages, so although that was how I’d originally presented my comments on this quote, I decided to make an exception here. Anyway, Petty is a co-author of the chapter in question. More to the point, I’m not actually sure I’d (self-?)categorize as being high NC; I often feel that I’m a lazy thinker compared to some people, and I don’t really think I’m going out of my way to find stuff that I know will challenge me a great deal – for example, such laziness is part of the reason why I’m not currently reading books like this, or this, or this. On the other hand, if such a scale is to have any meaning when applied to people in general I probably have to belong to the high group, and naturally the same thing is likely to apply to people who read a blog like this. This is perhaps both a good illustration of how sensitive these measures sometimes may be to stuff like response strategies/impression management, and a more general illustration (although I stated I wouldn’t include any of these in the coverage…) of how I was often uncertain while reading this book as to what extent the variables presented were actually behaviourally relevant to me personally (I’m sure this is a common experience). If the variable measured behaviour, I’d be high NC because I spend a lot of time reading/thinking/etc. But it doesn’t – it asks people whether they prefer complex tasks to simple tasks, and similar questions. I’m not sure how I’d answer those – I could easily see myself answering a self-report thing like that with answers to the effect that I’m a lazy thinker who tries to avoid doing hard brain work whenever I can get away with it. Construct validity aspects and potential problems are always there, lurking in the background.
In this specific case such concerns are probably irrelevant to the population-wide behavioural correlates, but such aspects are potentially important and again underscore how difficult it is to construct good metrics that measure what you want to measure and nothing else. According to the authors, the most widely used scale has high internal consistency and high test-retest reliability, so it’s not a bad metric. …but back to the text coverage:
“the available evidence indicates that as NC increases, people are more likely to think about a wide variety of things, including their own thoughts. This enhanced thinking often produces more consequential (e.g., enduring) judgments and can sometimes provide protection from common judgmental biases. At other times, however, enhanced thinking can exacerbate a bias [...] it is preferable to refer to NC as tapping into the tendency to engage in extensive thinking. To the extent that this thinking is influenced (biased) by irrational intuitions, emotions, or images, the outcome of the thinking need not be rational.”
“The idea that NC taps into differences in motivation rather than ability is supported by research showing that NC is only moderately related to measures of cognitive ability (e.g., verbal intelligence) and continues to predict relevant outcomes after cognitive ability is controlled (see Cacioppo et al., 1996).”
“Considerable research has suggested that individuals low in NC are, absent some incentive to the contrary, more likely to rely on simple cues in a persuasion situation [...] and on stereotypes alone in judging other people [...] than are those high in NC. [...] The psychology of persuasion focuses on which variables produce changes in individuals’ beliefs and attitudes and the mechanisms by which they do so. Consistent with the idea that NC is associated with effortful thinking, people high in NC tend to form attitudes on the basis of an effortful analysis of the quality of the relevant information in a persuasive message [...] In contrast [...] individuals low in NC tend to treat variables as simple cues. These include factors such as the attractiveness [...] or credibility [...] of the message source [...], the appearance and frame (e.g., positive vs. negative, gains vs. losses) of the message [...], and their own emotional states”
“Because individuals high (vs. low) in NC typically engage in more thinking, they also tend to have stronger attitudes [...] they tend to form stronger automatic associations among attitude objects [...], and to generalize their changes to other, related beliefs [...] At the most basic level, NC affects the amount of thought that goes into a decision. Thus, those high in NC tend to think more about available options prior to making a decision [...] and are more likely to search for additional information before coming to a judgmental conclusion [...] Perhaps surprisingly, both high and low levels of NC have been related to various biases in judgment. Across a variety of studies, those low in NC tend to show greater amounts of bias when this bias is created by a reliance on mental shortcuts. Alternatively, when the bias is created through effortful thought, individuals high in NC tend to be more strongly affected.”
“A number of general conclusions emerge from this chapter. First, and most important, individuals high in NC tend to think more than those low in NC about all kinds of information, including their own thoughts (metacognition). Second, however, it is noteworthy that individuals low in NC are capable of and can be motivated to exert extensive thinking, and individuals high in NC can decide not to think under certain circumstances, such as when the message does not seem challenging. Third, these differences in the extent of thinking between individuals high and low in NC can result in different outcomes in response to the same treatment. [...] different levels of NC can be associated with both positive or negative, accurate or inaccurate, and rational or irrational outcomes, depending on the circumstances involved.”
Enough about need for cognition. What about ‘need for cognitive closure’?
“The need for closure (NFC) has been defined as a desire for a definite answer to a question, as opposed to uncertainty, confusion, or ambiguity (Kruglanski, 1989). It is assumed that the motivation toward closure varies along a continuum anchored at one end with a strong NFC and at the other end with a strong need to avoid closure. The NFC is elevated when the perceived benefits of possessing closure and/or the perceived costs of lacking closure are high [...] Likewise, the need to avoid closure is elevated when the perceived benefits of lacking closure and the perceived costs of possessing closure are high. These benefits and costs vary according to situational factors and individual differences. [...] People exhibit stable personal differences in the degree to which they value closure. Some people may form definitive, and perhaps extreme, opinions regardless of the situation, whereas others may resist making decisions even in the safest environments.“
“it seems that people who are high in NFC will restrict the number of hypotheses that they will entertain before reaching a given judgment. One may expect that generating fewer hypotheses would lead to lower confidence in one’s decision. Ironically, however, a reduction in hypothesis generation may lead to the opposite effect. Individuals who are high in NFC may be less aware of competing judgmental possibilities and, therefore, may be more confident that their selection is correct. Indeed, elevated judgmental confidence under heightened NFC has been manifested in numerous studies [...] people with a heightened need for closure, by virtue of their assurance in their decisions, show an inverse relationship between judgmental confidence and the extent of information processing. [...] individuals high in NFC may seek less information about another person before reaching a conclusion or forming a definite impression about this person. [...] A primacy effect refers to the tendency to base one’s social impressions on early information about that person, to the relative neglect of subsequent, potentially relevant information. [...] when individuals are high in NFC, primacy effects are augmented [and] the higher the individual’s NFC, the stronger the magnitude of the primacy effect. [...] Taken together, the research on intrapersonal processes demonstrates that people who are high in NFC seek less information, generate fewer hypotheses, and rely on early, initial information when making judgments. Paradoxically, despite the reliance on less, and perhaps incomplete, information, individuals high in NFC display greater confidence in their decisions.“
“In summary, individual differences in NFC, as well as situational differences in NFC, have important implications for social interaction. Individuals high in NFC (vs. low in NFC) have greater difficulty taking other people’s perspectives and empathizing with them. While communicating with others, individuals high in NFC are focused on their own perspective, making it more difficult for others to understand their views and communications. Individuals high in NFC prefer to use abstract labels, which can be applied across various situations. Lastly, individuals high in NFC are quick to apply significant-other schemas to individuals who resemble them superficially, potentially producing substantial errors of person perception [i.e., they tend to assume people who remind them of someone they know are more like that person than they really are, and this assumption impacts behaviour in potentially problematic ways]“
“Taken together, the research on group processes and NFC indicates that individuals with high NFC desire consensus and homogeneity among group members. As such, they are willing to engage in activities perceived as likely to achieve and maintain stability, including focusing on the task at hand, pressuring others to change their opinions, rejecting those who hold different opinions, sharing less information with others, and supporting an autocratic leadership style. [...] individuals high in NFC prefer ingroups that are homogeneous as well as similar to themselves; once those groups are established, they support attempts to maintain the group and exclude others from the group.”
Lastly, a few observations of interest from other chapters:
“Goldberg (1993) used the term Big Five for a reason. The Big Five are big. Not big in the sense that they are important, but big in the sense that each of the Big Five is best considered a broad domain of traits, not a unitary construct. This point seems to be increasingly lost on the current generation of personality inventory consumers, as the preference appears to be to use short measures under the assumption that measuring a single dimension of conscientiousness, or any of the remaining Big Five, is a sufficient representation of the domain. This is like arguing that oranges, apples, and bananas are interchangeable because they are all fruit. Conscientiousness is clearly not unidimensional and consists of several relatively distinct facets that, like different fruit, are not identical.”
“When it comes to decision making, research paints a portrait of people with low self-esteem as being less decisive [...] and more likely to procrastinate [...] Persons lower in self-esteem are also more easily persuaded than those high in self-esteem [...] those who suffer from negative self-views are prone to tolerate various forms of poor treatment [...] people low as compared with high in self-esteem are also more risk averse when making decisions, most likely because they have relatively low expectations of success [...] and are motivated to avoid feelings of regret should a risky decision yield negative consequences [...] some research suggests that persons high in self-esteem pursue goals with an eye to achieving excellence, whereas those low in self-esteem seek merely to attain adequacy [...] Moreover, higher self-esteem is associated with superior self-regulation during goal pursuit.”
“people actively seek and embed themselves within social environments that sustain their stable self-views. Evidence of this tendency appears in people’s choices of relationship partners, careers, home and work environments, group memberships, and even home and office decor [...] people low in self-esteem tend to withdraw and isolate themselves from others, whereas those high in self-esteem more readily seek others’ company [...] Once they enter social settings, people’s stable self-views predict their preferences for specific interaction partners. Whereas people with favorable self-concepts tend to seek out relationship partners who view them favorably, those with negative self-concepts prefer the companionship of those who view them unfavorably [...] Such tendencies should ensure that people surround themselves with relationship partners, feedback sources, and environments that bolster, rather than challenge, their self-esteem and self-concepts. Moreover, to the extent that a given relationship or environment disconfirms people’s self-concepts or self-esteem, they are likely to leave in search of a better fitting niche [...] people generally seek positive information about their favorable self-views and negative information about their unfavorable self-views”
“Just as people seek information that is consistent with their self-views, they pay more attention to evaluatively consistent than inconsistent information. In general, people low as compared with high in self-esteem attend more to negative information and events [...] When it comes to self-relevant information, people with negative self-concepts pay more attention to unfavorable than favorable evaluations of themselves, whereas the reverse is true among those with positive self-concepts [...] In the wake of failure feedback, persons with low self-esteem focus attention on their weaknesses, whereas those high in self-esteem increase attention to their strengths [...] people high in self-esteem are more likely than those low in self-esteem to focus on the ways in which their own outcomes compare favorably to the outcomes obtained by the friends, acquaintances, and strangers that they encounter in daily life [...] Perhaps reflecting these differences in attention, people display better memory for feedback and experiences that are congruent relative to incongruent with the valence of their self-esteem, and they recall incongruent feedback and experiences as more congruent than they really were”
“The manner in which people interpret their own and other people’s behaviors and outcomes is linked predictably with their self-esteem and self-concepts. For instance, people interpret feedback that is congruent with their self-concepts as accurate, whereas they dismiss incongruent feedback as inaccurate [...] Moreover, a large body of research on attribution processes shows that people high in self-esteem take credit for their successes and blame their failures on external factors [...] In contrast, people low in self-esteem are less inclined to take credit for their successes and more inclined to assume responsibility for their failures [...] Furthermore, people with low self-esteem may not even interpret their own success experiences as successes unless a credible outsider tells them explicitly that they have done well (Josephs, Bosson, & Jacobs, 2003).”
“people who are higher in self-esteem tend to experience fewer negative emotions such as depression, anxiety, and hostility. Indeed, the negative association between self-esteem and depression is so strong (r ~ .80; Watson et al., 2002) that some suggest conceptualizing self-esteem and depression as end points of a bipolar continuum (Suls, 2006).” [they don't cover intervention studies in the chapter at all, but as far as I can remember the evidence is decidedly mixed - I'm sure I've covered some of the literature before here on the blog, but I can't be bothered to look it up. Such correlations certainly provide part of the reason why this may be the case (good luck convincing me that the causal arrow goes only from self-esteem to depression). In the chapter they implicitly go into other reasons why interventions focusing on this variable may not be successful, e.g. the distinction they draw between implicitly and explicitly assessed self-esteem and the very low to non-existent correlation between the two, but I think it would go too far to cover all the details of that stuff here. The authors of this particular chapter were very careful not to draw causal inferences, which does them credit; you can easily write entire books devoted to these topics, and this stuff is never easy when you're dealing with human beings and social behaviour.]
“thinking about how one is viewed by others does not always translate into an accurate understanding of the perspective of others [...] Instead, preoccupation with oneself as a public object of attention often leads to unwarranted or exaggerated assumptions about the extent to which one is the target of others’ thoughts. That is, rather than clarifying knowledge about oneself, thinking about oneself as a social object sometimes heightens its accessibility and subjective importance, resulting in a biasing of mental judgments. [...] Much of this research [on self-consciousness] suggests a psychological equivalent to the Heisenberg uncertainty principle: Observing a phenomenon changes it. Self-attention or self-consciousness often seem to change the nature of whatever aspect of self is being observed. When attention or thoughtful scrutiny is directed toward self-relevant psychological phenomena such as ongoing affect [...], judgments regarding causality for one’s behavior [...], the reasons underlying one’s attitudes or feelings [..], or the self as a social target [...], the effect of “looking inward,” rather than being clarifying or “insightful,” instead tends to alter or distort whatever aspect of the self is being thought about. Although it has sometimes been assumed that self-consciousness facilitates self-knowledge, research suggests some skepticism regarding the accuracy of the self-insights gained through self-focused attention.“
“Several parallels have been noted between the characteristics of depression and those of private self-consciousness, such as self-blaming tendencies and difficulty in engaging in self-deceptive or positive illusions (e.g., Musson & Alloy, 1988). In addition, studies have consistently found positive correlations between measures of private self-focus and depression”
I thought this part was cute, and I can’t refrain from covering it here:
“People cannot always obtain the degree of acceptance and belonging that they desire, either because opportunities for acceptance are not currently available or because they have been explicitly rejected. In such instances, people may use tactics that make them feel accepted even when actual acceptance is unavailable. Research suggests that people who are high in belonging motivation use such tactics more commonly than those who are low in it.
Some people seem to derive emotional benefits from the parasocial relationships that they have with actors, newscasters, and celebrities that they see on television. Research on parasocial relationships shows that TV viewers regard favorite television performers as emotionally closer to them than an acquaintance but not as close as a friend [...], reflecting a notable degree of interpersonal connection. Theorists have assumed that people form parasocial relationships with public figures to fill unmet social needs and reduce loneliness [...], but little is known regarding when people use parasocial relationships to bolster feelings of acceptance. A series of studies by Knowles and Gardner (2003, 2008) showed that people who scored high in belonging motivation have closer and more intense attachments to their favorite television characters, even seeking “social support” from television characters, who keep them company when they are alone.”
A little more on related stuff from that chapter:
“Gardner, Pickett, Jefferies, and Knowles (2005) suggested that, in the same way that people may snack in order to tide them over to the next full meal, people who feel inadequately connected may “snack” on symbolic reminders of their social connections until they can engage in actual supportive interactions. Social snacking may take the form of rereading letters or e-mail messages from friends and loved ones, reminiscing about previous times when one was accepted or loved, daydreaming about significant others, or looking at photographs of family, friends, or romantic partners. Importantly, social snacking is more common among people who score high in belonging motivation”
“Most motives tend to become stronger, or at least more salient, when they remain unsatisfied. Along these lines, Baumeister and Leary (1995) suggested that the degree to which people desire acceptance and belonging increases when their need for belonging is unmet, as does their experience of negative emotions. Although Baumeister and Leary were discussing state-like changes in belonging motivation, their analysis raises the question of whether stable individual differences in belonging motivation are tied to feeling disconnected, rejected, lonely, or left out and to the tendency to experience negative emotions. [...] Leary and colleagues (2008) presented considerable evidence that belonging motivation is not related to the degree to which people believe that they are accepted and belong. [...] although state-like desires for acceptance may increase when people feel inadequately accepted at a particular moment in time [...], individual differences in belonging motivation do not appear to arise from a perceived lack of social connections. [...] Although the desire for acceptance is distinct from the frequency with which people interact with others, social acceptance is necessarily facilitated by interpersonal contact. Thus one would expect that people who more highly desire to be accepted by other people might tend to seek more opportunities for social interaction than those who have a weaker desire for acceptance. Consistent with this expectation, studies show that people who more greatly desire acceptance and belonging are more likely to be extraverted than introverted and tend to score higher on measures of sociability than people who desire acceptance and belonging less strongly”
I figured that as I was already spending time reading about related matters, I might as well cover this topic too (intelligence is not a personality variable on which the Handbook spends many paragraphs) – but given that I’m not that interested in this stuff, I also figured I didn’t want to spend too much time on it. So reading a book with the subtitle ‘a very short introduction’ made sense.
The book is not very technical, and while reading the first few pages I seriously considered just throwing it away. But I decided to give it a few more pages, and having done that I realized that even though the coverage was somewhat superficial, I might as well finish it, as it would take very little effort. From the outset I had sort of expected the book to be a ‘downgraded’ version of a standard Springer publication. It turned out that it was not; the level was significantly lower than that – either that or my conception of what such a ‘downgraded’ (‘more accessible!’) book looks like was erroneous. Either way I was somewhat disappointed. I ended up giving the book two stars on goodreads. There was too much fluff, and the author spent a lot of time dealing with simple stuff.
Some of the conceptual and methodological approaches applied in this line of research are also applied in other areas of psychological research covered in Leary & Hoyle, but you certainly don’t need to have read anything about psychology, psychometrics etc. in order to read and understand this book. To give an example of what I mean by the first part: in various areas of personality research it’s common for researchers to look, in some sense, for ‘common factors’ that tend to cluster together – the existence of such common factors relates very closely to the existence of such a thing as personality traits in the first place. The idea is that people who are in some sense ‘alike’ along one ‘dimension’ of personality/behaviours are likely also to be ‘alike’ along ‘similar’ related ‘dimensions’. Once you add the various elements in such clusters together and construct new variables, these constructs can be used to gain a better understanding of behavioural links, because they tend to predict behaviour better than do the elements they’re made up of. You can add stuff together at more than one level if you like. The search for ‘g’ in the area of intelligence research is in some sense just a hunt for such a ‘common factor’; a factor useful in explaining variation in people’s performance at various cognitive tasks. The important point here is that most intelligence researchers agree that it makes sense to look for such a common factor, because it looks a lot as if such a factor exists in the data. A bit from the first chapter of the book about this stuff:
“Carroll’s strata of mental abilities emerged as an optimal result from a standardized statistical procedure, not from his imposing a structure on the data. He discovered rather than invented the hierarchy of intelligence differences [...] Among psychologists working in this field there is no longer any substantial debate about the structure of human mental ability differences. Something like John Carroll’s three-stratum model almost always appears from a collection of mental tests. A general factor emerges that accounts for about half of the individual differences among the scores for a group of people, and there are group factors that are narrower abilities, and then very specific factors below that. Therefore, we can nowadays describe the structure of mental test performances quite reliably [...]
The principal dissidents from this well-supported view are on the semi-popular fringes of scientific psychology. Howard Gardner’s popular writings on ‘multiple intelligences’ have suggested that there are many forms of mental ability and that they are unrelated. The facts are that some of Gardner’s supposedly separate intelligences are well known to be correlated positively and linked thereby to general mental ability, such as his verbal, mathematical, and musical intelligences. Some of his so-called intelligences, though valued human attributes, are not normally considered to be mental abilities, i.e. not within man’s ‘cognitive’ sphere. For example, physical intelligence is a set of motor skills and interpersonal intelligence involves personality traits.”
Here’s a little bit about ageing from the book (in the chapter he also talks a bit about the distinction between crystallized and fluid intelligence, among other things):
“what ages when we talk of intelligence ageing is something very general – some broad capability of the brain to handle ideas is changing, not just specific aspects of mental function [...] what seems like a kaleidoscope of mental change can to a great extent be explained by one simple fact: as we get older our rate of processing information in the brain slows down.”
And below a few observations from chapter 3:
“there is a modest positive correlation between head size and brain size [...] There is a modest association between brain size and psychometric intelligence. People with bigger brains tend to have higher mental test scores.” [...]
“Psychologists today often refer to the ‘mental speed’ or ‘information processing speed’ ‘theory’ of intelligence. What they mean by that is that people who score better on intelligence tests might in part be cleverer because some key aspect(s) of the brain proceeds faster. My principal problem with this overall idea is that my colleagues can’t make up their mind how to measure this mental speed. Some use reaction times. Some use inspection times. Some use the brain’s electrical responses. Some even measure how long it takes electrical impulses to travel along people’s nerves. But these are all different measures, and it is an odd theory that can be tested without a common yardstick, and some of these mental speed ‘yardsticks’ don’t relate to each other very well at all. The truth is that we do not have an agreed measure of how fast the brain processes information, and that is because the workings of the nerve cells and their networks are largely mysterious. We must summarize by concluding, therefore, that intelligence is related to many things that involve speed of processing information, but that scientists have difficulty in conceptualizing ‘mental speed’ in a uniform way.”
I don’t really think it’s worth the trouble to cover more of the book in detail here, as a lot of the stuff covered in the book has already been covered here on the blog before – instead of reading the book you can just have a look at some of the stuff I’ve posted on intelligence before here – links like these: 1 (link vi.), 2, 3, 4, and 5 (not all of the stuff at the links is covered in the book, but I can’t be bothered to find all the matching papers and if you read the links in those posts you’ll probably learn more than you will from reading the book).