i. “A drawback to success in life is that failure, when it does come, acquires an exaggerated importance.” (P. G. Wodehouse).
ii. “Truth is the cry of all, but the game of the few.” (George Berkeley).
iii. “It is always the best policy to speak the truth, unless, of course, you are an exceptionally good liar.” (Jerome K. Jerome).
iv. “I don’t believe any man ever existed without vanity, and if he did he would be an extremely uncomfortable person to have anything to do with. He would, of course, be a very good man, and we should respect him very much. He would be a very admirable man—a man to be put under a glass case and shown round as a specimen—a man to be stuck upon a pedestal and copied, like a school exercise—a man to be reverenced, but not a man to be loved, not a human brother whose hand we should care to grip. Angels may be very excellent sort of folk in their way, but we, poor mortals, in our present state, would probably find them precious slow company. Even mere good people are rather depressing. It is in our faults and failings, not in our virtues, that we touch one another and find sympathy. We differ widely enough in our nobler qualities. It is in our follies that we are at one.” (-ll-).
v. “A shy man’s lot is not a happy one. The men dislike him, the women despise him, and he dislikes and despises himself. […] A shy man means a lonely man—a man cut off from all companionship, all sociability. He moves about the world, but does not mix with it. Between him and his fellow-men there runs ever an impassable barrier—a strong, invisible wall that, trying in vain to scale, he but bruises himself against. He sees the pleasant faces and hears the pleasant voices on the other side, but he cannot stretch his hand across to grasp another hand. He stands watching the merry groups, and he longs to speak and to claim kindred with them. But they pass him by, chatting gayly to one another, and he cannot stay them. He tries to reach them, but his prison walls move with him and hem him in on every side. In the busy street, in the crowded room, in the grind of work, in the whirl of pleasure, amid the many or amid the few—wherever men congregate together, wherever the music of human speech is heard and human thought is flashed from human eyes, there, shunned and solitary, the shy man, like a leper, stands apart. His soul is full of love and longing, but the world knows it not. The iron mask of shyness is riveted before his face, and the man beneath is never seen.” (-ll-).
vi. “We cannot tell the precise moment when friendship is formed. As in filling a vessel drop by drop, there is at last a drop which makes it run over; so in a series of kindnesses there is at last one which makes the heart run over.” (James Boswell).
vii. “Men might as well project a voyage to the Moon as attempt to employ steam navigation against the stormy North Atlantic Ocean.” (Dr. Dionysius Lardner (1793-1859). Many more quotes of a similar nature here).
viii. “We pity in others only those evils which we have ourselves experienced.” (Jean-Jacques Rousseau).
ix. “All that time is lost which might be better employed.” (-ll-).
x. “Virtue is a state of war, and to live in it means one always has some battle to wage against oneself.” (-ll-).
xi. “Remorse sleeps during a prosperous period but wakes up in adversity.” (-ll-).
xii. “Hatred, as well as love, renders its votaries credulous.” (-ll-).
xiii. “He that is choice of his time will be choice of his company, and choice of his actions.” (Jeremy Taylor).
xiv. “To say that a man is vain means merely that he is pleased with the effect he produces on other people. A conceited man is satisfied with the effect he produces on himself.” (Max Beerbohm).
xv. “Moderation is the silken string running through the pearl chain of all virtues.” (Joseph Hall).
xvi. “If you make people think they’re thinking, they’ll love you; but if you really make them think, they’ll hate you.” (Donald Marquis).
xvii. “Some luck lies in not getting what you thought you wanted but getting what you have, which once you have got it you may be smart enough to see is what you would have wanted had you known.” (Garrison Keillor)
xviii. “Once I believed that sooner or later I would come across a really wise person; today I couldn’t even say what wisdom is.” (Fausto Cercignani).
xix. “If you are living in the past or in the future, you will never find a meaning in the present.” (-ll-)
xx. “A secret remains a secret until you make someone promise never to reveal it.” (-ll-)
Update: According to the category count, this is the 150th post of quotes here on this blog (the category cloud seems to be slow to update the number, but I assume it’ll do it eventually).
It’s probably worth pointing out to new readers in particular that if you like this post and perhaps have liked a few of the previous posts in the series, you can access a collection of all the other posts in the series simply by clicking the blue category link, ‘quotes’, at the bottom of this post, or by clicking the ‘quotes’ link provided in the category cloud in the sidebar to the right.
[Warning: Long post].
I’ve blogged about data related to the data covered in this post before here on the blog, but only in Danish. Part of my motivation for providing some coverage in English here (which is a slightly awkward and time-consuming thing to do, as all the source material is in Danish) is that this is the sort of data you probably won’t ever get to know about if you don’t understand Danish, and some of it seems worth knowing about even for people who do not live in Denmark. Another reason for posting in English is of course that I dislike writing a blog post which I know beforehand that some of my regular readers will not understand. I should perhaps note that some of the data is at least peripherally related to my academic work at the moment.
The report which I’m covering in this post (here’s a link to it) deals primarily with various metrics collected in order to evaluate whether treatment goals which have been set centrally are being met by the Danish regions, one of the primary political responsibilities of which is to deal with health care service delivery. To take an example from the report, a goal has been set that at least 95% of patients with known diabetes in the Danish regions should have their Hba1c (an important variable in the treatment context) measured at least once per year. The report of course doesn’t just contain a list of goals etc. – it also presents a lot of data which has been collected throughout the country in order to figure out to what extent the various goals have been met at the local levels. Hba1c is just an example; there are also goals relating to hypertension, regular eye screenings, regular kidney function tests, regular foot examinations, and regular tests for hyperlipidemia, among others.
Testing is just one aspect of what’s being measured; other goals relate to treatment delivery. There’s for example a goal that the proportion of (known) type 2 diabetics with an Hba1c above 7.0% who are not receiving anti-diabetic treatment should be at most 5% within regions. A thought that occurred to me while reading the report was that some interesting incentive problems might pop up here if these numbers were more important in the decision-making context than I assume they are. Adding this specific variable without also adding a goal for ‘finding diabetics who do not know they are sick’ – and no such goal is included in the report, as far as I’ve been able to ascertain – might lead to problems. In theory, a region that does well in terms of identifying undiagnosed type 2 patients, of which there are many, might get punished for this: the larger patient population in treatment resulting from better identification might lead to binding capacity constraints at various treatment levels, constraints which would not affect regions that are worse at identifying (non-)patients at risk, because of the tradeoff between resources devoted to search/identification and resources devoted to treatment. Without a goal for identifying undiagnosed type 2 diabetics, to the extent that there’s a tradeoff between devoting resources to identifying new cases and devoting resources to the treatment of known cases, the current structure of evaluation – to the extent that it informs decision-making at the regional level – favours treatment over identification, which might or might not be problematic from a cost-benefit point of view.
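The incentive problem described above can be made concrete with a toy calculation (all numbers below are made up for illustration; nothing like this appears in the report):

```python
# Toy illustration (all numbers hypothetical): a region that is better at
# finding undiagnosed type 2 diabetics can score *worse* on a
# "share of diagnosed patients in treatment" metric if treatment
# capacity is fixed.

def treated_share(diagnosed, capacity):
    """Share of diagnosed patients who actually receive treatment,
    given a hard capacity constraint on treatment slots."""
    return min(diagnosed, capacity) / diagnosed

capacity = 1000          # treatment slots available in each region
diagnosed_passive = 1000 # region that does little case-finding
diagnosed_active = 1200  # region that identifies 200 extra patients

share_passive = treated_share(diagnosed_passive, capacity)  # 1.00
share_active = treated_share(diagnosed_active, capacity)    # ~0.83

print(f"passive region: {share_passive:.0%} of diagnosed patients treated")
print(f"active region:  {share_active:.0%} of diagnosed patients treated")
# The active region treats just as many people in absolute terms, but looks
# worse on the metric because its denominator grew while its capacity did not.
```

The point is simply that a metric with ‘diagnosed patients’ in the denominator rewards a small denominator, so without a companion case-finding goal, the evaluation structure mechanically favours regions that identify fewer patients.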
I find it somewhat puzzling that no goals relate to case-finding/diagnostics, because a lot of the goals only really make sense if the people who are sick actually get diagnosed so that they can receive treatment in the first place; that, say, 95% of diabetics with a diagnosis receive treatment option X is much less impressive if, say, a third of all people with the disease do not have a diagnosis. Considering the relatively low amount of variation in some of the metrics included, you’d expect a variable of this sort to be included here; at least I did.
The report has an appendix with some interesting information about the sex ratios, age distributions, how long people have had diabetes, whether they smoke, what their BMIs and blood pressures are like, how well they’re regulated (in terms of Hba1c), what they’re treated with (insulin, antihypertensive drugs, etc.), their cholesterol levels and triglyceride levels, etc. I’ll talk about these numbers towards the end of the post – if you want to get straight to this coverage and don’t care about the ‘main coverage’, you can just scroll down until you reach the ‘…’ point below.
The report has 182 pages with a lot of data, so I’m not going to talk about all of it. It is based on very large data sets which include more than 37.000 Danish diabetes patients from specialized diabetes units (diabetesambulatorier) (these are usually located in hospitals and provide ambulatory care only) as well as 34.000 diabetics treated by their local GPs – the aim is to eventually include all Danish diabetics in the database, and more are added each year, but even as it is, a very big proportion of all patients are ‘accounted for’ in the data. Other sources also provide additional details; for example, there’s a separately collected database on children and young diabetics. Most of the diabetics who are not included here are patients treated by their local GPs, and there’s still a substantial amount of uncertainty related to this group; approximately 90% of all patients connected to the diabetes units are assumed at this point to be included in the database, but the report also notes that approximately 80% of diabetics are assumed to be treated in general practice. Coverage of this patient population is currently improving rapidly, and it seems likely that most diabetics in Denmark will be included in the database within the next few years. They speculate in the report that the inclusion of more patients treated in general practice may be part of the explanation why goal achievement seems to have decreased slightly over time; this seems to me like a likely explanation considering the data they present, as the diabetes units are in general better at achieving the goals set than are the GPs. The data is up to date – as some of you might have inferred from the presumably partly unintelligible words in the parentheses in the title, the report deals with data from the time period 2013-2014. I decided early on not to copy tables into this post directly, as it’s highly annoying to have to translate the terms in such tables; instead I’ve tried to give you the highlights.
I may or may not have succeeded in doing that, but you should be aware, especially if you understand Danish, that the report has a lot of details, e.g. in terms of intraregional variation etc., which are excluded from this coverage. Although I cover far from all the data, I do cover most of the main topics dealt with in the publication in at least a little bit of detail.
The report concludes in the introduction that for most treatment indicators no clinically significant differences in the quality of the treatment provided to diabetics are apparent when you compare the different Danish regions – so if you’re looking at the big picture, if you’re a Danish diabetic it doesn’t matter all that much whether you live in Jutland or in Copenhagen. However, some significant intra-regional differences do exist. In the following I’ll talk in a bit more detail about some of the data included in the report.
When looking at the Hba1c goal (95% should be tested at least once per year), they evaluate the groups treated in the diabetes units and the groups treated in general practice separately; so you have one metric for patients treated in diabetes units living in the north of Jutland (North Denmark Region) and another metric for patients treated in general practice living in the north of Jutland. This breakdown of the data makes it possible not only to compare people across regions but also to investigate whether there are important differences between the care provided by diabetes units and the care provided by general practitioners. When dealing with patients receiving ambulatory care from the diabetes units, all regions meet the goal, but in Copenhagen (Capital Region of Denmark, CRD) only 94% of patients treated in general practice had their Hba1c measured within the last year – this was the only region which did not meet the goal for the patient population treated in general practice. I would have thought beforehand that all diabetes units would have 100% coverage here, but that’s actually only the case in the region in which I live (Central Denmark Region); on the other hand, in most other regions, aside from Copenhagen again, the number is 99%, which seems reasonable, as I’m assuming a substantial proportion of the remainder is explained by patient noncompliance, which is difficult to avoid completely.
I speculate that differences in patient compliance between the populations treated at diabetes units and those treated by their GP might also be part of the explanation for the lower goal achievement of the general practice population; as far as I’m aware, diabetes units can deny care in the case of non-compliance whereas GPs cannot, so you’d sort of expect the most ‘difficult’ patients to end up in general practice. This is speculation to some extent and I’m not sure it’s a big effect, but it’s worth keeping in mind when analyzing this data that not all differences you observe necessarily relate to service delivery inputs (whether or not a doctor reminds a patient that it’s time to get his eyes checked, for example); the two main groups analyzed are likely to also differ in patient population composition. Differences in patient population composition may of course also drive some of the intraregional variation observed. They mention in their discussion of the results for the Hba1c variable that they’re planning on changing the standard here to one which relates to the distribution of the Hba1c results, not just whether the test was done, which seems like a good idea. As it is, the great majority of Danish diabetics have their Hba1c measured at least annually, which is good news because of the importance of this variable in the treatment context.
In the context of hypertension, there’s a goal that at least 95% of diabetics should have their blood pressure measured at least once per year. For patients treated in the diabetes units, all regions achieve the goal and the national average for this patient population is 97% (once again, the region in which I live is the only one that achieved 100% coverage), but for patients treated in general practice only one region (North Denmark Region) managed to get to 95%, and the national average is 90%. In most regions, one in ten diabetics treated in general practice does not have their blood pressure measured once per year, and again Copenhagen (CRD) is doing worst, with a coverage of only 87%. As mentioned in the general comments above, some of the intraregional variation is actually quite substantial, and this may be a good example, because not all hospitals are doing great on this variable. Sygehus Sønderjylland, Aabenraa (in southern Jutland), one of the diabetes units, had a coverage of only 67%, and the percentage at Hillerød Hospital in Copenhagen (CRD), another diabetes unit, was likewise quite low, with 83% of patients having had their blood pressure measured within the last year. These hospitals are however the exceptions to the rule. Evaluating whether patients have been tested for hypertension is different from evaluating whether hypertension is actually treated after it has been discovered, and here the numbers are less impressive: for the type 1 patients treated in the diabetes units, roughly one third (31%) of patients with a blood pressure higher than 140/90 are not receiving treatment for hypertension (the goal was at most 20%). The picture was much better for type 2 patients (11% at the national level) and patients treated in general practice (13%).
They note that the picture has not improved in recent years for the type 1 patients and that this is not, in their opinion, a satisfactory state of affairs. A note of caution: the variable only includes patients whose blood pressure, as measured within the last year, was higher than 140/90, so you can’t use this variable as an indication of how many patients with high blood pressure are not being treated; some patients who are in treatment for high blood pressure have blood pressures lower than 140/90 (achieving this would in many cases be the point of treatment…). Such an estimate will however be added to later versions of the report. In terms of the public health consequences of undertreatment, the two patient populations are of course far from equally important. As noted later in the coverage, the proportion of type 2 patients on antihypertensive agents is much higher than the proportion of type 1 diabetics receiving such treatment, and despite this difference the blood pressure distributions of the two patient populations are reasonably similar (more on this below).
Screening for albuminuria: The goal here is that at least 95% of adult diabetics are screened within a two-year period (there are slightly different goals for children and young adults, but I won’t go into those). In the context of patients treated in the diabetes units, the North Denmark Region and Copenhagen/RH failed to achieve the goal, with a coverage slightly below 95% – the other regions achieved the goal, although not by much; the national average for this patient population is 96%. In the context of patients treated in general practice, none of the regions achieve the goal, and the national average for this patient population is 88%. Region Zealand was doing worst with 84%, whereas the region in which I live, Region Midtjylland, was doing best with a 92% coverage. Of the diabetes units, Rigshospitalet, “one of the largest hospitals in Denmark and the most highly specialised hospital in Copenhagen”, seems to also be the worst-performing hospital in Denmark in this respect, with only 84% of patients being screened – which to me seems exceptionally bad considering that, for example, not a single hospital in the region in which I live is below 95%. Nationally, roughly 20% of patients with micro- or macroalbuminuria are not on ACE-inhibitors/Angiotensin II receptor antagonists.
Eye examination: The main process goal here is at least one eye examination every second year for at least 90% of the patients, plus a requirement that the treating physician knows the result of the eye examination. This latter requirement is important for the interpretation of the results (see below). For patients treated in diabetes units, four out of five regions achieved the goal, but there were also what seemed to me like large differences across regions. In Southern Denmark, the goal was not met and only 88% had had an eye examination within the last two years, whereas the number was 98% in Region Zealand. Region Zealand was a clear outlier here, and the national average for this patient population was 91%. For patients treated in general practice, no regions achieved the goal, and this variable provides a completely different picture from the previous variables in terms of the differences between patients treated in diabetes units and patients treated in general practice: in most regions, the coverage for patients in general practice is in the single digits, and the national average for this patient population is just 5%. They note in the report that this number has decreased over the years through which this variable has been analyzed, and they don’t know why (but they’re investigating it). It seems to be a big problem that doctors are not told about the results of these examinations, which presumably makes coordination of care difficult.
The report also has numbers on how many patients have had their eyes checked within the last four years, rather than within the last two, and this variable makes it clear that more infrequent screening does not explain the differences between the patient populations; for patients treated in general practice, the numbers are still in the single digits here. They mention that data security requirements imposed on health care providers are likely the reason why the numbers are low in general practice, as it seems common that the GP is not informed of the results of the screenings taking place, so that the only people who get to know about the results are the ophthalmologists doing them. A new variable recently included in the report is whether newly diagnosed type 2 diabetics are screened for eye damage within 12 months of receiving their diagnosis. Here they have received the numbers directly from the ophthalmologists, so uncertainty about information sharing doesn’t enter the picture (well, it does, but the variable doesn’t care; it just measures whether an eye screen has been performed or not). Although the standard set is 95% (at most one in twenty should not have their eyes checked within a year of diagnosis), at the national level only half of patients actually do get an eye screen within the first year (95% CI: 46-53%). Uncertainty about the date of diagnosis makes it slightly difficult to interpret some of the specific results, but the chosen standard is not achieved anywhere, and this once again underlines how diabetic eye care is one of the areas where things are not going as well as the people setting the goals would like them to.
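For readers curious where confidence intervals like the 46-53% above come from: the report doesn’t say which method it uses, but a textbook normal-approximation interval for a proportion reproduces numbers of roughly that width; the sample size of 800 below is purely hypothetical, chosen just to make the arithmetic visible.

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion.
    Not necessarily the method used in the report - just the
    standard textbook approximation p +/- z*sqrt(p(1-p)/n)."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Hypothetical numbers: ~50% of 800 newly diagnosed patients screened.
lo, hi = wald_ci(400, 800)
print(f"95% CI: {lo:.1%} - {hi:.1%}")  # roughly 46.5% - 53.5%
```

Note how a half-width of about 3.5 percentage points around 50% already requires several hundred patients; the regional estimates mentioned earlier, based on far fewer patients per hospital, are correspondingly wider.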
The rationale for screening people within the first year of diagnosis is of course that many type 2 patients have complications at diagnosis – “30–50 per cent of patients with newly diagnosed T2DM will already have tissue complications at diagnosis due to the prolonged period of antecedent moderate and asymptomatic hyperglycaemia.” (link).
The report does include estimates of the number of diabetics who receive eye screenings regardless of whether the treating physician knows the results or not; at the national level, according to this estimate 65% of patients have their eyes screened at least once every second year, leaving more than a third of patients in a situation where they are not screened as often as is desirable. They mention that they have had difficulties with the transfer of data and many of the specific estimates are uncertain, including two of the regional estimates, but the general level – 65% or something like that – is based on close to 10.000 patients and is assumed to be representative. Approximately 1% of Danish diabetics are blind, according to the report.
Foot examinations: Just like most of the other variables: at least 95% of patients, at least once every second year. For diabetics treated in diabetes units, the national average here is 96%, and the goal was not achieved in Copenhagen (CRD) (94%) and northern Jutland (91%). There are again remarkable differences within regions; at Helsingør Hospital only 77% were screened (95% CI: 73-82%) (a drop from 94% the year before), and at Hillerød Hospital the number was even lower, 73% (95% CI: 70-75%), again a drop from the previous year, where the coverage was 87%. Both these numbers are worse than the regional averages for all patients treated in general practice, even though none of the regions meet the goal. Actually, I thought the year-to-year changes at these two hospitals were almost as interesting as the intraregional differences, because I have a hard time explaining them; how do you even set up a screening programme such that a coverage drop of more than 10% from one year to the next is possible? For those who don’t know, diabetic feet are very expensive and do not seem to get the research attention one might, from a cost-benefit perspective, assume they would (link, point iii). Going back to the patients in general practice, on average 81% of these patients have a foot examination at least once every second year. The regions here vary from 79% to 84%. The worst-covered patients are patients treated in general practice in the Vordingborg sygehus catchment area in the Zealand Region, where only roughly two out of three (69%, 95% CI: 62-75%) patients have regular foot examinations.
Aside from all the specific indicators they’ve collected and reported on, the authors have also constructed a combined ‘all-or-none’ indicator, which measures the proportion of patients who have not failed to get their Hba1c measured, their feet checked, their blood pressure measured, their kidney function tested, etc. They do not include the eye screening variable in this metric because of the problems associated with that variable, but it is the only process variable not included, so the indicator is a measure of how many of the patients are actually getting all of the care that they’re supposed to get. As patients treated in general practice are generally less well covered than patients treated in the diabetes units at the hospitals, I was interested to know how much these differences ‘added up to’ in the end. For the diabetes units, 11% of patients failed on at least one metric (i.e. did not have their feet checked/Hba1c measured/blood pressure measured/etc.), whereas this was the case for a third of patients in general practice (i.e. only 67% passed on all metrics). Summed up like that, it seems to me that if you’re a Danish diabetes patient and you want to avoid having some variable neglected in your care, it matters whether you’re treated by your local GP or by the local diabetes unit, and you’re probably going to be better off receiving care from the diabetes unit.
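The ‘all-or-none’ construction is simple enough that a small sketch shows exactly how it is computed (the field names and patient records below are made up; the report of course works from actual registry data):

```python
# Sketch of an 'all-or-none' indicator: a patient only counts as a
# success if *every* included process metric was met. Field names
# are invented for illustration.

def all_or_none(patients, metrics):
    """Proportion of patients who passed every metric."""
    passed = sum(1 for p in patients if all(p[m] for m in metrics))
    return passed / len(patients)

metrics = ["hba1c_measured", "bp_measured", "feet_checked", "kidney_tested"]
patients = [
    {"hba1c_measured": True, "bp_measured": True,  "feet_checked": True,  "kidney_tested": True},
    {"hba1c_measured": True, "bp_measured": True,  "feet_checked": False, "kidney_tested": True},
    {"hba1c_measured": True, "bp_measured": False, "feet_checked": False, "kidney_tested": True},
    {"hba1c_measured": True, "bp_measured": True,  "feet_checked": True,  "kidney_tested": True},
]
print(all_or_none(patients, metrics))  # 0.5 - half failed on at least one metric
```

One property worth noting: because a single miss is enough to fail a patient, the all-or-none score is always at or below the worst single-metric coverage, which is why it can look much worse than any of the individual numbers reported above.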
Some descriptive statistics from the appendix (p. 95 ->):
Sex ratio: For this variable, they have multiple reports based on data derived from different databases. In the first database, including 16.442 people, 56% are male and 44% are female. In the next database (n=20635), including only type 2 diabetics, the sex ratio is more skewed: 60% are males and 40% are females. In a database including only patients in general practice (n=34359), as in the first database, 56% of the diabetics are males and 44% are females. For the patient population of children and young adults included (n=2624), the sex ratio is almost equal (51% males and 49% females). The last database, Diabase, based on evaluation of eye screening and including only adults (n=32842), has 55% males and 45% females. It seems to me based on these results that the sex ratio is slightly skewed in most patient populations, with slightly more males than females having diabetes – and it seems not improbable that this is due to a higher male prevalence of type 2 diabetes (the children/young adult database and the type 2 database both seem to point in this direction – the children/young adult group mainly consists of type 1 patients, as 98% of that sample is type 1). The fact that the prevalence of autoimmune disorders is in general higher in females than in males also seems to support this interpretation; to the extent that the sex ratio is skewed in favour of males, you’d expect lifestyle factors to be behind this.
Next, age distribution. In the first database (n=16.442), both the average and the median age are 50, the standard deviation is 16, the youngest individual is 16 and the oldest is 95. It is worth remembering in this part of the reporting that the oldest individual in the sample is not a good estimate of ‘how long a diabetic can expect to live’ – for all we know, the 95-year-old in the database got diagnosed at the age of 80. You need diabetes duration before you can begin to speculate about that variable. Anyway, in the next database, of type 2 patients (n=20635), the average age is 64 (median=65), the standard deviation is 12 and the oldest individual is 98. In both of the databases mentioned so far, some regions do better than others in terms of the oldest individual, but it seems to me that this may just be a function of sample size and ‘random stuff’ (95+-year-olds are rare events); northern Jutland doesn’t have a lot of patients, so the oldest patient in that group is not as old as the oldest patient from Copenhagen – this is probably just what you’d expect. In the general practice database (n=34359), the average age is 68 (median=69) and the standard deviation is 11; the oldest individual there is 102. In the Diabase database (n=32842), the average age is 62 (median=64), the standard deviation is 15 and the oldest individual is 98. It’s clear from these databases that most diabetics in Denmark are type 2 diabetics (this is no surprise) and that a substantial proportion of them are at or close to retirement age.
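The point that the oldest patient in a small region will tend to be younger than the oldest patient in a large region, purely mechanically, can be illustrated with a quick simulation (the distribution parameters below are made up, loosely mimicking the type 2 mean and standard deviation above; real age distributions are of course not normal):

```python
import random

random.seed(0)

def expected_max_age(n, trials=200):
    """Average 'oldest patient' across repeated samples of n ages
    drawn from the same (made-up) normal distribution with mean 64
    and sd 12."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(64, 12) for _ in range(n))
    return total / trials

small = expected_max_age(500)    # a small region
large = expected_max_age(5000)   # a large region
print(f"typical oldest patient, n=500:  ~{small:.0f}")
print(f"typical oldest patient, n=5000: ~{large:.0f}")
# The larger sample reliably contains an older 'oldest patient',
# even though both were drawn from the identical distribution.
```

So a regional difference in the maximum age tells you essentially nothing about care quality or patient longevity; it is mostly a sample-size artifact.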
The appendix has a bit of data on diabetes type, but I think the main thing to take away from the tables that break this variable down is that type 1 is overrepresented in the databases compared to the true prevalence – in the Diabase database for example almost half of patients are type 1 (46%), despite the fact that type 1 diabetics are estimated to make up only 10% of the total in Denmark (see e.g. this (Danish source)). I’m sure this is to a significant extent due to lack of coverage of type 2 diabetics treated in general practice.
Diabetes duration: In the first data set, including 16.442 individuals, the patients have a median diabetes duration of 21.2 years. The 10% cutoff is 5.4 years, the 25% cutoff is 11.3 years, the 75% cutoff is 33.5 years, and the 90% cutoff is 44.2 years. High diabetes durations are more likely to be observed in type 1 patients, as they’re in general diagnosed earlier; in the next database, involving only type 2 patients (n=20635), the median duration is 12.9 years and the corresponding cutoffs are 3.8 years (10%), 7.4 years (25%), 18.6 years (75%), and 24.7 years (90%). In the database involving patients treated in general practice, the median duration is 6.8 years and the cutoffs reported for the various percentiles are 2.5 years (10%), 4.0 (25%), 11.2 (75%) and 15.6 (90%). One note not directly related to the data, but which I thought might be worth adding here: if one were to try to use these data for the purposes of estimating the risk of complications as a function of diabetes duration, it would be important to keep in mind that there’s probably often a substantial amount of uncertainty associated with the diabetes duration variable, because many type 2 diabetics are diagnosed after a substantial amount of time with sub-optimal glycemic control; i.e. although diabetes duration is lower in type 2 populations than in type 1 populations, I’d assume that the type 2 estimates of duration are still biased downwards compared to the type 1 estimates, causing some potential issues in terms of how to interpret associations found here.
Next, smoking. In the first database (n=16,442), 22% of diabetics smoke daily and another 22% are ex-smokers who have not smoked within the last 6 months. According to the resource to which you’re directed when looking for this kind of data on Statistics Denmark, the percentage of daily smokers in the general population was 17% in 2013 (based on n=158,870 – this is a direct link to the data), which seems to indicate that the trend (this is a graph of the percentage of Danes smoking daily as a function of time, going back to the ’70s) I commented upon (Danish link) a few years back has not reversed or slowed down much. If we go back to the appendix and look at the next source, dealing with type 2 diabetics, 19% of them smoke daily and 35% are ex-smokers (again, 6 months). In the general practice database (n=34,359), 17% of patients smoke daily and 37% are ex-smokers.
BMI. Here’s one variable where type 1 and type 2 look very different. The first source deals with type 1 diabetics (n=15,967), and here the median BMI is 25.0, which is comparable to the population median (if anything it’s probably lower than the population median) – see e.g. page 63 here. Relevant percentile cutoffs are 20.8 (10%), 22.7 (25%), 28.1 (75%), and 31.3 (90%). The numbers are quite similar across regions. For the type 2 data, the first source (n=20,635) has a median BMI of 30.7 (almost equal to the 1-in-10 cutoff for type 1 diabetics), with relevant cutoffs of 24.4 (10%), 27.2 (25%), 34.9 (75%), and 39.4 (90%). According to this source, one in four type 2 diabetics in Denmark is ‘severely obese’, and more diabetics are obese than not. It’s worth remembering that using these numbers to implicitly estimate the risk of type 2 diabetes associated with overweight is problematic, as especially some of the people in the lower end of the distribution are quite likely to have experienced weight loss post-diagnosis. For type 2 patients treated in general practice (n=15,736), the median BMI is 29.3 and the cutoffs are 23.7 (10%), 26.1 (25%), 33.1 (75%), and 37.4 (90%).
Distribution of HbA1c. The descriptive statistics also include data on the distribution of HbA1c values among some of the patients who have had this variable measured. I won’t go into the details here, except to note that the differences between type 1 and type 2 patients in terms of the HbA1c values achieved are smaller than I’d perhaps have expected; the median HbA1c among type 1s was estimated at 62, based on 16,442 individuals, whereas the corresponding number for type 2s was 59, based on 20,635 individuals. Curiously, a second data source finds a median HbA1c of only 48 for type 2 patients treated in general practice. The difference between this number and the type 1 median is definitely large enough to matter in terms of the risk of complications (it’s more questionable how big the effect of a jump from 59 to 62 is, especially considering measurement error and the fact that the type 1 distribution seems denser than the type 2 distribution, so that there aren’t that many more exceptionally high values in the type 1 dataset), but I wonder whether this actually quite impressive level of metabolic control in general practice may not be due to biased reporting, with GPs who do well in terms of diabetes management also being more likely to report to the databases; it’s worth remembering that most patients treated in general practice are still not accounted for in these datasets.
Oral antidiabetics and insulin. In one sample of 20,635 type 2 patients, 69% took oral antidiabetics, and in another sample of 34,359 type 2 patients treated in general practice the number was 75%. 3% of type 1 diabetics in a sample of 16,442 individuals also took oral antidiabetics, which surprised me. In the first-mentioned sample of type 2 patients, 69% (not the same individuals – this was not a reporting error) also took insulin, so there seems to be a substantial number of patients on both treatments. In the general practice sample, the number of patients on insulin was much lower, as only 14% of type 2 patients were on insulin. Again, concerns about reporting bias may play a role here, but even taking this number at face value and extrapolating out of sample, you reach the conclusion that the majority of patients on insulin are probably type 2 diabetics, as only roughly one patient in 10 is type 1.
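The extrapolation in that last sentence is simple back-of-envelope arithmetic; a sketch (the 90/10 type split and the 14% figure are from the text above, and the assumption that essentially all type 1 patients use insulin is mine):

```python
# Back-of-envelope: of all insulin users, what share are type 2?
type1_share = 0.10       # ~1 in 10 Danish diabetics is type 1
type2_share = 0.90
type1_on_insulin = 1.00  # assumption: essentially all type 1 patients use insulin
type2_on_insulin = 0.14  # general-practice sample, taken at face value

insulin_type1 = type1_share * type1_on_insulin  # 0.100 of all diabetics
insulin_type2 = type2_share * type2_on_insulin  # ~0.126 of all diabetics

type2_fraction = insulin_type2 / (insulin_type1 + insulin_type2)
print(f"{type2_fraction:.0%} of insulin users are type 2")  # → 56%
```

So even with the conservative assumption that every single type 1 patient is on insulin, type 2 patients still end up as a (slim) majority of insulin users; if the true type 2 insulin share is higher than 14% (reporting bias), the majority only grows.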
Antihypertensive treatment and treatment for hyperlipidemia: Although, as mentioned above, there seems to be less focus on hypertension in type 1 patients than in type 2 patients, it’s still the case that roughly half (48%) of all patients in the type 1 sample (n=16,442) were on antihypertensive treatment. In the first type 2 sample (n=20,635), 82% of patients were receiving treatment for hypertension, and this number was similar in the general practice sample (81%). The proportions of patients in treatment for hyperlipidemia are roughly similar (46% of type 1 patients, and 79% and 73% in the two type 2 samples, respectively).
Blood pressure. The median systolic blood pressure among type 1 diabetics (n=16,442) was 130, with the 75% cutoff intersecting the hypertension level (140) and 10% of patients having a systolic blood pressure above 151. These numbers are almost identical to those in the sample of type 2 patients treated in general practice, though as mentioned earlier this blood pressure level is achieved with a lower proportion of patients in treatment for hypertension. In the second sample of type 2 patients (n=20,635), the numbers were slightly higher (median: 133, 75% cutoff: 144, 90% cutoff: 158). The median diastolic blood pressure was 77 in the type 1 sample, with 75% and 90% cutoffs of 82 and 89; the data in the type 2 samples are almost identical.
Here’s my first post about the book. In this post I’ll continue my coverage where I left off. A few of the chapters covered below I did not think very highly of, but other parts of the coverage are about as good as you could expect (given problems such as e.g. limited data). Some of the stuff I found quite interesting. As people will note in the coverage below, the book does address the religious dimension to some extent, though in my opinion far from to the extent that the variable deserves. An annoying aspect of the chapter on religion was that although its author includes data which to me cannot but lead to some very obvious conclusions, he seems to be very careful to avoid drawing those conclusions explicitly. It’s understandable, but still annoying. For related reasons I also got annoyed at him for, presumably deliberately, completely disregarding what seems in the context of his own coverage to be an actually very important component of Huntington’s thesis: that conflict at the micro level seems very often to be between muslims and ‘the rest’. Here’s a relevant quote from Clash…, p. 255:
“ethnic conflicts and fault line wars have not been evenly distributed among the world’s civilizations. Major fault line fighting has occurred between Serbs and Croats in the former Yugoslavia and between Buddhists and Hindus in Sri Lanka, while less violent conflicts took place between non-Muslim groups in a few other places. The overwhelming majority of fault line conflicts, however, have taken place along the boundary looping across Eurasia and Africa that separates Muslims from non-Muslims. While at the macro or global level of world politics the primary clash of civilizations is between the West and the rest, at the micro or local level it is between Islam and the others.”
This point – that conflict at the local level, which seems to be the type of conflict you’re particularly interested in if you’re researching civil wars (as also argued in previous chapters in the coverage), is according to Huntington very islam-centric – is completely overlooked (ignored?) in the handbook chapter. If you haven’t read Huntington and your only exposure to him is through the chapter in question, you’ll probably conclude that Huntington was wrong, because that seems to be the conclusion the author draws, arguing that other models are more convincing (I should add here that these other models do seem useful, at least in terms of providing (superficial) explanations; the point is just that I feel the author is misrepresenting Huntington, and I dislike this). Although there are parts of the coverage in that chapter where I feel it’s obvious the author and I do not agree, I should note that the fact that he talks about the data and the empirical research makes up for a lot of other stuff.
Anyway, on to the coverage – it’s perhaps worth noting, in light of the introductory remarks above, that the post has stuff on a lot of things besides religion, e.g. the role of natural resources, regime types, migration, and demographics.
“Elites seeking to end conflict must: (1) lead followers to endorse and support peaceful solutions; (2) contain spoilers and extremists and prevent them from derailing the process of peacemaking; and (3) forge coalitions with more moderate members of the rival ethnic group(s) […]. An important part of the two-level nature of the ethnic conflict is that each of the elites supporting the peace process be able to present themselves, and the resulting terms of the peace, as a “win” for their ethnic community. […] A strategy that a state may pursue to resolve ethnic conflict is to co-opt elites from the ethnic communities demanding change […]. By satisfying elites, it reduces the ability of the aggrieved ethnic community to mobilize. Such a process of co-option can also be used to strengthen ethnic moderates in order to undermine ethnic extremists. […] the co-opted elites need to be careful to be seen as still supporting ethnic demands or they may lose all credibility in their respective ethnic community. If this occurs, the likely outcome is that more extreme ethnic elites will be able to capture the ethnic community, possibly leading to greater violence.
It is important to note that “spoilers,” be they an individual or a small sub-group within an ethnic community, can potentially derail any peace process, even if the leaders and masses support peace (Stedman, 2001).”
“Three separate categories of international factors typically play into identity and ethnic conflict. The first is the presence of an ethnic community across state boundaries. Thus, a single community exists in more than one state and its demands become international. […] This division of an ethnic community can occur when a line is drawn geographically through a community […], when a line is drawn and a group moves into the new state […], or when a diaspora moves a large population from one state to another […] or when sub-groups of an ethnic community immigrate to the developed world […] When ethnic communities cross state boundaries, the potential for one state to support an ethnic community in the other state exists. […] There is also the potential for ethnic communities to send support to a conflict […] or to lobby their government to intervene […]. Ethnic groups may also form extra-state militias and cross international borders. Sometimes these rebel groups can be directly or indirectly sponsored by state governments, leading to a very complex situation […] A second set of possible international factors is non-ethnic international intervention. A powerful state may decide to intervene in an ethnic conflict for a variety of reasons, ranging from humanitarian support, to peacekeeping, to outright invasion […] The third and last factor is the commitment of non-governmental organizations (NGOs) or third-party mediators to a conflict. […] The record of international interventions in ethnic civil wars is quite mixed. There are many difficulties associated with international action [and] international groups cannot actually change the underlying root of the ethnic conflict (Lake and Rothchild, 1998; Kaufman, 1996).”
“A relatively simple way to think of conflict onset is to think that for a rebellion to occur two conditions need to be satisfactorily fulfilled: There must be a motivation and there must be an opportunity to rebel.3 First, the rebels need a motive. This can be negative – a grievance against the existing state of affairs – or positive – a desire to capture resource rents. Second, potential rebels need to be able to achieve their goal: The realization of their desires may be blocked by the lack of financial means. […] Work by Collier and Hoeffler (1998, 2004) was crucial in highlighting the economic motivation behind civil conflicts. […] Few conflicts, if any, can be characterized purely as “resource conflicts.” […] It is likely that few groups are solely motivated by resource looting, at least in the lower rank level. What is important is that valuable natural resources create opportunities for conflicts. To feed, clothe, and arm its members, a rebel group needs money. Unless the rebel leaders are able to raise sufficient funds, a conflict is unlikely to start no matter how severe the grievances […] As a consequence, feasibility of conflict – that is, valuable natural resources providing opportunity to engage in violent conflict – has emerged as a key to understanding the relation between valuable resources and conflict.”
“It is likely that some natural resources are more associated with conflict than others. Early studies on armed civil conflict used resource measures that aggregated different types of resources together. […] With regard to financing conflict start-up and warfare the most salient aspect is probably the ease with which a resource can be looted. Lootable resources can be extracted with simple methods by individuals or small groups, are easy to transport, and can be smuggled across borders with limited risks. Examples of this type of resources are alluvial gemstones and gold. By contrast, deep-shaft minerals, oil, and natural gas are less lootable and thus less likely sources of financing. […] Using comprehensive datasets on all armed civil conflicts in the world, natural resource production, and other relevant aspects such as political regime, economic performance, and ethnic composition, researchers have established that at least some high-value natural resources are related to higher risk of conflict onset. Especially salient in this respect seem to be oil and secondary diamonds […] The results regarding timber […] and cultivation of narcotics […] are inconclusive. […] [An] important conclusion is that natural resources should be considered individually and not lumped together. Diamonds provide an illustrative example: the geological form of the diamond deposit is related to its effect on conflict. Secondary diamonds – the more lootable form of two deposit types – makes conflict more likely, longer, and more severe. Primary diamonds on the other hand are generally not related to conflict.”
“Analysis on conflict duration and severity confirm that location is a salient factor: resources matter for duration and severity only when located in the region where the conflict is taking place […] That the location of natural resources matters has a clear and important implication for empirical conflict research: relying on country-level aggregates can lead to wrong conclusions about the role of natural resources in armed civil conflict. As a consequence of this, there has been effort to collect location-specific data on oil, gas, drug cultivation, and gemstones”.
“a number of prominent studies of ethnic conflict have suggested that when ethnic groups grow at different rates, this may lead to fears of an altered political balance, which in turn might cause political instability and violent conflict […]. There is ample anecdotal evidence for such a relationship [but unfortunately little quantitative research…]. The civil war in Lebanon, for example, has largely been attributed to a shift in the delicate ethnic balance in that state […]. Further, in the early 1990s, radical Serb leaders were agitating for the secession of “Serbian” areas in Bosnia-Herzegovina by instigating popular fears that Serbs would soon be outnumbered by a growing Muslim population heading for the establishment of a Shari’a state”.
“[One] part of the demography-conflict literature has explored the role of population movements. Most of this literature […] treats migration and refugee flows as a consequence of conflict rather than a potential cause. Some scholars, however, have noted that migration, and refugee migration in particular, can spur the spread of conflict both between and within states […]. Existing work suggests that environmentally induced migration can lead to conflict in receiving areas due to competition for scarce resources and economic opportunities, ethnic tensions when migrants are from different ethnic groups, and exacerbation of socioeconomic “fault lines” […] Salehyan and Gleditsch (2006) point to spill-over effects, in the sense that mass refugee migration might spur tensions in neighboring or receiving states by imposing an economic burden and causing political stability [sic]. […] Based on a statistical analysis of refugees from neighboring countries and civil war onset during the period 1951–2001, they find that countries that experience an influx of refugees from neighboring states are significantly more likely to experience wars themselves. […] While the youth bulge hypothesis [large groups of young males => higher risk of violence/war/etc.] in general is supported by empirical evidence, indicating that countries and areas with large youth cohorts are generally at a greater risk of low-intensity conflict, the causal pathways relating youth bulges to increased conflict propensity remain largely unexplored quantitatively. When it comes to the demographic factors which have so far received less attention in terms of systematic testing – skewed sex ratios, differential ethnic growth, migration, and urbanization – the evidence is somewhat mixed […] a clear challenge with regard to the study of demography and conflict pertains to data availability and reliability. 
[…] Countries that are undergoing armed conflict are precisely those for which we need data, but also those in which census-taking is hampered by violence.”
“Most research on the duration of civil war find that civil wars in democracies tend to be longer than other civil wars […] Research on conflict severity finds some evidence that democracies tend to see fewer battledeaths and are less likely to target civilians, suggesting that democratic institutions may induce some important forms of restraints in armed conflict […] Many researchers have found that democratization often precedes an increase in the risk of the onset of armed conflict. Hegre et al. (2001), for example, find that the risk of civil war onset is almost twice as high a year after a regime change as before, controlling for the initial level of democracy […] Many argue that democratic reforms come about when actors are unable to rule unilaterally and are forced to make concessions to an opposition […] The actual reforms to the political system we observe as democratization often do not suffice to reestablish an equilibrium between actors and the institutions that regulate their interactions; and in its absence, a violent power struggle can follow. Initial democratic reforms are often only partial, and may fail to satisfy the full demands of civil society and not suffice to reduce the relevant actors’ motivation to resort to violence […] However, there is clear evidence that the sequence matters and that the effect [the increased risk of civil war after democratization, US] is limited to the first election. […] civil wars […] tend to be settled more easily in states with prior experience of democracy […] By our count, […] 75 percent of all annual observations of countries with minor or major armed conflicts occur in non-democracies […] Democracies have an incidence of major armed conflict of only 1 percent, whereas nondemocracies have a frequency of 5.6 percent.”
“Since the Iranian revolution in the late 1970s, religious conflicts and the rise of international terror organizations have made it difficult to ignore the facts that religious factors can contribute to conflict and that religious actors can cause or participate in domestic conflicts. Despite this, comprehensive studies of religion and domestic conflict remain relatively rare. While the reasons for this rarity are complex there are two that stand out. First, for much of the twentieth century the dominant theory in the field was secularization theory, which predicted that religion would become irrelevant and perhaps extinct in modern times. While not everyone agreed with this extreme viewpoint, there was a consensus that religious influences on politics and conflict were a waning concern. […] This theory was dominant in sociology for much of the twentieth century and effectively dominated political science, under the title of modernization theory, for the same period. […] Today supporters of secularization theory are clearly in the minority. However, one of their legacies has been that research on religion and conflict is a relatively new field. […] Second, as recently as 2006, Brian Grim and Roger Finke lamented that “religion receives little attention in international quantitative studies. Including religion in cross-national studies requires data, and high-quality data are in short supply” […] availability of the necessary data to engage in quantitative research on religion and civil wars is a relatively recent development.”
“[Some] studies [have] found that conflicts involving actors making religious demands – such as demanding a religious state or a significant increase in religious legislation – were less likely to be resolved with negotiated settlements; a negotiated settlement is possible if the settlement focused on the non-religious aspects of the conflict […] One study of terrorism found that terror groups which espouse religious ideologies tend to be more violent (Henne, 2012). […] The clear majority of quantitative studies of religious conflict focus solely on inter-religious conflicts. Most of them find religious identity to influence the extent of conflict […] but there are some studies which dissent from this finding”.
“Terror is most often selected by groups that (1) have failed to achieve their goals through peaceful means, (2) are willing to use violence to achieve their goals, and (3) do not have the means for higher levels of violence.”
“the PITF dataset provides an accounting of the number of domestic conflicts that occurred in any given year between 1960 and 2009. […] Between 1960 and 2009 the modified dataset includes 817 years of ethnic war, 266 years of genocides/politicides, and 477 years of revolutionary wars. […] Cases were identified as religious or not religious based on the following categorization:
1 Not Religious.
2 Religious Identity Conflict: The two groups involved in the conflict belong to different religions or different denominations of the same religion.
3 Religious Wars: The two sides of the conflict belong to the same religion but the description of the conflict provided by the PITF project identifies religion as being an issue in the conflict. This typically includes challenges by religious fundamentalists to more secular states. […]
The results show that both numerically and as a proportion of all conflict, religious state failures (which include both religious identity conflicts and religious wars) began increasing in the mid-1970s. […] As a proportion of all conflict, religious state failures continued to increase and became a majority of all state failures in 2002. From 2002 onward, religious state failures were between 55 percent and 62 percent of all state failures in any given year.”
“Between 2002 and 2009, eight of 12 new state failures were religious. All but one of the new religious state failures were ongoing as of 2009. These include:
• 2002: A rebellion in the Muslim north of the Ivory Coast (ended in 2007)
• 2003: The beginning of the Sunni–Shia violent conflict in Iraq (ongoing)
• 2003: The resumption of the ethnic war in the Sudan [97% muslims, US] (ongoing)
• 2004: Muslim militants challenged Pakistan’s government in South and North Waziristan. This has been followed by many similar attacks (ongoing)
• 2004: Outbreak of violence by Muslims in southern Thailand (ongoing)
• 2004: In Yemen [99% muslims, US], followers of dissident cleric Husain Badr al-Din al-Huthi create a stronghold in Saada. Al-Huthi was killed in September 2004, but serious fighting begins again in early 2005 (ongoing)
• 2007: Ethiopia’s invasion of southern Somalia causes a backlash in the Muslim (ethnic- Somali) Ogaden region (ongoing)
• 2008: Islamist militants in the eastern Trans-Caucasus region of Russia bordering on Georgia (Chechnya, Dagestan, and Ingushetia) reignited their violent conflict against Russia (ongoing)” [my bold]
“There are few additional studies which engage in this type of longitudinal analysis. Perhaps the most comprehensive of such studies is presented in Toft et al.’s (2011) book God’s Century based on data collected by Toft. They found that religious conflicts – defined as conflicts with a religious content – rose from 19 percent of all civil wars in the 1940s to about half of civil wars during the first decade of the twenty-first century. Of these religious conflicts, 82 percent involved Muslims. This analysis includes only 135 civil wars during this period. The lower number is due to a more restrictive definition of civil war which includes at least 1,000 battle deaths. This demonstrates that the findings presented above also hold when looking at the most violent of civil wars.” [my bold]
“This comprehensive new Handbook explores the significance and nature of armed intrastate conflict and civil war in the modern world.
Civil wars and intrastate conflict represent the principal form of organised violence since the end of World War II, and certainly in the contemporary era. These conflicts have a huge impact and drive major political change within the societies in which they occur, as well as on an international scale. The global importance of recent intrastate and regional conflicts in Afghanistan, Pakistan, Iraq, Somalia, Nepal, Côte d’Ivoire, Syria and Libya – amongst others – has served to refocus academic and policy interest upon civil war. […] This volume will be of much interest to students of civil wars and intrastate conflict, ethnic conflict, political violence, peace and conflict studies, security studies and IR in general.”
I’m currently reading this handbook. One observation I’ll make here before moving on to the main coverage is that although I’ve read more than 100 pages, and although every single one of the conflicts argued in the introduction above to be motivating study into these topics – aside from one, the exception being Nepal – involves muslims, the word ‘islam’ has been mentioned exactly once in the coverage so far (an updated list would arguably include yet another muslim country, Yemen, as well). I noted while doing the text search that they seem to take up the topic of religion and religious motivation later on, so I sort of want to withhold judgment for now, but if they don’t deal more seriously with this topic later on than they have so far, I’ll have great difficulties giving this book a high rating, despite the coverage in general being actually quite interesting, detailed and well written so far – chapter 7, on so-called ‘critical perspectives’, is in my opinion a load of crap [a few illustrative quotes/words/concepts from that chapter: “Frankfurt School-inspired Critical Theory”, “approaches such as critical constructivism, post-structuralism, feminism, post-colonialism”, “an openly ethical–normative commitment to human rights, progressive politics”, “labelling”, “dialectical”, “power–knowledge structures”, “conflict discourses”, “Foucault”, “an abiding commitment to being aware of, and trying to overcome, the Eurocentric, Orientalist and patriarchal forms of knowledge often prevalent within civil war studies”, “questioning both morally and intellectually the dominant paradigm”… I read the chapter very fast, to the point of almost only skimming it, and I have not quoted from that chapter in my coverage below, for reasons which should be obvious – I was reminded of Poe’s Corollary while reading the chapter, as I briefly started wondering along the way whether the chapter was an elaborate joke which had somehow made it into the publication, and I was also briefly reminded of the Sokal affair, mostly because of the unbelievable amount of meaningless buzzwords], but that’s just one chapter, and most of the others so far have been quite okay. A few of the points in the problematic chapter are actually arguably worth having in mind, but there’s so much bullshit included as well that you have a really hard time taking any of it seriously.
Some observations from the first 100 pages:
“There are wide differences of opinion across the broad field of scholars who work on civil war regarding the basis of legitimate and scientific knowledge in this area, on whether cross-national studies can generate reliable findings, and on whether objective, value-free analysis of armed conflict is possible. All too often – and perhaps increasingly so, with the rise in interest in econometric approaches – scholars interested in civil war from different methodological traditions are isolated from each other. […] even within the more narrowly defined empirical approaches to civil war studies there are major disagreements regarding the most fundamental questions relating to contemporary civil wars, such as the trends in numbers of armed conflicts, whether civil wars are changing in nature, whether and how international actors can have a role in preventing, containing and ending civil wars, and the significance of [various] factors”.
“In simplest terms civil war is a violent conflict between a government and an organized rebel group, although some scholars also include armed conflicts primarily between non-state actors within their study. The definition of a civil war, and the analytical means of differentiating a civil war from other forms of large-scale violence, has been controversial […] The Uppsala Conflict Data Program (UCDP) uses 25 battle-related deaths per year as the threshold to be classified as armed conflict, and – in common with other datasets such as the Correlates of War (COW) – a threshold of 1,000 battle-related deaths for a civil war. While this is now widely endorsed, debate remains regarding the rigor of this definition […] differences between two of the main quantitative conflict datasets – the UCDP and the COW – in terms of the measurement of armed conflict result in significant differences in interpreting patterns of conflict. This has led to conflicting findings not only about absolute numbers of civil wars, but also regarding trends in the numbers of such conflicts. […] According to the UCDP/PRIO data, from 1946 to 2011 a total of 102 countries experienced civil wars. Africa witnessed the most with 40 countries experiencing civil wars between 1946 and 2011. During this period 20 countries in the Americas experienced civil war, 18 in Asia, 13 in Europe, and 11 in the Middle East […]. There were 367 episodes (episodes in this case being separated by at least one year without at least 25 battle-related deaths) of civil wars from 1946 to 2009 […]. The number of active civil wars generally increased from the end of the Cold War to around 1992 […]. Since then the number has been in decline, although whether this is likely to be sustained is debatable. In terms of onset of first episode by region from 1946 to 2011, Africa leads the way with 75, followed by Asia with 67, the Western Hemisphere with 33, the Middle East with 29, and Europe with 25 […]. 
As Walter (2011) has observed, armed conflicts are increasingly concentrated in poor countries. […] UCDP reports 137 armed conflicts for the period 1989–2011. For the overlapping period 1946–2007, COW reports 179 wars, while UCDP records 244 armed conflicts. As most of these conflicts have been fought over disagreements relating to conditions within a state, it means that civil war has been the most common experience of war throughout this period.”
“There were 3 million deaths from civil wars with no international intervention between 1946 and 2008. There were 1.5 million deaths in wars where intervention occurred. […] In terms of region, there were approximately 350,000 civil war-related deaths in both Europe and the Middle East from the years 1946 to 2008. There were 467,000 deaths in the Western Hemisphere, 1.2 million in Africa, and 3.1 million in Asia for the same period […] In terms of historical patterns of civil wars and intrastate armed conflict more broadly, the most conspicuous trend in recent decades is an apparent decline in absolute numbers, magnitude, and impact of armed conflicts, including civil wars. While there is wide – but not total – agreement regarding this, the explanations for this downward trend are contested. […] the decline seems mainly due not to a dramatic decline of civil war onsets, but rather because armed conflicts are becoming shorter in duration and they are less likely to recur. While this is undoubtedly welcome – and so is the tendency of civil wars to be generally smaller in magnitude – it should not obscure the fact that civil wars are still breaking out at a rate that has been fairly static in recent decades.”
“there is growing consensus on a number of findings. For example, intrastate armed conflict is more likely to occur in poor, developing countries with weak state structures. In situations of weak states the presence of lootable natural resources and oil increase the likelihood of experiencing armed conflict. Dependency upon the export of primary commodities is also a vulnerability factor, especially in conjunction with drastic fluctuations in international market prices which can result in economic shocks and social dislocation. State weakness is relevant to this – and to most of the theories regarding armed conflict proneness – because such states are less able to cushion the impact of economic shocks. […] Authoritarian regimes as well as entrenched democracies are less likely to experience civil war than societies in-between […] Situations of partial or weak democracy (anocracy) and political transition, particularly a movement towards democracy in volatile or divided societies, are also strongly correlated to conflict onset. The location of a society – especially if it has other vulnerability factors – in a region which has contiguous neighbors which are experiencing or have experienced armed conflict is also an armed conflict risk.”
“Military intervention aimed at supporting a protagonist or influencing the outcome of a conflict tends to increase the intensity of civil wars and increase their duration […] It is commonly argued that wars ending with military victory are less likely to recur […]. In these terminations one side no longer exists as a fighting force. Negotiated settlements, on the other hand, are often unstable […] The World Development Report 2011 notes that 90 percent of the countries with armed conflicts taking place in the first decade of the 2000s also had a major armed conflict in the preceding 30 years […] of the 137 armed conflicts that were fought after 1989 100 had ended by 2011, while 37 were still ongoing”
“Cross-national, aggregated, analysis has played a leading role in strengthening the academic and policy impact of conflict research through the production of rigorous research findings. However, the […] aggregation of complex variables has resulted in parsimonious findings which arguably neglect the complexity of armed conflict; simultaneously, differences in the codification and definition of key concepts result in contradictory findings. The growing popularity of micro-studies is therefore an important development in the field of civil war studies, and one that responds to the demand for more nuanced analysis of the dynamics of conflict at the local level.”
“Jason Quinn, University of Notre Dame, has calculated that the number of scholarly articles on the onset of civil wars published in the first decade of the twenty-first century is larger than the previous five decades combined”.
“One of the most challenging aspects of quantitative analysis is transforming social concepts into numerical values. This difficulty means that many of the variables used to capture theoretical constructs represent crude indicators of the real concept […] econometric studies of civil war must account for the endogenising effect of civil war on other variables. Civil war commonly lowers institutional capacity and reduces economic growth, two of the primary conditions that are consistently shown to motivate civil violence. Scholars have grown more capable of modelling this process […], but still too frequently fail to capture the endogenising effect of civil conflict on other variables […] the problems associated with the rare nature of civil conflict can [also] cause serious problems in a number of econometric models […] Case-based analysis commonly suffers from two fundamental problems: non-generalisability and selection bias. […] Combining research methods can help to enhance the validity of both quantitative and qualitative research. […] the combination of methods can help quantitative researchers address measurement issues, assess outliers, discuss variables omitted from the large-N analysis, and examine cases incorrectly predicted by econometric models […] The benefits of mixed methods research designs have been clearly illustrated in a number of prominent studies of civil war […] Yet unfortunately the bifurcation of conflict studies into qualitative and quantitative branches makes this practice less common than is desirable.”
“Ethnography has elicited a lively critique from within and without anthropology. […] Ethnographers stand accused of argument by ostension (pointing at particular instances as indicative of a general trend). The instances may not even be true. This is one of the reasons that the economist Paul Collier rejected ethnographic data as a source of insight into the causes of civil wars (Collier 2000b). According to Collier, the ethnographer builds on anecdotal evidence offered by people with good reasons to fabricate their accounts. […] The story fits the fact. But so might other stories. […] [It might be categorized as] a discipline that still combines a mix of painstaking ethnographic documentation with brilliant flights of fancy, and largely leaves numbers on one side.”
“While macro-historical accounts convincingly argue for the centrality of the state to the incidence and intensity of civil war, there is a radical spatial unevenness to violence in civil wars that defies explanation at the national level. Villages only a few miles apart can have sharply contrasting experiences of conflict and in most civil wars large swathes of territory remain largely unaffected by violence. This unevenness presents a challenge to explanations of conflict that treat states or societies as the primary unit of analysis. […] A range of databases of disaggregated data on incidences of violence have recently been established and a lively publication programme has begun to explore sub-national patterns of distribution and diffusion of violence […] All of these developments testify to a growing recognition across the social sciences that spatial variation, territorial boundaries and bounding processes are properly located at the heart of any understanding of the causes of civil war. It suggests too that sub-national boundaries in their various forms – whether regional or local boundaries, lines of control established by rebels or no-go areas for state security forces – need to be analysed alongside national borders and in a geopolitical context. […] In both violent and non-violent contention local ‘safe territories’ of one kind or another are crucial to the exercise of power by challengers […] the generation of violence by insurgents is critically affected by logistics (e.g. roads), but also shelter (e.g. forests) […] Schutte and Weidmann (2011) offer a […] dynamic perspective on the diffusion of insurgent violence. Two types of diffusion are discussed; relocation diffusion occurs when the conflict zone is shifted to new locations, whereas escalation diffusion corresponds to an expansion of the conflict zone. 
They argue that the former should be a feature of conventional civil wars with clear frontlines, whereas the latter should be observed in irregular wars, an expectation that is borne out by the data.”
“Research on the motivation of armed militants in social movement scholarship emphasises the importance of affective ties, of friendship and kin networks and of emotion […] Sageman’s (2004, 2008) meticulous work on Salafist-inspired militants emphasises that mobilisation is a collective rather than individual process and highlights the importance of inter-personal ties, networks of friendship, family and neighbours. That said, it is clear that there is a variety of pathways to armed action on the part of individuals rather than one single dominant motivation”.
“While it is often difficult to conduct real experiments in the study of civil war, the micro study of violence has seen a strong adoption of quasi-experimental designs and in general, a more careful thinking about causal identification”.
“Condra and Shapiro (2012) present one of the first studies to examine the effects of civilian targeting in a micro-level study. […] they show that insurgent violence increases as a result of civilian casualties caused by counterinsurgent forces. Similarly, casualties inflicted by the insurgents have a dampening effect on insurgent effectiveness. […] The conventional wisdom in the civil war literature has it that indiscriminate violence by counterinsurgent forces plays into the hands of the insurgents. After being targeted collectively, the aggrieved population will support the insurgency even more, which should result in increased insurgent effectiveness. Lyall (2009) conducts a test of this relationship by examining the random shelling of villages from Russian bases in Chechnya. He matches shelled villages with those that have similar histories of violence, and examines the difference in insurgent violence between treatment and control villages after an artillery strike. The results clearly disprove conventional wisdom and show that shelling reduces subsequent insurgent violence. […] Other research in this area has looked at alternative counterinsurgency techniques, such as aerial bombings. In an analysis that uses micro-level data on airstrikes and insurgent violence, Kocher et al. (2011) show that, counter to Lyall’s (2009) findings, indiscriminate violence in the form of airstrikes against villages in the Vietnam war was counterproductive […] Data availability […] partly dictates what micro-level questions we can answer about civil war. […] not many conflicts have datasets on bombing sorties, such as the one used by Kocher et al. (2011) for the Vietnam war.”
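As an aside, the matching logic described above (pairing shelled villages with unshelled villages that have similar pre-strike violence histories, then comparing post-strike violence) can be illustrated with a toy sketch. All the data and the nearest-neighbour rule below are invented for illustration; this is not the actual estimator from Lyall (2009):

```python
# Toy illustration of matching in a quasi-experimental design:
# pair each 'treated' (shelled) village with the untreated village
# whose pre-strike violence history is most similar, then compare
# post-strike violence across the matched pairs. Data entirely made up.

def match_and_compare(treated, control):
    """treated/control: lists of dicts with 'pre' (violence before the
    strike) and 'post' (violence after). Returns the average
    treated-minus-matched-control difference in post-period violence."""
    diffs = []
    for t in treated:
        # nearest neighbour on the single pre-period covariate
        m = min(control, key=lambda c: abs(c["pre"] - t["pre"]))
        diffs.append(t["post"] - m["post"])
    return sum(diffs) / len(diffs)

if __name__ == "__main__":
    shelled = [{"pre": 10, "post": 4}, {"pre": 6, "post": 2}]
    unshelled = [{"pre": 9, "post": 8}, {"pre": 5, "post": 6}, {"pre": 2, "post": 3}]
    # a negative number here would mean less post-strike violence in shelled villages
    print(match_and_compare(shelled, unshelled))
```

In real applications the matching would be on many covariates at once (and the standard errors need care), but the basic idea is just this: compare each treated unit only against its most similar untreated counterpart.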
i. Lock (water transport). Zumerchik and Danver’s book covered this kind of stuff as well, sort of, and I figured that since I’m not going to blog the book – for reasons provided in my goodreads review here – I might as well add a link or two here instead. The words ‘sort of’ above are justified, in my opinion, because the book’s coverage is so horrid that you’d never even know what a lock is used for from reading it; you’d have to look that up elsewhere.
On a related note there’s a lot of stuff in that book about the history of water transport etc. which you probably won’t get from these articles, but having a look here will give you some idea of what sorts of topics many of the chapters of the book deal with. Also, stuff like this and this. The book’s coverage of the latter topic is incidentally much, much more detailed than that of the wiki article, and the article – as well as many other articles about related topics (economic history, etc.) on the wiki, to the extent that they even exist – could clearly be improved greatly by adding content from books like this one. However, I’m not going to be the one doing that.
ii. Congruence (geometry).
iii. Geography and ecology of the Everglades. I’d note that this is a topic which seems to be reasonably well covered on wikipedia; there’s for example also a ‘good article’ on the Everglades and a featured article about the Everglades National Park. A few quotes and observations from the article:
“The geography and ecology of the Everglades involve the complex elements affecting the natural environment throughout the southern region of the U.S. state of Florida. Before drainage, the Everglades were an interwoven mesh of marshes and prairies covering 4,000 square miles (10,000 km2). […] Although sawgrass and sloughs are the enduring geographical icons of the Everglades, other ecosystems are just as vital, and the borders marking them are subtle or nonexistent. Pinelands and tropical hardwood hammocks are located throughout the sloughs; the trees, rooted in soil inches above the peat, marl, or water, support a variety of wildlife. The oldest and tallest trees are cypresses, whose roots are specially adapted to grow underwater for months at a time.”
“A vast marshland could only have been formed due to the underlying rock formations in southern Florida. The floor of the Everglades formed between 25 million and 2 million years ago when the Florida peninsula was a shallow sea floor. The peninsula has been covered by sea water at least seven times since the earliest bedrock formation. […] At only 5,000 years of age, the Everglades is a young region in geological terms. Its ecosystems are in constant flux as a result of the interplay of three factors: the type and amount of water present, the geology of the region, and the frequency and severity of fires. […] Water is the dominant element in the Everglades, and it shapes the land, vegetation, and animal life of South Florida. The South Florida climate was once arid and semi-arid, interspersed with wet periods. Between 10,000 and 20,000 years ago, sea levels rose, submerging portions of the Florida peninsula and causing the water table to rise. Fresh water saturated the limestone, eroding some of it and creating springs and sinkholes. The abundance of fresh water allowed new vegetation to take root, and through evaporation formed thunderstorms. Limestone was dissolved by the slightly acidic rainwater. The limestone wore away, and groundwater came into contact with the surface, creating a massive wetland ecosystem. […] Only two seasons exist in the Everglades: wet (May to November) and dry (December to April). […] The Everglades are unique; no other wetland system in the world is nourished primarily from the atmosphere. […] Average annual rainfall in the Everglades is approximately 62 inches (160 cm), though fluctuations of precipitation are normal.”
“Between 1871 and 2003, 40 tropical cyclones struck the Everglades, usually every one to three years.”
“Islands of trees featuring dense temperate or tropical trees are called tropical hardwood hammocks. They may rise between 1 and 3 feet (0.30 and 0.91 m) above water level in freshwater sloughs, sawgrass prairies, or pineland. These islands illustrate the difficulty of characterizing the climate of the Everglades as tropical or subtropical. Hammocks in the northern portion of the Everglades consist of more temperate plant species, but closer to Florida Bay the trees are tropical and smaller shrubs are more prevalent. […] Islands vary in size, but most range between 1 and 10 acres (0.40 and 4.05 ha); the water slowly flowing around them limits their size and gives them a teardrop appearance from above. The height of the trees is limited by factors such as frost, lightning, and wind: the majority of trees in hammocks grow no higher than 55 feet (17 m). […] There are more than 50 varieties of tree snails in the Everglades; the color patterns and designs unique to single islands may be a result of the isolation of certain hammocks. […] An estimated 11,000 species of seed-bearing plants and 400 species of land or water vertebrates live in the Everglades, but slight variations in water levels affect many organisms and reshape land formations.”
“Because much of the coast and inner estuaries are built by mangroves—and there is no border between the coastal marshes and the bay—the ecosystems in Florida Bay are considered part of the Everglades. […] Sea grasses stabilize sea beds and protect shorelines from erosion by absorbing energy from waves. […] Sea floor patterns of Florida Bay are formed by currents and winds. However, since 1932, sea levels have been rising at a rate of 1 foot (0.30 m) per 100 years. Though mangroves serve to build and stabilize the coastline, seas may be rising more rapidly than the trees are able to build.”
iv. Chang and Eng Bunker. Not a long article, but interesting:
“Chang (Chinese: 昌; pinyin: Chāng; Thai: จัน, Jan, rtgs: Chan) and Eng (Chinese: 恩; pinyin: Ēn; Thai: อิน In) Bunker (May 11, 1811 – January 17, 1874) were Thai-American conjoined twin brothers whose condition and birthplace became the basis for the term “Siamese twins”.”
I loved some of the implicit assumptions in this article: “Determined to live as normal a life [as] they could, Chang and Eng settled on their small plantation and bought slaves to do the work they could not do themselves. […] Chang and Adelaide [his wife] would become the parents of eleven children. Eng and Sarah [‘the other wife’] had ten.”
A ‘normal life’ indeed… The women the twins married were incidentally sisters who ended up disliking each other (I can’t imagine why…).
v. Genie (feral child). This is a very long article, and you should be warned that many parts of it may not be pleasant to read. From the article:
“Genie (born 1957) is the pseudonym of a feral child who was the victim of extraordinarily severe abuse, neglect and social isolation. Her circumstances are prominently recorded in the annals of abnormal child psychology. When Genie was a baby her father decided that she was severely mentally retarded, causing him to dislike her and withhold as much care and attention as possible. Around the time she reached the age of 20 months Genie’s father decided to keep her as socially isolated as possible, so from that point until she reached 13 years, 7 months, he kept her locked alone in a room. During this time he almost always strapped her to a child’s toilet or bound her in a crib with her arms and legs completely immobilized, forbade anyone from interacting with her, and left her severely malnourished. The extent of Genie’s isolation prevented her from being exposed to any significant amount of speech, and as a result she did not acquire language during childhood. Her abuse came to the attention of Los Angeles child welfare authorities on November 4, 1970.
In the first several years after Genie’s early life and circumstances came to light, psychologists, linguists and other scientists focused a great deal of attention on Genie’s case, seeing in her near-total isolation an opportunity to study many aspects of human development. […] In early January 1978 Genie’s mother suddenly decided to forbid all of the scientists except for one from having any contact with Genie, and all testing and scientific observations of her immediately ceased. Most of the scientists who studied and worked with Genie have not seen her since this time. The only post-1977 updates on Genie and her whereabouts are personal observations or secondary accounts of them, and all are spaced several years apart. […]
Genie’s father had an extremely low tolerance for noise, to the point of refusing to have a working television or radio in the house. Due to this, the only sounds Genie ever heard from her parents or brother on a regular basis were noises when they used the bathroom. Although Genie’s mother claimed that Genie had been able to hear other people talking in the house, her father almost never allowed his wife or son to speak and viciously beat them if he heard them talking without permission. They were particularly forbidden to speak to or around Genie, so what conversations they had were therefore always very quiet and out of Genie’s earshot, preventing her from being exposed to any meaningful language besides her father’s occasional swearing. […] Genie’s father fed Genie as little as possible and refused to give her solid food […]
In late October 1970, Genie’s mother and father had a violent argument in which she threatened to leave if she could not call her parents. He eventually relented, and later that day Genie’s mother was able to get herself and Genie away from her husband while he was out of the house […] She and Genie went to live with her parents in Monterey Park. Around three weeks later, on November 4, after being told to seek disability benefits for the blind, Genie’s mother decided to do so in nearby Temple City, California and brought Genie along with her.
On account of her near-blindness, instead of the disabilities benefits office Genie’s mother accidentally entered the general social services office next door. The social worker who greeted them instantly sensed something was not right when she first saw Genie and was shocked to learn Genie’s true age was 13, having estimated from her appearance and demeanor that she was around 6 or 7 and possibly autistic. She notified her supervisor, and after questioning Genie’s mother and confirming Genie’s age they immediately contacted the police. […]
Upon admission to Children’s Hospital, Genie was extremely pale and grossly malnourished. She was severely undersized and underweight for her age, standing 4 ft 6 in (1.37 m) and weighing only 59 pounds (27 kg) […] Genie’s gross motor skills were extremely weak; she could not stand up straight nor fully straighten any of her limbs. Her movements were very hesitant and unsteady, and her characteristic “bunny walk”, in which she held her hands in front of her like claws, suggested extreme difficulty with sensory processing and an inability to integrate visual and tactile information. She had very little endurance, only able to engage in any physical activity for brief periods of time. […]
Despite tests conducted shortly after her admission which determined Genie had normal vision in both eyes she could not focus them on anything more than 10 feet (3 m) away, which corresponded to the dimensions of the room she was kept in. She was also completely incontinent, and gave no response whatsoever to extreme temperatures. As Genie never ate solid food as a child she was completely unable to chew and had very severe dysphagia, completely unable to swallow any solid or even soft food and barely able to swallow liquids. Because of this she would hold anything which she could not swallow in her mouth until her saliva broke it down, and if this took too long she would spit it out and mash it with her fingers. She constantly salivated and spat, and continually sniffed and blew her nose on anything that happened to be nearby.
Genie’s behavior was typically highly anti-social, and proved extremely difficult for others to control. She had no sense of personal property, frequently pointing to or simply taking something she wanted from someone else, and did not have any situational awareness whatsoever, acting on any of her impulses regardless of the setting. […] Doctors found it extremely difficult to test Genie’s mental age, but on two attempts they found Genie scored at the level of a 13-month-old. […] When upset Genie would wildly spit, blow her nose into her clothing, rub mucus all over her body, frequently urinate, and scratch and strike herself. These tantrums were usually the only times Genie was at all demonstrative in her behavior. […] Genie clearly distinguished speaking from other environmental sounds, but she remained almost completely silent and was almost entirely unresponsive to speech. When she did vocalize, it was always extremely soft and devoid of tone. Hospital staff initially thought that the responsiveness she did show to them meant she understood what they were saying, but later determined that she was instead responding to nonverbal signals that accompanied their speaking. […] Linguists later determined that in January 1971, two months after her admission, Genie only showed understanding of a few names and about 15–20 words. Upon hearing any of these, she invariably responded to them as if they had been spoken in isolation. Hospital staff concluded that her active vocabulary at that time consisted of just two short phrases, “stop it” and “no more”. Beyond negative commands, and possibly intonation indicating a question, she showed no understanding of any grammar whatsoever. […] Genie had a great deal of difficulty learning to count in sequential order. During Genie’s stay with the Riglers, the scientists spent a great deal of time attempting to teach her to count. 
She did not start to do so at all until late 1972, and when she did her efforts were extremely deliberate and laborious. By 1975 she could only count up to 7, which even then remained very difficult for her.”
“From January 1978 until 1993, Genie moved through a series of at least four additional foster homes and institutions. In some of these locations she was further physically abused and harassed to extreme degrees, and her development continued to regress. […] Genie is a ward of the state of California, and is living in an undisclosed location in the Los Angeles area. In May 2008, ABC News reported that someone who spoke under condition of anonymity had hired a private investigator who located Genie in 2000. She was reportedly living a relatively simple lifestyle in a small private facility for mentally underdeveloped adults, and appeared to be happy. Although she only spoke a few words, she could still communicate fairly well in sign language.”
i. World Happiness Report 2013. A few figures from the publication:
ii. A paper on how searching the Internet inflates people’s sense of their own knowledge. From the abstract:
“As the Internet has become a nearly ubiquitous resource for acquiring knowledge about the world, questions have arisen about its potential effects on cognition. Here we show that searching the Internet for explanatory knowledge creates an illusion whereby people mistake access to information for their own personal understanding of the information. Evidence from 9 experiments shows that searching for information online leads to an increase in self-assessed knowledge as people mistakenly think they have more knowledge “in the head,” even seeing their own brains as more active as depicted by functional MRI (fMRI) images.”
A little more from the paper:
“If we go to the library to find a fact or call a friend to recall a memory, it is quite clear that the information we seek is not accessible within our own minds. When we go to the Internet in search of an answer, it seems quite clear that we are consciously seeking outside knowledge. In contrast to other external sources, however, the Internet often provides much more immediate and reliable access to a broad array of expert information. Might the Internet’s unique accessibility, speed, and expertise cause us to lose track of our reliance upon it, distorting how we view our own abilities? One consequence of an inability to monitor one’s reliance on the Internet may be that users become miscalibrated regarding their personal knowledge. Self-assessments can be highly inaccurate, often occurring as inflated self-ratings of competence, with most people seeing themselves as above average [here’s a related link] […] For example, people overestimate their own ability to offer a quality explanation even in familiar domains […]. Similar illusions of competence may emerge as individuals become immersed in transactive memory networks. They may overestimate the amount of information contained in their network, producing a “feeling of knowing,” even when the content is inaccessible […]. In other words, they may conflate the knowledge for which their partner is responsible with the knowledge that they themselves possess (Wegner, 1987). And in the case of the Internet, an especially immediate and ubiquitous memory partner, there may be especially large knowledge overestimations. As people underestimate how much they are relying on the Internet, success at finding information on the Internet may be conflated with personally mastered information, leading Internet users to erroneously include knowledge stored outside their own heads as their own.
That is, when participants access outside knowledge sources, they may become systematically miscalibrated regarding the extent to which they rely on their transactive memory partner. It is not that they misattribute the source of their knowledge, they could know full well where it came from, but rather they may inflate the sense of how much of the sum total of knowledge is stored internally.
We present evidence from nine experiments that searching the Internet leads people to conflate information that can be found online with knowledge “in the head.” […] The effect derives from a true misattribution of the sources of knowledge, not a change in understanding of what counts as internal knowledge (Experiment 2a and b) and is not driven by a “halo effect” or general overconfidence (Experiment 3). We provide evidence that this effect occurs specifically because information online can so easily be accessed through search (Experiment 4a–c).”
iii. Some words I’ve recently encountered on vocabulary.com: hortatory, adduce, obsequious, enunciate, ineluctable, guerdon, chthonic, condign, philippic, coruscate, exceptionable, colophon, lapidary, rubicund, frumpish, raiment, prorogue, sonorous, metonymy.
v. I have no idea how accurate this test of chess strength is (some people in this thread argue that there are probably some calibration issues at the low end), but I thought I should link to it anyway. I’d be very cautious about drawing strong conclusions about over-the-board strength without knowing how they’ve validated the tool. In over-the-board chess you have at minimum a couple of minutes per move on average, and this tool never gives you more than 30 seconds, so some slow players will probably suffer using it (I’d imagine this is why u/ViktorVamos got such a low estimate). For what it’s worth, my Elo estimate was 2039 (95% CI: 1859, 2220).
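Incidentally, I don’t know how the tool actually derives its interval; but assuming it’s a plain symmetric normal-approximation interval (estimate ± 1.96 standard errors – an assumption on my part, not anything the site documents), you can back out the implied standard error from the reported numbers:

```python
# Back out the standard error implied by a reported 95% CI under a
# normal approximation (estimate ± 1.96 * SE), then rebuild the CI.

def implied_se(lower, upper, z=1.96):
    # width of the interval is 2 * z * SE
    return (upper - lower) / (2 * z)

def ci(estimate, se, z=1.96):
    return (estimate - z * se, estimate + z * se)

est = 2039.0                        # the reported Elo estimate
se = implied_se(1859.0, 2220.0)     # ≈ 92 Elo points
low, high = ci(est, se)
print(se, low, high)
```

Note that the rebuilt interval is centered on 2039 rather than matching the reported endpoints exactly (the reported interval is centered on 2039.5), which suggests some rounding in the tool’s output – or that my normality assumption is off.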
In related news, I recently defeated my first IM – Pablo Garcia Castro – in a blitz (3 minutes/player) game. It actually felt a bit like an anticlimax; afterwards I was thinking that it would probably have felt like a bigger deal if I hadn’t lately been getting used to winning the occasional bullet game against IMs on the ICC. Actually I think my two wins against WIM Shiqun Ni during the same bullet session felt like a bigger accomplishment, because that session was played during the Women’s World Chess Championship, and I realized while looking up my opponent that she was actually stronger than one of the contestants who made it to the quarter-finals in that event (Meri Arabidze). On the other hand bullet isn’t really chess, so…
Here’s the first post about the book. This post will cover some of the stuff included in the remaining chapters of the book.
“It’s not easy to get an accurate or reliable picture of children’s curiosity at school. To begin with, the data are, almost by definition, descriptive. We can watch to see how many questions children ask, how often they tinker, open, take apart, or watch — but it’s virtually impossible to track the thoughts of twenty-three children during a classroom activity. However, we can measure how much curiosity children express while they are in school. […] We wanted to find out whether children expressed curiosity when they began grade school, and how different things looked by the time children were finished. We recorded ten hours in each of five kindergarten classrooms and five fifth-grade classrooms. Each time we visited, we recorded the children for two hours. […] Three students were trained to code the data, and achieved a high rate of inter-coder reliability. It turned out it’s not all that hard to spot curiosity in action. But what we found took us aback. Or rather what we didn’t find. On average, in any given kindergarten classroom, there were 2.36 episodes of curiosity in a two-hour stretch. Expressions of curiosity were even scarcer in the older grades. The average number of episodes in a fifth-grade classroom was 0.48. In other words, on average, classroom activity over a two-hour stretch included less than one expression of curiosity. In the schools we studied, the expression of curiosity was, at best, infrequent. Nine of the ten classrooms had at least one two-hour stretch where there were no expressions of curiosity. In other words, we rarely saw children take things apart, ask questions about topics either children or adults had raised, watch interesting phenomena unfold in front of their eyes, or in any way show signs that there were things they were eager to know more about, much less actually follow up with any visible sort of investigation, whether in words or actions.
The easiest interpretation is that children are simply less curious by the time they are in kindergarten and grow even less so by the end of grade school. However, the data don’t support that conclusion. For one thing, we saw as much variation between classrooms as we did between grade levels.”
“Our discovery, that there is little curiosity in grade school, is confirmed by the work others have done. Recall that Tizard and Hughes fitted preschoolers with tape recorders to get a picture of how many questions they asked at home with their parents (the answer […] is that preschoolers ask a lot of questions). However, Tizard and Hughes also recorded those same children when they went to preschool (1984). Once inside a school building, the picture changes dramatically. While the preschoolers they studied asked, on average, twenty-six questions per hour at home, that rate dropped to two per hour when the children were in school. […] One striking feature […] was how curious children were about anything that seemed exotic to them. Topics that led to a series of eager questions included the Rocky Mountains, Pangaea, Venus flytraps, unusual geometric shapes, trips to Mexico, and the Australopithecus Lucy’s descendants. But their episodes of curiosity were brief, often fleeting. Some 78 percent of the curiosity episodes involved fewer than four conversational turns. We also timed these sequences, since we were interested in nonverbal inquiry. Not one episode lasted longer than six minutes, and all but three lasted less than three minutes. We never saw an episode of curiosity that led to a more structured classroom activity, or that redirected a classroom discussion for more than a few moments.”
“Our impression was that most of the time teachers had very specific objectives for each stretch of time, and that a great deal of effort was put into keeping children on task and in reaching those objectives. […] Mastery rather than inquiry seemed to be the dominant goal for almost all the classrooms in which we observed. Often it seemed that finishing specific assignments (worksheets, writing assignments) was an even more salient goal than actually learning the material. In other words, the structure of the classroom made it clear that the educational activities we saw were not designed to encourage curiosity — nor were teachers using the children’s curiosity as a guide to what and how to teach. […] in the classrooms we visited, there was little or no evidence that an implicit or explicit goal of the curriculum was to help children pose questions. […] an important but easily overlooked distinction [is] between children’s engagement and children’s curiosity. A teacher can be talking about things that captivate the students, and the students can be deeply interested in a topic — quite engaged in a discussion or activity. But that in and of itself doesn’t mean the children are asking questions, or that their questions reflect curiosity. […] a key finding of our research so far [is that often] the reason children ask few questions, and fail to examine objects or tinker with things, is that the teacher feels such exploration would get in the way of learning. I have even heard teachers say as much. […] “I can’t answer questions right now. Now it’s time for learning.” […] A student and I sent out surveys to 114 teachers. In one part of the survey, they were asked to list the five skills or attributes they most wanted to instill or encourage in their students over the course of the school year. In the second part of the survey they were asked to circle five such desirable attributes from a list of ten. 
The list included words like “polite,” “cooperative,” “thoughtful,” “knowledgeable,” and also “curious.” Some 77 percent of the teachers surveyed circled “curious” as one of their top five. However, when asked to come up with their own ideas, only twenty-three listed curiosity. […] The impediments to curiosity in school consist of more than just the absence of enthusiasm for it. There are also powerful, somewhat invisible forces working against the expression and cultivation of curiosity in classrooms. Two primary impediments are the way in which plans and scripts govern what happens in most classrooms, and the pressure to get a lot of things “done” each day. […] Once children get to school, they exhibit a lot less curiosity. They ask fewer questions, examine objects less frequently and less thoroughly, and in general seem less inclined to persevere in sating their appetite for information.”
“When children have trouble learning, we think we need to teach it in a different way, or impress upon them the importance or usefulness of what they are learning. We encourage them to try harder, or spend more time trying to learn, even though it’s usually more effective to elicit their interest in the material. […] Several studies confirm the commonsense idea that children remember text better, and understand it more fully, when it has piqued their interest in one way or another (Silvia 2006; Knobloch et al. 2004).”
“Some would argue that the work of researchers like Robert Bjork (Bjork and Linn 2006) and Nate Kornell (Kornell and Bjork 2008) demonstrates that difficulty is key to learning. In what is now a large series of studies, researchers have shown that when students struggle a bit with the material they are learning, they learn it better.”
“Though researchers and teachers must deal with the fact that there are significant individual differences in what stirs a child’s interest or urge to know more, it is also possible to identify some general qualities that seem to make an object or a topic more or less intriguing to the majority of students. […] In the observations of curiosity that my students and I have done in classrooms, we have noticed one […] topic that consistently sparked children’s curiosity — intellectual exotica. […] Often what ignited a line of questioning was a reference to something outside the children’s zone of familiarity — unfamiliar places, historically distant times. […] children are often as curious about things they cannot see, touch, or directly experience as they are about what is going on right around them. […] the more unknown and unfamiliar a topic, and the denser with details its presentation, the more it may invite learning. […] The characteristics that fuel curiosity are not mysterious. Adults who use words and facial expressions to encourage children to explore; access to unexpected, opaque, and complex materials and topics; a chance to inquire with others; and plenty of suspense . . . these turn out to be the potent ingredients.”
“children are frequently privy to language not directed at them. The conversations adults have with one another influence how children talk and think. […] By the time children are four or so, they not only listen to their parents talk about other people — they also begin, in fledgling form, to gossip themselves. […] Daniela O’Neill and her colleagues tape-recorded the snack-time conversations of twenty-five preschoolers over a period of twenty-five weeks. Over 77 percent of the conversations children initiated with one another referenced other people, and nearly 30 percent mentioned people’s mental states. […] Peggy Miller’s work (Miller et al. 1992) shows that by the time children are five, more of their stories include information not just about themselves, but about themselves in relation to other people.”
“Sandra Hofferth and John Sandberg (2001) drew subjects from the 1997 Child Health Development Supplement to the Panel Study of Income Dynamics, a thirty-year longitudinal survey of a representative sample of families. […] While three-to-five-year-olds spent approximately seventeen hours a week in free play, most of them spent less than one hour a week outside, and less than two hours a week reading. By the time children were nine years old, they spent no more time outside, and far less time in free play (just under nine hours a week). They spent even less time reading (one and a quarter hours per week).”
“In an examination of how adults use the Internet to pursue a recreational interest in genealogy, Crystal Fulton (2009) found a link between amount of pleasure and effective persistent information-foraging strategies. The key to her argument is the role of time — she points out that when students feel pressured to complete an assignment, they experience less pleasure, and also engage in less thorough search behavior. That finding is replicated in a wide range of studies of online foraging.”
“The children who will get the most out of opportunities to work on their own (deciding what to tackle, and what to concentrate on) are the ones who can stay focused, stick with a question, and plan how to solve whatever problem intrigues them. In other words, at their best, autonomy and self-regulation go hand in hand. But in the world of real classrooms, every teacher must figure out how to balance the two. If a child doesn’t seem to have a great deal of perseverance, focus, or self-control, the teacher must decide whether to give him more autonomy so that he has a chance to develop self-regulation, or whether to make autonomy the prize for self-control. […] This book for the most part has not focused on fleeting moments of curiosity, but the kind of curiosity that persists, unfolding over time and leading to sustained action (inquiry, discovery, tinkering, question asking, observation, research, reflection). Such sustained inquiry may be more likely to blossom when children have free time, and some time alone.”
“Many teachers […] discourage uncertainty, emphasizing instead what they know, or feel the students should know. They are more comfortable encouraging students to learn trustworthy information than to explore questions to which they themselves do not know the answer. Instead of using school as a place to formalize and extend the power of a young child’s zest for tackling the unknown or uncertain, teachers tend to squelch curiosity. They don’t do this out of meanness, or small-mindedness. They do it in the interests of making sure children master certain skills and established facts. While an emphasis on acquiring knowledge is reasonable, discouraging the disposition that leads to gaining new knowledge squanders a child’s most formidable learning tool. […] curiosity takes time to unfold, and even more time to bear fruit. In order to help children build on their curiosity, teachers have to be willing to spend time doing so. Nurturing curiosity takes time, but also saturation. It cannot be confined to science class. […] Teachers should provide children with interesting materials, seductive details, and desirable difficulty. Instead of presenting children with material that has been made as straightforward and digested as possible, teachers should make sure their students encounter objects, texts, environments, and ideas that will draw them in and pique their curiosity. […] to cultivate students’ curiosity, teachers need to give them both time to seek answers and guidance about various routes to getting answers, such as looking things up in reliable sources or testing hypotheses.”
“Few teachers readily see that they’re discouraging students’ questions, just as few parents readily see that they’re short-tempered with their children. […] One of the key findings of research is that children are heavily influenced not only by what adults say to them, but also by how the adults themselves behave. If schools value children’s curiosity, they’ll need to hire teachers who are curious. It is hard to fan the flames of a drive you yourself rarely experience. Many principals hire teachers who seem smart, who like children, and who have the kind of drive that supports academic achievement. They know that teachers who possess these qualities will foster the same in their students. Why not put curiosity at the top of the list of criteria for good teachers? […] in order to flourish, curiosity needs to be cultivated.”
“I will […] argue that curiosity is a fragile seed — for some the seed bears fruit, and for others, it shrivels and dies all too soon. By the time a child is five years old, his curiosity has been carved to reflect his personality, family life, daily encounters, and school experience. By the time that five-year-old is twenty-two, the intensity and object of his curiosity has become a defining, though often invisible part of who he is — something that will shape much of his future life. But the journey curiosity takes, from a universal and ubiquitous characteristic, one that accompanies much of the infant’s daily experience, to a quality that defines certain adults and barely exists in others, is subtle. In the chapters that follow, I’ll try to show that there are several sources of individual variation, and each has its developmental moment. Attachment in toddlerhood, language in the three-year-old, and a succession of environmental limitations and open doors all contribute to a person’s particular kind and intensity of curiosity. […] This book is about why some children remain curious and others do not, and how we can encourage more curiosity in everyone.”
“I’d expected more from a Harvard University Press publication. The book has too many personal anecdotes and too much speculation, and not enough data; the coverage would also have benefited from the author being more familiar with ethological research, such as some of the work included in Natural Conflict Resolution. However, it was interesting enough for me to read to the end, despite the format, and I assume many people who don’t mind reading popular science books might like it.”
I’ve mentioned before how my expectations depend somewhat on who the publisher is; I have one set of (implicit) criteria for books published by academic publishers, and a different set of (implicit) criteria for books published by other publishing companies. Over the last couple of years I’ve pretty much exclusively read academic publications (I think I read two or three non-academic non-fiction publications last year, out of 72), but at least I’m aware there’s an argument to be made for having different standards for different kinds of books. I gave this book two stars, and part of the reason it did not get a higher rating is that it is precisely the kind of publication I’m actively trying to avoid by sticking to academic publications. I don’t care about reading anecdotes about somebody’s grandmother, and I don’t need two-page-long anecdotes used to introduce readers to relatively simple concepts which could be covered in a paragraph by a skilled textbook author. I consider much of the fluff in normal popular science publications a waste of my time, and I get annoyed and confused when I find that kind of stuff in supposedly academic publications (this book was published by Harvard University Press). The book is not bad and it has some interesting ideas, but there’s way too much fluff for my taste. In this post I’ll talk a little about some of the ideas presented in the first four chapters of the book.
This observation from the book, made early in the coverage, might arguably be one of the most important things to take away from the book: “People who are curious learn more than people who are not, and people learn more when they are curious than when they are not.”
Attention is an important variable in the learning context, and curiosity helps with that; the author notes both that it’s quite obvious that curiosity helps children (much of the book is about the curiosity of children) learn, and that we don’t actually know a great deal about how to make children curious about stuff in order to help them learn – this is not something people have researched very much. I find this curious. An important observation in that context is however that we do know that curiosity is not what you might term dimension-less; people are curious about different things, and children are most curious when they are given the opportunity to inquire about things that mystify them or attract their attention. Research indicates that children are very curious early on in their lives (babies, toddlers), and that curiosity then seems to decline later on. One way to think about this is that babies don’t yet have good working models of what to expect will happen in the world around them given specific inputs, in part because they don’t have a lot of experience, so they’re often surprised; later on, they come to expect certain things to happen in specific ways (gravity causes both the plate and the cup (and the cutlery…) to drop to the floor if you pick them up and throw them – my example, derived from avuncular experience…), and as their working models improve habituation kicks in and removes the need to attend to the inputs which previously demanded their attention, freeing up mental resources which can then be devoted to other purposes. Actually, adults wouldn’t be very well off if they were all as curious as two-year-olds, because the need to constantly react to new stimuli presenting themselves would likely mean they’d never get anything done (the author does not bring this up, but it’s also not really important in the context of the coverage).
As put in the book: “during the first three years, children are gathering the material they need to establish, and then enrich, the schemas that help them navigate the physical, psychological, and social worlds. Key to this mastery of pattern and order is their alertness to novelty. This fundamental characteristic of early development explains why toddlers seem practically voracious in their appetite for new information.”
Curiosity has multiple faces, but a working definition presented early in the work is that “curiosity is an expression, in words or behaviors, of the urge to know more — an urge that is typically sparked when expectations are violated.” Breadth and depth are important variables, as is persistence. Even if there’s sort of an identifiable general trajectory for the variable during childhood, with much curiosity early on and then lower values later, you still as argued in the quote above have a lot of interpersonal variation, and the book spends some time trying to figure out why it is that some people end up a lot more curious than others and how they might be different. It seems to be the case that differences present quite early, and as usual Bowlby‘s name pops up. It pops up because although exploration of the unknown may have positive consequences, it also involves taking a risk – anxiety is argued to be an important curiosity-mediator, so that children who are worried about abandonment may be less likely to go exploring than are children who have a secure attachment bond and feel that they have a safe haven to which they can retreat without much risk. Longitudinal research has indicated that at least for one curiosity conceptualization (a so-called ‘curiosity box’-setup), individuals who were securely attached at the age of 2 were more curious two to three years later than were individuals who were not securely attached at baseline. A study on monkeys done more than fifty years ago likewise found that monkeys raised without an attachment figure were more fearful and that fear prevented the animals from exploring their environment. Not impressive, but it seems plausible. This is incidentally one of the only (if not the only? 
Can’t remember…) monkey studies included in the coverage, and if I had to explain my annoyance in my goodreads review at the absence of such research, the main reason was that the author in my opinion early on in the coverage pushes the ‘humans are exceptional’ point further than the evidence supports, which is the sort of behaviour that always tends to make me irritated.
It seems likely that feedback processes start early and may be important; if you explore and have positive experiences doing it early on, you’ll probably be more likely to explore in the future; and if you’re too fearful to go look behind that curtain, you may never realize it wasn’t dangerous. Although trait variables matter, environmental mediation also seems really important and there’s quite a bit of stuff about this in the book. There’s incidentally some research suggesting that too little inhibition may not be desirable, but too much will certainly contribute to a lack of curiosity.
Although it’s very obvious that children in what might be termed the ‘asks a lot of questions’-age are incredibly curious, it’s become clear from research on these matters that they’re actually quite curious even before that time, if you know where to look for this curiosity; in a series of experiments it’s been shown that children will point at objects to get information about them long before they learn how to verbally form questions, and it’s clear both that children point more often at unfamiliar objects and events than familiar ones, and that they’re more likely to point when they’re in the presence of someone they consider to be a knowledgeable informant (e.g. a mother). When they do reach the asks-a-lot-of-questions age, they, well, ask a lot of questions, and it turns out that some people have actually collected data on this stuff. One really neat sample mentioned in the book involved four children followed for almost four years, from when they were fourteen months old until they were five years and one month old; the recordings included 24,741 questions over 229.5 hours of conversation, and the children asked an average of 107 questions per hour. That’s an average, and it hides a huge variation among the individuals even in that small sample; one of the children asked an average of close to 200 questions per hour, whereas another asked only slightly less than 70. I’d suggest these numbers are higher than average due to selection bias and perhaps also due to Hawthorne effects, but I find it quite incredible that data such as these are even available in the first place, and the numbers do sort of illustrate what kind of level we’re talking about.
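As a quick back-of-the-envelope check (my own arithmetic, not from the book), the pooled numbers are consistent with the reported hourly average:

```python
# Sanity-check the question-rate figures quoted above (my own arithmetic).
total_questions = 24_741
total_hours = 229.5

pooled_rate = total_questions / total_hours
print(f"pooled average: {pooled_rate:.1f} questions per hour")  # ~107.8

# The pooled rate (~107.8/hour) matches the reported average of 107
# questions per hour, and the per-child extremes (~200 and ~70 per hour)
# show how much variation hides behind that single figure.
```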
It’s obvious from the conversational strategies the children employ at that point in time that they aren’t just asking questions to get their parents’ attention or in order to monopolize their time (though this may be a convenient side-effect); children act differently depending on how questions are answered and question-sequences display path-dependence, indicating that they use the questions to gather knowledge about the world around them, rather than e.g. just to train their language skills.
Most children acquire language in roughly the same sequence. They point long before they start talking in sentences, and after pointing they begin to use one object to represent another. After that they realize that objects have names, and at that point they start learning new words very fast. As their vocabulary develops rapidly during this first learning-new-words phase, they also start combining words in orderly ways; i.e. they start speaking in sentences.
In diary studies the data seem to indicate that children who hear adults ask many questions in their environment are more likely to get their questions answered (causality is iffy, though). How many questions they ask depends on what they consider to constitute a satisfactory answer, but in general they are more likely to continue asking questions than are children who rarely see other people ask informational questions and who are not rewarded with satisfactory answers when they ask questions. The data suggest that three-year-olds generally ask more questions than seven-year-olds, but also that there are already at that point (at the age of three) important differences in terms of how many questions are asked by different children; interindividual differences can be spotted quite early and the feedback processes involved may be one mechanism leading to those differences growing over time.
Small children depend a great deal on their parents and other adults to interpret stuff in the world around them, and they don’t quickly outgrow this dependence on adults; however, as children age the range of responses towards specific stimuli expands. A toddler might want to know whether or not a fear response is proper in a specific context and so will observe the parents before reacting to a new stimulus to learn what’s the proper response; but as the child ages and the cognitive abilities increase, the child might also have to make a decision, implicitly or explicitly, of e.g. whether or not to play with (how many of?) the toys on the floor. In one study on this stuff researchers manipulated the behaviour of a child’s mother by asking her either to manipulate objects lying on a table, look towards the corner of the table, or talk to another adult elsewhere in the room, with the child observing through a one-way mirror – the child was then later let into the room, and it turned out that children who had observed their mother manipulating the objects were not only more likely to manipulate the toys in manners similar to how the mother had done, but were also more likely to explore the toys in other ways. How parents (and other adults) behave will be noticed by children whether or not the parents know they’re being observed, and I think many parents might be surprised to learn how much observed behaviours, as opposed to verbally communicated behavioural norms, matter. A quote from the coverage:
“To sum up so far, from infancy until at least the elementary school years, children look to adults for cues about how to respond to objects and events, how to interpret the things they witness and experience, and how to interact with the world. The cues children take from adults are powerful in the moment, but have long-term impact as well. Moreover, the influence extends beyond problem solving. Children also learn from the adults around them what kind of stance they can or should take toward the objects and events they encounter as the day unfolds. This is particularly important when it comes to inquiry. Because, as should be clear by now, inquiry does not bubble up simply because a child is intrinsically curious. Nor does it simply erupt when something in the environment is particularly intriguing. Whether a child has the impulse, day in and day out, to find out more, ebbs and flows as a result of the adults who surround her.” [my emphasis].
Parents aren’t the only adults with whom children interact, and multiple studies have indicated that when preschoolers receive informative answers from their teachers they ask significantly more questions. In a curiosity-box setup (basic setup: leave a box with lots of drawers, each one including a small item, in a classroom and then observe how many children approach it, how fast they approach it, how often they do, etc.), “there was a direct link between how much the teacher smiled and talked in an encouraging manner and the level of curiosity [as measured by box-related behaviours] the children in the room expressed.” Even subtle adult behaviours like encouraging nods and smiles from a teacher may affect behaviours/curiosity.
A very important point in the context of social modelling is that many of the behaviours adults display are not necessarily geared towards the children, but that these behaviours still matter:
“Parents and teachers are not always gearing their behavior directly toward the children they are with. They are to a great degree just being themselves. They lift lids, tinker, look things up, watch things carefully, and ask questions. Or they don’t. In fact, many adults do not express much curiosity in their everyday lives. There are plenty of adults who rarely want to find out about something new, or probe beneath the surface. Why wouldn’t this have an impact on children? […] children watch and learn from adult behavior in the short run and in the long run. And now we have some evidence that the same is true when it comes to children’s interest in finding out more. When parents give their children some freedom to wander, explore, and tinker, it makes a difference. When parents express fear or disapproval of inquiry, that too has an effect. But parents are just the beginning. When it comes to their urge to know more, children at least as old as nine continue to be extremely susceptible to the behavior of adults. And here it’s worth remembering that children learn a lot at home from behaviors not directed toward them, and that at school the same is true.”
Here’s a previous post in the series covering this book. There’s a lot of stuff in these chapters, so the stuff below is just some of the things I thought were interesting and worth being aware of. I’ve covered three chapters in this post: one about skin, nails and hair, one about the eye, and one about infectious and tropical diseases. I may write one more post about the book later on, but I’m not sure yet, so this may be the last post in the series.
Okay, on to the book – skin, nails and hair (my coverage mostly deals with the skin):
“The skin is a highly specialized organ that covers the entire external surface of the body. Its various roles include protecting the body from trauma, infection and ultraviolet radiation. It provides waterproofing and is important for fluid and temperature regulation. It is essential for the detection of some sensory stimuli. […] Skin problems are extremely common and are responsible for 10–15 per cent of all consultations in general practice. […] Given that there are around 2000 dermatological conditions described, only common and important conditions, including some that might be especially relevant in the examination setting, can be covered here.”
“Urticaria is characterized by the development of red dermal swellings known as weals […]. Scaling is not seen and the lesions are typically very itchy. The lesions result from the release of histamine from mast cells. An important clue to the diagnosis is that individual lesions come and go within 24 hours, although new lesions may be appearing at other sites. Another associated feature is dermographism: a firm scratch of the skin with an orange stick will produce a linear weal within a few minutes. Urticaria is common, estimated to affect up to 20 per cent of the population at some point in their lives.”
“Stevens–Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN) are thought to be two ends of a spectrum of the same condition. They are usually attributable to drug hypersensitivity, though a precipitant is not always identified. The latent period following initiation of the drug tends to be longer than seen with a classical maculopapular drug eruption. The disease is termed:
* SJS when 10 per cent or less of the body surface area epidermis detaches
* TEN when greater than 30 per cent detachment occurs.
Anything in between is designated SJS/TEN overlap. Following a prodrome of fever, an erythematous eruption develops. Macules, papules, or plaques may be seen. Some or all of the affected areas become vesicular or bullous, followed by sloughing off of the dead epidermis. This leads to potentially widespread denudation of skin. […] The affected skin is typically painful rather than itchy. […] The risk of death relates to the extent of epidermal loss and can exceed 30 per cent. […] A widespread ‘drug rash’ that is very painful should ring alarm bells.”
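The detachment thresholds in the excerpt lend themselves to a simple decision rule; the sketch below merely restates the quoted cut-offs (10 and 30 per cent of body surface area) as a function, purely as an illustration of the classification scheme, not as a clinical tool:

```python
def classify_epidermal_detachment(bsa_percent: float) -> str:
    """Classify by percentage of body surface area (BSA) with epidermal
    detachment, per the SJS/TEN thresholds quoted above. Illustrative only."""
    if bsa_percent <= 10:
        return "SJS"
    elif bsa_percent > 30:
        return "TEN"
    else:
        return "SJS/TEN overlap"

print(classify_epidermal_detachment(8))   # SJS
print(classify_epidermal_detachment(20))  # SJS/TEN overlap
print(classify_epidermal_detachment(45))  # TEN
```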
“Various skin problems arise in patients with diabetes mellitus. Bacterial and fungal infections are more common, due to impaired immunity. Vascular disease and neuropathy lead to ulceration on the feet, which can sometimes be very deep and there may be underlying osteomyelitis. Granuloma annulare […] and necrobiosis lipoidica have also been associated with diabetes, though many cases are seen in non-diabetic patients. The former produces smooth papules in an annular configuration, often coalescing into a ring. The latter usually occurs over the shins giving rise to yellow-brown discoloration, with marked atrophy and prominent telangiectasia. There is often an annular appearance, with a red or brown border. Acanthosis nigricans, velvety thickening of the flexural skin […], is seen with insulin resistance, with or without frank diabetes. […] Diabetic bullae are also occasionally seen and diabetic dermopathy produces hyperpigmented, atrophic plaques on the legs. The aetiology of these is unknown.”
“Malignant melanoma is one of the commonest cancers in young adults [and it] is responsible for almost three-quarters of skin cancer deaths, despite only accounting for around 4 per cent of skin cancers. Malignant melanoma can arise de novo or from a pre-existing naevus. Most are pigmented, but some are amelanotic. The most important prognostic factor for melanoma is the depth of the tumour when it is excised – Breslow’s thickness. As most malignant melanomas undergo a relatively prolonged radial (horizontal) growth phase prior to invading vertically, there is a window of opportunity for early detection and management, while the prognosis remains favourable. […] ‘Red flag’ findings […] in pigmented lesions are increasing size, darkening colour, irregular pigmentation, multiple colours within the same lesion, and itching or bleeding for no reason. […] In general, be suspicious if a lesion is rapidly changing.”
“Most ocular surface diseases […] are bilateral, whereas most serious pathology (usually involving deeper structures) is unilateral […] Any significant reduction of vision suggests serious pathology [and] [s]udden visual loss always requires urgent investigation and referral to an ophthalmologist. […] Sudden loss of vision is commonly due to a vascular event. These may be vessel occlusions giving rise to ischaemia of vision-serving structures such as the retina, optic nerve or brain. Alternatively there may be vessel rupture and consequent bleeding which may either block transmission of light as in traumatic hyphaema (haemorrhage into the anterior chamber) and vitreous haemorrhage, or may distort the retina as in ‘wet’ age-related macular degeneration (AMD). […] Gradual loss of vision is commonly associated with degenerations or depositions. […] Transient loss of vision is commonly due to temporary or subcritical vascular insufficiency […] Persistent loss of vision suggests structural changes […] or irreversible damage”.
There are a lot of questions one might ask here, and I actually found it interesting to know how much can be learned simply by asking some questions which might help narrow things down – the above are just examples of variables to consider, and there are others as well, e.g. whether or not there is pain (“Painful blurring of vision is most commonly associated with diseases at the front of the eye”, whereas “Painless loss of vision usually arises from problems in the posterior part of the eye”), whether there’s discharge, just how the vision is affected (a blind spot, peripheral field loss, floaters, double vision, …), etc.
“Ptosis (i.e. drooping lid) and a dilated pupil suggest an ipsilateral cranial nerve III palsy. This is a neuro-ophthalmic emergency since it may represent an aneurysm of the posterior communicating artery. […] In such cases the palsy may be the only warning of impending aneurysmal rupture with subsequent subarachnoid haemorrhage. One helpful feature that warns that a cranial nerve III palsy may be compressive is pupil involvement (i.e. a dilated pupil).”
“Although some degree of cataract (loss of transparency of the lens) is almost universal in those >65 years of age, it is only a problem when it is restricting the patient’s activity. It is most commonly due to ageing, but it may be associated with ocular disease (e.g. uveitis), systemic disease (e.g. diabetes), drugs (e.g. systemic corticosteroids) or it may be inherited. It is the commonest cause of treatable blindness worldwide. […] Glaucoma describes a group of eye conditions characterized by a progressive optic neuropathy and visual field loss, in which the intraocular pressure is sufficiently raised to impair normal optic nerve function. Glaucoma may present insidiously or acutely. In the more common primary open angle glaucoma, there is an asymptomatic sustained elevation in intraocular pressure which may cause gradual unnoticed loss of visual field over years, and is a significant cause of blindness worldwide. […] Primary open angle glaucoma is asymptomatic until sufficiently advanced for field loss to be noticeable to the patient. […] Acute angle closure glaucoma is an ophthalmic emergency in which closure of the drainage angle causes a sudden symptomatic elevation of intraocular pressure which may rapidly damage the optic nerve.”
“Age-related macular degeneration is the commonest cause of blindness in the older population (>65 years) in the Western world. Since it is primarily the macula […] that is affected, patients retain their peripheral vision and with it a variable level of independence. There are two forms: ‘dry’ AMD accounts for 90 per cent of cases and the more dramatic ‘wet’ (also known as neovascular) AMD accounts for 10 per cent. […] Treatments for dry AMD do not alter the course of the disease but revolve around optimizing the patient’s remaining vision, such as using magnifiers. […] Treatments for wet AMD seek to reverse the neovascular process”.
“Diabetes is the commonest cause of blindness in the younger population (<65 years) in the Western world. Diabetic retinopathy is a microvascular disease of the retinal circulation. In both type 1 and type 2 diabetes glycaemic control and blood pressure should be optimized to reduce progression. Progression of retinopathy to the proliferative stage is most commonly seen in type 1 diabetes, whereas maculopathy is more commonly a feature of type 2 diabetes. […] Symptoms
*Bilateral. *Usually asymptomatic until either maculopathy or vitreous haemorrhage. [This is part of why screening programs for diabetic eye disease are so common – the first sign of eye disease may well be catastrophic and irreversible vision loss, despite the fact that the disease process may take years or decades to develop to that point] *Gradual loss of vision – suggests diabetic maculopathy (especially if distortion) or cataract. *Sudden loss of vision – most commonly vitreous haemorrhage secondary to proliferative diabetic retinopathy.”
Recap of some key points made in the chapter:
“*For uncomfortable/red eyes, grittiness, itchiness or a foreign body sensation usually indicate ocular surface problems such as conjunctivitis.
*Severe ‘aching’ eye pain suggests serious eye pathology such as acute angle closure glaucoma or scleritis.
*Photophobia is most commonly seen with acute anterior uveitis or corneal disease (ulcers or trauma). [it’s also common in migraine]

*Sudden loss of vision is usually due to a vascular event (e.g. retinal vessel occlusions, anterior ischaemic optic neuropathy, ‘wet’ AMD).
*Gradual loss of vision is common in the ageing population. It is frequently due to cataract […], primary open angle glaucoma (peripheral field loss) or ‘dry’ AMD (central field loss).
*Recent-onset flashes and floaters should be presumed to be retinal tear/detachment.
*Double vision may be monocular (both images from the same eye) or binocular (different images from each eye). Binocular double vision is serious, commonly arising from a cranial nerve III, IV or VI palsy. […]
the following presentations are sufficiently serious to warrant urgent referral to an ophthalmologist: sudden loss of vision, severe ‘aching’ eye pain, new-onset flashes and floaters, [and] new-onset binocular diplopia.”
Infectious and tropical diseases:
“Patients with infection (and inflammatory conditions or, less commonly, malignancy) usually report fever […] Whatever the cause, body temperature generally rises in the evening and falls during the night […] Fever is often lower or absent in the morning […]. A sensation of ‘feeling hot’ or ‘feeling cold’ is unreliable – healthy individuals often feel these sensations, as may those with menopausal flushing, thyrotoxicosis, stress, panic, or migraine. The height and duration of fever are important. Rigors (chills or shivering, often uncontrollable and lasting for 20–30 minutes) are highly significant, and so is a documented temperature over 37.5 °C taken with a reliable oral thermometer. Drenching sweats are also highly significant. Rigors generally indicate serious bacterial infections […] or malaria. An oral temperature >39 °C has the same significance as rigors. Rigors generally do not occur in mild viral infections […] malignancy, connective tissue diseases, tuberculosis and other chronic infections. […] Anyone with fever lasting longer than a week should have lost weight – if a patient reports a prolonged fever but no weight loss, the ‘fever’ usually turns out to be of no consequence. […] untouched meals indicate ongoing illness; return of appetite is a reliable sign of recovery.”
“Bacterial infections are the most common cause of sepsis, but other serious infections (e.g. falciparum malaria) or inflammatory states (e.g. pancreatitis, pre-eclamptic toxaemia, burns) can cause the same features. Below are listed the indicators of sepsis – the more abnormal the result, the more severe is the patient’s condition.
*[Temperature:] Check if it is above 38 °C or below 36 °C.
*Simple viral infections seldom exceed 39 °C.
*Temperatures (from any cause) are generally higher in the evening than in the early morning.
*As noted above, rigors (uncontrollable shivering) are important indicators of severe bacterial infection or malaria. […] A heart rate greater than 90 beats/min is abnormal, and in severe sepsis a pulse of 140/min is not unusual. […] Peripheries (fingers, toes, nose) are often markedly cooler than central skin (trunk, forehead) with prolonged capillary refill time […] Blood pressure (BP) is low in the supine position (systolic BP <90 mmHg) and falls further when the patient is repositioned upright. In septic shock sometimes the BP is unrecordable on standing, and the patient may faint when they are helped to stand up […] The first sign [of respiratory disturbance] is a respiratory rate greater than 20 breaths/min. This is often a combination of two abnormalities: hypoxia caused by intrapulmonary shunts, and lactic acidosis. […] in hypoxia, the respiratory pattern is normal but rapid. Acidotic breathing has a deep, sighing character (also known as Kussmaul’s respiration). […] Also called toxic encephalopathy or delirium, confusion or drowsiness is often present in sepsis. […] Sepsis is always severe when it is accompanied by organ dysfunction. Septic shock is defined as severe sepsis with hypotension despite adequate fluid replacement.”
“Involuntary neck stiffness (‘nuchal rigidity’) is a characteristic sign of meningitis […] Patients with meningitis or subarachnoid haemorrhage characteristically lie still and do not move the head voluntarily. Patients who complain about a stiff neck are often worried about meningitis; patients with meningitis generally complain of a sore head, not a sore neck – thus neck stiffness is a sign, not a symptom, of meningitis.”
“General practitioners are generally correct when they say an infection is ‘a virus’, but the doctor needs to make an accurate assessment to be sure of not missing a serious bacterial infection masquerading as ‘flu’. […]
*Influenza is highly infectious, so friends, family, or colleagues should also be affected at the same time – the incubation period is short (1–3 days). If there are no other cases, question the diagnosis.
*The onset of viraemic symptoms is abrupt and often quite severe, with chills, headache, and myalgia. There may be mild rigors on the first day, but these are not sustained.
*As the next few days pass, the fever improves each day, and by day 3 the fever is settling or absent. A fever that continues for more than 3 days is not uncomplicated ’flu, and nor is an illness with rigors after the first day.
*As the viraemia subsides, so the upper respiratory symptoms become prominent […] The patient experiences a combination of: rasping sore throat, dry cough, hoarseness, coryza, red eyes, congested sinuses. These persist for a long time (10 days is not unusual) and the patient feels ‘miserable’ but the fever is no longer prominent.”
“Several infections cause a similar picture to ‘glandular fever’. The commonest is EBV [Epstein–Barr Virus], with cytomegalovirus (CMV) a close second; HIV seroconversion may look clinically identical, and acute toxoplasmosis similar (except for the lack of sore throat). Glandular fever in the USA is called ‘infectious mononucleosis’ […] The illness starts with viraemic symptoms of fever (without marked rigors), myalgia, lassitude, and anorexia. A sore throat is characteristic, and the urine often darkens (indicating liver involvement). […] Be very alert for any sign of stridor, or if the tonsils meet in the middle or are threatening to obstruct (a clue is that the patient is unable to swallow their saliva and is drooling or spitting it out). If there are any of these signs of upper airway obstruction, give steroids, intravenous fluids, and call the ENT surgeons urgently – fatal obstruction occasionally occurs in the middle of the night. […] Be very alert for a painful or tender spleen, or any signs of peritonism. In glandular fever the spleen may rupture spontaneously; it is rare, but tragic. It usually begins as a subcapsular haematoma, with pain and tenderness in the left upper quadrant. A secondary rupture through the capsule then occurs at a later date, and this is often rapidly fatal.”
(This was a review lecture for me, as a few months back I read a textbook on these topics which went into quite a lot more detail – the post I link to has some relevant links if you’re curious to explore this topic further).
A few relevant links: Group (featured), symmetry group, Cayley table, Abelian group, Symmetry groups of Platonic solids, dual polyhedron, Lagrange’s theorem (group theory), Fermat’s little theorem. I think he was perhaps trying to cover a little bit too much ground in too little time by bringing up the RSA algorithm towards the end, but I’m sort of surprised how many people disliked the video; I don’t think it’s that bad.
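As a toy illustration of how the group theory in the lecture connects to RSA: Fermat’s little theorem is Lagrange’s theorem applied to the multiplicative group mod a prime, and RSA’s correctness rests on the same kind of reasoning. The sketch below uses tiny, insecure primes of my own choosing (the classic textbook parameters n = 3233, e = 17), purely to show the mechanics:

```python
# Fermat's little theorem: a^(p-1) ≡ 1 (mod p) for every a not
# divisible by the prime p -- a special case of Lagrange's theorem
# for the multiplicative group mod p (which has order p - 1).
p = 101  # a prime
for a in range(1, p):
    assert pow(a, p - 1, p) == 1

# Miniature RSA round trip with tiny (insecure) primes:
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)   # n = 3233, phi = 3120
e = 17                              # public exponent, coprime to phi
d = pow(e, -1, phi)                 # private exponent (modular inverse)
m = 42                              # "message"
c = pow(m, e, n)                    # encrypt
assert pow(c, d, n) == m            # decrypt recovers the message
```

(The modular inverse via `pow(e, -1, phi)` needs Python 3.8+.)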
The beginning of the lecture has a lot of remarks about Fourier’s life which are in some sense not ‘directly related’ to the mathematics, so if the mathematics is what you’re most interested in you can probably skip the first 11 minutes or so of the lecture without missing out on much. The lecture is very non-technical compared to coverage like this, this, and this (…or this).
I think one thing worth mentioning here is that the lecturer is the author of a rather amazing book on the topic he talks about in the lecture.
I noted in my last post about the book that although I’d initially thought I’d cover the rest of the book in that post, in the end I found myself unable to do so because the post would have ended up being too long; this post will cover the remaining chapters and points of interest and will be the last post about the book.
The first of the remaining chapters is about ‘Maintaining Relationships’; as usual most of the coverage focuses on romantic relationships. Some quotes:
“The most frequent focus of maintenance research has been the identification of behaviors or interactions that relational partners can enact to sustain their relationship […]. Numerous typologies of such behaviors exist […] Stafford and Canary’s (1991) initial research on the topic generated five positive and proactive maintenance strategies, which have become widely used […] Positivity refers to attempts to make interactions pleasant. These include acting nice and cheerful when one does not feel that way, performing favors for the partner, and withholding complaints. Openness involves direct discussion about the relationship, including talk about the history of the involvement, rules made, and personal disclosure. Assurances involve support of the partner, comforting the partner, and making one’s commitment clear. Social networks refers to relying on friends and family to support the relationship (e.g., having dinner every Sunday at the in-laws). Finally, sharing tasks refers to doing one’s fair share of household chores […] Early on, Duck (1988) questioned the extent to which maintenance behaviors are intentionally enacted. This issue is central because it addresses whether maintenance as a process requires effort and planning or occurs as a by-product of relating. […] some behaviors might start as strategies but over time become routine […] Dainton and Aylor (2002) found that the same behaviors are used intentionally and unintentionally […] [They] speculated that maintenance might be performed routinely until something happens to disrupt the routine. At that point, relational partners might turn to strategic maintenance enactment. As such, routine maintenance might be used during times when preferred levels of satisfaction and commitment are experienced, and strategic maintenance might be enacted during times of perceived uncertainty.”
“One popular axiom is that relationships are easy to get into and hard to get out of, and evidence exists to support this axiom. Attridge (1994) reviewed various “barriers” to dissolving romantic relationships […] Attridge noted that both internal and external barriers prevent people from treating marriages like blind dates and that smart relational partners would make use of barriers to keep their relationships intact (e.g., remind the partner of religious premises of marriage). In terms of internal barriers that Attridge (1994) reviewed, the first is commitment. […] Next, one’s religious beliefs regarding the sanctity of marriage compel people to remain. Also, one’s self-identity – that is, viewing oneself in terms of the relationship – acts as a barrier to dissolution. Next, irretrievable personal investments (such as spending time with the partner) work against dissolution. Finally, Attridge argued that the presence of children acted as an internal barrier, especially for women; women who have children are more likely to remain in a marriage than are women without children.
In terms of external barriers, Attridge (1994) cited several. Not surprisingly, these include legal barriers, financial obligations, and social networks that promote the bond. In addition to these, we would add a perception of a lack of alternatives. Both Rusbult and Johnson’s models indicate that having no perceived alternatives increases one’s commitment to the partner. Both Johnson (2001) and Rusbult and Martz (1995) have shown that abused women remain in these marriages because they perceive that they have no alternative associations or resources that they can leverage to leave their unhappy state. Conversely, Heaton and Albrecht (1991) found that “social contact – whether having potential sources of help, receiving help, or spending social and recreational time away from home – is positively associated with instability” […] Relationships with barriers are probably stable, but they do not necessarily contain characteristics that demarcate a high-quality relationship. To ensure the continuation of such qualities, one needs to engage in individual and relational strategies that help create and sustain liking, love, commitment, and so forth.”
“research shows that maintenance strategies provide the bases for increases in intimacy […]. That is, the use of maintenance behaviors helps dating partners develop their involvements. Moreover, people who do not engage in maintenance behaviors are more likely to de-escalate or terminate their relationships […] Yet the functional utility of maintenance behaviors does not endure for long. […] Canary, Stafford, and Semic (2002) conducted a panel study examining married partners’ maintenance activity and relational characteristics (liking, commitment, and control mutuality) at three points in time, each a month apart. They found that maintenance behaviors are strongly associated with relational characteristics concurrently, but that the effects completely fade within a month’s time (when controlling for the previous months’ reports). Thus, it appears that maintenance strategies must be used continuously if they are to sustain desired relational characteristics. Being positive, assuring the partner of one’s love and commitment, sharing tasks, and so forth represent proactive relational behaviors to be sure, but they must be enacted on a regular basis to matter.”
“Rusbult (1987) identified variations in the way that people respond to their partners during troubled times. These tendencies to accommodate reflect two dimensions: passive versus active and constructive versus destructive. Exit is an active and destructive behavior that includes threats to leave the partner; Voice is an active and constructive strategy that involves discussing the problem without hostility; Loyalty is a passive and constructive approach that involves giving in to the partner; and Neglect is a passive and destructive approach that includes passive–aggressive reactions. Several studies have shown that committed individuals are more likely to engage in the more civil forms of accommodation – voice and loyalty – and that these behaviors have more positive associations with relational quality than do neglect or exit. […] Tests of Rusbult’s model have largely endorsed its basic tenets, as reported elsewhere (Canary & Zelley, 2000).”
“a longstanding assumption is that in established relationships much communication involves taken-for-granted presumptions and expectations, and “habits of adjustment to the other person become perfected and require less participation of the consciousness” (Waller, 1951, p. 311). This would imply that over time maintenance would be achieved routinely rather than strategically. […] Research supports these presuppositions.”
The next chapter is called ‘The Treatment of Relationship Distress: Theoretical Perspectives and Empirical Findings’ – a few observations from the chapter:
“distressed married couples are more prone than nondistressed couples to aversive, destructive patterns of communication […] distressed couples are more likely to engage in exchanges in which one person’s hurtful comment is reciprocated with greater intensity by the receiving partner. […] Studies of couples’ conversations have shown that distressed partners are more likely to respond negatively to each other’s expressions of negative affect than are members of nondistressed couples (negative reciprocity); furthermore, these expressions of negative affect are not as likely to be offset by high levels of positive affect as they are in nondistressed relationships […] social learning theory emphasizes that a spouse’s behavior is both learned and influenced by the other partner’s behavior. Over time, spouses’ influence on each other becomes a stronger predictor of current behavior than the influences of previous close relationships.”
CBCT [Cognitive–Behavioral Couple Therapy] researchers have identified five major types of cognitions involved in couple relationship functioning […] The first three cognitions involve evaluations of specific events. Selective attention involves how each member of a couple idiosyncratically notices, or fails to notice, particular aspects of relationship events. Selective attention contributes to distressed couples’ low rates of agreement about the occurrence and quality of specific events, as well as negative biases in perceptions of each other’s messages […] Attributions are inferences made about the determinants of partners’ positive and negative behaviors. The tendency of distressed partners to attribute each other’s negative actions to global, stable traits has been referred to as “distress-maintaining attributions” because they leave little room for future optimism that one’s partner will behave in a more pleasing manner in other situations […] Expectancies, or predictions that each member of the couple makes about particular relationship events in the immediate or more distant future, are the last type of cognitions involving specific events. Negative relationship expectancies have been associated with lower [relationship] satisfaction […] The fourth and fifth categories of cognition are forms of what cognitive therapists have referred to as basic or core beliefs shaping one’s experience of the world. 
These include (a) assumptions, or beliefs that each individual holds about the characteristics of individuals and intimate relationships, and (b) standards, or each individual’s personal beliefs about the characteristics that an intimate relationship and its members “should” have […] Couples’ assumptions and standards are associated with current relationship distress, either when these beliefs are unrealistic or when the partners are not satisfied with how their personal standards are being met in their relationship […] many of the problematic behavioral interactions between spouses may evolve from the partners’ relatively stable cognitions about the relationship. Unless these cognitions are taken into account, successful intervention is likely to be compromised.” [The important point being that in a distressed relationship you can address: a) behaviours, b) how people in the relationship think about the behaviours, or c) both – and c seems at least theoretically to be superior to either of the other choices].
“CBCT teaches partners to monitor and test the appropriateness of their cognitions. It incorporates some standard cognitive restructuring strategies, such as (a) considering alternative attributions for a partner’s negative behavior; (b) asking for behavioral data to test a negative perception concerning a partner (e.g., that the partner never complies with requests); and (c) evaluating extreme standards by generating lists of the advantages and disadvantages of expectations to live up to this standard. […] Overall, we propose that some of the common elements in the effective approaches that we have reviewed include (a) broadening partners’ perspectives on sources of their difficulties as a couple, as well as on their strengths as a couple; (b) increasing the partners’ abilities to differentiate between the strengths and problems within their current relationship, versus characteristics that occurred in prior relationships; (c) motivating and directing the couple to reduce behavioral patterns that maintain or worsen relationship distress; and (d) increasing the range of constructive strategies that partners have available for influencing each other. […] Although the quality of the therapeutic alliance in explaining treatment effects has not been investigated empirically in couple therapy, the therapeutic alliance has received considerable attention in psychotherapy research more generally. A recent meta-analysis of psychotherapy concluded that the therapeutic alliance explains between 38% and 77% of the variance in treatment outcome, whereas specific techniques account for only 0% to 8% of the variance (Wampold, 2001).”
The last chapter is a sort of ‘bringing it all together’ chapter with some key points to take away from the book. I thought I’d include a few of these here even if I’ve talked about them before:
“The ratio of positive and negative behaviors during conflict interactions is also critical to relationships as viewed from a social exchange perspective […]. The study of conflict communication in married couples, however, has shown that negative behavior tends to have a stronger impact on relationship satisfaction than positive behavior. […] In discussing social exchange processes and emotion, Planalp, Fitness, and Fehr debunk the idea that social exchange processes are cold and calculating and argue that “the basic concepts and processes of social exchange theory can be viewed as deeply emotional.” For example, they note that rewards and costs are often experienced as positive and negative feelings. In addition, our reactions to inequity and inequality in our relationships are likely to be highly emotional, and indeed such social exchange concepts as comparison levels and comparison levels for alternatives are basically about positive and negative feelings toward the partner and toward potential alternatives. […] Although there is some controversy about the extent to which social exchange processes are relevant to committed relationships that are going well, it is clear that people want their relationships to be fair and equitable, and exchange processes tend to become the focus when relationships are not going well.”
“Fincham and Beach suggest that the evidence for an association between attributions and relationship satisfaction is one of the most robust findings in the area of close relationships […] understanding a person’s interpretation of partner behavior may be as important as observing that behavior […] [However] many cognitive variables, apart from attributions, are associated with relationship satisfaction. Their list includes discrepancies between the partner’s behavior and one’s ideal standards, social comparison processes such as seeing one’s relationships as superior to the norm, memory processes that lead to the recall of positive versus negative memories, and self-evaluation maintenance processes that serve to maintain self-esteem even when one compares poorly with the partner.”
“Commitment seems to be the strongest predictor of relational stability, and other factors include religious beliefs about the sanctity of marriage, viewing one’s identity in terms of the relationship, personal investments in the relationship, and children. Le and Agnew (2003) conducted a meta-analysis to test Rusbult’s (1980) investment model of commitment. They found that Rusbult’s three variables of satisfaction with, alternatives to, and investment in the relationship were significantly related to commitment to that relationship and together accounted for two-thirds of the variance in commitment.”
“cognitive distortions in a positive direction tend to be characteristic of happy couples. Those who idealize their partners and who tend to see their partners in a more positive light than their partners view themselves are likely to be happier than other couples. The attributions of these couples are likely to be affected, and they are likely to blame themselves for negative events and give their partners the credit for positive events […] there is a lot of evidence in this volume supporting the powerful role that cognitions can play in personal relationships. Whether our focus is on cognitions at the cultural level or at the interpersonal level, they seem to have powerful effects on relationship behavior and satisfaction. Also, the effects are likely to be reciprocal, with cognitions affecting relationship satisfaction and satisfaction affecting cognitions.”
Yesterday I gave some of the reasons I had for disliking the book; in this post I’ll provide some of the reasons why I kept reading. The book had a lot of interesting data. I know I’ve covered some of these topics and numbers before (e.g. here), but I don’t mind repeating myself every now and then; some things are worth saying more than once, and as for those that are not, I must admit I don’t really care enough to spend time perusing the archives just to make sure I avoid repeating myself here. Anyway, here are some numbers from the coverage:
“Twenty-two high-burden countries account for over 80 % of the world’s TB cases […] data referring to 2011 revealed 8.7 million new cases of TB [worldwide] (13 % coinfected with HIV) and 1.4 million deaths due to the disease […] Around 80 % of TB cases among people living with HIV were located in Africa. In 2011, in the WHO European Region, 6 % of TB patients were coinfected with HIV […] In 2011, the global prevalence of HIV accounted for 34 million people; 69 % of them lived in Sub-Saharan Africa. Around five million people are living with HIV in South, South-East and East Asia combined. Other high-prevalence regions include the Caribbean, Eastern Europe and Central Asia. Worldwide, HIV incidence is in downturn. In 2011, 2.5 million people acquired HIV infection; this number was 20 % lower than in 2001. […] Sub-Saharan Africa still accounts for 70 % of all AIDS-related deaths […] Worldwide, an estimated 499 million new cases of curable STIs (as gonorrhoea, chlamydia and syphilis) occurred in 2008; these findings suggested no improvement compared to the 448 million cases occurring in 2005. However, wide variations in the incidence of STIs are reported among different regions; the burden of STIs mainly occurs in low-income countries”.
“It is estimated that in 2010 alone, malaria caused 216 million clinical episodes and 655,000 deaths. An estimated 91 % of deaths in 2010 were in the African Region […]. A total of 3.3 billion people (half the world’s population) live in areas at risk of malaria transmission in 106 countries and territories”.
“Diarrhoeal diseases amount to an estimated 4.1 % of the total disability-adjusted life years (DALY) global burden of disease, and are responsible for 1.8 million deaths every year. An estimated 88 % of that burden is attributable to unsafe supply of water, sanitation and hygiene […] It is estimated that diarrhoeal diseases account for one in nine child deaths worldwide, making diarrhoea the second leading cause of death among children under the age of 5 after pneumonia”
“NCDs [Non-Communicable Diseases] are the leading global cause of death worldwide, being responsible for more
deaths than all other causes combined. […] more than 60 % of all deaths worldwide currently stem from NCDs.
In 2008, the leading causes of all NCD deaths (36 million) were:
• CVD [cardiovascular disease] (17 million, or 48 % of NCD deaths) [nearly 30 % of all deaths];
• Cancer (7.6 million, or 21 % of NCD deaths) [about 13 % of all deaths]
• Respiratory diseases (4.2 million, or 12 % of NCD deaths) [7 % of all deaths]
• Diabetes (1.3 million, 4 % of NCD deaths).” [Elsewhere in the publication they report that: “In 2010, diabetes was responsible for 3.4 million deaths globally and 3.6 % of DALYs” – obviously there’s a lot of uncertainty here. How to avoid ‘double-counting’ is one of the major issues, because we have a pretty good idea what they die of: “CVD is by far the most frequent cause of death in both men and women with diabetes, accounting for about 60 % of all mortality”].
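As a quick sanity check of those bracketed percentages (my own arithmetic, not something from the book – note that my rounding gives 47 % rather than the quoted 48 % for CVD, presumably because the underlying counts are themselves rounded):

```python
# Recompute the quoted shares of NCD deaths from the absolute numbers.
# 36 million total NCD deaths in 2008; cause-specific counts from the quote.
total_ncd = 36.0  # millions
causes = {"CVD": 17.0, "Cancer": 7.6, "Respiratory": 4.2, "Diabetes": 1.3}
shares = {c: 100 * n / total_ncd for c, n in causes.items()}
for c, s in shares.items():
    print(c, round(s))  # CVD 47, Cancer 21, Respiratory 12, Diabetes 4
```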
“Behavioural risk factors such as physical inactivity, tobacco use and unhealthy diet explain nearly 80 % of the CVD burden”
“nearly 80 % of NCD deaths occur in low- and middle-income countries, up sharply from just under 40 % in 1990 […] Low- and lower-middle-income countries have the highest proportion of deaths from NCDs under 60 years. Premature deaths under 60 years were 13 % for high-income countries and 25 % for upper-middle-income countries. […] In low-income countries, the proportion of premature NCD deaths under 60 years is 41 %, three times the proportion in high-income countries. […] Overall, NCDs account for more than 50 % of DALYs [disability-adjusted life years] in most countries. This percentage rises to over 80 % in Australia, Japan and the richest countries of Western Europe and North America […] In Europe, CVD causes over four million deaths per year (52 % of deaths in women and 42 % of deaths in men), and they are the main cause of death in women in all European countries.”
“Overall, age-adjusted CVD death rates are higher in most low- and middle-income countries than in developed countries […]. CHD [coronary heart disease] and stroke together are the first and third leading causes of death in developed and developing countries, respectively. […] excluding deaths from cancer, these two conditions were responsible for more deaths in 2008 than all remaining causes among the ten leading causes of death combined (including chronic diseases of the lungs, accidents, diabetes, influenza, and pneumonia)”.
“The global prevalence of diabetes was estimated to be 10 % in adults aged 25+ years […] more than half of all nontraumatic lower limb amputations are due to diabetes [and] diabetes is one of the leading causes of visual impairment and blindness in developed countries.”
“Almost six million people die from tobacco each year […] Smoking is estimated to cause nearly 10 % of CVD […] Approximately 2.3 million die each year from the harmful use of alcohol. […] Alcohol abuse is responsible for 3.8 % of all deaths (half of which are due to CVD, cancer, and liver cirrhosis) and 4.5 % of the global burden of disease […] Heavy alcohol consumption (i.e. ≥ 4 drinks/day) is significantly associated with an about fivefold increased risk of oral and pharyngeal cancer and oesophageal squamous cell carcinoma (SqCC), 2.5-fold for laryngeal cancer, 50 % for colorectal and breast cancers and 30 % for pancreatic cancer. These estimates are based on a large number of epidemiological studies, and are generally consistent across strata of several covariates. […] The global burden of cancer attributable to alcohol drinking has been estimated at 3.6 and 3.5 % of cancer deaths, although this figure is higher in high-income countries (e.g. the figure of 6 % has been proposed for UK and 9 % in Central and Eastern Europe).”
“At least two million cancer cases per year (18 % of the global cancer burden) are attributable to chronic infections by human papillomavirus, hepatitis B virus, hepatitis C virus and Helicobacter pylori. These infections are largely preventable or treatable […] The estimate of the attributable fraction is higher in low- and middle-income countries than in high-income countries (22.9 % of total cancer vs. 7.4 %).”
“Information on the magnitude of CVD in high-income countries is available from three large longitudinal studies that collect multidisciplinary data from a representative sample of European and American individuals aged 50 and older […] according to the Health Retirement Survey (HRS) in the USA, almost one in three adults have one or more types of CVD [11, 12]. By contrast, the data of Survey of Health, Ageing and Retirement in Europe (SHARE), obtained from 11 European countries, and English Longitudinal Study of Aging (ELSA) show that disease rates (specifically heart disease, diabetes, and stroke) across these populations are lower (almost one in five)”
“In 1990, the major fraction of morbidity worldwide was due to communicable, maternal, neonatal, and nutritional disorders (47 %), while 43 % of disability adjusted life years (DALYs) lost were attributable to NCDs. Within two decades, these estimates had undergone a drastic change, shifting to 35 % and 54 %, respectively”
“Estimates of the direct health care and nonhealth care costs attributable to CVD in many countries, especially in low- and middle-income countries, are unclear and fragmentary. In high-income countries (e.g., USA and Europe), CVD is the most costly disease both in terms of economic costs and human costs. Over half (54 %) of the total cost is due to direct health care costs, while one fourth (24 %) is attributable to productivity losses and 22 % to the informal care of people with CVD. Overall, CVD is estimated to cost the EU economy, in terms of health care, almost €196 billion per year, i.e., 9 % of the total health care expenditure across the EU”
“In the WHO European Region, the Eastern Mediterranean Region, and the Region of the Americas, over 50 % of women are overweight. The highest prevalence of overweight among infants and young children is in upper-to-middle-income populations, while the fastest rise in overweight is in the lower-to-middle-income group. Globally, in 2008, 9.8 % of men and 13.8 % of women were obese compared to 4.8 % of men and 7.9 % of women in 1980.”
“In low-income countries, around 25 % of adults have raised total cholesterol, while in high-income countries, over 50 % of adults have raised total cholesterol […]. Overall, one third of CHD disease is attributable to high cholesterol levels” (These numbers seem very high to me, but I’m reporting them anyway).
“interventions based on tobacco taxation have a proportionally greater effect on smokers of lower SES and younger smokers, who might otherwise be difficult to influence. Several studies suggest that the application of a 10 % rise in price could lead to as much as a 2.5–10 % decline in smoking [20, 45, 50, 56].”
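To make the quoted range a bit more concrete, here’s a back-of-the-envelope sketch of how a price elasticity of demand translates into such a decline in smoking; the elasticity values below are illustrative assumptions of mine, not numbers from the book:

```python
# Back-of-the-envelope: translate a (constant) price elasticity of demand
# for cigarettes into the expected decline in consumption after a
# tax-driven price increase.

def expected_decline(price_rise_pct, elasticity):
    """Percentage decline in consumption for a given percentage price rise,
    assuming a constant elasticity (decline = -elasticity * price rise)."""
    return -elasticity * price_rise_pct

# Elasticities between -0.25 and -1.0 reproduce the quoted 2.5-10 % range
# for a 10 % price rise:
for e in (-0.25, -0.5, -1.0):
    print(f"elasticity {e}: {expected_decline(10, e):.1f} % decline")
```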
“The decision to allocate resources for implementing a particular health intervention depends not only on the strength of the evidence (effectiveness of intervention) but also on the cost of achieving the expected health gain. Cost-effectiveness analysis is the primary tool for evaluating health interventions on the basis of the magnitude of their incremental net benefits in comparison with others, which allows the economic attractiveness of one program over another to be determined [More about this kind of stuff here]. If an intervention is both more effective and less costly than the existing one, there are compelling reasons to implement it. However, the majority of health interventions do not meet these criteria, being either more effective but more costly, or less costly but less effective, than the existing interventions [see also this]. Therefore, in most cases, there is no “best” or absolute level of cost-effectiveness, and this level varies mainly on the basis of health care system expenditure and needs.”
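A minimal sketch of the kind of incremental cost-effectiveness calculation alluded to above; all the numbers are made up for illustration:

```python
# Incremental cost-effectiveness ratio (ICER): the extra cost per extra
# unit of health gained when switching from an existing intervention to
# a new one. Illustrative numbers only.

def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost per unit of health gained (e.g. per DALY averted)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Intervention B costs more than A but averts more DALYs; whether the
# extra cost is 'worth it' depends on the decision maker's threshold.
ratio = icer(cost_new=500_000, cost_old=300_000,
             effect_new=120, effect_old=70)  # effects in DALYs averted
threshold = 5_000  # illustrative willingness-to-pay per DALY averted
print(ratio, ratio < threshold)  # 4000.0 True
```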
“The number of new cases of cancer worldwide in 2008 has been estimated at about 12,700,000. Of these, 6,600,000 occurred in men and 6,000,000 in women. About 5,600,000 cases occurred in high-resource countries […] and 7,100,000 in low- and middle-income countries. Among men, lung, stomach, colorectal, prostate and liver cancers are the most common […], while breast, colorectal, cervical, lung and stomach are the most common neoplasms among women […]. The number of deaths from cancer was estimated at about 7,600,000 in 2008 […] No global estimates of survival from cancer are available: Data from selected cancer registries suggest wide disparities between high- and low-income countries for neoplasms with effective but expensive treatment, such as leukaemia, while the gap is narrow for neoplasms without an effective therapy, such as lung cancer […]. The overall 5-year survival of cases diagnosed during 1995–1999 in 23 European countries was 49.6 % […] Tobacco smoking is the main single cause of human cancer worldwide […] In high-income countries, tobacco smoking causes approximately 30 % of all human cancers.”
“Systematic reviews have concluded that nutritional factors may be responsible for about one fourth of human cancers in high-income countries, although, because of the limitations of the current understanding of the precise role of diet in human cancer, the proportion of cancers known to be avoidable in practicable ways is much smaller. The only justified dietary recommendation for cancer prevention is to reduce the total caloric intake, which would contribute to a decrease in overweight and obesity, an established risk factor for human cancer. […] The magnitude of the excess risk [associated with obesity] is not very high (for most cancers, the relative risk (RR) ranges between 1.5 and 2 for body weight higher than 35 % above the ideal weight). Estimates of the proportion of cancers attributable to overweight and obesity in Europe range from 2 % to 5 %. However, this figure is likely to be larger in North America, where the prevalence of overweight and obesity is higher.”
“Estimates of the global burden of cancer attributable to occupation in high-income countries result in the order of 1–5 % [9, 42]. In the past, almost 50 % of these were due to asbestos alone […] The available evidence suggests, in most populations, a small role of air, water and soil pollutants. Global estimates are in the order of 1 % or less of total cancers [9, 42]. This is in striking contrast with public perception, which often identifies pollution as a major cause of human cancer.”
“Avoidance of sun exposure, in particular during the middle of the day, is the primary preventive measure to reduce the incidence of skin cancer. There is no adequate evidence of a protective effect of sunscreens, possibly because use of sunscreens is associated with increased exposure to the sun. The possible benefit in reducing skin cancer risk by reduction of sun exposure, however, should be balanced against possible favourable effects of UV radiation in promoting vitamin D metabolism.”
In my review of the book on goodreads I did not have many nice things to say about it, though I did note that it contained some interesting data. I’ll save those for another post – in this post I’ll provide some of the reasons why the book got a one star rating. Given the format of the book I thought I should clarify a bit what I didn’t like about it, because both the title and the basic structure made the book seem quite promising; they cover a lot of review articles and a lot of studies, so how could I possibly dislike a book like that? Well…
The main issue: If I thought the Psychology of Lifestyle book was bad in terms of implicit political assumptions etc., this book takes this to a whole different level. Outright bans and severe restrictions on behaviours harming health are repeatedly described as either cost-effective or ‘best buys’, and many chapters don’t even touch upon potential problems associated with such policies. Along the way you start wondering why policies such as national bans on alcohol and tobacco, with special police forces armed with automatic weapons coming to your house during the night and throwing you in jail if you’re found smoking a cigarette, aren’t already implemented worldwide, if the research really looks that way. The political agenda seems so apparent in many chapters that you start questioning the reporting, because you figure these people would not be above lying to you to get the sort of policies they’d like. Faulty assumptions throughout the coverage don’t help. As a rule you don’t get significant health effects simply by providing information about healthy behaviours and behavioural risk factors to the population; we know this from a large number of studies – and I know this because I just read a book about this research – so the fact that some authors assume such interventions to be ‘cost-effective’, and that they can point to one very old example where there does seem to have been some measurable effects, does not convince me. Some of the authors point to interventions involving primary care physicians lecturing people about healthy lifestyle behaviours as being cost-effective, without at all going into the many issues related to even evaluating the long-run health effects of such interventions. That effects might not persist over time is not the impression you get from this kind of coverage:
“The evidence suggests that counseling by physicians to reduce intake of total fat, saturated fat intake, and daily salt, and to increase fruit and vegetable intake, is very cost-effective, leading to dietary changes, improved weight control, and increased physical activity [64–69].” (p. 55).
Compare with for example this quote from Thirlaway and Upton:
“Hundreds of interventions to combat the obesity epidemic are currently being introduced worldwide, but there are significant gaps in the evidence base for such interventions and few have been evaluated in a way that enables any definitive conclusions to be drawn about their effectiveness. Those that have shown an impact are limited to easily controlled settings and it remains unclear how promising small-scale initiatives would be scaled up for whole population impact”.
What people compare when doing the CEAs in the book is occasionally/often unclear, which tends to make that sort of reporting close to worthless. I had the impression in some parts of the coverage that what was driving cost-effectiveness in some of the studies was a combination of large health impacts of disease + assumed but unproven/speculative health impacts of the interventions; an impression probably partly a result of the intervention study coverage provided in Thirlaway & Upton.
‘Implicit assumptions’ and more or less overtly politicizing comments along the way spoiled the reading experience. Below I have added some examples of sentences I for various reasons did not like:
“Several countries have explored fiscal measures such as increased taxation on foods that should be consumed in lower quantities and decreased taxation, price subsidies or production incentives for foods that are encouraged.” (‘foods that should be consumed…’).
“Restriction of alcohol drinking to the limits indicated by the European Code Against Cancer (20 g/day for men and 10 g/day for women) would avoid about 90 % of alcohol-related cancers and cancer deaths in men and over 50 % of cancers in women, i.e. about 330/360,000 cancer cases and about 200/220,000 cancer deaths. Avoidance or moderation of alcohol consumption to 2 drinks/day in men and 1 drink/day in women is therefore a global public health priority” [The idea that men might not want to avoid 90% of alcohol-related cancers doesn’t seem to cross the minds of these authors – they want them to not get cancer, and they’re going to get their way one way or the other, dammit!]
“Nowadays, obesity is the most frequently encountered metabolic disease” [Disease? Disease???]
“T2D is the most common type of diabetes, representing 90 % of cases worldwide and it is named non-insulin-dependent diabetes mellitus (NIDDM)” [My comment in the margin: “No, it’s actually not. No longer. Because this is a terrible name. A majority of diabetics on insulin treatment are type 2 diabetics.” (see also my comments in the last paragraph here if you’re curious to know more about this topic)]
“The difficulty of communicating is, however, exactly the major obstacle in this communion of responsibility. In this regard, we shall analyze the dynamics of interpersonal communication based on the scheme proposed by Slama-Cazacu. According to this model the elements of a communicative act are: (1) the transmitter, who produces the message, (2) the message conveyed according to the rules provided by code; (3) the code according to which the message is produced; (4) the transmission channel; (5) the context in which the message is found and to which it refers; and (6) the receiver” [To be frank, the chapter from which this quote is taken – Some Ethical Reflections in Public Health – had almost nothing but problematic sentences, despite actually addressing a few issues I’d had with the coverage elsewhere in the publication. I thought the quote illustrated how rambling and besides-the-point that coverage was; recall that this is a chapter about ethics. The quote was used to provide context so that you’d understand e.g. that people sometimes don’t understand health messages. Incidentally you should not be fooled by the quote into assuming that the author actually covered any data about how sensitive people are to health data in this coverage (how information impacts behaviour). She of course did not.]
“The distal risk factors of ethnic groups thus explain why a certain proximal risk factor is unevenly distributed across ethnic groups. If, for example, a certain ethnic minority group has an increased prevalence of smoking, this may be due to the fact that the group is exposed to discrimination in the host country (relational), or to specific sociocultural values characteristic for that group (attributional).” [My comment in the margin: “Discrimination => smoking? Seriously? Stop being stupid.” I was close to losing my patience at this point…]
“metabolic control is poor among migrant groups with diabetes, and HbA1c in migrants is generally higher than in the local-born population [3, 32]. These findings suggest shortfalls in diabetes health care among migrant populations.” [“Or some of the immigrants are stupid and irresponsible.” As mentioned, I was losing patience fast… (In the margin the words ‘some of’ were of course not included, but I live in a wonderful country where omitting such qualifiers in texts like this one runs the risk of getting you thrown in jail for ‘racism’..)]
“For European health care contexts, empirical research on inequalities in healthcare outcomes is scarce. For some diseases or care contexts, ethnic inequalities in outcomes, attributable to deficient care, have been shown.” [Stuff like this was also part of the reason for the outburst above – I got really annoyed in this chapter, because the author repeatedly seemed to assume/implicitly assert that anything less than equal coverage for all individuals living in a country was a state that was really morally unjustifiable – later talk about ‘diversity-responsive care’ did not help. I don’t understand how anyone would consider it to be fair that a guy getting sick after paying taxes into the cost-sharing mechanism financing his care for 30 years does not get better health care coverage than some poor immigrant who just arrived yesterday and hasn’t paid anything into the scheme, but anyway this is politics and so I shouldn’t bother.]
“In developing countries, the prevalence of some form of depression among urban adults ranges from 12 to 51 %” [No, it probably doesn’t…]
“Of course, in a millennium in which next to the advancement of health technologies (digital, with the development of nanotechnology; social and cultural, with the emergence of new values that should be conjugated with the old; scientific and medical, through imaging and the study of genomics, proteomics, and metabolomics; etc.) there is a global crisis of the world economy, it is fundamental to strengthen and use the assets of individual and community resilience (most definitions of resilience refer to notions—derived from physics—of rebound, or bouncing back, from deformation or distress), also because action to improve community health requires the coordination and the cooperation of decision makers in many sectors responsible for shaping wider determinants, and also because the traditional management of policy may be ineffective to address the problems of the “future cities” and requires an institutional change, given the discrepancy that can exist between technological innovation, scientific evolution, and adaptive flexibility of governance systems.” [This was around the point where I decided that no matter what happened in the last couple of chapters, this book is going to get one star]
“The National Institute for Public Health and the Environment was committed to analyze opportunities to address health inequalities through the HiAP strategy. On the basis of data derived from the document analysis, 38 out of 153 policy resolutions were identified to have a potential impact on determinants of health inequalities. Resolutions often consisted of a combination of policy measures, projects, and programs and were mostly released by the Ministry of Housing, Communities, and Integration and by the Ministry of the Education, Culture, and Science. Fifteen resolutions were on the enhancement of socioeconomic position; 4 on striving participation of people with health problems; 19 on improving living and working environment and lifestyle; and 4 on accessibility and quality of care. Interestingly, only 11 were inter-sectoral collaboration between the Ministry of Health and other ministries. This aspect allows us to conclude that even though HiAP is officially recognized as a strategic approach to be followed in setting policies and programs, further efforts are needed at European and global levels in order to implement in a practical manner.” [I’m pretty sure if this stuff had not been located in the last chapter of the book, I’d never have finished the book.]
I haven’t really blogged this book in anywhere near the amount of detail it deserves even though my first post about the book actually had a few quotes illustrating how much different stuff is covered in the book.
This book is technical, and even though I’m trying to make this post less technical by omitting the math, it may be a good idea to reread the first post about the book before reading this one, to refresh your knowledge of these things.
Quotes and comments below – most of the coverage here focuses on stuff covered in chapters 3 and 4 in the book.
“Tests of null hypotheses and information-theoretic approaches should not be used together; they are very different analysis paradigms. A very common mistake seen in the applied literature is to use AIC to rank the candidate models and then “test” to see whether the best model (the alternative hypothesis) is “significantly better” than the second-best model (the null hypothesis). This procedure is flawed, and we strongly recommend against it […] the primary emphasis should be on the size of the treatment effects and their precision; too often we find a statement regarding “significance,” while the treatment and control means are not even presented. Nearly all statisticians are calling for estimates of effect size and associated precision, rather than test statistics, P-values, and “significance.” [Borenstein & Hedges certainly did as well in their book (written much later), and this was not an issue I omitted to talk about in my coverage of their book…] […] Information-theoretic criteria such as AIC, AICc, and QAICc are not a “test” in any sense, and there are no associated concepts such as test power or P-values or α-levels. Statistical hypothesis testing represents a very different, and generally inferior, paradigm for the analysis of data in complex settings. It seems best to avoid use of the word “significant” in reporting research results under an information-theoretic paradigm. […] AIC allows a ranking of models and the identification of models that are nearly equally useful versus those that are clearly poor explanations for the data at hand […]. Hypothesis testing provides no general way to rank models, even for models that are nested. […] In general, we recommend strongly against the use of null hypothesis testing in model selection.”
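For readers unfamiliar with the mechanics, here’s a small sketch of what AIC-based ranking looks like in practice. The log-likelihoods and parameter counts below are invented for three hypothetical candidate models fit to the same data – and note that, as the quote demands, no p-values or “significance” tests appear anywhere:

```python
import math

def aic(log_lik, k):
    """Akaike's information criterion: -2 log L + 2k, where k is the
    number of estimated parameters."""
    return -2.0 * log_lik + 2.0 * k

# (maximized log-likelihood, number of parameters) for each candidate model:
models = {"g1": (-120.3, 3), "g2": (-118.9, 5), "g3": (-119.8, 4)}
scores = {name: aic(ll, k) for name, (ll, k) in models.items()}

# Rank by Delta_i = AIC_i - AIC_min; the best model has Delta = 0,
# and larger deltas indicate increasingly poor approximations.
best = min(scores.values())
for name in sorted(scores, key=scores.get):
    print(name, round(scores[name] - best, 2))
```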
“The bootstrap is a type of Monte Carlo method used frequently in applied statistics. This computer-intensive approach is based on resampling of the observed data […] The fundamental idea of the model-based sampling theory approach to statistical inference is that the data arise as a sample from some conceptual probability distribution f. Uncertainties of our inferences can be measured if we can estimate f. The bootstrap method allows the computation of measures of our inference uncertainty by having a simple empirical estimate of f and sampling from this estimated distribution. In practical application, the empirical bootstrap means using some form of resampling with replacement from the actual data x to generate B (e.g., B = 1,000 or 10,000) bootstrap samples […] The set of B bootstrap samples is a proxy for a set of B independent real samples from f (in reality we have only one actual sample of data). Properties expected from replicate real samples are inferred from the bootstrap samples by analyzing each bootstrap sample exactly as we first analyzed the real data sample. From the set of results of sample size B we measure our inference uncertainties from sample to (conceptual) population […] For many applications it has been theoretically shown […] that the bootstrap can work well for large sample sizes (n), but it is not generally reliable for small n […], regardless of how many bootstrap samples B are used. […] Just as the analysis of a single data set can have many objectives, the bootstrap can be used to provide insight into a host of questions. For example, for each bootstrap sample one could compute and store the conditional variance–covariance matrix, goodness-of-fit values, the estimated variance inflation factor, the model selected, confidence interval width, and other quantities. Inference can be made concerning these quantities, based on summaries over the B bootstrap samples.”
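A minimal sketch of the empirical bootstrap as described above, using an arbitrary made-up sample; each bootstrap sample is analyzed exactly as the real data sample would be:

```python
import random
import statistics

random.seed(1)
x = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7, 2.2, 3.1, 2.6, 3.3]  # the one real sample

B = 1000  # number of bootstrap samples
boot_means = []
for _ in range(B):
    # Resample n items from the actual data, with replacement:
    resample = [random.choice(x) for _ in x]
    # Analyze the bootstrap sample exactly as we analyzed the real data:
    boot_means.append(statistics.mean(resample))

# Measure inference uncertainty from the spread over the B samples:
se = statistics.stdev(boot_means)  # bootstrap standard error of the mean
boot_means.sort()
lo, hi = boot_means[int(0.025 * B)], boot_means[int(0.975 * B)]  # crude 95 % percentile interval
print(round(se, 3), round(lo, 2), round(hi, 2))
```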
“Information criteria attempt only to select the best model from the candidate models available; if a better model exists, but is not offered as a candidate, then the information-theoretic approach cannot be expected to identify this new model. Adjusted R2 […] are useful as a measure of the proportion of the variation “explained,” [but] are not useful in model selection […] adjusted R2 is poor in model selection; its usefulness should be restricted to description.”
“As we have struggled to understand the larger issues, it has become clear to us that inference based on only a single best model is often relatively poor for a wide variety of substantive reasons. Instead, we increasingly favor multimodel inference: procedures to allow formal statistical inference from all the models in the set. […] Such multimodel inference includes model averaging, incorporating model selection uncertainty into estimates of precision, confidence sets on models, and simple ways to assess the relative importance of variables.”
“If sample size is small, one must realize that relatively little information is probably contained in the data (unless the effect size is very substantial), and the data may provide few insights of much interest or use. Researchers routinely err by building models that are far too complex for the (often meager) data at hand. They do not realize how little structure can be reliably supported by small amounts of data that are typically “noisy.””
“Sometimes, the selected model [when applying an information criterion] contains a parameter that is constant over time, or areas, or age classes […]. This result should not imply that there is no variation in this parameter, rather that parsimony and its bias/variance tradeoff finds the actual variation in the parameter to be relatively small in relation to the information contained in the sample data. It “costs” too much in lost precision to add estimates of all of the individual θi. As the sample size increases, then at some point a model with estimates of the individual parameters would likely be favored. Just because a parsimonious model contains a parameter that is constant across strata does not mean that there is no variation in that process across the strata.”
“[In a significance testing context,] a significant test result does not relate directly to the issue of what approximating model is best to use for inference. One model selection strategy that has often been used in the past is to do likelihood ratio tests of each structural factor […] and then use a model with all the factors that were “significant” at, say, α = 0.05. However, there is no theory that would suggest that this strategy would lead to a model with good inferential properties (i.e., small bias, good precision, and achieved confidence interval coverage at the nominal level). […] The purpose of the analysis of empirical data is not to find the “true model”— not at all. Instead, we wish to find a best approximating model, based on the data, and then develop statistical inferences from this model. […] We search […] not for a “true model,” but rather for a parsimonious model giving an accurate approximation to the interpretable information in the data at hand. Data analysis involves the question, “What level of model complexity will the data support?” and both under- and overfitting are to be avoided. Larger data sets tend to support more complex models, and the selection of the size of the model represents a tradeoff between bias and variance.”
“The easy part of the information-theoretic approaches includes both the computational aspects and the clear understanding of these results […]. The hard part, and the one where training has been so poor, is the a priori thinking about the science of the matter before data analysis — even before data collection. It has been too easy to collect data on a large number of variables in the hope that a fast computer and sophisticated software will sort out the important things — the “significant” ones […]. Instead, a major effort should be mounted to understand the nature of the problem by critical examination of the literature, talking with others working on the general problem, and thinking deeply about alternative hypotheses. Rather than “test” dozens of trivial matters (is the correlation zero? is the effect of the lead treatment zero? are ravens pink?, Anderson et al. 2000), there must be a more concerted effort to provide evidence on meaningful questions that are important to a discipline. This is the critical point: the common failure to address important science questions in a fully competent fashion. […] “Let the computer find out” is a poor strategy for researchers who do not bother to think clearly about the problem of interest and its scientific setting. The sterile analysis of “just the numbers” will continue to be a poor strategy for progress in the sciences.
Researchers often resort to using a computer program that will examine all possible models and variables automatically. Here, the hope is that the computer will discover the important variables and relationships […] The primary mistake here is a common one: the failure to posit a small set of a priori models, each representing a plausible research hypothesis.”
“Model selection is most often thought of as a way to select just the best model, then inference is conditional on that model. However, information-theoretic approaches are more general than this simplistic concept of model selection. Given a set of models, specified independently of the sample data, we can make formal inferences based on the entire set of models. […] Part of multimodel inference includes ranking the fitted models from best to worst […] and then scaling to obtain the relative plausibility of each fitted model (gi) by a weight of evidence (wi) relative to the selected best model. Using the conditional sampling variance […] from each model and the Akaike weights […], unconditional inferences about precision can be made over the entire set of models. Model-averaged parameter estimates and estimates of unconditional sampling variances can be easily computed. Model selection uncertainty is a substantial subject in its own right, well beyond just the issue of determining the best model.”
“There are three general approaches to assessing model selection uncertainty: (1) theoretical studies, mostly using Monte Carlo simulation methods; (2) the bootstrap applied to a given set of data; and (3) utilizing the set of AIC differences (i.e., ∆i) and model weights wi from the set of models fit to data.”
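[As an aside: the ∆i and Akaike weights wi referred to above really are trivial to compute once you have the AIC value of each fitted model. A minimal Python sketch — the model names and AIC values below are invented purely for illustration:]

```python
import math

# Hypothetical AIC values for a small set of candidate models
# (the numbers are made up for illustration).
aic = {"g1": 102.3, "g2": 104.1, "g3": 110.8}

aic_min = min(aic.values())
# AIC differences: delta_i = AIC_i - AIC_min (the best model has delta = 0).
delta = {m: a - aic_min for m, a in aic.items()}

# Akaike weights: w_i = exp(-delta_i / 2) / sum_r exp(-delta_r / 2).
rel_lik = {m: math.exp(-d / 2) for m, d in delta.items()}
total = sum(rel_lik.values())
weights = {m: r / total for m, r in rel_lik.items()}

for m in sorted(weights, key=weights.get, reverse=True):
    print(f"{m}: delta = {delta[m]:.1f}, weight = {weights[m]:.3f}")
```

[The weights sum to one and can be read as the relative plausibility of each model within the candidate set, which is what makes the multimodel machinery below possible.]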
“Statistical science should emphasize estimation of parameters and associated measures of estimator uncertainty. Given a correct model […], an MLE is reliable, and we can compute a reliable estimate of its sampling variance and a reliable confidence interval […]. If the model is selected entirely independently of the data at hand, and is a good approximating model, and if n is large, then the estimated sampling variance is essentially unbiased, and any appropriate confidence interval will essentially achieve its nominal coverage. This would be the case if we used only one model, decided on a priori, and it was a good model, g, of the data generated under truth, f. However, even when we do objective, data-based model selection (which we are advocating here), the [model] selection process is expected to introduce an added component of sampling uncertainty into any estimated parameter; hence classical theoretical sampling variances are too small: They are conditional on the model and do not reflect model selection uncertainty. One result is that conditional confidence intervals can be expected to have less than nominal coverage.”
“Data analysis is sometimes focused on the variables to include versus exclude in the selected model (e.g., important vs. unimportant). Variable selection is often the focus of model selection for linear or logistic regression models. Often, an investigator uses stepwise analysis to arrive at a final model, and from this a conclusion is drawn that the variables in this model are important, whereas the other variables are not important. While common, this is poor practice and, among other issues, fails to fully consider model selection uncertainty. […] Estimates of the relative importance of predictor variables xj can best be made by summing the Akaike weights across all the models in the set where variable j occurs. Thus, the relative importance of variable j is reflected in the sum w+(j). The larger the w+(j) the more important variable j is, relative to the other variables. Using the w+(j), all the variables can be ranked in their importance. […] This idea extends to subsets of variables. For example, we can judge the importance of a pair of variables, as a pair, by the sum of the Akaike weights of all models that include the pair of variables. […] To summarize, in many contexts the AIC selected best model will include some variables and exclude others. Yet this inclusion or exclusion by itself does not distinguish differential evidence for the importance of a variable in the model. The model weights […] summed over all models that include a given variable provide a better weight of evidence for the importance of that variable in the context of the set of models considered.” [The reason why I’m not telling you how to calculate Akaike weights is that I don’t want to bother with math formulas in wordpress – but I guess all you need to know is that these are not hard to calculate. It should perhaps be added that one can also use bootstrapping methods to obtain relevant model weights to apply in a multimodel inference context.]
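[The w+(j) sums are just as mechanical to obtain once the Akaike weights are in hand. A small Python sketch — the model set, the variable names, and the weights are all invented for illustration:]

```python
# Hypothetical Akaike weights for models identified by the set of
# predictor variables each one includes (all numbers invented).
models = [
    ({"x1", "x2"}, 0.45),
    ({"x1"},       0.30),
    ({"x2", "x3"}, 0.15),
    ({"x3"},       0.10),
]

variables = {"x1", "x2", "x3"}
# Relative importance w+(j): the sum of the Akaike weights of all
# models in the set that include variable j.
importance = {
    v: sum(w for included, w in models if v in included) for v in variables
}

for v, w_plus in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"w+({v}) = {w_plus:.2f}")
```

[Ranking variables by w+(j) uses the whole model set, rather than just noting which variables happened to appear in the single AIC-best model.]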
“If data analysis relies on model selection, then inferences should acknowledge model selection uncertainty. If the goal is to get the best estimates of a set of parameters in common to all models (this includes prediction), model averaging is recommended. If the models have definite, and differing, interpretations as regards understanding relationships among variables, and it is such understanding that is sought, then one wants to identify the best model and make inferences based on that model. […] The bootstrap provides direct, robust estimates of model selection probabilities πi , but we have no reason now to think that use of bootstrap estimates of model selection probabilities rather than use of the Akaike weights will lead to superior unconditional sampling variances or model-averaged parameter estimators. […] Be mindful of possible model redundancy. A carefully thought-out set of a priori models should eliminate model redundancy problems and is a central part of a sound strategy for obtaining reliable inferences. […] Results are sensitive to having demonstrably poor models in the set of models considered; thus it is very important to exclude models that are a priori poor. […] The importance of a small number (R) of candidate models, defined prior to detailed analysis of the data, cannot be overstated. […] One should have R much smaller than n. MMI [Multi-Model Inference] approaches become increasingly important in cases where there are many models to consider.”
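[For the model-averaged estimates and unconditional sampling variances the authors keep referring to, a sketch of the computation, using what I believe is Burnham & Anderson’s unconditional variance estimator — the per-model estimates, conditional variances and weights below are all invented:]

```python
import math

# Hypothetical per-model estimates of a parameter theta common to all
# models, their conditional sampling variances, and Akaike weights
# (all numbers invented; weights sum to 1).
estimates = [1.20, 1.05, 0.90]   # theta_hat_i from models g1..g3
cond_var  = [0.04, 0.05, 0.09]   # var(theta_hat_i | g_i)
weights   = [0.60, 0.30, 0.10]   # Akaike weights w_i

# Model-averaged estimate: theta_bar = sum_i w_i * theta_hat_i.
theta_bar = sum(w * t for w, t in zip(weights, estimates))

# Unconditional standard error: each model's conditional variance is
# inflated by the model-selection term (theta_hat_i - theta_bar)^2
# before weighting, so the result exceeds the purely conditional SE.
se_uncond = sum(
    w * math.sqrt(v + (t - theta_bar) ** 2)
    for w, t, v in zip(weights, estimates, cond_var)
)
var_uncond = se_uncond ** 2

print(f"theta_bar = {theta_bar:.3f}, unconditional SE = {se_uncond:.3f}")
```

[The extra (theta_hat_i − theta_bar)² term is exactly the “added component of sampling uncertainty” from the model selection process that the quote above warns is missing from classical, model-conditional variances.]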
“In general there is a substantial amount of model selection uncertainty in many practical problems […]. Such uncertainty about what model structure (and associated parameter values) is the K-L [Kullback–Leibler] best approximating model applies whether one uses hypothesis testing, information-theoretic criteria, dimension-consistent criteria, cross-validation, or various Bayesian methods. Often, there is a nonnegligible variance component for estimated parameters (this includes prediction) due to uncertainty about what model to use, and this component should be included in estimates of precision. […] we recommend assessing model selection uncertainty rather than ignoring the matter. […] It is […] not a sound idea to pick a single model and unquestioningly base extrapolated predictions on it when there is model uncertainty.”
Despite not actually having read all that many books this year, I’m way behind on blogging the books I’ve read, so I thought I might as well try to catch up a bit. You can find my previous coverage of the book here and here.
In this post I’ll cover the chapters about the musculoskeletal system, the endocrine system, and the breast.
“Disorders of the musculoskeletal system make up 20–25 per cent of a general practitioner’s workload and account for significant disability in the general population. […] The chief symptoms to identify in the musculoskeletal assessment are: *pain *stiffness *swelling *impaired function *constitutional [regarding constitutional symptoms, “Patients with arthritis may describe symptoms of fatigue, fever, sweating and weight loss”]. […] As a rule mechanical disorders (e.g. OA [Osteoarthritis], spondylosis, and tendinopathies) are worsened by activity and relieved by rest. In severe degenerative disease the pain may, however, be present at rest and disturb sleep. Inflammatory disorders tend to be painful both at rest and during activity and are associated with worsened stiffness after periods of prolonged rest. The patient may note that stiffness is relieved somewhat by movement. Both mechanical and inflammatory disorders may be worsened by excessive movement.”
“The lifetime incidence of lower back pain is about 60 per cent and the greatest prevalence is between ages 45 and 65 years. Over 90 per cent of low back pain is mechanical and self-limiting. […] Indicators of serious pathology in lumbar pain: ‘red flags’ of serious pathology that requires further investigation […] are: *presenting under age 20 and over age 55 years *prolonged stiffness (>6 weeks) *sudden onset of severe pain *pain that disturbs sleep (>6 weeks) *thoracic pain *nerve root symptoms – including spinal claudication (pain on walking resolved by rest), saddle numbness, and loss of bladder or bowel control *chronic persistent pain (>12 weeks) *weight loss *history of carcinoma.”
“Osteoarthritis is a chronic degenerative and mechanical disorder characterized by cartilage loss. It is the most common form of arthritis, estimated to affect 15 per cent of the population of the UK over the age of 55 years. It is second only to cardiovascular disease as a cause of disability. Weight-bearing joints are chiefly involved (e.g. facets in the spine, hip and knee). […] There is little evidence to link OA with repetitive injury from occupation, except perhaps knee bending in men. Dockers and miners have a higher incidence of knee OA.”
“Rheumatoid arthritis […] is the most common ARD [Autoimmune Rheumatic Diseases] and is characterized by the presence of a symmetrical destructive polyarthritis with a predisposition for the small joints of the hands, wrists and feet. It is more common in women than men and may present at any age though most often in the third to fourth decade. […] Onset is typically insidious and progressive pain, stiffness and symmetrical swelling of small joints occurs. Up to a third of patients may have a subacute onset with symptoms of fatigue, malaise, weight loss, myalgia, morning stiffness and joint pain without overt signs of swelling. A mono- or bilateral arthropathy of the shoulder or wrist may account for up to 30–40 per cent of initial presentations”
“[Osteoporosis] remains a significant cause of morbidity and mortality. Peak bone mass is usually achieved in the third decade and is determined by both genetic and environmental factors. After the age of 35 the amount of bone laid down is less than that reabsorbed during each remodelling cycle. The net effect is age-related loss of bone mass. Up to 15 per cent of bone mass can also be lost over the 5-year period immediately post menopause. Symptomless reduction in bone mass and strength results in an increased risk of fracture; it is the resulting fractures that lead to pain and morbidity. Major risk factors to be considered in osteoporosis are: *race (white or Asian > African Caribbean) *age *gender *family history of maternal hip fracture *previous low trauma fracture (low trauma defined as no greater than falling from standing height) *long-term use of corticosteroids *malabsorption disorders *endocrinopathies […] *inflammatory arthritis […] Other risk factors include: *low body mass index […] *late menarche and early menopause *nulliparity *reduced physical activity *low intake of calcium (below 240 mg daily) *excess alcohol intake *smoking *malignancy (multiple myeloma).”
“Infection may give rise to systemic inflammatory arthritis or vasculitis. The condition ‘reactive arthritis’ is also recognized. […] It is usually triggered by sexually transmitted infection such as with Chlamydia trachomatis. The acute inflammatory reaction is treated with NSAIDs and corticosteroids and often ‘burns out’ after 6–18 months [Had to read that one twice: 18 months…]. It may leave lasting joint damage. […] Septic arthritis constitutes an acute emergency. The presentation is usually one of a rapid onset of severe pain in a hot swollen joint, the pain so severe that the patient cannot bear for it to be touched or moved.”
“Focal pain, swelling, or a low trauma fracture in the spine or long bones should alert suspicion [of neoplasia]. Primary tumours of bone include the benign (but often very painful) osteoid-osteoma, chondromas, and malignant osteosarcoma. Metastatic carcinoma may be secondary to a primary lesion in the lung, breast, prostate, kidney or thyroid. Haematological malignancies including lymphomas and leukaemias may also lead to diffuse bone involvement.”
“Diabetes mellitus is becoming a major public health problem. This is particularly true for type 2 diabetes, the prevalence of which is increasing rapidly due to the association with obesity and physical inactivity. Much of the morbidity, and cost, of diabetes care is due to the associated complications, rather than directly to hyperglycaemia and its management. Thyroid disease and polycystic ovarian syndrome are also prevalent [endocrine] conditions. Most other endocrine disorders are uncommon”
“The classic triad of symptoms associated with diabetes mellitus consists of: *thirst *polyuria (often nocturia) *weight loss.
Many patients will also experience pruritus or balanitis, fatigue and blurred vision. Some people, particularly those with newly presenting type 1 diabetes mellitus (T1DM) or with marked hyperglycaemia in type 2 diabetes mellitus (T2DM), may have a ‘full house’ of symptoms, in which case it is generally not difficult to suspect the diagnosis. However, other patients, particularly those with only modestly elevated blood glucose concentrations in T2DM, will have fewer, milder symptoms, and some may be entirely asymptomatic. […] symptoms potentially suggestive of diabetes may have alternative causes, particularly in elderly people, for example, frequency and nocturia in an older man may be due to bladder outflow obstruction, and many medical disorders are associated with weight loss. The symptom complex of thirst, polydipsia and polyuria most commonly suggests a diagnosis of uncontrolled diabetes mellitus but can occur in other settings. Some patients taking diuretics will experience similar symptoms. A dry mouth, perhaps associated with drug usage (e.g. tricyclic antidepressants) or certain medical conditions (e.g. Sjögren’s syndrome), may lead to increased fluid intake in an attempt at symptom relief.”
“The blood glucose concentration at diagnosis is not useful as a guide to whether an individual patient has T1DM or T2DM. Patients with T1DM can be in severe ketoacidosis with a blood glucose less than 20 mmol/L, and even below 10 mmol/L on occasions, whereas T2DM can present with a hyperosmolar state with blood glucose levels over 50 mmol/L.”
“30–50 per cent of patients with newly diagnosed T2DM will already have tissue complications at diagnosis due to the prolonged period of antecedent moderate and asymptomatic hyperglycaemia. […] Diabetes mellitus is much more than a disorder of glucose metabolism. The complications of diabetes can affect many of the organ systems leading to associated cardiac, vascular, renal, retinal, neurological and other disorders.”
“Pain is one of the commonest presenting disorders in the female breast, occurring in both pre-and postmenopausal women. […] In most women, there is no obvious or serious underlying breast pathology present […] In males, pain is not uncommon in gynaecomastia (swelling of male breast). […] A discrete lump, nodularity or thickening is the next most common mode of presentation. Size may vary (frequently ‘pea-sized’), but can be large. Onset may be acute (several days) or longstanding (several months). Fluctuation with the menstrual cycle is common in young women. Pain and tenderness are features of cysts, less common with fibroadenomas (unless rapidly growing or phylloides tumours), uncommon with cancer, except with rapidly expanding, aggressive (grade 3) and inflammatory tumours. The commonest lump in women below 30 years is a fibroadenoma; in women 30–45 years, a cyst and those over 45 years, cancer. […] Careful assessment of a lump can indicate whether the breast lesion is benign or malignant: *if it is rounded, smooth, mobile, tense and tender it is most likely to be a cyst (30 to 45 years of age) *if it is rounded, smooth, mobile, firm and non-tender it is most likely to be a fibroadenoma (under 30 years of age) *malignant lumps are rare in women under 30 years and uncommon under 40 years (4 per cent of breast cancers). Cancers are usually irregular, firm or hard, with variable involvement of overlying skin or deeper structures.”
“Retraction (intermittent, partial or chronic) is often a concern to women. It can be idiopathic or associated with malignancy in the retroareolar region, but usually is seen in the postmenopausal breast and is secondary to glandular atrophy and replacement by fibrosis and major duct ectasia. Congenital absence is very rare, whereas accessory nipples are seen in 2 per cent of women.” [Again, I had to read that one twice. 2 %! Who knew! Also, this condition seems to be even more common in males (see the link above).]
“Five to 10 per cent of women will, at some stage, present with a macrocyst. Microcysts are more common but tend to be occult. Breast cysts are commonest between the ages of 35 and 50, but can occur outside this age range, particularly in women who have been taking HRT. […] Patients present with a palpable lump or nodularity. When acute and large, the lump can be tender and the patient complains of pain. Typically cysts are well-circumscribed, smooth, mobile and, on occasion, tender lumps.”
“Nipple discharge in premenopausal women is likely to be associated with, or be due to, benign disease. It is the predominant clinical feature in up to 10 per cent of women presenting with breast cancer. […] *Purulent and coloured discharges are usually indicative of benign disease (infection and fibrocystic disease, respectively). *Spontaneous bilateral milky discharge (multiple ducts) most commonly occurs in women of reproductive age and is called galactorrhoea. […] *Clear, serous or bloodstained discharges are not infrequently associated with neoplastic disease”
“Carcinoma of the breast is one of the most common cancers (23 per cent of all female malignancies in the developed world) […]. One in 10 women develops breast cancer during her lifetime. […] Breast cancer is very rare in women under the age of 25. About 4 per cent occur under the age of 40. There is a plateau in incidence between the ages of 45 and 55, and beyond 55 years it continues to increase steadily into the 80s. […] The most common (70 per cent) presentation is a palpable lump, nodularity or thickening in the breast, usually detected by the patient. Typically the lump is firm or hard, well defined, with an irregular surface. […] About 25 per cent of women in the UK present with large primary tumours […], or locally advanced breast cancers […]. In some cases, particularly elderly patients, the tumour may have been present for some time, but hidden by the patient from her relatives due to fear and anxiety […]. Occasionally patients may even deny the presence of a tumour as a psychological coping strategy. […] Breast cancer is the most common malignant condition occurring during pregnancy. The incidence is approximately 1 in 2500 pregnancies, and poses many medical and psychological problems, both for the woman and her relatives.”
“This report shows trends and group differences in current marital status, with a focus on first marriages among women and men aged 15–44 years in the United States. Trends and group differences in the timing and duration of first marriages are also discussed. […] The analyses presented in this report are based on a nationally representative sample of 12,279 women and 10,403 men aged 15–44 years in the household population of the United States.”
“In 2006–2010, […] median age at first marriage was 25.8 for women and 28.3 for men.”
“Among women, 68% of unions formed in 1997–2001 began as a cohabitation rather than as a marriage (8). If entry into any type of union, marriage or cohabitation, is taken into account, then the timing of a first union occurs at roughly the same point in the life course as marriage did in the past (9). Given the place of cohabitation in contemporary union formation, descriptions of marital behavior, particularly those concerning trends over time, are more complete when cohabitation is also measured. […] Trends in the current marital statuses of women using the 1982, 1995, 2002, and 2006–2010 NSFG indicate that the percentage of women who were currently in a first marriage decreased over the past several decades, from 44% in 1982 to 36% in 2006–2010 […]. At the same time, the percentage of women who were currently cohabiting increased steadily from 3.0% in 1982 to 11% in 2006–2010. In addition, the proportion of women aged 15–44 who were never married at the time of interview increased from 34% in 1982 to 38% in 2006–2010.”
“In 2006–2010, the probability of first marriage by age 25 was 44% for women compared with 59% in 1995, a decrease of 25%. By age 35, the probability of first marriage was 84% in 1995 compared with 78% in 2006–2010 […] By age 40, the difference in the probability of age at first marriage for women was not significant between 1995 (86%) and 2006–2010 (84%). These findings suggest that between 1995 and 2006–2010, women married for the first time at older ages; however, this delay was not apparent by age 40.”
“In 2006–2010, the probability of a first marriage lasting at least 10 years was 68% for women and 70% for men. Looking at 20 years, the probability that the first marriages of women and men will survive was 52% for women and 56% for men in 2006–2010. These levels are virtually identical to estimates based on vital statistics from the early 1970s (24). For women, there was no significant change in the probability of a first marriage lasting 20 years between the 1995 NSFG (50%) and the 2006–2010 NSFG (52%)”
“Women who had no births when they married for the first time had a higher probability of their marriage surviving 20 years (56%) compared with women who had one or more births at the time of first marriage (33%). […] Looking at spousal characteristics, women whose first husbands had been previously married (38%) had a lower probability of their first marriage lasting 20 years compared with women whose first husband had never been married before (54%). Women whose first husband had children from previous relationships had a lower probability that their first marriage would last 20 years (37%) compared with first husbands who had no other children (54%). For men, […] patterns of first marriage survival […] are similar to those shown for women for marriages that survived up to 15 years.”
“These data show trends that are consistent with broad demographic changes in the American family that have occurred in the United States over the last several decades. One such trend is an increase in the time spent unmarried among women and men. For women, there was a continued decrease in the percentage currently married for the first time — and an increase in the percent currently cohabiting — in 2006–2010 compared with earlier years. For men, there was also an increase in the percentage unmarried and in the percentage currently cohabiting between 2002 and 2006–2010. Another trend is an increase in the age at first marriage for women and men, with men continuing to marry for the first time at older ages than women. […] Previous research suggests that women with more education and better economic prospects are more likely to delay first marriage to older ages, but are ultimately more likely to become married and to stay married […]. Data from the 2006–2010 NSFG support these findings”
ii. Involuntary Celibacy: A life course analysis (review). This is not a link to the actual paper – the paper is not freely available, which is why I do not link to it – but rather a link to a report talking about what’s in that paper. However, I found some of the stuff interesting:
“A member of an on-line discussion group for involuntary celibates approached the first author of the paper via email to ask about research on involuntary celibacy. It soon became apparent that little had been done, and so the discussion group volunteered to be interviewed and a research team was put together. An initial questionnaire was mailed to 35 group members, and they got a return rate of 85%. They later posted it to a web page so that other potential respondents had access to it. Eventually 60 men and 22 women took the survey.”
“Most were between the ages of 25-34, 28% were married or living with a partner, 89% had attended or completed college. Professionals (45%) and students (16%) were the two largest groups. 85% of the sample was white, 89% were heterosexual. 70% lived in the U.S. and the rest primarily in Western Europe, Canada and Australia. […] the value of this research lies in the rich descriptive data obtained about the lives of involuntary celibates, a group about which little is known. […] The questionnaire contained 13 categorical, close-ended questions assessing demographic data such as age, sex, marital status, living arrangement, income, education, employment type, area of residence, race/ethnicity, sexual orientation, religious preference, political views, and time spent on the computer. 58 open-ended questions investigated such areas as past sexual experiences, current relationships, initiating relationships, sexuality and celibacy, nonsexual relationships and the consequences of celibacy. They started out by asking about childhood experiences, progressed to questions about teen and early adult years and finished with questions about current status and the effects of celibacy.”
“78% of this sample had discussed sex with friends, 84% had masturbated as teens. The virgins and singles, however, differed from national averages in their dating and sexual experiences.”
“91% of virgins and 52% of singles had never dated as teenagers. Males reported hesitancy in initiating dates, and females reported a lack of invitations by males. For those who did date, their experiences tended to be very limited. Only 29% of virgins reported first sexual experiences that involved other people, and they frequently reported no sexual activity at all except for masturbation. Singles were more likely than virgins to have had an initial sexual experience that involved other people (76%), but they tended to report that they were dissatisfied with the experience. […] While most of the sample had discussed sex with friends and masturbated as teens, most virgins and singles did not date. […] Virgins and singles may have missed important transitions, and as they got older, their trajectories began to differ from those of their age peers. Patterns of sexuality in young adulthood are significantly related to dating, steady dating and sexual experience in adolescence. It is rare for a teenager to initiate sexual activity outside of a dating relationship. While virginity and lack of experience are fairly common in teenagers and young adults, by the time these respondents reached their mid-twenties, they reported feeling left behind by age peers. […] Even for the heterosexuals in the study, it appears that lack of dating and sexual experimentation in the teen years may be precursors to problems in adult sexual relationships.”
“Many of the virgins reported that becoming celibate involved a lack of sexual and interpersonal experience at several different transition points in adolescence and young adulthood. They never or rarely dated, had little experience with interpersonal sexual activity, and had never had sexual intercourse. […] In contrast, partnered celibates generally became sexually inactive by a very different process. All had initially been sexually active with their partners, but at some point stopped. At the time of the survey, sexual intimacy no longer or very rarely occurred in their relationships. The majority of them (70%) started out having satisfactory relationships, but they slowly stopped having sex as time went on.”
“shyness was a barrier to developing and maintaining relationships for many of the respondents. Virgins (94%) and singles (84%) were more likely to report shyness than were partnered respondents (20%). The men (89%) were more likely to report being shy than women (77%). 41% of virgins and 23% of singles reported an inability to relate to others socially. […] 1/3 of the respondents thought their weight, appearance, or physical characteristics were obstacles to attracting potential partners. 47% of virgins and 56% of singles mentioned these factors, compared to only 9% of partnered people. […] Many felt that their sexual development had somehow stalled in an earlier stage of life; feeling different from their peers and feeling like they will never catch up. […] All respondents perceived their lack of sexual activity in a negative light and in all likelihood, the relationship between involuntary celibacy and unhappiness, anger and depression is reciprocal, with involuntary celibacy contributing to negative feelings, but these negative feelings also causing people to feel less self-confident and less open to sexual opportunities when they occur. The longer the duration of the celibacy, the more likely our respondents were to view it as a permanent way of life. Virginal celibates tended to see their condition as temporary for the most part, but the older they were, the more likely they were to see it as permanent, and the same was true for single celibates.”
It seems to me from ‘a brief look around’ that not a lot of research has been done on this topic, which I find annoying. Because yes, I’m well aware these are old data and that the sample is small and ‘convenient’. Here’s a brief related study on the ‘Characteristics of adult women who abstain from sexual intercourse’ – the main findings:
“Of the 1801 respondents, 244 (14%) reported abstaining from intercourse in the past 6 months. Univariate analysis revealed that abstinent women were less likely than sexually active women to have used illicit drugs [odds ratio (OR) 0.47; 95% CI 0.35–0.63], to have been physically abused (OR 0.44, 95% CI 0.31–0.64), to be current smokers (OR 0.59, 95% CI 0.45–0.78), to drink above risk thresholds (OR 0.66, 95% CI 0.49–0.90), to have high Mental Health Inventory-5 scores (OR 0.7, 95% CI 0.54–0.92) and to have health insurance (OR 0.74, 95% CI 0.56–0.98). Abstinent women were more likely to be aged over 30 years (OR 1.98, 95% CI 1.51–2.61) and to have a high school education (OR 1.38, 95% CI 1.01–1.89). Logistic regression showed that age >30 years, absence of illicit drug use, absence of physical abuse and lack of health insurance were independently associated with sexual abstinence.
Prolonged sexual abstinence was not uncommon among adult women. Periodic, voluntary sexual abstinence was associated with positive health behaviours, implying that abstinence was not a random event. Future studies should address whether abstinence has a causal role in promoting healthy behaviours or whether women with a healthy lifestyle are more likely to choose abstinence.”
Here’s another more recent study – Prevalence and Predictors of Sexual Inexperience in Adulthood (unfortunately I haven’t been able to locate a non-gated link) – which I found and may have a closer look at later. A few quotes/observations:
“By adulthood, sexual activity is nearly universal: 97% of men and 98% of women between the ages of 25-44 report having had vaginal intercourse (Mosher, Chandra, & Jones, 2005). […] Although the majority of individuals experience this transition during adolescence or early adulthood, a small minority remain sexually inexperienced far longer. Data from the NSFG indicate that about 5% of males and 3% of females between the ages of 25 and 29 report never having had vaginal sex (Mosher et al., 2005). While the percentage of sexually inexperienced participants drops slightly among older age groups, between 1 and 2% of both males and females continue to report that they have never had vaginal sex even into their early 40s. Other nationally representative surveys have yielded similar estimates of adult sexual inexperience (Billy, Tanfer, Grady, & Klepinger, 1993)”
“Individuals who have not experienced any type of sexual activity as adults […] may differ from those who only abstain from vaginal intercourse. For example, vaginal virgins who engage in “everything but” vaginal sex – sometimes referred to as “technical virgins” […] – may abstain from vaginal sex in order to avoid its potential negative consequences […]. In contrast, individuals who have neither coital nor noncoital experience may have been unable to attract sexual partners or may have little interest in sexual involvement. Because prior analyses have generally conflated these two populations, we know virtually nothing about the prevalence or characteristics of young adults who have abstained from all types of sexual activity.”
“We used data from 2,857 individuals who participated in Waves I–IV of the National Longitudinal Study of Adolescent Health (Add Health) and reported no sexual activity (i.e., oral-genital, vaginal, or anal sex) by age 18 to identify, using discrete-time survival models, adolescent sociodemographic, biosocial, and behavioral characteristics that predicted adult sexual inexperience. The mean age of participants at Wave IV was 28.5 years (SD = 1.92). Over one out of eight participants who did not initiate sexual activity during adolescence remained abstinent as young adults. Sexual non-attraction significantly predicted sexual inexperience among both males (aOR = 0.5) and females (aOR = 0.6). Males also had lower odds of initiating sexual activity after age 18 if they were non-Hispanic Asian, reported later than average pubertal development, or were rated as physically unattractive (aORs = 0.6–0.7). Females who were overweight, had lower cognitive performance, or reported frequent religious attendance had lower odds of sexual experience (aORs = 0.7–0.8) while those who were rated by the interviewers as very attractive or whose parents had lower educational attainment had higher odds of sexual experience (aORs = 1.4–1.8). Our findings underscore the heterogeneity of this unique population and suggest that there are a number of different pathways that may lead to either voluntary or involuntary adult sexual inexperience.”
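The “discrete-time survival models” mentioned in the abstract are typically fit by expanding each respondent into one row per period at risk and then running a logistic regression on the resulting person-period data. A minimal sketch of that expansion step with made-up records; the field names are illustrative, not actual Add Health variables:

```python
def person_periods(subjects):
    """Expand (id, periods_observed, event_occurred) records into
    person-period rows: one row per period the subject was at risk,
    with event = 1 only in the final period if the event occurred."""
    rows = []
    for sid, n_periods, event in subjects:
        for t in range(1, n_periods + 1):
            rows.append({"id": sid, "period": t,
                         "event": int(event and t == n_periods)})
    return rows

# Two hypothetical subjects: one initiates sexual activity in period 3,
# one is censored (still inexperienced) after 2 observed periods.
rows = person_periods([("A", 3, True), ("B", 2, False)])
```

A logistic regression of `event` on period indicators plus the adolescent predictors would then yield adjusted odds ratios (aORs) of the kind the abstract reports.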
“Breastfeeding has clear short-term benefits, but its long-term consequences on human capital are yet to be established. We aimed to assess whether breastfeeding duration was associated with intelligence quotient (IQ), years of schooling, and income at the age of 30 years, in a setting where no strong social patterning of breastfeeding exists. […] A prospective, population-based birth cohort study of neonates was launched in 1982 in Pelotas, Brazil. Information about breastfeeding was recorded in early childhood. At 30 years of age, we studied the IQ (Wechsler Adult Intelligence Scale, 3rd version), educational attainment, and income of the participants. For the analyses, we used multiple linear regression with adjustment for ten confounding variables and the G-formula. […] From June 4, 2012, to Feb 28, 2013, of the 5914 neonates enrolled, information about IQ and breastfeeding duration was available for 3493 participants. In the crude and adjusted analyses, the durations of total breastfeeding and predominant breastfeeding (breastfeeding as the main form of nutrition with some other foods) were positively associated with IQ, educational attainment, and income. We identified dose-response associations with breastfeeding duration for IQ and educational attainment. In the confounder-adjusted analysis, participants who were breastfed for 12 months or more had higher IQ scores (difference of 3.76 points, 95% CI 2.20–5.33), more years of education (0.91 years, 0.42–1.40), and higher monthly incomes (341.0 Brazilian reals, 93.8–588.3) than did those who were breastfed for less than 1 month. The results of our mediation analysis suggested that IQ was responsible for 72% of the effect on income.”
This is a huge effect size.
iv. Grandmaster blunders (chess). This is quite a nice little collection; some of the best players in the world have actually played some really terrible moves over the years, which I find oddly comforting in a way.
v. History of the United Kingdom during World War I (wikipedia, ‘good article’). A few observations from the article:
“In 1915, the Ministry of Munitions under David Lloyd George was formed to control munitions production and had considerable success. By April 1915, just two million rounds of shells had been sent to France; by the end of the war the figure had reached 187 million, and a year’s worth of pre-war production of light munitions could be completed in just four days by 1918.”
“During the war, average calorie intake [in Britain] decreased only three percent, but protein intake six percent.”
“Energy was a critical factor for the British war effort. Most of the energy supplies came from coal mines in Britain, where the issue was labour supply. Critical however was the flow of oil for ships, lorries and industrial use. There were no oil wells in Britain so everything was imported. The U.S. pumped two-thirds of the world’s oil. In 1917, total British consumption was 827 million barrels, of which 85 percent was supplied by the United States, and 6 percent by Mexico.”
“In the post war publication Statistics of the Military Effort of the British Empire During the Great War 1914–1920 (The War Office, March 1922), the official report lists 908,371 ‘soldiers’ as being either killed in action, dying of wounds, dying as prisoners of war or missing in action in the World War. (This is broken down into the United Kingdom and its colonies 704,121; British India 64,449; Canada 56,639; Australia 59,330; New Zealand 16,711; South Africa 7,121.) […] The civilian death rate exceeded the prewar level by 292,000, which included 109,000 deaths due to food shortages and 183,577 from Spanish Flu.”
vi. House of Plantagenet (wikipedia, ‘good article’).
vii. r/Earthp*rn. There are some really nice pictures here…
“92 per cent of men and 86 per cent of women in Britain drink alcohol (DoH 2002a).”
I came to like the chapter about alcohol more than I did at first after I yesterday read Boccia et al.’s treatment of the same topic (their coverage is much poorer with regard to some key issues). When thinking about how to blog this chapter I considered including a table from the book, table 5.1, in full, even though it’s rather large, but I decided against it as I might as well summarize what it covers myself here. The observation that addiction and physical dependence should be treated as separate entities is not included in the coverage, even though Clark & Treisman considered this a very important point to keep in mind (see also this post: “It is very important to realize that addiction and physical dependence are different phenomena with different underlying brain substrates”); still, the coverage is much more detailed than the public health review text alluded to above. Some of the chapter’s shortcomings are presumably due to the intended scope of the coverage, which makes the omission of some of the important distinctions seem understandable, sort of; the authors note early on that they mostly focus on volitional rather than dependent drinking, because the book deals with lifestyle behaviours over which individuals have some level of control (but if you’re covering smoking and illegal substance abuse in your book, why not cover dependent drinking as well? I still find their coverage of some of these issues somewhat puzzling…). Anyway, table 5.1 includes the ICD-10 diagnostic criteria for alcohol dependence, and these criteria include (my bold):
Evidence of tolerance (need more alcohol to get the same effect); physiological withdrawal when alcohol use is reduced or ceased (or use of a closely related substance with the intention of relieving or avoiding withdrawal symptoms); persisting with alcohol use despite clear evidence of harmful consequences; preoccupation with alcohol use (important other pleasures/interests given up or reduced because of alcohol, much time spent on activities such as procuring alcohol, consuming it, or recovering from its effects); difficulty controlling drinking behaviour in terms of onset, termination, or level of use – evidenced by alcohol being consumed in larger amounts or over a longer period than intended, or by any unsuccessful effort or persistent desire to cut down; and lastly a strong desire or compulsion to use alcohol.
“The majority of people who drink alcohol have not been diagnosed as dependent drinkers. Orton (2001) reported that 7.5 per cent of men and 2.1 per cent of women in Britain in the 1990s could be classified as dependent on alcohol. […] Nonresponse bias is a particular problem in drinking surveys. […] Issues of response bias are a common concern and one that afflicts many of the lifestyle surveys reported throughout this text. […] An important issue for measurement of drinking is the validity and reliability of the instrument in question and unfortunately many widely used measures of alcohol consumption have not been tested for such psychometric properties. […] Probably the most convincing evidence that self-report measures of drinking in any one study do, at the very least, place people in an appropriate place on the drinking continuum compared to their peers is the relationship between self-reported drinking and proven increased risk for a number of alcohol related conditions (Room et al. 2005).”
“Men drink more alcohol than women and they are more likely to exceed their daily and/or weekly guidelines, even though those guidelines are higher than those recommended for women […]. This gender difference in alcohol consumption is consistently reported in the national surveys and elsewhere […] and furthermore is similar to the gendered drinking patterns of previous decades […] There are few clear socio-economic trends in alcohol consumption evident from the National Surveys”
“People under the influence of alcohol are more likely to behave aggressively and this can lead to physical violence that can harm themselves and others […]. Offenders are believed to be under the influence of alcohol in 46 per cent of incidents of domestic violence and 44 per cent of acquaintance violence. […] 15 per cent of rape victims recorded by the 2001 British Crime Survey were raped when they were under the influence of alcohol [I was actually really surprised the number was that low…] […] People under the influence of alcohol are also more likely to have accidents. […] The World Health Organisation (2002) estimates that 20 per cent of motor vehicle accidents worldwide are alcohol related.”
“Alcohol has been implicated in more than 60 medical conditions, predominantly with negative, but occasionally with positive, consequences […] the relationship between alcohol consumption and health is not always linear. […] Episodic heavy drinking, even when the overall volume of alcohol intake is low, has been found to increase the risk for a number of cardiovascular conditions. […] This association is physiologically consistent with the increased clotting, lower threshold for ventricular fibrillation and elevation of low density lipoproteins that occur after heavy drinking (Room et al. 2005). […] Breast cancer risk increases linearly with increased alcohol consumption: 10 grams of alcohol a day (an average UK unit) increases the relative risk of breast cancer by 9 per cent. A daily consumption of between 30 and 60 grams a day increases the relative risk by 41 per cent […] In England and Wales alcohol-related injury or illness accounts for 180,000 hospital admissions a year (HM Government 2007).”
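A quick consistency check on the quoted breast-cancer figures: a 9% relative-risk increase per 10 g/day, if compounded multiplicatively, gives roughly the quoted 41% within the 30–60 g/day range (1.09^4 ≈ 1.41 at 40 g/day), which suggests the “linear” increase is linear on the log-risk scale rather than in absolute percentage points:

```python
rr_per_10g = 1.09  # +9% relative risk per 10 g alcohol/day (quoted figure)
for grams in (10, 20, 30, 40, 50, 60):
    rr = rr_per_10g ** (grams / 10)   # multiplicative compounding
    print(f"{grams} g/day -> relative risk {rr:.2f}")
```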
“Alcohol serves an important social function. It enhances social integration and facilitates the development of relationships (Kuther and Timoshin 2003). It is hardly surprising that people drink most at a period in their lives [teen-age years, early twenties] which is normally associated with the development of stable adult relationships (Paglia and Room 1999). Increased levels of drinking in newly divorced people may be in part due to the breakdown of stable relationships and the desire to establish new relationships (HM Government 2007). Social isolation is a key factor in poor health outcomes […] so the positive social function of alcohol in enabling people to develop social relationships should not be overlooked.”
“In contrast to other lifestyle behaviours where social norms have been argued to play little or no part in the explanation for variations in behaviour, social norms are consistently reported to be useful in explaining variations in drinking behaviour”
“it is well established that the earlier a person starts to drink, smoke or use illegal drugs the higher the risk of later abuse […] There is evidence that people drink less if the price of alcohol increases […] and that those of particular concern, heavy drinkers and young people, both respond to price increases by drinking less […] Many interventions to encourage sensible drinking are aimed at adolescents and young people with the goal of preventing the establishment of unhealthy drinking habits. The rationale for a predominance of interventions for this age group includes the indisputable fact that young people are the heaviest drinkers in society […] Many early drinking interventions are educational in nature. In essence these are risk communication messages and the evidence from psychological research is that improving risk perceptions will have little impact on levels of drinking. Unsurprisingly then, there is little evidence that alcohol education and health promotion have any positive effect on drinking habits in Britain […] These campaigns are heard and understood because knowledge increases in targeted populations […] so it is not that the message is failing to reach the designated audience, rather the message has no impact on behaviour. […] Foxcroft et al. (2003) reviewed the effectiveness of programmes designed to prevent excessive drinking in young people. Worryingly, [they] found very little evidence that any of these programmes were effective. Among the studies with medium-term followup that met the methodological guidelines the majority, 19 studies, found no evidence of intervention effectiveness. Several of these studies had previously reported short-term effectiveness which demonstrates the importance of longer term follow-up. […] There are two concerns from these studies on early drinking interventions. First, there are a wealth of studies that report no reduction in any measure of drinking. Second, research has failed to consistently test and tease out what is effective.”
“There is considerable variation in the prevalence of smoking worldwide. In sub-Saharan Africa less than 10 per cent of the population smoke, whereas in Japan this figure rises to above 50 per cent, and in Indonesia 69 per cent, with almost three-quarters of the Vietnamese population smoking (Edwards 2004).” [I had no idea the numbers were that high anywhere… (and I’m perhaps slightly skeptical, in particular about the Japanese estimate; a 50+% smoking prevalence seems to not fit very well with the very high Japanese life expectancy)]
“Despite the health effects of smoking being known since the 1960s, and the health impact being publicised, some 12 million individuals still smoke in the UK: 25 per cent of men and 23 per cent of women (ONS 2007). These figures have shown a substantial decrease since the early 1970s: for example in the 1970s the comparable figures were 51 per cent of men and 41 per cent of women smoking.” [If you’re curious about Danish figures, I blogged some Danish alcohol and smoking stats some years back here (the post is in Danish)] […] smoking is the highest in the 20–24 year age group (about 36 per cent) and the lowest in the over 65 years (about 15 per cent). This reflects both the fact that many former smokers will have quit and also that about a quarter of smokers die before reaching retirement age (ONS 2007). […] in the UK it is suggested that annually some 120,000 people die as a result of their smoking habit (440,000 in the United States). Every year, tobacco smoking kills 5 million people worldwide (Perkins et al. 2008) […] Deaths caused by tobacco smoking in the UK are higher than the number of deaths caused by road traffic accidents (3,500), other accidents (8,500), poisoning and overdose (900), alcoholic liver disease (5,000), suicide (4,000) and HIV infection (250). Almost a half of all regular smokers will be killed by their habit. A man who smokes cuts short his life by 13.2 years and female smokers lose 14.5 years (ASH 2008).”
“It is usually teenagers who experiment with smoking, with very few smokers starting after the age of 25 years […]. There are a number of reasons why people start smoking, but these are mainly related to psychosocial motives […] One of the major reasons for experimenting with cigarettes is social pressure from peers or older siblings […] adolescents are more likely to smoke cigarettes if their parents smoke […] Research has also indicated that teenagers underestimate the health risk of smoking […] and they also believe that they will quit before they do themselves serious damage […]. Hence, they smoke in spite of knowing the health damage effects of smoking: they know of them, they just don’t think it will impact upon them. […] of all the lifestyle behaviours discussed in this book smoking has the simplest relationship with social class and is the only behaviour to demonstrate a totally linear relationship with class.”
“One of the major attempts to reduce smoking has been the introduction of graphic warning labels on cigarette packets or on posters and billboards. […] there is very little evidence of the success of this form of approach. When politicians are asked for the evidence of such approaches there is much filibustering and some reference to dated research which does not stand up to scrutiny (Ruiter and Kok 2005). […] the evidence can be described as, at best, insubstantial. […] there are a large number of studies that highlight that some type of in-person or telephone behavioural support with NRT [nicotine replacement therapy] increases quit rates, especially those using nicotine gum […]. This support works by increasing motivation for quitting and remaining tobacco-free. However, most quitters attempt to stop smoking by use of NRTs alone and overlook the behavioural and psychological support required to enhance and maintain the necessary motivation”
The stuff below is from the smoking chapter, but might easily have been found in a very different chapter (or even in a different book?):
“Motivational interviewing can be defined as ‘a client-centred, directive method for enhancing intrinsic motivation to change by exploring and resolving ambivalence’ (Miller and Rollnick 2002). Motivational interviewing has as its goal the simple expectation that increasing an individual’s motivation to consider change rather than showing them how to change should be the key step. If a person is not motivated to change then it is irrelevant if they know how to do it or not. […] Motivational interviewing (MI) is a technique based on cognitive-behavioural therapy which aims to enhance an individual’s motivation to change health behaviour. The whole process aims to help the patient understand their thought processes and to identify how their thought processes help produce the inappropriate behaviour and how their thought processes can be changed to develop alternative, health-promoting behaviours. Motivational strategies include eight components that are designed to increase the level of motivation the person has towards changing a specific behaviour. […] The eight components are: *giving advice (about specific behaviours to be changed) *removing barriers (often about access to particular help) *providing choice (making it clear that if they choose not to change that is their right and it is their choice […] *decreasing desirability (of the ambivalence towards change or the status quo) *practising empathy *providing feedback […] *clarifying goals (feedback should be compared with a standard (an ideal) *active helping”.
“The definition of ‘lapse’ and ‘relapse’ has been debated in various forums […] but simply a ‘lapse’ is a slip into smoking behaviour, whereas ‘relapse’ refers to long-term failure. Most smokers who attempt to quit do so through self-quitting […] but the rates of success are very low with reports suggesting that only about 3–5 per cent of those self-quitting attain long-term abstinence at 6–12 months (Hughes et al. 2004). More recently, self-quitters have been aided by being able to purchase over the counter NRT and although this can double the rate of success this is still a paltry 6–10 per cent success rate. […] Although the majority of smokers want to stop smoking and predict that they will have stopped in twelve months, only 2–3 per cent actually stops permanently a year (Taylor et al. 2006).”
“In London, the area with the highest prevalence of HIV in the UK, 30 per cent of people did not know HIV could be transmitted through unprotected sex (National AIDS Trust 2006; UNAIDS 2006). [first thought: Some of these have got to be joke responses] […] [in the UK] the number of women diagnosed with HIV has increased in recent years and in 2007 it was some 40 per cent of the total (compared to 10 per cent of all diagnoses in 1990). […] 95 per cent of 16–24 year olds who use a condom do so in order to prevent pregnancy whereas only 71 per cent report using a condom in order to prevent infection. Furthermore, less than half (48 per cent) of men and only 37 per cent of women report using a condom ‘always’. […] At least 50 per cent of sexually active men and women acquire genital HPV infection at some point in their lives […] Regarding HIV it is estimated that one-quarter of people living with the disease do not know that they have it and are therefore at risk of transmitting the virus to others (CDC 2006e).”
“The pharmacological effects of alcohol and various other non-prescription substances tend to have the effect of reducing inhibitions, boosting confidence, intensifying emotions and increasing the importance of immediate cues such as sexual desire, at the expense of more future-oriented considerations such as STIs. As a result, users have been shown to engage in more risky sexual behaviours [related link (well, sort of related – if you skip the first paragraph and see link i. and ii…)] […] Alcohol use and sexual activity often co-occur and more than one-quarter of sexually active teens used alcohol or drugs during their last sexual experience […] However, not only does the condom have to be used, but also it has to be used effectively (i.e. properly). Hatherall et al. (2007) report that a sizeable minority (between 12 and 40 per cent) applied a condom imperfectly. […] it is well documented that the earlier first sex occurs the less likely it is that contraception will be used […] Reviews have shown that school-based sex education leads to improved awareness of risk and knowledge of protection strategies, and increases intention to adopt safer sex behaviours. It has also been found to delay sexual debut (Kirby et al. 2006).”
This evening IM Christof Sielecki, the guy behind the ChessExplained youtube account, gave an online simultaneous display. These are events where a very strong player takes on many opponents at the same time, to see how well he does against the opposition. According to the original plan he was supposed to play 20 different opponents, but in the end he only played 18; I was one of the players he faced during the event. He won 17 games and drew one. Not surprisingly I lost my game, but I held out for almost three hours, and he had some really nice things to say about my play during the game (see comments below). You can watch the entire ‘show’ here if you haven’t got anything better to do (I sort of hope you do…), and you can see my own game against him here (I was black – Christof had the white pieces in all games); it should perhaps be noted that I spent most of my time on the first 25 moves or so, got into severe time trouble, and was playing basically only on the increment (30 seconds/move) for the last 20 moves of the game.
As mentioned he had nice things to say about my play, and I’m actually quite satisfied with my play even if I lost. A few quotes from his commentary during the game:
“Very solid game here by the black player.” (43 minutes into the game)
“What can I do, this guy is playing very, very solid chess.” (49 minutes…)
“that’s tough, that’s tough business here, it’s not easy at all …this is one of – he’s playing this very, very solidly. […] I have absolutely nothing here.” (after 17…Re7, roughly 1 hour and 13 minutes into the game)
“Ah, yeah, a5 … yeah, what can you do, he’s playing well…” (after 23…a5 – 1 hour, 52 minutes…)
“Ahm, okay. He keeps defending …that guy, he keeps defending very, very well.” (after 35…g6 – 2 hours, 19 minutes)
“I’m kind of trying to win here, maybe in a situation where it’s not justified.” (after 41…d4 – 2 hours, 28 minutes)
“He played a really, really solid defence, this guy” (2 hours 42 minutes in)
“[M]any [children] show fear and avoidance of novel foods. The tendency to reject novel foods has been termed neophobia. Research has begun to reveal how early experience and learning can reduce the neophobic response to new foods, thereby enhancing dietary variety. For example, Birch and Marlin (1982) found that when 2 year olds were given varying numbers of opportunities to taste new fruits or cheeses, their preferences increased with frequency of exposure. Researchers found that between five and ten exposures to a new food were necessary before preference for that food increased. In another study, Gerrish and Mennella (2001) investigated the acceptance of a novel taste (pureed carrot) by infants who had previously experienced a range of tastes that included many vegetables but not carrot. Exposure to fruit, carrots alone or a variety of vegetables resulted in an increased acceptance of pureed carrot. Furthermore, those who had been exposed to a variety of vegetables were also more likely to eat other novel foods. Researchers concluded that familiarity with a variety of flavours increased the acceptance of novel foods. The implication was that parents should expose their children to a wide variety of tastes to encourage the acceptance of novel foods. […] exposure is a major factor in encouraging consumption. […] during childhood, the neophobic response to new foods decreases with age […]. Although repeated opportunities to taste and eat new food has been found to reduce neophobia and enhance acceptance, merely smelling or looking at the food has no such effect (Birch et al. 1987). This finding is consistent with the learned safety hypothesis which suggests that neophobia is only reduced as we learn that the food is safe to eat and does not cause illness […]. 
Further evidence suggests that watching others consume the food may provide a form of ‘exposure by proxy’ or modelling which could also reduce rejection […] observing a parent eating energy dense food could potentially encourage a child to establish similar food preferences. The effectiveness of the role model has been found to differ depending on the relationship between the child and the model. […] Birch (1980) and Duncker (1938, cited in Birch 1999) report that older children are more effective role models than younger children; Harper and Sanders (1975) report that mothers are more effective than strangers; and for older preschool children, adult heroes are more effective than ordinary adults (Birch 1999).”
“Promise of a reward is a time-honoured parental tactic for promoting consumption of healthy food. Nevertheless, it has been argued that treating food consumption in this way may actually decrease liking for that food. Lepper and Greene’s (1978) overjustification theory argues that offering a reward for an action devalues it for the child. In support of this a number of studies have reported decreased liking for foods when children are rewarded for eating them […] Horne et al. (2004) argue that in order for rewards to be effective, it is important that they are highly desirable and that they indicate to the child that they are for behaviour which is enjoyable and high status. Other studies have investigated the impact of using food as a reward. For example, Birch et al. (1980) presented children with foods either as a reward, a snack or in a non-social situation and found that acceptance increased if the food was presented as a reward. It is easy to generalise this finding to real life situations. High fat and sweet items are used repeatedly in positive contexts, for example on special occasions. The consumption of already pleasurable items in this way is reinforced. If children are given foods as rewards for approved behaviour, preference for those foods is enhanced (Benton 2004).”
“Cognitive models of eating behaviour explore the extent to which cognitions predict and explain behaviour. Most research from a cognitive perspective has drawn on social cognition models and several models have been developed […] All […] share the assumption that attitudes and beliefs are major determinants of eating behaviour, however they vary in terms of the cognitions they include and whether they use behavioural intentions or actual behaviour as their outcome measure […] Some research using the TRA and TPB has focused on predicting behavioural intentions. Research suggests however, that behavioural intentions are not that successful in predicting actual behaviour.”
“Traditionally habit has been measured by the number of times behaviour has been performed in the past […] Nevertheless behavioural recurrence does not constitute direct evidence for habitual processes. Verplanken and Orbell (2003) argue that habit is a psychological construct rather than behavioural recurrence and involves lack of awareness, difficulty to control, mental efficiency and repetition. Although repetition is a necessary requirement for a habit to develop, subsequent research has supported the hypothesis that frequency of past behaviour and habit are separate constructs [I also pointed this out elsewhere recently, but I think it’s an important insight. Revisiting my coverage of Buskirk et al.’s text after posting that comment I incidentally realized that Eysenck and Keane‘s coverage may well in some respects be more relevant/useful than the former.] […] It may be […] useful to conceptualise habits as established patterns of behaviour that may once have been initiated by rational choice but which are now under the control of specific situational cues that trigger the behaviour without cognitive effort. […] Reasoned action as represented in social cognition models and habit can be considered as two extremes of a conscious decision-making continuum. In between may lie a number of heuristic decision-making strategies that involve varying degrees of cognition.”
“Foltin et al. (1988) gave volunteers two cigarettes containing active marijuana or a placebo and found that active marijuana increased total caloric intake by 40 per cent. […] studies exploring the relationship between alcohol and food intake have been contradictory. In a mini-review Gee (2006) found that among eight studies reviewed, only one showed a significant difference in appetite ratings between the alcohol and no alcohol pre-load. […] Gee (2006) concluded that the effect of alcohol on appetite appears to be unsubstantiated; however alcohol’s effect on energy intake does appear significant. As well as recreational drugs, anti-psychotics and antidepressants have also been shown to influence hunger and satiety.” [Of course there are a large number of variables involved, but they don’t actually go into much detail in their coverage. To add to the list, sleep can also be quite important].
“According to Bourn (2001) approximately two-thirds of the UK’s population visit their GP at least annually, so primary care provides an unparalleled opportunity for health promotion and preventive interventions.” (This number is old, but an estimate like this one seems relevant to a wide variety of topics, so I decided to include it here anyway, dated as it is, to increase the likelihood that I’ll remember its context later).
“Despite considerable efforts over a number of years, there is limited evidence to suggest that educational approaches to dietary change (that is providing basic information about what constitutes a ‘healthy’ diet) alter children’s eating habits […] Hundreds of interventions to combat the obesity epidemic are currently being introduced worldwide, but there are significant gaps in the evidence base for such interventions and few [have] been evaluated in a way that enables any definitive conclusions to be drawn about their effectiveness. Those that have shown an impact are limited to easily controlled settings and it remains unclear how promising small-scale initiatives would be scaled up for whole population impact (Butland et al. 2007). […] NICE recommends that interventions to improve diet should be multicomponent (i.e. including dietary modification, targeted advice, family involvement and goal setting), tailored to the individual, provide ongoing support, include behaviour change strategies and include awareness raising promotional activities as part of a longer term, multicomponent intervention rather than a one off activity.”
“The Office for National Statistics (2003) reported that distances walked annually dropped by 63 miles between 1975 and 2003 [I was actually sort of surprised the number wasn’t higher…]. Similarly, distances cycled dropped by 16 miles in the same period [I must admit part of the reason why I picked out the quote was that I wanted to illustrate once again why I gave this book a low rating on goodreads; the book here clearly gives you the impression that people walk less and bicycle less than they used to do. But try to look at those numbers and divide each of them by 365. There’s no way in hell those 16 miles of bicycling *per year* per person makes any measurable difference on any semi-relevant health variable of interest – this is something like 70 meters per day per person, or less than 20 seconds of bicycling per day, assuming an average speed of 15 km/hour…]. The proportion of people who travel by walking or cycling has declined by 26 per cent (Department of Health, Physical Activity, Health Improvement and Prevention 2004). [This number on the other hand seems much more likely to have health-relevance. But then you immediately start asking yourself: if that number is true, why are the other numbers so low? And the inclusion of all of the above numbers in the coverage actually illustrates perfectly a recurring issue I had with the coverage; there are a lot of numbers here, and they don’t all tell the same story, and the authors aren’t always making it the least bit easier to make sense of them because they seem to treat many of them quite uncritically. Maybe fewer people cycle, but those that do put in more kilometers – but the authors aren’t suggesting this in the text, so you sort of need to come up with these sorts of explanations for the semi-weird constellation of research results yourself].
Consequently, it has been argued that active transport is a key factor in the achievement of healthy levels of physical activity […] All four national surveys demonstrate the same sex difference in activity levels. Physical activity is the only lifestyle behaviour where men are more likely to achieve government guidelines than women […]. Sport is a traditional male activity which may contribute to this finding.”
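For readers who want to check this sort of back-of-the-envelope conversion themselves, here’s a quick sketch; the 15 km/h cycling and 5 km/h walking speeds are my own rough assumptions, not figures from the book:

```python
MILES_TO_KM = 1.609344

def annual_miles_to_daily(miles_per_year, speed_kmh=15):
    """Convert an annual distance change (miles) into metres per day
    and seconds of travel per day at an assumed average speed."""
    km_per_day = miles_per_year * MILES_TO_KM / 365
    metres_per_day = km_per_day * 1000
    seconds_per_day = km_per_day / speed_kmh * 3600
    return metres_per_day, seconds_per_day

# The 16 miles/year cycling drop: roughly 71 m/day, ~17 s/day at 15 km/h.
print(annual_miles_to_daily(16))
# The 63 miles/year walking drop: roughly 278 m/day, ~200 s/day at 5 km/h.
print(annual_miles_to_daily(63, speed_kmh=5))
```

Either way you slice it, the per-person, per-day magnitudes are tiny, which was the point of my comment above.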
“The relationship between [physical] activity and social class as measured by the National Statistics Socio-Economic Classification (NS-SEC) […] is complex. […] The relationship between NS-SEC and physical activity can be described by an inverted U-shaped curve, with those at either end of the NS-SEC scale being the least likely to be active. […] Compared to the general population, South Asian and Chinese men and women were much less likely to participate in physical activity of any kind. Bangladeshi men and women were the most inactive and were almost twice as likely as the general population to be classified as sedentary. […] Physical activity reduces the risk of premature mortality for everyone, regardless of their age, sex or ethnicity […] In England, the Department of Health, Physical Activity, Health Improvement and Prevention (2004) has estimated that adults who are physically active have a 20–30 per cent reduced risk of premature death. Warburton et al. (2006) have suggested that a 50 per cent reduction in risk from death is possible for the physically fit. The effect of physical activity on health manifests itself by its influence on a wide range of diseases. In particular, people who are physically active can achieve up to a 50 per cent reduced risk of developing the major lifestyle diseases: coronary heart disease, stroke, diabetes and cancers […] not only do inactive people face shorter lives, but also they face poor quality of life in the years preceding death. While the relationship of physical activity to each disease is important in its own right, what makes physical activity so important is the strength of its effect over such a wide range of conditions. […] Associations with health are generally stronger for measured cardiorespiratory fitness than for reported physical activity […] but a self-reported physical activity is still convincingly associated with reduced mortality […]. 
In short, cardiorespiratory fitness will benefit health but levels of physical activity that may not be of an intensity to alter physical fitness parameters may still have health benefits. […] Obesity is the main visible sign of inactivity, yet obesity is just one of possibly 20 chronic diseases and disorders for which low activity levels are a known contributory factor. […] it is easier to influence the energy intake–output balance through diet than through activity […] The evidence suggests that for physical activity to have a significant effect on bodyweight and in particular on weight loss then 30 minutes of moderate activity for five days a week is unlikely to be a high enough level of activity.”
“Social cognition theory has identified self-efficacy and perceived behavioural control as key factors in the practice of healthy levels of physical activity, but at best such models can predict 50 per cent of the variation in physical activity […] Extensive evaluation of social cognition models’ ability to predict uptake of physical activity leads to the conclusion that a perception of the risks of non-activity and the benefits of activity for health has at best a small impact on overall variation in physical activity behaviour. […] Kahn et al. (2002) in their review of informational campaigns found no evidence that informational only media-based campaigns were effective, in line with the theoretically derived conclusion that attempts to inform people of the benefits and costs of activity and inactivity are unlikely to facilitate substantial changes in behaviour. Similarly, Ogilvie et al. (2004) found no evidence that informational campaigns to increase active transport were successful. […] Behavioural interventions are more likely to be at a small group or individual level of intervention. Kahn et al. (2002) found that individually adapted behavioural change programmes were effective in increasing physical activity levels. Ogilvie et al. (2004) found that targeted behavioural change programmes were the most effective way to promote walking and cycling. […] Many public health interventions to increase physical activity in the community are not individualised, do not recognise the role of psychological processes in effective behavioural change and are carried out by professionals with no psychological training”.
“Improving lifestyles is thought to be one of the most effective means of reducing mortality and morbidity in the developed world. However, despite decades of health promotion, there has been no significant difference to lifestyles and instead there are rising levels of inactivity and obesity. The Psychology of Lifestyle addresses the role psychology can play in reversing the trend of deleterious lifestyle choices. It considers the common characteristics of lifestyle behaviours and reflects on how we can inform and improve interventions to promote healthy lifestyles. […] The chapters cover key lifestyle behaviours that impact on health – eating, physical activity, drinking, smoking, sex and drug use – as well as combinations of behaviours.”
I gave the book two stars on goodreads. There are multiple reasons why it did not get a higher rating despite containing quite a lot of material which I consider to be worth blogging. One reason is that the book is really UK-centric; it’s written by British authors for a British audience. Which is fine if you’re from Britain, but it does mean that some of the details included (such as drinking pattern breakdowns for England, Scotland, and Wales) may not be super interesting to the non-British readership. Another reason is that some of the numbers included in the publication are frankly not trustworthy, and the inclusion of those numbers without critical comment on the part of the authors occasionally made me question their judgment. To give an example, it is at one point during the coverage noted that: “Women aged 16–19 were least likely to be using contraception despite almost two-thirds of teenagers having had intercourse by age 13 (CDC 2007b).” The problem I have with this quote is that they don’t comment anywhere in the publication upon the fact that this estimate is, if applied to the general population, frankly unbelievable, taking into account other estimates from the literature, including other estimates from US samples (see e.g. this previous post of mine). It’s clear that it’s an estimate derived from a specific sample, but it’s not made clear that the characteristics of the sample were probably very different from those of the population about which the reader will be making inferences. To illustrate just how difficult it is to believe that the estimate has much, if any, external validity: according to the estimates reported in fig. 6.2 in the link in the parenthesis above, you don’t get to the point where two-thirds have had sexual intercourse before the age of 19.
The estimate they include in the book is not just weird and strange, it’s so weird and strange that anybody who knows anything about that literature would know it is, and would at least comment upon why it is perhaps not to be trusted (my guess would be that the estimate derives from a sample displaying a substantial amount of selection bias due to opportunistic sampling from a very high-risk group). Yet they don’t comment on these things at all, apparently not only taking the estimate to some extent at face value, but also asking the reader to do the same. This was almost an unforgivable error on the part of the authors and I was strongly considering not reading on when I got to this point – I don’t really think you can get away with not commenting on this kind of thing if you decide to include numbers like those in your coverage in the first place.
Another problem is that there’s also occasionally some sloppy reporting which makes it hard to understand what the research they’re reporting on is actually saying; one example is that they note in the publication (p. 185) that: “Young people aged over 15 accounted for 40 per cent of new HIV infections in 2006” – which immediately makes me start wondering whether e.g. a 25-year-old would be considered ‘young’ according to this estimate. What about a 30-year-old? The publication is silent on the issue of where the right-hand side cut-off is located, making the estimate much less useful than it otherwise would be.
A fourth(?) issue is that a lot of this stuff is correlational research; there are a lot of cross-sectional studies and pretty much no longitudinal ones. At a few points the authors caution against drawing strong conclusions from this kind of research and are frank about the problems which are present, but at other points in the coverage they then seem to me to go on and draw some of those semi-strong conclusions anyway, disregarding the methodological concerns (which are huge).
A fifth issue is that there are some assumptions hidden in the coverage, assumptions which some people might categorize as ‘political’ or something along those lines; these didn’t much bother me because politics isn’t something I care very much about, as mentioned many times before (though do also see my comments below…), but I’m sure some readers will take issue with what in some sense might be described as ‘the tone’ of the coverage. To be fair they do briefly touch upon e.g. the ethics of smoking bans, but you’re never in doubt where they stand on these issues (bans are fine, and most interventions aimed at making the population healthier seem to be fine with the authors), and readers who find government interventions less desirable/justifiable than the authors do may take issue with specific recommendations and implicit assumptions in the coverage. The coverage in the last chapter is sort of a counter-weight to much of the rest of the book in the sense that ‘the case against bans and regulation’ gets reasonable coverage there, but I’d say the rest of the book is not really written in a manner which would lead most readers to believe it’s not a good idea to regulate *a lot*.
A sixth personal issue I have with the book is that the book is written in a manner I personally consider to be somewhat disagreeable. It’s a really classic textbook with stuff like a section in the beginning of the chapter outlining ‘what you’ll learn from this chapter’. These kinds of things perhaps wouldn’t be as much of an issue to me if I actually agreed with the authors about what you might be argued to be learning, or not learning, from the coverage in a given chapter. To take an example of what I’m talking about, at the beginning of chapter 7 you learn that: “At the end of this chapter you will: […] understand the nature of sexually transmitted diseases and their health consequences, along with their extent nationwide”. This is just one of 6 learning goals presented. Having read roughly the first third of Holmes et al., I can safely say that reading that book instead would be a lot more helpful than reading the chapter in this book in terms of achieving the learning goal presented, and I might add that if an author of a textbook thinks that you’ll ‘understand the nature of sexually transmitted diseases and their health consequences’ after having read a chapter in a textbook like this one, maybe that author shouldn’t be writing textbooks. This isn’t really fair because the chapter has a lot of useful stuff (and because I have a nagging suspicion that such silly learning goals may well be (politically?) mandated, and that this is probably part of the explanation for why they’re included in books like this one in the first place), but I hate interacting with clueless people with delusions of competence/knowledge, and if people are writing textbooks this way you’ll end up with a lot of people like that coming out the other end.
Despite the above-mentioned problems (and a few others) there’s also a lot of nice stuff in the book, and I’ll share some of that stuff below and in future posts about the book.
“One of the problems with attempting to arrive at a conclusion about what constitutes a lifestyle disease is the myriad of definitions under which diseases are categorised. […] Interestingly, few authors would include sexually transmitted diseases under the lifestyle umbrella, although they could be argued to be entirely under behavioural control, with none of the genetic component that plays a part in aetiology of the six major lifestyle diseases as identified by Doyle (2001). […] In between an ‘imprudent lifestyle’ (Doyle 2001) and the development of a chronic life-threatening or life-foreshortening condition lie a number of precursors of disease. High cholesterol, high blood pressure and obesity are risk factors for the development of a number of the aforementioned lifestyle diseases. The distinction between these precursors, the diseases they predict and the behaviours that are associated with them is often blurred. They are often presented as diseases per se”.
Even though there’s some disagreement about whether or not risk factors are actually Diseases, I would caution against the idea that they’re somehow ‘less severe’ than ‘an actual Disease’, unless they actually are; high blood pressure increases the risk of e.g. stroke substantially, so in some ways it’s actually quite a bit worse than some ‘agreed-upon Diseases’ which have less significant health impacts and may not actually kill anybody. I was reminded of this stuff (the blurring of diseases and risk factors) and some related problems very recently during a conversation with a friend, and I’ll allow myself to digress a bit to talk about this stuff in a little more detail here even though it’s only marginally related to the book coverage. Anyway, it seems to me that a lot of people who’d prefer a more ‘fair’ health care resource allocation (‘less money for people who caused their own health problems and more for the others’), a goal towards which I feel sympathetically inclined, are not really aware of how complicated these things are and how difficult it may be to make anything even resembling ‘fair’ distinctions between conditions which are/may be caused by behaviour and conditions which are not, to take but one of many issues. I can usually easily see the impetus for ‘changing things in the direction suggested’, but new problems pop up at every juncture and it seems perfectly obvious to me that you’re not going to get rid of unfairness by not giving fat people any money to pay for their insulin. Some of the politically feasible solutions may conceivably make matters worse, e.g. because restricting access to (some types of) medical care may just shift expenditures and perhaps lead to higher expenditures on other treatments to which coverage is maintained (and you’d expect coverage to be maintained to some degree – alternatives are not politically viable).
I’m aware that the role of preventative care is from a ‘pure cost standpoint’ probably somewhat overblown (usually preventative care does not save money in the long run, as it tends to cost more money than it saves – see e.g. Glied and Smith’s coverage), but this stuff is complicated for many reasons. Some of the current disease treatment modalities in widespread use might well be conceived of as preventative medicine as well, and it’d probably make sense to think of them that way in the case of major changes to insurance coverage profiles. Let’s for example try to compare two models. In the first one insulin for type 2 diabetes is covered, and acute hospitalizations as a result of hypo- and hyperglycemia (DKA, HHS) are also covered. Assume now that the coverage for insulin is removed, but that acute hospitalizations are still covered. It would be quite easy for this change to result in an increase in the total costs incurred by the insurance provider, because hospitalizations are a lot more expensive than insulin, and it’s easy to see why excluding coverage of insulin might lead to more acute hospitalizations among type 2 diabetics (I’m too lazy to look up the numbers, but to people who have no idea about the magnitudes involved here, one number which I seem to recall and which should illustrate the issues quite nicely is that in terms of the costs involved, one diabetes-related hospitalization corresponds to something like 8 months of treatment – not insulin, all treatment, including doctor’s visits, blood tests, etc., etc.). Evaluating efficiency in such a context would be really difficult because the conclusion drawn would also depend upon how a third factor, long-term complications, is managed. On the margin, a lot of patients face a tradeoff between the risk of hospitalization from hypoglycemia and the risk of developing chronic health complications such as kidney disease (many patients could decrease their risk of e.g.
diabetic retinopathy, -neuropathy or -nephropathy by lowering their HbA1c, but this could easily lead to an increased risk of hypoglycemic episodes – which is part of why patients don’t), and if insurance companies are only expected to care about short-term complications/acute stuff then that may lead to some interesting dynamics, e.g. insurers offering cheaper contracts to diabetics with poor (and known to be sub-optimal, from a health standpoint) glycemic control. Another problem/complication is that even if preventative care interventions tend to cost more money than they save by decreasing the need for other interventions long-term, they may easily cost less money (sometimes substantially less) per unit of health than a lot of other stuff we’re willing to have cost-sharing mechanisms, whether public or private, pay for – which means that if you’re very strongly in favour of ‘not subsidizing the unhealthy’, you may end up rejecting cost-sharing mechanisms promoting interventions which could potentially add a lot of health on the cheap and might be considered no-brainers in any other context. One could also talk about genes and how the impact of lifestyle is probably highly heterogeneous, so that some people have a lot more leeway in terms of living unhealthily than others do, making a ‘nobody gets insurance coverage if it might be their own fault’ position perhaps just as unfair as the converse position where everybody gets covered. I don’t know, I haven’t added it all together and done the math, but I’m willing to bet that neither have the people who may suggest that sort of thing, and I’d be skeptical about assuming you even can ‘do the math’ given the amount of knowledge required to make sense of all the complications.
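To make the insulin-coverage example concrete, here is a deliberately toy expected-cost sketch. Every number in it (the costs, the hospitalization probabilities) is made up purely for illustration; none of them come from the literature:

```python
def expected_annual_cost(insulin_covered,
                         insulin_cost=1200.0,      # assumed annual insulin cost
                         hosp_cost=8000.0,         # assumed cost per acute hospitalization
                         covered_hosp_rate=0.05,   # assumed P(hospitalization) with insulin covered
                         uncovered_hosp_rate=0.25  # assumed P(hospitalization) without coverage
                         ):
    """Toy expected annual insurer cost per type 2 diabetic.

    Dropping insulin coverage saves the insulin cost, but the acute
    hospitalizations the insurer still pays for become more likely.
    """
    if insulin_covered:
        return insulin_cost + covered_hosp_rate * hosp_cost
    return uncovered_hosp_rate * hosp_cost

print(expected_annual_cost(True))   # 1200 + 0.05 * 8000 = 1600.0
print(expected_annual_cost(False))  # 0.25 * 8000 = 2000.0 – the exclusion backfires
```

Under these made-up parameters the exclusion raises the insurer’s expected cost; flip the assumed hospitalization rates and it goes the other way. That is exactly the point: the verdict depends on the parameters (and on how long-term complications are handled), not on the coverage rule itself.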
I’m reasonably certain the system most people would evaluate as optimal through a Rawlsian veil of ignorance would not be at either of the extremes of what might be termed ‘the responsibility axis’ (‘if there’s any chance it might be your own fault, you don’t get any money from us’ being at one end, and ‘it doesn’t matter how you’ve behaved during your life – of course we’ll cover all your treatment costs related to those five chronic, very expensive, and completely preventable diseases you seem to have contracted’ being at the other end), even assuming the proposed model would be the only one available (thus sidestepping the problem that both models would certainly be outcompeted by alternatives in an actual insurance market where different options might be available to health care consumers). Tradeoffs are everywhere, and they’re not going away. I could probably add another related rant here about how many of the issues private insurance market decision-makers have to deal with are identical to the ones confronting public sector decision-makers, but I think I’ll stop here as the post is quite long enough as it is – back to the book coverage:
“The behaviours that are usually cited as being involved in the aetiology of lifestyle diseases are poor diet, lack of physical activity, cigarette smoking […] and, increasingly, excess drinking […] The taking of illegal drugs is also lifestyle behaviour with health consequences […] Sexual practices are also often described as health and/or lifestyle behaviours by public health professionals […] Major lifestyle diseases are coronary heart disease, stroke, lung cancer, colon cancer, diabetes and chronic obstructive pulmonary disease. […] health-related lifestyles can be defined as behavioural choices made by individuals about eating, physical activity, drinking alcohol, smoking tobacco, taking drugs and sexual practices. […] lifestyle behaviours are all chronic rather than acute behaviours. Usually individuals will practise regular patterns of these behaviours and their future behaviour will be best predicted by the choices they have made in the past. […] lifestyle behaviours have the majority of their positive consequences in the present and the majority of their negative outcomes in the future. Any lifestyle behavioural change intervention consequently requires individuals to be future orientated.”
“Measuring any type of behaviour creates a number of challenges for psychologists. Instruments need to be valid, reliable, practical, non-reactive (that is to say they should not alter the behaviour they seek to measure) and have the appropriate degree of specificity […]. Few methods of measurement meet all these requirements. For none of the lifestyle behaviours identified by this text is there a single accepted ‘gold standard’ measurement tool. Methods of behavioural assessment can be categorised as observational, self report or physiological. Observational and self-report methods are often not validated effectively, whereas physiological methods are often valid but impractical or unacceptable to the study population. […] The variation in methods available to measure lifestyle behaviours creates problems in interpreting research and survey data. First, researchers differ in what they choose to measure and second, even if they choose to measure the same aspect of behaviour, they can differ widely in the method they choose to collect their data and the way they choose to present their findings. Throughout the research literature on lifestyle behaviours, different methods of measurement confuse and hinder direct comparisons.”
“Since the late 1970s regular travel by foot or by bicycle has declined by 26 per cent (Department of Health, Physical Activity, Health Improvement and Prevention 2004).”
“emotional reactions to risky situations can often diverge from cognitive assessments of the same situation. If division occurs emotional reactions usually override cognitive reactions and drive behaviour. One reason for the domination of emotional responses over cognitive assessment is that emotional responses are rapid and rational analyses usually take time […] Many researchers investigating the role of emotion in risk perception conceptualise it as inferior to analytical responses. Indeed it is often dismissed as a source of lay error […] The emotion most usually associated with risk is anxiety (Joffe 2003). Dismissing anxiety as a biasing factor in ‘accurate’ risk perception is problematic. Anxiety is the intermediate goal of many risk communications, particularly public health communications. The primary goal is preventative behaviour but anxiety is considered an essential initiating motivation. Many health promotions are based on this fear drive hypothesis […]. The fear-drive model is generally considered outdated in academic health psychology […] but it is worth considering as it remains a central, if unacknowledged, tenet of many health promotion campaigns. […] The fear-drive model principally proposes that fear is an unpleasant emotion and people are motivated to try to reduce their state of fear. Health promotion has taken this notion and applied it to communication. If a communication evokes fear or anxiety then the fear drive model suggests that the recipient will be motivated to reduce this unpleasant emotive state. If the communication also contains behavioural advice, either implicitly or explicitly, then individuals may follow this advice […] Fear is intuitively appealing as a means of promoting behavioural change but the role it plays in initiating behavioural change is not clear cut or consistent […]. However, this has been effectively denied […] by health professionals for over half a century.”
“Self-efficacy is the belief that one can carry out specific behaviours in specified situations […]. Self-efficacy has been extensively studied [and] has been argued to be enhanced by personal accomplishment or mastery, vicarious experience or verbal persuasion […]. Self-efficacy is not unrealistic optimism as it is based on experience […]. Self-efficacy is similar to the broader construct of self-esteem but can be distinguished by three aspects: self-efficacy implies a personal attribution; it is prospective, referring to future behaviours and finally it is an operative construct in that the cognition is proximal to the behaviour […]. Self-efficacy is one of the best predictors of behavioural change whereas self-esteem has been found to be a poor predictor of behavioural change […]. Ajzen (1988, 1998) has consistently argued that behaviour-specific constructs fare better than generalised dispositions in predicting behaviour. The success of self-efficacy and the failure of self-esteem in predicting a range of behaviours adds considerable weight to this principle of compatibility [I remember an analogous argument being made in Leary et al.]. […] Perceived self-efficacy has been found to be the major instigating force in both intentions to change lifestyle behaviours and actual behavioural change […] Outcome expectancies, goals and perceived impediments have also been found to be predictive in some studies”
“Stage theories have become increasingly popular in recent years […]. Many theorists have argued that different cognitions may be important at different stages in promoting health behaviour […] According to all stage theories a person can move through a series of stages in the process of behavioural change […] Different factors are important at different stages, although the theory allows for some overlap. […] interpreting whether the data supports a stage theory of behaviour is fraught with difficulties. […] Regardless of the method of analysis there appears [however] to be little empirical evidence for the existence of discrete stages that could not equally well be explained as categorisation of a continuum […].”
“There are differences in the level of obesity between the different UK countries. In Northern Ireland, some 64 per cent of men and 53 per cent of women are overweight or obese (NISRA 2006). Similarly, in Scotland 64 per cent of men and 57 per cent of women are so classified (Scottish Executive 2005) […] In England, 65.2 per cent of men and 57 per cent of women were reported as being at least overweight. The results from the Health Survey for England show that the proportion of adults with a desirable BMI decreased between 1993 and 2005, from 41.0 per cent to 32.2 per cent among men and from 49.5 per cent to 40.7 per cent among women. There was no significant change in the proportion of adults who were overweight. The proportion who were categorised as obese (BMI 30+) increased from 13.2 per cent of men in 1993 to 23.1 per cent in 2005 and from 16.4 per cent to 24.8 per cent of women (Information Centre 2006).”
“The National Diet and Nutrition Survey (DoH/FSA 2002) reported on a range of socio-demographic factors related to diet and obesity. For example, those in the low working-class group consumed more calories, considerably more fat, more salt and non-milk extrinsic sugars than those in the middle and upper classes. Furthermore those on low income eat a less varied diet compared to those in the upper classes. […] people living on state benefits and reduced income eat less fruit and vegetables, less fish and less high-fibre foods […] children of semi-skilled and unskilled manual workers are more likely to eat fatty food, less fruit and vegetables, and more sweets than those children of professionals and managers. […] research suggests that nearly 20 per cent of those aged between 4 and 18 years eat no fruit at all during a typical week […] Rayner and Scarborough (2005) estimated that food related ill-health is responsible for about 10 per cent of morbidity and mortality in the UK. […] They estimated that food accounts for costs of £6 billion a year (9 per cent of the NHS budget).”
“the amount of sedentary time spent watching TV by children in the UK has doubled since the 1960s (Reilly and Dorosty 1999)”