Econstudentlog

Quotes

i. “A drawback to success in life is that failure, when it does come, acquires an exaggerated importance.” (P. G. Wodehouse).

ii. “Truth is the cry of all, but the game of the few.” (George Berkeley).

iii. “It is always the best policy to speak the truth, unless, of course, you are an exceptionally good liar.” (Jerome K. Jerome).

iv. “I don’t believe any man ever existed without vanity, and if he did he would be an extremely uncomfortable person to have anything to do with. He would, of course, be a very good man, and we should respect him very much. He would be a very admirable man—a man to be put under a glass case and shown round as a specimen—a man to be stuck upon a pedestal and copied, like a school exercise—a man to be reverenced, but not a man to be loved, not a human brother whose hand we should care to grip. Angels may be very excellent sort of folk in their way, but we, poor mortals, in our present state, would probably find them precious slow company. Even mere good people are rather depressing. It is in our faults and failings, not in our virtues, that we touch one another and find sympathy. We differ widely enough in our nobler qualities. It is in our follies that we are at one.” (-ll-).

v. “A shy man’s lot is not a happy one. The men dislike him, the women despise him, and he dislikes and despises himself. […] A shy man means a lonely man—a man cut off from all companionship, all sociability. He moves about the world, but does not mix with it. Between him and his fellow-men there runs ever an impassable barrier—a strong, invisible wall that, trying in vain to scale, he but bruises himself against. He sees the pleasant faces and hears the pleasant voices on the other side, but he cannot stretch his hand across to grasp another hand. He stands watching the merry groups, and he longs to speak and to claim kindred with them. But they pass him by, chatting gayly to one another, and he cannot stay them. He tries to reach them, but his prison walls move with him and hem him in on every side. In the busy street, in the crowded room, in the grind of work, in the whirl of pleasure, amid the many or amid the few—wherever men congregate together, wherever the music of human speech is heard and human thought is flashed from human eyes, there, shunned and solitary, the shy man, like a leper, stands apart. His soul is full of love and longing, but the world knows it not. The iron mask of shyness is riveted before his face, and the man beneath is never seen.” (-ll-).

vi. “We cannot tell the precise moment when friendship is formed. As in filling a vessel drop by drop, there is at last a drop which makes it run over; so in a series of kindnesses there is at last one which makes the heart run over.” (James Boswell).

vii. “Men might as well project a voyage to the Moon as attempt to employ steam navigation against the stormy North Atlantic Ocean.” (Dr. Dionysius Lardner (1793-1859). Many more quotes of a similar nature here).

viii. “We pity in others only those evils which we have ourselves experienced.” (Jean-Jacques Rousseau).

ix. “All that time is lost which might be better employed.” (-ll-).

x. “Virtue is a state of war, and to live in it means one always has some battle to wage against oneself.” (-ll-).

xi. “Remorse sleeps during a prosperous period but wakes up in adversity.” (-ll-).

xii. “Hatred, as well as love, renders its votaries credulous.” (-ll-).

xiii. “He that is choice of his time will be choice of his company, and choice of his actions.” (Jeremy Taylor).

xiv. “To say that a man is vain means merely that he is pleased with the effect he produces on other people. A conceited man is satisfied with the effect he produces on himself.” (Max Beerbohm).

xv. “Moderation is the silken string running through the pearl chain of all virtues.” (Joseph Hall).

xvi. “If you make people think they’re thinking, they’ll love you; but if you really make them think, they’ll hate you.” (Donald Marquis).

xvii. “Some luck lies in not getting what you thought you wanted but getting what you have, which once you have got it you may be smart enough to see is what you would have wanted had you known.” (Garrison Keillor)

xviii. “Once I believed that sooner or later I would come across a really wise person; today I couldn’t even say what wisdom is.” (Fausto Cercignani).

xix. “If you are living in the past or in the future, you will never find a meaning in the present.” (-ll-)

xx. “A secret remains a secret until you make someone promise never to reveal it.” (-ll-)

Update: According to the category count, this is the 150th post of quotes here on this blog (the category cloud seems to be slow to update the number, but I assume it’ll do it eventually).

It’s probably worth pointing out to new readers in particular that if you like this post and perhaps have liked a few of the previous posts in the series, you can access a collection of all the other posts in the series simply by clicking the blue category link, ‘quotes’, at the bottom of this post, or by clicking the ‘quotes’ link provided in the category cloud in the sidebar to the right.

April 26, 2015 | quotes | 3 Comments

Data on Danish diabetics (Dansk Diabetes Database – National årsrapport 2013-2014)

[Warning: Long post].

I’ve blogged data related to the data covered in this post before here on the blog, but when I did that I only provided coverage in Danish. Part of my motivation for providing some coverage in English here (which is a slightly awkward and time-consuming thing to do, as all the source material is in Danish) is that this is the sort of data you probably won’t ever get to know about if you don’t understand Danish, and some of it seems worth knowing about even for people who do not live in Denmark. Another reason for posting in English is of course that I dislike writing a blog post which I know beforehand that some of my regular readers will not understand. I should perhaps note that some of the data is at least peripherally related to my academic work at the moment.

The report which I’m covering in this post (here’s a link to it) deals primarily with various metrics collected in order to evaluate whether treatment goals which have been set centrally are being met by the Danish regions, one of the primary political responsibilities of which is health care service delivery. To take an example from the report, a goal has been set that at least 95% of patients with known diabetes in the Danish regions should have their Hba1c (an important variable in the treatment context) measured at least once per year. The report of course doesn’t just contain a list of goals – it also presents a lot of data which has been collected throughout the country in order to figure out to what extent the various goals have been met at the local levels. Hba1c is just an example; there are also goals relating to hypertension, regular eye screenings, regular kidney function tests, regular foot examinations, and regular tests for hyperlipidemia, among others.
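To make the structure of these process goals concrete, here’s a minimal Python sketch of how such a goal might be checked. The data and the record format are made up for illustration; the report doesn’t specify how the underlying registrations are stored:

```python
from datetime import date, timedelta

# Hypothetical illustration: each entry holds the date of a patient's most
# recent Hba1c measurement (None if never measured). A region meets the
# process goal if at least 95% of its known diabetics were tested within
# the last year.
def goal_met(last_hba1c_dates, today=date(2014, 12, 31), threshold=0.95):
    one_year = timedelta(days=365)
    tested = sum(1 for d in last_hba1c_dates
                 if d is not None and today - d <= one_year)
    return tested / len(last_hba1c_dates) >= threshold

dates = [date(2014, 6, 1)] * 19 + [None]  # 19 of 20 patients tested
print(goal_met(dates))  # True: 19/20 = 95%, which just meets the goal
```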

Testing is just one aspect of what’s being measured; other goals relate to treatment delivery. There’s for example a goal that the proportion of (known) type 2 diabetics with an Hba1c above 7.0% who are not receiving anti-diabetic treatment should be at most 5% within regions. A thought that occurred to me while reading the report was that some interesting incentive problems might pop up here if these numbers were more important than I assume they are in the current decision-making context. Adding this specific variable without also adding a goal for finding diabetics who do not know they are sick – and no such goal is included in the report, as far as I’ve been able to ascertain – might lead to problems. In theory a region that did well in terms of identifying undiagnosed type 2 patients, of which there are many, might get punished for this: its larger patient population in treatment, the result of better identification, might lead to binding capacity constraints at various treatment levels – constraints which would not affect regions that are worse at identifying (non-)patients at risk, because of the tradeoff between resources devoted to search/identification and resources devoted to treatment. To the extent that such a tradeoff exists, the current structure of evaluation – to the extent that it informs decision-making at the regional level – favours treatment over identification, which might or might not be problematic from a cost-benefit point of view. I find it somewhat puzzling that no goals relate to case-finding/diagnostics, because a lot of the goals only really make sense if the people who are sick actually get diagnosed so that they can receive treatment in the first place; that, say, 95% of diabetics with a diagnosis receive treatment option X is much less impressive if, say, a third of all people with the disease do not have a diagnosis. Considering the relatively low amount of variation in some of the metrics included, you’d expect a variable of this sort to be included here; at least I did.

The report has an appendix with some interesting information about the sex ratios, age distributions, how long people have had diabetes, whether they smoke, what their BMIs and blood pressures are like, how well they’re regulated (in terms of Hba1c), what they’re treated with (insulin, antihypertensive drugs, etc.), their cholesterol and triglyceride levels, etc. I’ll talk about these numbers towards the end of the post – if you want to get straight to that coverage and don’t care about the ‘main coverage’, you can just scroll down until you reach the ‘Some descriptive statistics from the appendix’ part below.

The report has 182 pages with a lot of data, so I’m not going to talk about all of it. It is based on very large data sets which include more than 37,000 Danish diabetes patients from specialized diabetes units (diabetesambulatorier) (these are usually located in hospitals and provide ambulatory care only) as well as 34,000 diabetics treated by their local GPs – the aim is to eventually include all Danish diabetics in the database, and more are added each year, but even as it is, a very big proportion of all patients are ‘accounted for’ in the data. Other sources provide additional details; for example, there’s a separately collected database on children and young diabetics. Most of the diabetics who are not included here are patients treated by their local GPs, and there’s still a substantial amount of uncertainty related to this group; approximately 90% of all patients connected to the diabetes units are assumed at this point to be included in the database, but the report also notes that approximately 80% of diabetics are assumed to be treated in general practice. Coverage of this patient population is currently improving rapidly, and it seems likely that most diabetics in Denmark will be included in the database within the next few years. They speculate in the report that the inclusion of more patients treated in general practice may be part of the explanation why goal achievement seems to have decreased slightly over time; this seems to me like a likely explanation considering the data they present, as the diabetes units are in general better at achieving the goals set than are the GPs. The data is up to date – as some of you might have inferred from the presumably partly unintelligible words in the parenthesis in the title, the report deals with data from the time period 2013-2014. I decided early on not to copy tables into this post directly, as it’s highly annoying to have to translate terms in such tables; instead I’ve tried to give you the highlights. I may or may not have succeeded in doing that, but you should be aware, especially if you understand Danish, that the report has a lot of details, e.g. in terms of intraregional variation, which are excluded from this coverage. Although I don’t cover anywhere near all the data, I do cover most of the main topics dealt with in the publication in at least a little bit of detail.

The report concludes in the introduction that for most treatment indicators no clinically significant differences in the quality of the treatment provided to diabetics are apparent when you compare the different Danish regions – so if you’re looking at the big picture, if you’re a Danish diabetic it doesn’t matter all that much whether you live in Jutland or in Copenhagen. However, some significant intra-regional differences do exist. In the following I’ll talk in a bit more detail about some of the data included in the report.

When looking at the Hba1c goal (95% should be tested at least once per year), they evaluate the groups treated in the diabetes units and the groups treated in general practice separately; so you have one metric for patients treated in diabetes units living in the north of Jutland (North Denmark Region) and another for patients treated in general practice living in the north of Jutland – this breakdown of the data makes it possible not only to compare people across regions but also to investigate whether there are important differences between the care provided by diabetes units and the care provided by general practitioners. When dealing with patients receiving ambulatory care from the diabetes units, all regions meet the goal, but in Copenhagen (Capital Region of Denmark, (-CRD)) only 94% of patients treated in general practice had their Hba1c measured within the last year – this was the only region which did not meet the goal for the patient population treated in general practice. I would have thought beforehand that all diabetes units would have 100% coverage here, but that’s actually only the case in the region in which I live (Central Denmark Region) – on the other hand, in most other regions, aside from Copenhagen again, the number is 99%, which seems reasonable as I’m assuming a substantial proportion of the remainder is explained by patient noncompliance, which is difficult to avoid completely. I speculate that patient compliance differences between patient populations treated at diabetes units and patient populations treated by their GP might also be part of the explanation for the lower goal achievement of the general practice population; as far as I’m aware, diabetes units can deny care in the case of non-compliance whereas GPs cannot, so you’d sort of expect the most ‘difficult’ patients to end up in general practice. This is speculation to some extent and I’m not sure it’s a big effect, but it’s worth keeping in mind when analyzing this data that not all differences you observe necessarily relate to service delivery inputs (whether or not a doctor reminds a patient it’s time to get his eyes checked, for example); the two main groups analyzed are likely to also differ in patient population composition. Differences in patient population composition may of course also drive some of the intraregional variation observed. They mention in their discussion of the results for the Hba1c variable that they’re planning on changing the standard here to one which relates to the distribution of the Hba1c results, not just whether the test was done, which seems like a good idea. As it is, the great majority of Danish diabetics have their Hba1c measured at least annually, which is good news because of the importance of this variable in the treatment context.

In the context of hypertension, there’s a goal that at least 95% of diabetics should have their blood pressure measured at least once per year. For patients treated in the diabetes units, all regions achieve the goal and the national average for this patient population is 97% (once again the region in which I live is the only one that achieved 100% coverage), but for patients treated in general practice only one region (North Denmark Region) managed to get to 95%, and the national average is 90%. In most regions, one in ten diabetics treated in general practice does not have their blood pressure measured once per year, and again Copenhagen (CRD) is doing worst with a coverage of only 87%. As mentioned in the general comments above, some of the intraregional variation is actually quite substantial, and this may be a good example because not all hospitals are doing great on this variable. Sygehus Sønderjylland, Aabenraa (in southern Jutland), one of the diabetes units, had a coverage of only 67%, and the coverage at Hillerød Hospital in Copenhagen (CRD), another diabetes unit, was likewise quite low, with 83% of patients having had their blood pressure measured within the last year. These hospitals are however the exceptions to the rule. Evaluating whether it has been tested if patients do or do not have hypertension is different from evaluating whether hypertension is actually treated after it has been discovered, and here the numbers are less impressive; for the type 1 patients treated in the diabetes units, roughly one third (31%) of patients with a blood pressure higher than 140/90 are not receiving treatment for hypertension (the goal was at most 20%). The picture was much better for type 2 patients (11% at the national level) and patients treated in general practice (13%). They note that the picture has not improved over the last years for the type 1 patients and that this is not, in their opinion, a satisfactory state of affairs. A note of caution is that the variable only includes patients who have had a blood pressure higher than 140/90 measured within the last year, and that you can’t use it as an indication of how many patients with high blood pressure are not being treated; some patients who are in treatment for high blood pressure have blood pressures lower than 140/90 (achieving this would in many cases be the point of treatment…). Such an estimate will however be added to later versions of the report. In terms of the public health consequences of undertreatment, the two patient populations are of course far from equally important. As noted later in the coverage, the proportion of type 2 patients on antihypertensive agents is much higher than the proportion of type 1 diabetics receiving such treatment, and despite this difference the blood pressure distributions of the two patient populations are reasonably similar (more on this below).

Screening for albuminuria: The goal here is that at least 95% of adult diabetics are screened within a two-year period (there are slightly different goals for children and young adults, but I won’t go into those). For patients treated in the diabetes units, northern Jutland (North Denmark Region) and Copenhagen/RH failed to achieve the goal, with a coverage slightly below 95% – the other regions achieved the goal, although not much more than that; the national average for this patient population is 96%. For patients treated in general practice, none of the regions achieve the goal, and the national average for this patient population is 88%. Region Zealand was doing worst with 84%, whereas the region in which I live, Region Midtjylland (Central Denmark Region), was doing best with a 92% coverage. Of the diabetes units, Rigshospitalet, “one of the largest hospitals in Denmark and the most highly specialised hospital in Copenhagen”, seems also to be the worst performing hospital in Denmark in this respect, with only 84% of patients being screened – which to me seems exceptionally bad considering that, for example, not a single hospital in the region in which I live is below 95%. Nationally, roughly 20% of patients with micro- or macroalbuminuria are not on ACE inhibitors/angiotensin II receptor antagonists.

Eye examination: The main process goal here is at least one eye examination every second year for at least 90% of the patients, and a requirement that the treating physician knows the result of the eye examination. This latter requirement is important in the context of the interpretation of the results (see below). For patients treated in diabetes units, four out of five regions achieved the goal, but there were also what seemed to me like large differences across regions. In Southern Denmark the goal was not met and only 88% had had an eye examination within the last two years, whereas the number was 98% in Region Zealand. Region Zealand was a clear outlier here, and the national average for this patient population was 91%. For patients treated in general practice no regions achieved the goal, and this variable provides a completely different picture from the previous variables in terms of the differences between patients treated in diabetes units and patients treated in general practice: In most regions, the coverage here for patients in general practice is in the single digits, and the national average for this patient population is just 5%. They note in the report that this number has decreased over the years through which this variable has been analyzed, and they don’t know why (but they’re investigating it). It seems to be a big problem that doctors are not told about the results of these examinations, which presumably makes coordination of care difficult.

The report also has numbers on how many patients have had their eyes checked within the last 4 years, rather than within the last two, and this variable makes it clear that more infrequent screening does not explain the differences between the patient populations; for patients treated in general practice the numbers are still in the single digits here. They mention that data security requirements imposed on health care providers are likely the reason why the numbers are low in general practice, as it seems common that the GP is not informed of the results of screenings taking place, so that the only people who get to know about the results are the ophthalmologists doing them. A new variable recently included in the report is whether newly-diagnosed type 2 diabetics are screened for eye damage within 12 months of receiving their diagnosis – here they have received the numbers directly from the ophthalmologists, so uncertainty about information sharing doesn’t enter the picture (well, it does, but the variable doesn’t care; it just measures whether an eye screen has been performed or not) – and although the standard set is 95% (at most one in twenty should not have their eyes checked within a year of diagnosis), at the national level only half of patients actually do get an eye screen within the first year (95% CI: 46-53%). Uncertainty about the date of diagnosis makes it slightly difficult to interpret some of the specific results, but the chosen standard is not achieved anywhere, and this once again underlines how diabetic eye care is one of the areas where things are not going as well as the people setting the goals would like them to. The rationale for screening people within the first year of diagnosis is of course that many type 2 patients have complications at diagnosis – “30–50 per cent of patients with newly diagnosed T2DM will already have tissue complications at diagnosis due to the prolonged period of antecedent moderate and asymptomatic hyperglycaemia.” (link).

The report does include estimates of the number of diabetics who receive eye screenings regardless of whether the treating physician knows the results or not; at the national level, according to this estimate, 65% of patients have their eyes screened at least once every second year, leaving more than a third of patients in a situation where they are not screened as often as is desirable. They mention that they have had difficulties with the transfer of data and that many of the specific estimates are uncertain, including two of the regional estimates, but the general level – 65% or something like that – is based on close to 10,000 patients and is assumed to be representative. Approximately 1% of Danish diabetics are blind, according to the report.

Foot examinations: Just like most of the other variables: at least 95% of patients, at least once every second year. For diabetics treated in diabetes units the national average is 96%, and the goal was not achieved in Copenhagen (CRD) (94%) and northern Jutland (91%). There are again remarkable differences within regions; at Helsingør Hospital only 77% were screened (95% CI: 73-82%) (a drop from 94% the year before), and at Hillerød Hospital the number was even lower, 73% (95% CI: 70-75%), again a drop from the previous year, where the coverage was 87%. Both these numbers are worse than the regional averages for all patients treated in general practice, even though none of the regions meet the goal. Actually I thought the year-to-year changes at these two hospitals were almost as interesting as the intraregional differences, because I have a hard time explaining them; how do you even set up a screening programme such that a coverage drop of more than 10 percentage points from one year to the next is possible? To those who don’t know, diabetic feet are very expensive and do not seem to get the research attention one might assume they would from a cost-benefit perspective (link, point iii). Going back to the patients in general practice, on average 81% of these patients have a foot examination at least once every second year. The regions here vary from 79% to 84%. The worst covered patients are those treated in general practice in the Vordingborg sygehus catchment area in Region Zealand, where only roughly two out of three (69%, 95% CI: 62-75%) patients have regular foot examinations.
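As an aside, here’s a minimal sketch of how confidence intervals like the ones quoted above can be computed for a coverage proportion. The report doesn’t say which method it uses, and the sample size below is made up; this simply uses the standard Wilson score interval as an illustration:

```python
import math

# Wilson score interval for a binomial proportion (95% by default).
def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# E.g. 308 of 400 patients screened (hypothetical numbers):
lo, hi = wilson_ci(308, 400)
print(f"77% (95% CI: {lo:.0%}-{hi:.0%})")  # 77% (95% CI: 73%-81%)
```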

Aside from all the specific indicators they’ve collected and reported on, the authors have also constructed a combined ‘all-or-none’ indicator, in which they measure the proportion of patients who have not failed to get their Hba1c measured, their feet checked, their blood pressure measured, their kidney function tested, etc. They do not include the eye screening variable in this metric because of the problems associated with that variable, but this is the only process variable not included, and the variable is sort of an indicator of how many of the patients are actually getting all of the care that they’re supposed to get. As patients treated in general practice are generally less well covered than patients treated in the diabetes units at the hospitals, I was interested to know how much these differences ‘added up to’ in the end. For the diabetes units, 11% of patients failed on at least one metric (i.e. did not have their feet checked/Hba1c measured/blood pressure measured/etc.), whereas this was the case for a third of patients in general practice (i.e. only 67% received every included element of care). Summed up like that, it seems to me that if you’re a Danish diabetes patient and you want to avoid having some variable neglected in your care, it matters whether you’re treated by your local GP or by the local diabetes unit – and you’re probably going to be better off receiving care from the diabetes unit.
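To make the construction concrete, here’s a minimal sketch of an all-or-none indicator of this kind; the patient records and the set of included process measures are made up, and the report’s actual variable list differs:

```python
# Each (made-up) record flags whether a given process measure was carried out.
patients = [
    {"hba1c": True, "bp": True, "feet": True, "kidney": True},
    {"hba1c": True, "bp": True, "feet": False, "kidney": True},  # fails on one
    {"hba1c": True, "bp": True, "feet": True, "kidney": True},
]

# A patient counts as fully covered only if *every* measure was carried out.
all_or_none = sum(all(p.values()) for p in patients) / len(patients)
print(f"{all_or_none:.0%} received every included element of care")  # 67%
```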

Some descriptive statistics from the appendix (p. 95 onwards):

Sex ratio: In the case of this variable, they have multiple reports on the same variable based on data derived from different databases. In the first database, including 16,442 people, 56% are male and 44% are female. In the next database (n=20,635), including only type 2 diabetics, the sex ratio is more skewed; 60% are males and 40% are females. In a database including only patients in general practice (n=34,359), 56% of the diabetics are males and 44% are females, like in the first database. For the patient population of children and young adults included (n=2,624), the sex ratio is almost equal (51% males and 49% females). The last database, Diabase, based on evaluation of eye screening and including only adults (n=32,842), has 55% males and 45% females. It seems to me based on these results that the sex ratio is slightly skewed in most patient populations, with slightly more males than females having diabetes – and it seems not improbable that this is due to a higher male prevalence of type 2 diabetes (the children/young adult database and the type 2 database seem to both point in this direction – the children/young adult group mainly consists of type 1 patients, as 98% of this sample is type 1). The fact that the prevalence of autoimmune disorders is in general higher in females than in males also seems to support this interpretation; to the extent that the sex ratio is skewed in favour of males, you’d expect lifestyle factors to be behind it.

Next, age distribution. In the first database (n=16,442), the average and the median age are both 50, the standard deviation is 16, the youngest individual is 16 and the oldest is 95. It is worth remembering in this part of the reporting that the oldest individual in the sample is not a good estimate of ‘how long a diabetic can expect to live’ – for all we know, the 95-year-old in the database got diagnosed at the age of 80. You need diabetes duration before you can begin to speculate about that variable. Anyway, in the next database, of type 2 patients (n=20,635), the average age is 64 (median=65), the standard deviation is 12 and the oldest individual is 98. In both of the databases mentioned so far some regions do better than others in terms of the oldest individual, but it also seems to me that this may just be a function of sample size and ‘random stuff’ (95+ year olds are rare events); northern Jutland doesn’t have a lot of patients, so the oldest patient in that group is not as old as the oldest patient from Copenhagen – this is probably just what you’d expect. In the general practice database (n=34,359), the average age is 68 (median=69) and the standard deviation is 11; the oldest individual there is 102. In the Diabase database (n=32,842), the average age is 62 (median=64), the standard deviation is 15 and the oldest individual is 98. It’s clear from these databases that most diabetics in Denmark are type 2 diabetics (this is no surprise) and that a substantial proportion of them are at or close to retirement age.

The appendix has a bit of data on diabetes type, but I think the main thing to take away from the tables that break this variable down is that type 1 is overrepresented in the databases compared to the true prevalence – in the Diabase database for example almost half of patients are type 1 (46%), despite the fact that type 1 diabetics are estimated to make up only 10% of the total in Denmark (see e.g. this (Danish source)). I’m sure this is to a significant extent due to lack of coverage of type 2 diabetics treated in general practice.

Diabetes duration: In the first data set, including 16,442 individuals, the patients have a median diabetes duration of 21.2 years. The 10% cutoff is 5.4 years, the 25% cutoff is 11.3 years, the 75% cutoff is 33.5 years, and the 90% cutoff is 44.2 years. High diabetes durations are more likely to be observed in type 1 patients, as they’re in general diagnosed earlier; in the next database, involving only type 2 patients (n=20,635), the median duration is 12.9 years and the corresponding cutoffs are 3.8 years (10%), 7.4 years (25%), 18.6 years (75%), and 24.7 years (90%). In the database involving patients treated in general practice, the median duration is 6.8 years and the cutoffs reported for the various percentiles are 2.5 years (10%), 4.0 (25%), 11.2 (75%) and 15.6 (90%). One note not directly related to the data but which I thought might be worth adding here: if one were to try to use these data for the purposes of estimating the risk of complications as a function of diabetes duration, it would be important to keep in mind that there’s probably often a substantial amount of uncertainty associated with the diabetes duration variable, because many type 2 diabetics are diagnosed after a substantial amount of time with sub-optimal glycemic control; i.e. although diabetes duration is lower in type 2 populations than in type 1 populations, I’d assume that the type 2 estimates of duration are still biased downwards compared to type 1 estimates, causing some potential issues in terms of how to interpret associations found here.

Next, smoking. In the first database (n=16,442), 22% of diabetics smoke daily and another 22% are ex-smokers who have not smoked within the last 6 months. According to the resource to which you’re directed when you’re looking for data on that kind of stuff at Statistics Denmark, the percentage of daily smokers in the general population was 17% in 2013 (based on n=158,870 – this is a direct link to the data), which seems to indicate that the trend (this is a graph of the percentage of Danes smoking daily as a function of time, going back to the 70s) I commented upon (Danish link) a few years back has not reversed or slowed down much. If we go back to the appendix and look at the next source, dealing with type 2 diabetics, 19% of them smoke daily and 35% are ex-smokers (again, 6 months). In the general practice database (n=34,359), 17% of patients smoke daily and 37% are ex-smokers.

BMI. Here’s one variable where type 1 and type 2 look very different. The first source deals with type 1 diabetics (n=15,967), and here the median BMI is 25.0, which is comparable to the population median (if anything it’s probably lower than the population median) – see e.g. page 63 here. Relevant percentile cutoffs are 20.8 (10%), 22.7 (25%), 28.1 (75%), and 31.3 (90%). Numbers are quite similar across regions. For the type 2 data, the first source (n=20,035) has a median BMI of 30.7 (almost equal to the 1-in-10 cutoff for type 1 diabetics), with relevant cutoffs of 24.4 (10%), 27.2 (25%), 34.9 (75%), and 39.4 (90%). According to this source, one in four type 2 diabetics in Denmark is ‘severely obese‘ and more diabetics are obese than are not. It’s worth remembering that using these numbers to implicitly estimate the risk of type 2 diabetes associated with overweight is problematic, as especially some of the people in the lower end of the distribution are quite likely to have experienced weight loss post-diagnosis. For type 2 patients treated in general practice (n=15,736), the median BMI is 29.3 and the cutoffs are 23.7 (10%), 26.1 (25%), 33.1 (75%), and 37.4 (90%).
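For reference when reading these cutoffs, here’s a small sketch mapping BMI values to the standard WHO categories. Note that reading ‘severely obese’ as BMI ≥ 35 (obese class II or worse) is my assumption; the linked source’s exact definition isn’t given in the post:

```python
# Standard WHO BMI classes (adults).
def who_bmi_class(bmi):
    if bmi < 18.5: return "underweight"
    if bmi < 25:   return "normal weight"
    if bmi < 30:   return "overweight"
    if bmi < 35:   return "obese (class I)"
    if bmi < 40:   return "obese (class II)"
    return "obese (class III)"

# Type 1 median, type 2 median, and the type 2 75%/90% cutoffs from above:
for bmi in (25.0, 30.7, 34.9, 39.4):
    print(bmi, who_bmi_class(bmi))
```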

Distribution of Hba1c. The descriptive statistics included also have data on the distribution of Hba1c values among some of the patients who have had this variable measured. I won’t go into the details here, except to note that the differences between type 1 and type 2 patients in terms of the Hba1c values achieved are smaller than I’d perhaps have expected; the median Hba1c among type 1s was estimated at 62, based on 16,442 individuals, whereas the corresponding number for type 2s was 59, based on 20,635 individuals. Curiously, a second data source finds a median Hba1c of only 48 for type 2 patients treated in general practice; the difference between this one and the type 1 median is definitely high enough to matter in terms of the risk of complications (it’s more questionable how big the effect of a jump from 59 to 62 is, especially considering measurement error and the fact that the type 1 distribution seems denser than the type 2 distribution, so that there aren’t that many more exceptionally high values in the type 1 dataset), but I wonder if this actually quite impressive level of metabolic control in general practice may not be due to biased reporting, with GPs doing well in terms of diabetes management also being more likely to report to the databases; it’s worth remembering that most patients treated in general practice are still not accounted for in these data sets.
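A unit note: these medians look like IFCC units (mmol/mol) – an inference from their magnitudes, as the post doesn’t state the units – whereas the 7.0% treatment goal mentioned earlier is in NGSP/DCCT units. The standard ‘master equation’ converts between the two:

```python
# NGSP (%) = 0.09148 * IFCC (mmol/mol) + 2.152  (the standard master equation)
def ifcc_to_ngsp(mmol_per_mol):
    return 0.09148 * mmol_per_mol + 2.152

for v in (48, 59, 62):  # the three medians discussed above
    print(f"{v} mmol/mol = {ifcc_to_ngsp(v):.1f}%")
# 48 mmol/mol = 6.5%, 59 mmol/mol = 7.5%, 62 mmol/mol = 7.8%;
# the 7.0% threshold corresponds to 53 mmol/mol.
```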

Oral antidiabetics and insulin. In one sample of 20,635 type 2 patients, 69% took oral antidiabetics, and in another sample of 34,359 type 2 patients treated in general practice the number was 75%. 3% of type 1 diabetics in a sample of 16,442 individuals also took oral antidiabetics, which surprised me. In the first-mentioned sample of type 2 patients, 69% also took insulin (the same percentage, but not the same individuals – this was not a reporting error), so there seems to be a substantial number of patients on both treatments. In the general practice sample the number of patients on insulin was much lower, as only 14% of type 2 patients were on insulin – again, concerns about reporting bias may play a role here, but even taking this number at face value and extrapolating out of sample, you reach the conclusion that the majority of patients on insulin are probably type 2 diabetics, as only roughly one patient in ten is type 1.

Antihypertensive treatment and treatment for hyperlipidemia: Although there, as mentioned above, seems to be less focus on hypertension in type 1 patients than in type 2 patients, it’s still the case that roughly half (48%) of all patients in the type 1 sample (n=16,442) were on antihypertensive treatment. In the first type 2 sample (n=20,635), 82% of patients were receiving treatment against hypertension, and this number was similar in the general practice sample (81%). The proportions of patients in treatment for hyperlipidemia are roughly similar (46% of type 1, and 79% and 73% in the two type 2 samples, respectively).

Blood pressure. The median systolic blood pressure among type 1 diabetics (n=16,442) was 130, with the 75% cutoff intersecting the hypertension level (140) and 10% of patients having a systolic blood pressure above 151. These numbers are almost identical to those of the sample of type 2 patients treated in general practice; however, as mentioned earlier, the type 1 patients achieve this blood pressure level with a lower proportion of patients in treatment for hypertension. In the second sample of type 2 patients (n=20,635), the numbers were slightly higher (median: 133, 75% cutoff: 144, 90% cutoff: 158). The median diastolic blood pressure was 77 in the type 1 sample, with 75% and 90% cutoffs of 82 and 89; the data in the type 2 samples are almost identical.

April 24, 2015 | data, diabetes, medicine

Civil Wars (II)

Here’s my first post about the book. In this post I’ll continue my coverage where I left off in the first post. A few of the chapters covered below I did not think very highly of, but other parts of the coverage are about as good as you could expect (given problems such as e.g. limited data). Some of the stuff I found quite interesting. As people will note in the coverage below, the book does address the religious dimension to some extent, though in my opinion far from to the extent that the variable deserves. An annoying aspect of the chapter on religion was to me that although the author of the chapter includes data which to me cannot but lead to some very obvious conclusions, the author seems to be very careful to avoid drawing those conclusions explicitly. It’s understandable, but still annoying. For related reasons I also got annoyed at him for, presumably deliberately, completely disregarding what seems in the context of his own coverage to be an actually very important component of Huntington’s thesis: that conflict at the micro level very often seems to be between Muslims and ‘the rest’. Here’s a relevant quote from Clash…, p. 255:

“ethnic conflicts and fault line wars have not been evenly distributed among the world’s civilizations. Major fault line fighting has occurred between Serbs and Croats in the former Yugoslavia and between Buddhists and Hindus in Sri Lanka, while less violent conflicts took place between non-Muslim groups in a few other places. The overwhelming majority of fault line conflicts, however, have taken place along the boundary looping across Eurasia and Africa that separates Muslims from non-Muslims. While at the macro or global level of world politics the primary clash of civilizations is between the West and the rest, at the micro or local level it is between Islam and the others.”

This point – that conflict at the local level, which seems to be the type of conflict you’re particularly interested in if you’re researching civil wars (as also argued in previous chapters), according to Huntington is very Islam-centric – is completely overlooked (ignored?) in the handbook chapter, and if you haven’t read Huntington and your only exposure to him is through the chapter in question, you’ll probably conclude that Huntington was wrong, because that seems to be the conclusion the author draws, arguing that other models are more convincing (I should add here that these other models do seem useful, at least in terms of providing (superficial) explanations; the point is just that I feel the author is misrepresenting Huntington, and I dislike this). Although there are parts of the coverage in that chapter where it’s obvious the author and I do not agree, I should note that the fact that he talks about the data and the empirical research makes up for a lot of other stuff.

Anyway, on to the coverage – it’s perhaps worth noting, in light of the introductory remarks above, that the post has stuff on a lot of things besides religion, e.g. the role of natural resources, regime types, migration, and demographics.

“Elites seeking to end conflict must: (1) lead followers to endorse and support peaceful solutions; (2) contain spoilers and extremists and prevent them from derailing the process of peacemaking; and (3) forge coalitions with more moderate members of the rival ethnic group(s) […]. An important part of the two-level nature of the ethnic conflict is that each of the elites supporting the peace process be able to present themselves, and the resulting terms of the peace, as a “win” for their ethnic community. […] A strategy that a state may pursue to resolve ethnic conflict is to co-opt elites from the ethnic communities demanding change […]. By satisfying elites, it reduces the ability of the aggrieved ethnic community to mobilize. Such a process of co-option can also be used to strengthen ethnic moderates in order to undermine ethnic extremists. […] the co-opted elites need to be careful to be seen as still supporting ethnic demands or they may lose all credibility in their respective ethnic community. If this occurs, the likely outcome is that more extreme ethnic elites will be able to capture the ethnic community, possibly leading to greater violence.
It is important to note that “spoilers,” be they an individual or a small sub-group within an ethnic community, can potentially derail any peace process, even if the leaders and masses support peace (Stedman, 2001).”

“Three separate categories of international factors typically play into identity and ethnic conflict. The first is the presence of an ethnic community across state boundaries. Thus, a single community exists in more than one state and its demands become international. […] This division of an ethnic community can occur when a line is drawn geographically through a community […], when a line is drawn and a group moves into the new state […], or when a diaspora moves a large population from one state to another […] or when sub-groups of an ethnic community immigrate to the developed world […] When ethnic communities cross state boundaries, the potential for one state to support an ethnic community in the other state exists. […] There is also the potential for ethnic communities to send support to a conflict […] or to lobby their government to intervene […]. Ethnic groups may also form extra-state militias and cross international borders. Sometimes these rebel groups can be directly or indirectly sponsored by state governments, leading to a very complex situation […] A second set of possible international factors is non-ethnic international intervention. A powerful state may decide to intervene in an ethnic conflict for a variety of reasons, ranging from humanitarian support, to peacekeeping, to outright invasion […] The third and last factor is the commitment of non-governmental organizations (NGOs) or third-party mediators to a conflict. […] The record of international interventions in ethnic civil wars is quite mixed. There are many difficulties associated with international action [and] international groups cannot actually change the underlying root of the ethnic conflict (Lake and Rothchild, 1998; Kaufman, 1996).”

“A relatively simple way to think of conflict onset is to think that for a rebellion to occur two conditions need to be satisfactorily fulfilled: There must be a motivation and there must be an opportunity to rebel.[3] First, the rebels need a motive. This can be negative – a grievance against the existing state of affairs – or positive – a desire to capture resource rents. Second, potential rebels need to be able to achieve their goal: The realization of their desires may be blocked by the lack of financial means. […] Work by Collier and Hoeffler (1998, 2004) was crucial in highlighting the economic motivation behind civil conflicts. […] Few conflicts, if any, can be characterized purely as “resource conflicts.” […] It is likely that few groups are solely motivated by resource looting, at least in the lower rank level. What is important is that valuable natural resources create opportunities for conflicts. To feed, clothe, and arm its members, a rebel group needs money. Unless the rebel leaders are able to raise sufficient funds, a conflict is unlikely to start no matter how severe the grievances […] As a consequence, feasibility of conflict – that is, valuable natural resources providing opportunity to engage in violent conflict – has emerged as a key to understanding the relation between valuable resources and conflict.”

“It is likely that some natural resources are more associated with conflict than others. Early studies on armed civil conflict used resource measures that aggregated different types of resources together. […] With regard to financing conflict start-up and warfare the most salient aspect is probably the ease with which a resource can be looted. Lootable resources can be extracted with simple methods by individuals or small groups, are easy to transport, and can be smuggled across borders with limited risks. Examples of this type of resources are alluvial gemstones and gold. By contrast, deep-shaft minerals, oil, and natural gas are less lootable and thus less likely sources of financing. […] Using comprehensive datasets on all armed civil conflicts in the world, natural resource production, and other relevant aspects such as political regime, economic performance, and ethnic composition, researchers have established that at least some high-value natural resources are related to higher risk of conflict onset. Especially salient in this respect seem to be oil and secondary diamonds[7] […] The results regarding timber […] and cultivation of narcotics […] are inconclusive. […] [An] important conclusion is that natural resources should be considered individually and not lumped together. Diamonds provide an illustrative example: the geological form of the diamond deposit is related to its effect on conflict. Secondary diamonds – the more lootable form of two deposit types – makes conflict more likely, longer, and more severe. Primary diamonds on the other hand are generally not related to conflict.”

“Analysis on conflict duration and severity confirm that location is a salient factor: resources matter for duration and severity only when located in the region where the conflict is taking place […] That the location of natural resources matters has a clear and important implication for empirical conflict research: relying on country-level aggregates can lead to wrong conclusions about the role of natural resources in armed civil conflict. As a consequence of this, there has been effort to collect location-specific data on oil, gas, drug cultivation, and gemstones”.

“a number of prominent studies of ethnic conflict have suggested that when ethnic groups grow at different rates, this may lead to fears of an altered political balance, which in turn might cause political instability and violent conflict […]. There is ample anecdotal evidence for such a relationship [but unfortunately little quantitative research…]. The civil war in Lebanon, for example, has largely been attributed to a shift in the delicate ethnic balance in that state […]. Further, in the early 1990s, radical Serb leaders were agitating for the secession of “Serbian” areas in Bosnia-Herzegovina by instigating popular fears that Serbs would soon be outnumbered by a growing Muslim population heading for the establishment of a Shari’a state”.

“[One] part of the demography-conflict literature has explored the role of population movements. Most of this literature […] treats migration and refugee flows as a consequence of conflict rather than a potential cause. Some scholars, however, have noted that migration, and refugee migration in particular, can spur the spread of conflict both between and within states […]. Existing work suggests that environmentally induced migration can lead to conflict in receiving areas due to competition for scarce resources and economic opportunities, ethnic tensions when migrants are from different ethnic groups, and exacerbation of socioeconomic “fault lines” […] Salehyan and Gleditsch (2006) point to spill-over effects, in the sense that mass refugee migration might spur tensions in neighboring or receiving states by imposing an economic burden and causing political stability [sic]. […] Based on a statistical analysis of refugees from neighboring countries and civil war onset during the period 1951–2001, they find that countries that experience an influx of refugees from neighboring states are significantly more likely to experience wars themselves. […] While the youth bulge hypothesis [large groups of young males => higher risk of violence/war/etc.] in general is supported by empirical evidence, indicating that countries and areas with large youth cohorts are generally at a greater risk of low-intensity conflict, the causal pathways relating youth bulges to increased conflict propensity remain largely unexplored quantitatively. When it comes to the demographic factors which have so far received less attention in terms of systematic testing – skewed sex ratios, differential ethnic growth, migration, and urbanization – the evidence is somewhat mixed […] a clear challenge with regard to the study of demography and conflict pertains to data availability and reliability. […] Countries that are undergoing armed conflict are precisely those for which we need data, but also those in which census-taking is hampered by violence.”

“Most research on the duration of civil war find that civil wars in democracies tend to be longer than other civil wars […] Research on conflict severity finds some evidence that democracies tend to see fewer battledeaths and are less likely to target civilians, suggesting that democratic institutions may induce some important forms of restraints in armed conflict […] Many researchers have found that democratization often precedes an increase in the risk of the onset of armed conflict. Hegre et al. (2001), for example, find that the risk of civil war onset is almost twice as high a year after a regime change as before, controlling for the initial level of democracy […] Many argue that democratic reforms come about when actors are unable to rule unilaterally and are forced to make concessions to an opposition […] The actual reforms to the political system we observe as democratization often do not suffice to reestablish an equilibrium between actors and the institutions that regulate their interactions; and in its absence, a violent power struggle can follow. Initial democratic reforms are often only partial, and may fail to satisfy the full demands of civil society and not suffice to reduce the relevant actors’ motivation to resort to violence […] However, there is clear evidence that the sequence matters and that the effect [the increased risk of civil war after democratization, US] is limited to the first election. […] civil wars […] tend to be settled more easily in states with prior experience of democracy […] By our count, […] 75 percent of all annual observations of countries with minor or major armed conflicts occur in non-democracies […] Democracies have an incidence of major armed conflict of only 1 percent, whereas nondemocracies have a frequency of 5.6 percent.”

“Since the Iranian revolution in the late 1970s, religious conflicts and the rise of international terror organizations have made it difficult to ignore the facts that religious factors can contribute to conflict and that religious actors can cause or participate in domestic conflicts. Despite this, comprehensive studies of religion and domestic conflict remain relatively rare. While the reasons for this rarity are complex there are two that stand out. First, for much of the twentieth century the dominant theory in the field was secularization theory, which predicted that religion would become irrelevant and perhaps extinct in modern times. While not everyone agreed with this extreme viewpoint, there was a consensus that religious influences on politics and conflict were a waning concern. […] This theory was dominant in sociology for much of the twentieth century and effectively dominated political science, under the title of modernization theory, for the same period. […] Today supporters of secularization theory are clearly in the minority. However, one of their legacies has been that research on religion and conflict is a relatively new field. […] Second, as recently as 2006, Brian Grim and Roger Finke lamented that “religion receives little attention in international quantitative studies. Including religion in cross-national studies requires data, and high-quality data are in short supply” […] availability of the necessary data to engage in quantitative research on religion and civil wars is a relatively recent development.”

“[Some] studies [have] found that conflicts involving actors making religious demands – such as demanding a religious state or a significant increase in religious legislation – were less likely to be resolved with negotiated settlements; a negotiated settlement is possible if the settlement focused on the non-religious aspects of the conflict […] One study of terrorism found that terror groups which espouse religious ideologies tend to be more violent (Henne, 2012). […] The clear majority of quantitative studies of religious conflict focus solely on inter-religious conflicts. Most of them find religious identity to influence the extent of conflict […] but there are some studies which dissent from this finding”.

“Terror is most often selected by groups that (1) have failed to achieve their goals through peaceful means, (2) are willing to use violence to achieve their goals, and (3) do not have the means for higher levels of violence.”

“the PITF dataset provides an accounting of the number of domestic conflicts that occurred in any given year between 1960 and 2009. […] Between 1960 and 2009 the modified dataset includes 817 years of ethnic war, 266 years of genocides/politicides, and 477 years of revolutionary wars. […] Cases were identified as religious or not religious based on the following categorization:
1 Not Religious.
2 Religious Identity Conflict: The two groups involved in the conflict belong to different religions or different denominations of the same religion.[11]
3 Religious Wars: The two sides of the conflict belong to the same religion but the description of the conflict provided by the PITF project identifies religion as being an issue in the conflict. This typically includes challenges by religious fundamentalists to more secular states. […]
The results show that both numerically and as a proportion of all conflict, religious state failures (which include both religious identity conflicts and religious wars) began increasing in the mid-1970s. […] As a proportion of all conflict, religious state failures continued to increase and became a majority of all state failures in 2002. From 2002 onward, religious state failures were between 55 percent and 62 percent of all state failures in any given year.”

“Between 2002 and 2009, eight of 12 new state failures were religious. All but one of the new religious state failures were ongoing as of 2009. These include:
• 2002: A rebellion in the Muslim north of the Ivory Coast (ended in 2007)
• 2003: The beginning of the Sunni–Shia violent conflict in Iraq (ongoing)
• 2003: The resumption of the ethnic war in the Sudan [97% muslims, US] (ongoing)
• 2004: Muslim militants challenged Pakistan’s government in South and North Waziristan. This has been followed by many similar attacks (ongoing)
• 2004: Outbreak of violence by Muslims in southern Thailand (ongoing)
• 2004: In Yemen [99% muslims, US], followers of dissident cleric Husain Badr al-Din al-Huthi create a stronghold in Saada. Al-Huthi was killed in September 2004, but serious fighting begins again in early 2005 (ongoing)
• 2007: Ethiopia’s invasion of southern Somalia causes a backlash in the Muslim (ethnic-Somali) Ogaden region (ongoing)
• 2008: Islamist militants in the eastern Trans-Caucasus region of Russia bordering on Georgia (Chechnya, Dagestan, and Ingushetia) reignited their violent conflict against Russia[12] (ongoing)” [my bold]

“There are few additional studies which engage in this type of longitudinal analysis. Perhaps the most comprehensive of such studies is presented in Toft et al.’s (2011) book God’s Century based on data collected by Toft. They found that religious conflicts – defined as conflicts with a religious content – rose from 19 percent of all civil wars in the 1940s to about half of civil wars during the first decade of the twenty-first century. Of these religious conflicts, 82 percent involved Muslims. This analysis includes only 135 civil wars during this period. The lower number is due to a more restrictive definition of civil war which includes at least 1,000 battle deaths. This demonstrates that the findings presented above also hold when looking at the most violent of civil wars.” [my bold]

April 22, 2015 Posted by | anthropology, books, data, demographics, Geography, history, religion | Leave a comment

Civil Wars (I)

“This comprehensive new Handbook explores the significance and nature of armed intrastate conflict and civil war in the modern world.

Civil wars and intrastate conflict represent the principal form of organised violence since the end of World War II, and certainly in the contemporary era. These conflicts have a huge impact and drive major political change within the societies in which they occur, as well as on an international scale. The global importance of recent intrastate and regional conflicts in Afghanistan, Pakistan, Iraq, Somalia, Nepal, Côte d’Ivoire, Syria and Libya – amongst others – has served to refocus academic and policy interest upon civil war. […] This volume will be of much interest to students of civil wars and intrastate conflict, ethnic conflict, political violence, peace and conflict studies, security studies and IR in general.”

I’m currently reading this handbook. One observation I’ll make here before moving on to the main coverage is that although I’ve read more than 100 pages, and although every single one of the conflicts argued in the introduction above to have motivated study of these topics – aside from one, the exception being Nepal – involves muslims, the word ‘islam’ has been mentioned exactly once in the coverage so far (an updated list would arguably include yet another muslim country, Yemen, as well). I noted while doing the text search that they seem to take up the topic of religion and religious motivation later on, so I sort of want to withhold judgment for now, but if they don’t deal more seriously with this topic later on than they have so far, I’ll have great difficulty giving this book a high rating, despite the coverage in general being quite interesting, detailed and well written so far. Chapter 7, on so-called ‘critical perspectives’, is in my opinion a load of crap [a few illustrative quotes/words/concepts from that chapter: “Frankfurt School-inspired Critical Theory”, “approaches such as critical constructivism, post-structuralism, feminism, post-colonialism”, “an openly ethical–normative commitment to human rights, progressive politics”, “labelling”, “dialectical”, “power–knowledge structures”, “conflict discourses”, “Foucault”, “an abiding commitment to being aware of, and trying to overcome, the Eurocentric, Orientalist and patriarchal forms of knowledge often prevalent within civil war studies”, “questioning both morally and intellectually the dominant paradigm”… I read the chapter very fast, to the point of almost only skimming it, and I have not quoted from that chapter in my coverage below, for reasons which should be obvious. I was reminded of Poe’s Corollary while reading the chapter, as I briefly wondered along the way whether the chapter was an elaborate joke which had somehow made it into the publication, and I was also briefly reminded of the Sokal affair, mostly because of the unbelievable amount of meaningless buzzwords], but that’s just one chapter and most of the others so far have been quite okay. A few of the points in the problematic chapter are arguably worth having in mind, but there’s so much bullshit included as well that you have a really hard time taking any of it seriously.

Some observations from the first 100 pages:

“There are wide differences of opinion across the broad field of scholars who work on civil war regarding the basis of legitimate and scientific knowledge in this area, on whether cross-national studies can generate reliable findings, and on whether objective, value-free analysis of armed conflict is possible. All too often – and perhaps increasingly so, with the rise in interest in econometric approaches – scholars interested in civil war from different methodological traditions are isolated from each other. […] even within the more narrowly defined empirical approaches to civil war studies there are major disagreements regarding the most fundamental questions relating to contemporary civil wars, such as the trends in numbers of armed conflicts, whether civil wars are changing in nature, whether and how international actors can have a role in preventing, containing and ending civil wars, and the significance of [various] factors”.

“In simplest terms civil war is a violent conflict between a government and an organized rebel group, although some scholars also include armed conflicts primarily between non-state actors within their study. The definition of a civil war, and the analytical means of differentiating a civil war from other forms of large-scale violence, has been controversial […] The Uppsala Conflict Data Program (UCDP) uses 25 battle-related deaths per year as the threshold to be classified as armed conflict, and – in common with other datasets such as the Correlates of War (COW) – a threshold of 1,000 battle-related deaths for a civil war. While this is now widely endorsed, debate remains regarding the rigor of this definition […] differences between two of the main quantitative conflict datasets – the UCDP and the COW – in terms of the measurement of armed conflict result in significant differences in interpreting patterns of conflict. This has led to conflicting findings not only about absolute numbers of civil wars, but also regarding trends in the numbers of such conflicts. […] According to the UCDP/PRIO data, from 1946 to 2011 a total of 102 countries experienced civil wars. Africa witnessed the most with 40 countries experiencing civil wars between 1946 and 2011. During this period 20 countries in the Americas experienced civil war, 18 in Asia, 13 in Europe, and 11 in the Middle East […]. There were 367 episodes (episodes in this case being separated by at least one year without at least 25 battle-related deaths) of civil wars from 1946 to 2009 […]. The number of active civil wars generally increased from the end of the Cold War to around 1992 […]. Since then the number has been in decline, although whether this is likely to be sustained is debatable. In terms of onset of first episode by region from 1946 to 2011, Africa leads the way with 75, followed by Asia with 67, the Western Hemisphere with 33, the Middle East with 29, and Europe with 25 […]. As Walter (2011) has observed, armed conflicts are increasingly concentrated in poor countries. […] UCDP reports 137 armed conflicts for the period 1989–2011. For the overlapping period 1946–2007, COW reports 179 wars, while UCDP records 244 armed conflicts. As most of these conflicts have been fought over disagreements relating to conditions within a state, it means that civil war has been the most common experience of war throughout this period.”
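
As an aside, the episode definition in the quote above (episodes separated by at least one year without at least 25 battle-related deaths) is easy to misread, so here is a minimal sketch of the counting rule as I read it. The function and the death counts are my own illustration, not UCDP code or data.

```python
# Minimal sketch of the quoted episode rule: consecutive years at or above
# the 25-battle-death threshold form one episode; at least one year below
# the threshold separates episodes. All death counts below are invented.
THRESHOLD = 25  # UCDP armed-conflict threshold quoted above

def count_episodes(deaths_by_year: dict) -> int:
    """Count conflict episodes in a {year: battle_deaths} series."""
    episodes, in_episode = 0, False
    for year in range(min(deaths_by_year), max(deaths_by_year) + 1):
        active = deaths_by_year.get(year, 0) >= THRESHOLD
        if active and not in_episode:
            episodes += 1  # a new episode starts
        in_episode = active
    return episodes

# Hypothetical conflict: active 1990-92, below threshold in 1993, active 1994.
print(count_episodes({1990: 400, 1991: 120, 1992: 60, 1993: 10, 1994: 90}))  # -> 2
```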

“There were 3 million deaths from civil wars with no international intervention between 1946 and 2008. There were 1.5 million deaths in wars where intervention occurred. […] In terms of region, there were approximately 350,000 civil war-related deaths in both Europe and the Middle East from the years 1946 to 2008. There were 467,000 deaths in the Western Hemisphere, 1.2 million in Africa, and 3.1 million in Asia for the same period […] In terms of historical patterns of civil wars and intrastate armed conflict more broadly, the most conspicuous trend in recent decades is an apparent decline in absolute numbers, magnitude, and impact of armed conflicts, including civil wars. While there is wide – but not total – agreement regarding this, the explanations for this downward trend are contested. […] the decline seems mainly due not to a dramatic decline of civil war onsets, but rather because armed conflicts are becoming shorter in duration and they are less likely to recur. While this is undoubtedly welcome – and so is the tendency of civil wars to be generally smaller in magnitude – it should not obscure the fact that civil wars are still breaking out at a rate that has been fairly static in recent decades.”

“there is growing consensus on a number of findings. For example, intrastate armed conflict is more likely to occur in poor, developing countries with weak state structures. In situations of weak states the presence of lootable natural resources and oil increase the likelihood of experiencing armed conflict. Dependency upon the export of primary commodities is also a vulnerability factor, especially in conjunction with drastic fluctuations in international market prices which can result in economic shocks and social dislocation. State weakness is relevant to this – and to most of the theories regarding armed conflict proneness – because such states are less able to cushion the impact of economic shocks. […] Authoritarian regimes as well as entrenched democracies are less likely to experience civil war than societies in-between […] Situations of partial or weak democracy (anocracy) and political transition, particularly a movement towards democracy in volatile or divided societies, are also strongly correlated to conflict onset. The location of a society – especially if it has other vulnerability factors – in a region which has contiguous neighbors which are experiencing or have experienced armed conflict is also an armed conflict risk.”

“Military intervention aimed at supporting a protagonist or influencing the outcome of a conflict tends to increase the intensity of civil wars and increase their duration […] It is commonly argued that wars ending with military victory are less likely to recur […]. In these terminations one side no longer exists as a fighting force. Negotiated settlements, on the other hand, are often unstable […] The World Development Report 2011 notes that 90 percent of the countries with armed conflicts taking place in the first decade of the 2000s also had a major armed conflict in the preceding 30 years […] of the 137 armed conflicts that were fought after 1989, 100 had ended by 2011, while 37 were still ongoing”

“Cross-national, aggregated, analysis has played a leading role in strengthening the academic and policy impact of conflict research through the production of rigorous research findings. However, the […] aggregation of complex variables has resulted in parsimonious findings which arguably neglect the complexity of armed conflict; simultaneously, differences in the codification and definition of key concepts result in contradictory findings. The growing popularity of micro-studies is therefore an important development in the field of civil war studies, and one that responds to the demand for more nuanced analysis of the dynamics of conflict at the local level.”

“Jason Quinn, University of Notre Dame, has calculated that the number of scholarly articles on the onset of civil wars published in the first decade of the twenty-first century is larger than the previous five decades combined”.

“One of the most challenging aspects of quantitative analysis is transforming social concepts into numerical values. This difficulty means that many of the variables used to capture theoretical constructs represent crude indicators of the real concept […] econometric studies of civil war must account for the endogenising effect of civil war on other variables. Civil war commonly lowers institutional capacity and reduces economic growth, two of the primary conditions that are consistently shown to motivate civil violence. Scholars have grown more capable of modelling this process […], but still too frequently fail to capture the endogenising effect of civil conflict on other variables […] the problems associated with the rare nature of civil conflict can [also] cause serious problems in a number of econometric models […] Case-based analysis commonly suffers from two fundamental problems: non-generalisability and selection bias. […] Combining research methods can help to enhance the validity of both quantitative and qualitative research. […] the combination of methods can help quantitative researchers address measurement issues, assess outliers, discuss variables omitted from the large-N analysis, and examine cases incorrectly predicted by econometric models […] The benefits of mixed methods research designs have been clearly illustrated in a number of prominent studies of civil war […] Yet unfortunately the bifurcation of conflict studies into qualitative and quantitative branches makes this practice less common than is desirable.”
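
The endogeneity point in that quote is worth seeing concretely. Below is a toy simulation, entirely my own construction rather than anything from the handbook: low growth raises conflict risk, and conflict in turn depresses observed growth, so a naive regression of conflict on observed growth exaggerates the strength of the growth-to-conflict channel.

```python
# Toy illustration of the endogeneity problem described above; all
# parameters and data are invented. Requires numpy.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
g0 = rng.normal(2.0, 1.0, n)             # growth each country would have absent conflict
conflict = (g0 + rng.normal(0.0, 1.0, n) < 1.0).astype(float)  # low growth -> conflict
g_obs = g0 - 1.5 * conflict              # conflict feeds back and depresses observed growth

def ols_slope(x, y):
    """OLS slope of y on x."""
    return np.polyfit(x, y, 1)[0]

print(ols_slope(g0, conflict))     # association through the causal channel only
print(ols_slope(g_obs, conflict))  # naive estimate, more negative due to reverse causation
```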

“Ethnography has elicited a lively critique from within and without anthropology. […] Ethnographers stand accused of argument by ostension (pointing at particular instances as indicative of a general trend). The instances may not even be true. This is one of the reasons that the economist Paul Collier rejected ethnographic data as a source of insight into the causes of civil wars (Collier 2000b). According to Collier, the ethnographer builds on anecdotal evidence offered by people with good reasons to fabricate their accounts. […] The story fits the fact. But so might other stories. […] [It might be categorized as] a discipline that still combines a mix of painstaking ethnographic documentation with brilliant flights of fancy, and largely leaves numbers on one side.”

“While macro-historical accounts convincingly argue for the centrality of the state to the incidence and intensity of civil war, there is a radical spatial unevenness to violence in civil wars that defies explanation at the national level. Villages only a few miles apart can have sharply contrasting experiences of conflict and in most civil wars large swathes of territory remain largely unaffected by violence. This unevenness presents a challenge to explanations of conflict that treat states or societies as the primary unit of analysis. […] A range of databases of disaggregated data on incidences of violence have recently been established and a lively publication programme has begun to explore sub-national patterns of distribution and diffusion of violence […] All of these developments testify to a growing recognition across the social sciences that spatial variation, territorial boundaries and bounding processes are properly located at the heart of any understanding of the causes of civil war. It suggests too that sub-national boundaries in their various forms – whether regional or local boundaries, lines of control established by rebels or no-go areas for state security forces – need to be analysed alongside national borders and in a geopolitical context. […] In both violent and non-violent contention local ‘safe territories’ of one kind or another are crucial to the exercise of power by challengers […] the generation of violence by insurgents is critically affected by logistics (e.g. roads), but also shelter (e.g. forests) […] Schutte and Weidmann (2011) offer a […] dynamic perspective on the diffusion of insurgent violence. Two types of diffusion are discussed; relocation diffusion occurs when the conflict zone is shifted to new locations, whereas escalation diffusion corresponds to an expansion of the conflict zone. They argue that the former should be a feature of conventional civil wars with clear frontlines, whereas the latter should be observed in irregular wars, an expectation that is borne out by the data.”

“Research on the motivation of armed militants in social movement scholarship emphasises the importance of affective ties, of friendship and kin networks and of emotion […] Sageman’s (2004, 2008) meticulous work on Salafist-inspired militants emphasises that mobilisation is a collective rather than individual process and highlights the importance of inter-personal ties, networks of friendship, family and neighbours. That said, it is clear that there is a variety of pathways to armed action on the part of individuals rather than one single dominant motivation”.

“While it is often difficult to conduct real experiments in the study of civil war, the micro study of violence has seen a strong adoption of quasi-experimental designs and in general, a more careful thinking about causal identification”.

“Condra and Shapiro (2012) present one of the first studies to examine the effects of civilian targeting in a micro-level study. […] they show that insurgent violence increases as a result of civilian casualties caused by counterinsurgent forces. Similarly, casualties inflicted by the insurgents have a dampening effect on insurgent effectiveness. […] The conventional wisdom in the civil war literature has it that indiscriminate violence by counterinsurgent forces plays into the hands of the insurgents. After being targeted collectively, the aggrieved population will support the insurgency even more, which should result in increased insurgent effectiveness. Lyall (2009) conducts a test of this relationship by examining the random shelling of villages from Russian bases in Chechnya. He matches shelled villages with those that have similar histories of violence, and examines the difference in insurgent violence between treatment and control villages after an artillery strike. The results clearly disprove conventional wisdom and show that shelling reduces subsequent insurgent violence. […] Other research in this area has looked at alternative counterinsurgency techniques, such as aerial bombings. In an analysis that uses micro-level data on airstrikes and insurgent violence, Kocher et al. (2011) show that, counter to Lyall’s (2009) findings, indiscriminate violence in the form of airstrikes against villages in the Vietnam war was counterproductive […] Data availability […] partly dictates what micro-level questions we can answer about civil war. […] not many conflicts have datasets on bombing sorties, such as the one used by Kocher et al. (2011) for the Vietnam war.”
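
To make the matching logic of the Lyall (2009) design concrete, here is a bare-bones nearest-neighbour matching sketch of the general approach: pair each shelled (“treated”) village with the untreated village whose pre-strike violence history is closest, then average the post-period differences. This is my own toy illustration with simulated numbers, not Lyall’s actual estimator or data.

```python
# Bare-bones nearest-neighbour matching on pre-treatment violence history;
# a sketch of the general quasi-experimental logic only. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 200
pre = rng.poisson(5.0, n)                       # pre-period insurgent attacks per village
treated = rng.random(n) < 0.3                   # which villages were shelled (random here)
post = pre + rng.poisson(2.0, n) - 3 * treated  # toy post-period outcome

treat_idx = np.where(treated)[0]
control_idx = np.where(~treated)[0]

effects = []
for i in treat_idx:
    # Match each treated village to the control village with the most
    # similar pre-treatment violence history.
    j = control_idx[np.argmin(np.abs(pre[control_idx] - pre[i]))]
    effects.append(post[i] - post[j])

print("Estimated effect of shelling on post-period violence:", np.mean(effects))
```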

April 21, 2015 Posted by | anthropology, data, econometrics, history | Leave a comment

Wikipedia articles of interest

i. Lock (water transport). Zumerchik and Danver’s book covered this kind of stuff as well, sort of, and I figured that since I’m not going to blog the book – for reasons provided in my goodreads review here – I might as well add a link or two here instead. The words ‘sort of’ above are in my opinion justified because the book coverage is so horrid you’d never even know what a lock is used for from reading that book; you’d need to look that up elsewhere.

On a related note there’s a lot of stuff in that book about the history of water transport etc. which you probably won’t get from these articles, but having a look here will give you some idea about what sorts of topics many of the chapters of the book deal with. Also, stuff like this and this. The book’s coverage of the latter topic is incidentally much, much more detailed than the wiki article’s, and the article – as well as many other articles about related topics (economic history, etc.) on the wiki, to the extent that they even exist – could clearly be improved greatly by adding content from books like this one. However, I’m not going to be the guy doing that.

ii. Congruence (geometry).

iii. Geography and ecology of the Everglades (featured).

I’d note that this is a topic which seems to be reasonably well covered on wikipedia; there’s for example also a ‘good article’ on the Everglades and a featured article about the Everglades National Park. A few quotes and observations from the article:

“The geography and ecology of the Everglades involve the complex elements affecting the natural environment throughout the southern region of the U.S. state of Florida. Before drainage, the Everglades were an interwoven mesh of marshes and prairies covering 4,000 square miles (10,000 km2). […] Although sawgrass and sloughs are the enduring geographical icons of the Everglades, other ecosystems are just as vital, and the borders marking them are subtle or nonexistent. Pinelands and tropical hardwood hammocks are located throughout the sloughs; the trees, rooted in soil inches above the peat, marl, or water, support a variety of wildlife. The oldest and tallest trees are cypresses, whose roots are specially adapted to grow underwater for months at a time.”

“A vast marshland could only have been formed due to the underlying rock formations in southern Florida.[15] The floor of the Everglades formed between 25 million and 2 million years ago when the Florida peninsula was a shallow sea floor. The peninsula has been covered by sea water at least seven times since the earliest bedrock formation. […] At only 5,000 years of age, the Everglades is a young region in geological terms. Its ecosystems are in constant flux as a result of the interplay of three factors: the type and amount of water present, the geology of the region, and the frequency and severity of fires. […] Water is the dominant element in the Everglades, and it shapes the land, vegetation, and animal life of South Florida. The South Florida climate was once arid and semi-arid, interspersed with wet periods. Between 10,000 and 20,000 years ago, sea levels rose, submerging portions of the Florida peninsula and causing the water table to rise. Fresh water saturated the limestone, eroding some of it and creating springs and sinkholes. The abundance of fresh water allowed new vegetation to take root, and through evaporation formed thunderstorms. Limestone was dissolved by the slightly acidic rainwater. The limestone wore away, and groundwater came into contact with the surface, creating a massive wetland ecosystem. […] Only two seasons exist in the Everglades: wet (May to November) and dry (December to April). […] The Everglades are unique; no other wetland system in the world is nourished primarily from the atmosphere. […] Average annual rainfall in the Everglades is approximately 62 inches (160 cm), though fluctuations of precipitation are normal.”

“Between 1871 and 2003, 40 tropical cyclones struck the Everglades, usually every one to three years.”

“Islands of trees featuring dense temperate or tropical trees are called tropical hardwood hammocks.[38] They may rise between 1 and 3 feet (0.30 and 0.91 m) above water level in freshwater sloughs, sawgrass prairies, or pineland. These islands illustrate the difficulty of characterizing the climate of the Everglades as tropical or subtropical. Hammocks in the northern portion of the Everglades consist of more temperate plant species, but closer to Florida Bay the trees are tropical and smaller shrubs are more prevalent. […] Islands vary in size, but most range between 1 and 10 acres (0.40 and 4.05 ha); the water slowly flowing around them limits their size and gives them a teardrop appearance from above.[42] The height of the trees is limited by factors such as frost, lightning, and wind: the majority of trees in hammocks grow no higher than 55 feet (17 m). […] There are more than 50 varieties of tree snails in the Everglades; the color patterns and designs unique to single islands may be a result of the isolation of certain hammocks.[44] […] An estimated 11,000 species of seed-bearing plants and 400 species of land or water vertebrates live in the Everglades, but slight variations in water levels affect many organisms and reshape land formations.”

“Because much of the coast and inner estuaries are built by mangroves—and there is no border between the coastal marshes and the bay—the ecosystems in Florida Bay are considered part of the Everglades. […] Sea grasses stabilize sea beds and protect shorelines from erosion by absorbing energy from waves. […] Sea floor patterns of Florida Bay are formed by currents and winds. However, since 1932, sea levels have been rising at a rate of 1 foot (0.30 m) per 100 years.[81] Though mangroves serve to build and stabilize the coastline, seas may be rising more rapidly than the trees are able to build.[82]”

iv. Chang and Eng Bunker. Not a long article, but interesting:

Chang (Chinese: ; pinyin: Chāng; Thai: จัน, Jan, RTGS: Chan) and Eng (Chinese: ; pinyin: Ēn; Thai: อิน In) Bunker (May 11, 1811 – January 17, 1874) were Thai-American conjoined twin brothers whose condition and birthplace became the basis for the term “Siamese twins”.[1][2][3]

I loved some of the implicit assumptions in this article: “Determined to live as normal a life as they could, Chang and Eng settled on their small plantation and bought slaves to do the work they could not do themselves. […] Chang and Adelaide [his wife] would become the parents of eleven children. Eng and Sarah [‘the other wife’] had ten.”

A ‘normal life’ indeed… The women the twins married were incidentally sisters who ended up disliking each other (I can’t imagine why…).

v. Genie (feral child). This is a very long article, and you should be warned that many parts of it may not be pleasant to read. From the article:

Genie (born 1957) is the pseudonym of a feral child who was the victim of extraordinarily severe abuse, neglect and social isolation. Her circumstances are prominently recorded in the annals of abnormal child psychology.[1][2] When Genie was a baby her father decided that she was severely mentally retarded, causing him to dislike her and withhold as much care and attention as possible. Around the time she reached the age of 20 months Genie’s father decided to keep her as socially isolated as possible, so from that point until she reached 13 years, 7 months, he kept her locked alone in a room. During this time he almost always strapped her to a child’s toilet or bound her in a crib with her arms and legs completely immobilized, forbade anyone from interacting with her, and left her severely malnourished.[3][4][5] The extent of Genie’s isolation prevented her from being exposed to any significant amount of speech, and as a result she did not acquire language during childhood. Her abuse came to the attention of Los Angeles child welfare authorities on November 4, 1970.[1][3][4]

In the first several years after Genie’s early life and circumstances came to light, psychologists, linguists and other scientists focused a great deal of attention on Genie’s case, seeing in her near-total isolation an opportunity to study many aspects of human development. […] In early January 1978 Genie’s mother suddenly decided to forbid all of the scientists except for one from having any contact with Genie, and all testing and scientific observations of her immediately ceased. Most of the scientists who studied and worked with Genie have not seen her since this time. The only post-1977 updates on Genie and her whereabouts are personal observations or secondary accounts of them, and all are spaced several years apart. […]

Genie’s father had an extremely low tolerance for noise, to the point of refusing to have a working television or radio in the house. Due to this, the only sounds Genie ever heard from her parents or brother on a regular basis were noises when they used the bathroom.[8][43] Although Genie’s mother claimed that Genie had been able to hear other people talking in the house, her father almost never allowed his wife or son to speak and viciously beat them if he heard them talking without permission. They were particularly forbidden to speak to or around Genie, so what conversations they had were therefore always very quiet and out of Genie’s earshot, preventing her from being exposed to any meaningful language besides her father’s occasional swearing.[3][13][43] […] Genie’s father fed Genie as little as possible and refused to give her solid food […]

In late October 1970, Genie’s mother and father had a violent argument in which she threatened to leave if she could not call her parents. He eventually relented, and later that day Genie’s mother was able to get herself and Genie away from her husband while he was out of the house […] She and Genie went to live with her parents in Monterey Park.[13][20][56] Around three weeks later, on November 4, after being told to seek disability benefits for the blind, Genie’s mother decided to do so in nearby Temple City, California and brought Genie along with her.[3][56]

On account of her near-blindness, instead of the disabilities benefits office Genie’s mother accidentally entered the general social services office next door.[3][56] The social worker who greeted them instantly sensed something was not right when she first saw Genie and was shocked to learn Genie’s true age was 13, having estimated from her appearance and demeanor that she was around 6 or 7 and possibly autistic. She notified her supervisor, and after questioning Genie’s mother and confirming Genie’s age they immediately contacted the police. […]

Upon admission to Children’s Hospital, Genie was extremely pale and grossly malnourished. She was severely undersized and underweight for her age, standing 4 ft 6 in (1.37 m) and weighing only 59 pounds (27 kg) […] Genie’s gross motor skills were extremely weak; she could not stand up straight nor fully straighten any of her limbs.[83][84] Her movements were very hesitant and unsteady, and her characteristic “bunny walk”, in which she held her hands in front of her like claws, suggested extreme difficulty with sensory processing and an inability to integrate visual and tactile information.[62] She had very little endurance, only able to engage in any physical activity for brief periods of time.[85] […]

Despite tests conducted shortly after her admission which determined Genie had normal vision in both eyes, she could not focus them on anything more than 10 feet (3 m) away, which corresponded to the dimensions of the room she was kept in.[86] She was also completely incontinent, and gave no response whatsoever to extreme temperatures.[48][87] As Genie never ate solid food as a child she was completely unable to chew and had very severe dysphagia, completely unable to swallow any solid or even soft food and barely able to swallow liquids.[80][88] Because of this she would hold anything which she could not swallow in her mouth until her saliva broke it down, and if this took too long she would spit it out and mash it with her fingers.[50] She constantly salivated and spat, and continually sniffed and blew her nose on anything that happened to be nearby.[83][84]

Genie’s behavior was typically highly anti-social, and proved extremely difficult for others to control. She had no sense of personal property, frequently pointing to or simply taking something she wanted from someone else, and did not have any situational awareness whatsoever, acting on any of her impulses regardless of the setting. […] Doctors found it extremely difficult to test Genie’s mental age, but on two attempts they found Genie scored at the level of a 13-month-old. […] When upset Genie would wildly spit, blow her nose into her clothing, rub mucus all over her body, frequently urinate, and scratch and strike herself.[102][103] These tantrums were usually the only times Genie was at all demonstrative in her behavior. […] Genie clearly distinguished speaking from other environmental sounds, but she remained almost completely silent and was almost entirely unresponsive to speech. When she did vocalize, it was always extremely soft and devoid of tone. Hospital staff initially thought that the responsiveness she did show to them meant she understood what they were saying, but later determined that she was instead responding to nonverbal signals that accompanied their speaking. […] Linguists later determined that in January 1971, two months after her admission, Genie only showed understanding of a few names and about 15–20 words. Upon hearing any of these, she invariably responded to them as if they had been spoken in isolation. Hospital staff concluded that her active vocabulary at that time consisted of just two short phrases, “stop it” and “no more”.[27][88][99] Beyond negative commands, and possibly intonation indicating a question, she showed no understanding of any grammar whatsoever. […] Genie had a great deal of difficulty learning to count in sequential order. During Genie’s stay with the Riglers, the scientists spent a great deal of time attempting to teach her to count. She did not start to do so at all until late 1972, and when she did her efforts were extremely deliberate and laborious. By 1975 she could only count up to 7, which even then remained very difficult for her.”

“From January 1978 until 1993, Genie moved through a series of at least four additional foster homes and institutions. In some of these locations she was further physically abused and harassed to extreme degrees, and her development continued to regress. […] Genie is a ward of the state of California, and is living in an undisclosed location in the Los Angeles area.[3][20] In May 2008, ABC News reported that someone who spoke under condition of anonymity had hired a private investigator who located Genie in 2000. She was reportedly living a relatively simple lifestyle in a small private facility for mentally underdeveloped adults, and appeared to be happy. Although she only spoke a few words, she could still communicate fairly well in sign language.[3]

April 20, 2015 Posted by | biology, books, Geography, history, mathematics, Psychology, wikipedia | Leave a comment

Stuff

i. World Happiness Report 2013. A few figures from the publication:

Fig 2.2, Fig 2.4, Fig 2.5 [figures omitted]

ii. Searching for Explanations: How the Internet Inflates Estimates of Internal Knowledge.

“As the Internet has become a nearly ubiquitous resource for acquiring knowledge about the world, questions have arisen about its potential effects on cognition. Here we show that searching the Internet for explanatory knowledge creates an illusion whereby people mistake access to information for their own personal understanding of the information. Evidence from 9 experiments shows that searching for information online leads to an increase in self-assessed knowledge as people mistakenly think they have more knowledge “in the head,” even seeing their own brains as more active as depicted by functional MRI (fMRI) images.”

A little more from the paper:

“If we go to the library to find a fact or call a friend to recall a memory, it is quite clear that the information we seek is not accessible within our own minds. When we go to the Internet in search of an answer, it seems quite clear that we are consciously seeking outside knowledge. In contrast to other external sources, however, the Internet often provides much more immediate and reliable access to a broad array of expert information. Might the Internet’s unique accessibility, speed, and expertise cause us to lose track of our reliance upon it, distorting how we view our own abilities? One consequence of an inability to monitor one’s reliance on the Internet may be that users become miscalibrated regarding their personal knowledge. Self-assessments can be highly inaccurate, often occurring as inflated self-ratings of competence, with most people seeing themselves as above average [here’s a related link] […] For example, people overestimate their own ability to offer a quality explanation even in familiar domains […]. Similar illusions of competence may emerge as individuals become immersed in transactive memory networks. They may overestimate the amount of information contained in their network, producing a “feeling of knowing,” even when the content is inaccessible […]. In other words, they may conflate the knowledge for which their partner is responsible with the knowledge that they themselves possess (Wegner, 1987). And in the case of the Internet, an especially immediate and ubiquitous memory partner, there may be especially large knowledge overestimations. As people underestimate how much they are relying on the Internet, success at finding information on the Internet may be conflated with personally mastered information, leading Internet users to erroneously include knowledge stored outside their own heads as their own. That is, when participants access outside knowledge sources, they may become systematically miscalibrated regarding the extent to which they rely on their transactive memory partner. It is not that they misattribute the source of their knowledge; they could know full well where it came from. Rather, they may inflate the sense of how much of the sum total of knowledge is stored internally.

We present evidence from nine experiments that searching the Internet leads people to conflate information that can be found online with knowledge “in the head.” […] The effect derives from a true misattribution of the sources of knowledge, not a change in understanding of what counts as internal knowledge (Experiment 2a and b) and is not driven by a “halo effect” or general overconfidence (Experiment 3). We provide evidence that this effect occurs specifically because information online can so easily be accessed through search (Experiment 4a–c).”

iii. Some words I’ve recently encountered on vocabulary.com: hortatory, adduce, obsequious, enunciate, ineluctable, guerdon, chthonic, condign, philippic, coruscate, exceptionable, colophon, lapidary, rubicund, frumpish, raiment, prorogue, sonorous, metonymy.

iv. [embedded lecture video]

v. I have no idea how accurate this test of chess strength is (some people in this thread argue that there are probably some calibration issues at the low end), but I thought I should link to it anyway. I’d be very cautious about drawing strong conclusions about over-the-board strength without knowing how they’ve validated the tool. In over-the-board chess you have, at minimum, a couple of minutes per move on average, whereas this tool never gives you more than 30 seconds, so slow players will probably suffer when using it (I’d imagine this is why u/ViktorVamos got such a low estimate). For what it’s worth, my Elo estimate was 2039 (95% CI: 1859, 2220).

In related news, I recently defeated my first IM – Pablo Garcia Castro – in a blitz (3 minutes/player) game. It actually felt a bit like an anticlimax, and afterwards I was thinking that it would probably have felt like a bigger deal if I hadn’t lately been getting used to winning the occasional bullet game against IMs on the ICC. Actually I think my two wins against WIM Shiqun Ni during the same bullet session felt like a bigger accomplishment, because that session was played during the Women’s World Chess Championship, and while looking up my opponent I realized that she was actually stronger than one of the contestants who made it to the quarter-finals in that event (Meri Arabidze). On the other hand bullet isn’t really chess, so…

April 15, 2015 Posted by | astronomy, Chess, Lectures, papers, Psychology | 2 Comments

Curiosity… (2)

Here’s the first post about the book. This post will cover some of the stuff included in the remaining chapters of the book.

“It’s not easy to get an accurate or reliable picture of children’s curiosity at school. To begin with, the data are, almost by definition, descriptive. We can watch to see how many questions children ask, how often they tinker, open, take apart, or watch — but it’s virtually impossible to track the thoughts of twenty-three children during a classroom activity. However, we can measure how much curiosity children express while they are in school. […] We wanted to find out whether children expressed curiosity when they began grade school, and how different things looked by the time children were finished. We recorded ten hours in each of five kindergarten classrooms and five fifth-grade classrooms. Each time we visited, we recorded the children for two hours. […] Three students were trained to code the data, and achieved a high rate of inter-coder reliability. It turned out it’s not all that hard to spot curiosity in action. But what we found took us aback. Or rather what we didn’t find. On average, in any given kindergarten classroom, there were 2.36 episodes of curiosity in a two-hour stretch. Expressions of curiosity were even scarcer in the older grades. The average number of episodes in a fifth-grade classroom was 0.48. In other words, on average, classroom activity over a two-hour stretch included less than one expression of curiosity. In the schools we studied, the expression of curiosity was, at best, infrequent. Nine of the ten classrooms had at least one two-hour stretch where there were no expressions of curiosity. In other words, we rarely saw children take things apart, ask questions about topics either children or adults had raised, watch interesting phenomena unfold in front of their eyes, or in any way show signs that there were things they were eager to know more about, much less actually follow up with any visible sort of investigation, whether in words or actions. The easiest interpretation is that children are simply less curious by the time they are in kindergarten and grow even less so by the end of grade school. However, the data don’t support that conclusion. For one thing, we saw as much variation between classrooms as we did between grade levels.”
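
A quick note on the ‘high rate of inter-coder reliability’ mentioned in the quote: a standard way to quantify agreement between two coders is Cohen’s kappa, which corrects raw agreement for chance. The sketch below implements it from the definition; the labels are invented, and I don’t know which statistic the authors actually reported (with three coders, something like Fleiss’ kappa or pairwise Cohen’s kappa would be the usual choice).

```python
# Cohen's kappa from its definition: observed agreement corrected for the
# agreement expected by chance. Example labels are invented; 1 = segment
# coded as a curiosity episode, 0 = not.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
b = [1, 0, 0, 1, 0, 1, 0, 1, 0, 0]
print(round(cohens_kappa(a, b), 3))  # 0.783 here; values near 1 mean high reliability
```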

“Our discovery, that there is little curiosity in grade school, is confirmed by the work others have done. Recall that Tizard and Hughes fitted preschoolers with tape recorders to get a picture of how many questions they asked at home with their parents (the answer […] is that preschoolers ask a lot of questions). However, Tizard and Hughes also recorded those same children when they went to preschool (1984). Once inside a school building, the picture changes dramatically. While the preschoolers they studied asked, on average, twenty-six questions per hour at home, that rate dropped to two per hour when the children were in school. […] One striking feature […] was how curious children were about anything that seemed exotic to them. Topics that led to a series of eager questions included the Rocky Mountains, Pangaea, Venus flytraps, unusual geometric shapes, trips to Mexico, and the Australopithecus Lucy’s descendants. But their episodes of curiosity were brief, often fleeting. Some 78 percent of the curiosity episodes involved fewer than four conversational turns. We also timed these sequences, since we were interested in nonverbal inquiry. Not one episode lasted longer than six minutes, and all but three lasted less than three minutes. We never saw an episode of curiosity that led to a more structured classroom activity, or that redirected a classroom discussion for more than a few moments.”

“Our impression was that most of the time teachers had very specific objectives for each stretch of time, and that a great deal of effort was put into keeping children on task and in reaching those objectives. […] Mastery rather than inquiry seemed to be the dominant goal for almost all the classrooms in which we observed. Often it seemed that finishing specific assignments (worksheets, writing assignments) was an even more salient goal than actually learning the material. In other words, the structure of the classroom made it clear that the educational activities we saw were not designed to encourage curiosity — nor were teachers using the children’s curiosity as a guide to what and how to teach. […] in the classrooms we visited, there was little or no evidence that an implicit or explicit goal of the curriculum was to help children pose questions. […] an important but easily overlooked distinction [is] between children’s engagement and children’s curiosity. A teacher can be talking about things that captivate the students, and the students can be deeply interested in a topic — quite engaged in a discussion or activity. But that in and of itself doesn’t mean the children are asking questions, or that their questions reflect curiosity. […] a key finding of our research so far [is that often] the reason children ask few questions, and fail to examine objects or tinker with things, is that the teacher feels such exploration would get in the way of learning. I have even heard teachers say as much. […] “I can’t answer questions right now. Now it’s time for learning.” […] A student and I sent out surveys to 114 teachers. In one part of the survey, they were asked to list the five skills or attributes they most wanted to instill or encourage in their students over the course of the school year. In the second part of the survey they were asked to circle five such desirable attributes from a list of ten. The list included words like “polite,” “cooperative,” “thoughtful,” “knowledgeable,” and also “curious.” Some 77 percent of the teachers surveyed circled “curious” as one of their top five. However, when asked to come up with their own ideas, only twenty-three listed curiosity. […] The impediments to curiosity in school consist of more than just the absence of enthusiasm for it. There are also powerful, somewhat invisible forces working against the expression and cultivation of curiosity in classrooms. Two primary impediments are the way in which plans and scripts govern what happens in most classrooms, and the pressure to get a lot of things “done” each day. […] Once children get to school, they exhibit a lot less curiosity. They ask fewer questions, examine objects less frequently and less thoroughly, and in general seem less inclined to persevere in sating their appetite for information.”

“When children have trouble learning, we think we need to teach it in a different way, or impress upon them the importance or usefulness of what they are learning. We encourage them to try harder, or spend more time trying to learn, even though it’s usually more effective to elicit their interest in the material. […] Several studies confirm the commonsense idea that children remember text better, and understand it more fully, when it has piqued their interest in one way or another (Silvia 2006; Knobloch et al. 2004).”

“Some would argue that the work of researchers like Robert Bjork (Bjork and Linn 2006) and Nate Kornell (Kornell and Bjork 2008) demonstrates that difficulty is key to learning. In what is now a large series of studies, researchers have shown that when students struggle a bit with the material they are learning, they learn it better.”

“Though researchers and teachers must deal with the fact that there are significant individual differences in what stirs a child’s interest or urge to know more, it is also possible to identify some general qualities that seem to make an object or a topic more or less intriguing to the majority of students. […] In the observations of curiosity that my students and I have done in classrooms, we have noticed one […] topic that consistently sparked children’s curiosity — intellectual exotica. […] Often what ignited a line of questioning was a reference to something outside the children’s zone of familiarity — unfamiliar places, historically distant times. […] children are often as curious about things they cannot see, touch, or directly experience as they are about what is going on right around them. […] the more unknown and unfamiliar a topic, and the denser with details its presentation, the more it may invite learning. […] The characteristics that fuel curiosity are not mysterious. Adults who use words and facial expressions to encourage children to explore; access to unexpected, opaque, and complex materials and topics; a chance to inquire with others; and plenty of suspense . . . these turn out to be the potent ingredients.”

“children are frequently privy to language not directed at them. The conversations adults have with one another influence how children talk and think. […] By the time children are four or so, they not only listen to their parents talk about other people — they also begin, in fledgling form, to gossip themselves. […] Daniela O’Neill and her colleagues tape-recorded the snack-time conversations of twenty-five preschoolers over a period of twenty-five weeks. Over 77 percent of the conversations children initiated with one another referenced other people, and nearly 30 percent mentioned people’s mental states. […] Peggy Miller’s work (Miller et al. 1992) shows that by the time children are five, more of their stories include information not just about themselves, but about themselves in relation to other people.”

“Sandra Hofferth and John Sandberg (2001) drew subjects from the 1997 Child Health Development Supplement to the Panel Study of Income Dynamics, a thirty-year longitudinal survey of a representative sample of families. […] While three-to-five-year-olds spent approximately seventeen hours a week in free play, most of them spent less than one hour a week outside, and less than two hours a week reading. By the time children were nine years old, they spent no more time outside, and far less time in free play (just under nine hours a week). They spent even less time reading (one and a quarter hours per week).”

“In an examination of how adults use the Internet to pursue a recreational interest in genealogy, Crystal Fulton (2009) found a link between amount of pleasure and effective persistent information-foraging strategies. The key to her argument is the role of time — she points out that when students feel pressured to complete an assignment, they experience less pleasure, and also engage in less thorough search behavior. That finding is replicated in a wide range of studies of online foraging.”

“The children who will get the most out of opportunities to work on their own (deciding what to tackle, and what to concentrate on) are the ones who can stay focused, stick with a question, and plan how to solve whatever problem intrigues them. In other words, at their best, autonomy and self-regulation go hand in hand. But in the world of real classrooms, every teacher must figure out how to balance the two. If a child doesn’t seem to have a great deal of perseverance, focus, or self-control, the teacher must decide whether to give him more autonomy so that he has a chance to develop self-regulation, or whether to make autonomy the prize for self-control. […] This book for the most part has not focused on fleeting moments of curiosity, but on the kind of curiosity that persists, unfolding over time and leading to sustained action (inquiry, discovery, tinkering, question asking, observation, research, reflection). Such sustained inquiry may be more likely to blossom when children have free time, and some time alone.”

“Many teachers […] discourage uncertainty, emphasizing instead what they know, or feel the students should know. They are more comfortable encouraging students to learn trustworthy information than to explore questions to which they themselves do not know the answer. Instead of using school as a place to formalize and extend the power of a young child’s zest for tackling the unknown or uncertain, teachers tend to squelch curiosity. They don’t do this out of meanness, or small-mindedness. They do it in the interests of making sure children master certain skills and established facts. While an emphasis on acquiring knowledge is reasonable, discouraging the disposition that leads to gaining new knowledge squanders a child’s most formidable learning tool. […] curiosity takes time to unfold, and even more time to bear fruit. In order to help children build on their curiosity, teachers have to be willing to spend time doing so. Nurturing curiosity takes time, but also saturation. It cannot be confined to science class. […] Teachers should provide children with interesting materials, seductive details, and desirable difficulty. Instead of presenting children with material that has been made as straightforward and digested as possible, teachers should make sure their students encounter objects, texts, environments, and ideas that will draw them in and pique their curiosity. […] to cultivate students’ curiosity, teachers need to give them both time to seek answers and guidance about various routes to getting answers, such as looking things up in reliable sources or testing hypotheses.”

“Few teachers readily see that they’re discouraging students’ questions, just as few parents readily see that they’re short-tempered with their children. […] One of the key findings of research is that children are heavily influenced not only by what adults say to them, but also by how the adults themselves behave. If schools value children’s curiosity, they’ll need to hire teachers who are curious. It is hard to fan the flames of a drive you yourself rarely experience. Many principals hire teachers who seem smart, who like children, and who have the kind of drive that supports academic achievement. They know that teachers who possess these qualities will foster the same in their students. Why not put curiosity at the top of the list of criteria for good teachers? […] in order to flourish, curiosity needs to be cultivated.”

April 15, 2015 Posted by | books, Psychology | Leave a comment

The hungry mind: The origins of curiosity in childhood (I)

“I will […] argue that curiosity is a fragile seed — for some the seed bears fruit, and for others, it shrivels and dies all too soon. By the time a child is five years old, his curiosity has been carved to reflect his personality, family life, daily encounters, and school experience. By the time that five-year-old is twenty-two, the intensity and object of his curiosity has become a defining, though often invisible part of who he is — something that will shape much of his future life. But the journey curiosity takes, from a universal and ubiquitous characteristic, one that accompanies much of the infant’s daily experience, to a quality that defines certain adults and barely exists in others, is subtle. In the chapters that follow, I’ll try to show that there are several sources of individual variation, and each has its developmental moment. Attachment in toddlerhood, language in the three-year-old, and a succession of environmental limitations and open doors all contribute to a person’s particular kind and intensity of curiosity. […] This book is about why some children remain curious and others do not, and how we can encourage more curiosity in everyone.”

Here’s what I wrote about the book on goodreads:

“I’d expected more from a Harvard University Press publication. The book has too many personal anecdotes and too much speculation, and not enough data; also, the coverage would have benefited from the author being more familiar with ethological research such as e.g. some of the stuff included in Natural Conflict Resolution. However it was interesting enough for me to read it to the end, despite the format, and I assume many people who don’t mind reading popular science books might like the book.”

I’ve mentioned before that my expectations depend, a bit, on who the publisher is; I have one set of (implicit) criteria for books published by academic publishers, and a different set of (implicit) criteria which needs to be met if the book is published by other publishing companies. Over the last couple of years I’ve pretty much exclusively read academic publications (I think I read two or three non-academic non-fiction publications last year, out of 72), but at least I’m aware there’s an argument to be made for having different standards for different kinds of books. I gave this book two stars, and part of the reason it did not get a higher rating is that it is precisely the kind of publication I’m actively trying to avoid by sticking to academic publishers. I don’t care about reading anecdotes about somebody’s grandmother, and I don’t need two-page-long anecdotes used to introduce readers to relatively simple concepts which could be covered in a paragraph by a skilled textbook author. I consider much of the fluff in normal popular science publications to be a waste of my time, and I get annoyed and confused when I find that kind of stuff in supposedly academic publications (this book was published by Harvard University Press). The book is not bad and it has some interesting ideas, but there’s way too much fluff for my taste. In this post I’ll talk a little about some of the ideas presented in the first four chapters of the book.

This observation, made early in the coverage, is arguably one of the most important things to take away from the book: “People who are curious learn more than people who are not, and people learn more when they are curious than when they are not.”

Attention is an important variable in the learning context, and curiosity helps with that; the author notes both that it’s quite obvious that curiosity helps children learn (much of the book is about the curiosity of children), and that we don’t actually know a great deal about how to make children curious about stuff in order to help them learn – this is not something people have researched very much. I find this… curious. An important observation in that context is, however, that we do know that curiosity is not what you might term dimension-less; people are curious about different things, and children are most curious when they are given the opportunity to inquire about things that mystify them or attract their attention. Research indicates that children are very curious early on in their lives (babies, toddlers), and that curiosity then seems to decline later on. One way to think about this is that babies don’t yet have good working models of what to expect will happen in the world around them given specific input, in part because they don’t have a lot of experience, so they’re often surprised; later on, they come to expect certain things to happen in specific ways (gravity causes both the plate and the cup (and the cutlery…) to drop to the floor if you pick them up and throw them – my example, derived from avuncular experience…), and as their working models improve, habituation kicks in and removes the need to attend to the inputs which previously demanded their attention, freeing up mental resources which can then be devoted to other purposes. Actually, adults wouldn’t be very well off if they were all as curious as two-year-olds, because the need to constantly react to new stimuli would likely mean they’d never get anything done (the author does not bring this up, but it’s also not really important in the context of the coverage). As put in the book: “during the first three years, children are gathering the material they need to establish, and then enrich, the schemas that help them navigate the physical, psychological, and social worlds. Key to this mastery of pattern and order is their alertness to novelty. This fundamental characteristic of early development explains why toddlers seem practically voracious in their appetite for new information.”

Curiosity has multiple faces, but a working definition presented early in the work is that “curiosity is an expression, in words or behaviors, of the urge to know more — an urge that is typically sparked when expectations are violated.” Breadth and depth are important variables, as is persistence. Even if there’s an identifiable general trajectory for the variable during childhood, with much curiosity early on and lower values later, you still, as argued in the quote above, have a lot of interpersonal variation, and the book spends some time trying to figure out why some people end up a lot more curious than others and how they might be different. Differences seem to present quite early, and as usual Bowlby’s name pops up. It pops up because although exploration of the unknown may have positive consequences, it also involves taking a risk – anxiety is argued to be an important curiosity-mediator, so that children who are worried about abandonment may be less likely to go exploring than are children who have a secure attachment bond and feel that they have a safe haven to which they can retreat without much risk. Longitudinal research has indicated that, at least for one curiosity conceptualization (a so-called ‘curiosity box’-setup), individuals who were securely attached at the age of 2 were more curious two to three years later than were individuals who were not securely attached at baseline. A study on monkeys done more than fifty years ago likewise found that monkeys raised without an attachment figure were more fearful, and that fear prevented the animals from exploring their environment. Not impressive, but it seems plausible. This is incidentally one of the only (if not the only? Can’t remember…) monkey studies included in the coverage, and the annoyance I expressed in my goodreads review at the absence of such research stems mainly from the fact that the author, in my opinion, pushes the ‘humans are exceptional’-point early on in the coverage further than it can be supported – the sort of thing that always tends to irritate me.

It seems likely that feedback processes start early and may be important; if you explore and have positive experiences doing it early on, you’ll probably be more likely to explore in the future; and if you’re too fearful to go look behind that curtain, you may never realize it wasn’t dangerous. Although trait variables matter, environmental mediation also seems really important, and there’s quite a bit of material about this in the book. There’s incidentally some research suggesting that too little inhibition may not be desirable, but too much will certainly contribute to a lack of curiosity.

Although it’s very obvious that children in what might be termed the ‘asks a lot of questions’-age are incredibly curious, it’s become clear from research on these matters that they’re actually quite curious even before that time, if you know where to look for this curiosity; a series of experiments has shown that children will point at objects to get information about them long before they learn how to verbally form questions, and it’s clear both that children point more often at unfamiliar objects and events than at familiar ones, and that they’re more likely to point when they’re in the presence of someone they consider to be a knowledgeable informant (e.g. a mother). When they do reach the asks-a-lot-of-questions age they, well, ask a lot of questions, and it turns out that some people have actually collected data on this stuff. One really neat sample mentioned in the book involved four children followed for almost four years, from when they were fourteen months old until they were five years and one month old; the recordings included 24,741 questions across 229.5 hours of conversation, and the children asked an average of 107 questions per hour. That’s an average, and it hides a huge variation among the individuals even in that small sample; one of the children asked an average of close to 200 questions per hour, whereas another asked slightly fewer than 70. I’d suggest these numbers are higher than average due to selection bias and perhaps also due to the Hawthorne effect, but I find it quite incredible that data such as this is even available in the first place, and the numbers do sort of illustrate what kind of level we’re talking about. It’s obvious from the conversational strategies the children employ at that point in time that they aren’t just asking questions to get their parents’ attention or in order to monopolize their time (though this may be a convenient side-effect); children act differently depending on how questions are answered, and question sequences display path-dependence, indicating that they use the questions to gather knowledge about the world around them, rather than e.g. just to train their language skills.
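For what it’s worth, the headline rate is internally consistent with the totals given; a quick check (the numbers are the book’s, the division is mine):

```python
# Sanity-checking the questions-per-hour figure quoted above.
total_questions = 24_741
total_hours = 229.5
print(round(total_questions / total_hours, 1))  # 107.8, i.e. the book's ~107 questions per hour
```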

Most children acquire language in roughly the same sequence. They point long before they start talking in sentences, and after pointing they begin to use one object to represent another. After that they realize that objects have names, and at that point they start learning new words very fast. As their vocabulary develops rapidly during this first learning-new-words phase, they also start combining words in orderly ways; i.e. they start speaking in sentences.

In diary studies the data seem to indicate that children who hear the adults around them ask many questions are also more likely to get their own questions answered (causality is iffy, though). How many questions children ask depends on what they consider to constitute a satisfactory answer, but in general children who see others ask informational questions and who are rewarded with satisfactory answers are more likely to continue asking questions than are children who rarely see other people ask informational questions and who are not rewarded with satisfactory answers when they do ask. The data suggest that three-year-olds generally ask more questions than seven-year-olds, but also that there are already at that point (at the age of three) important differences in terms of how many questions are asked by different children; interindividual differences can be spotted quite early, and the feedback processes involved may be one mechanism leading to those differences growing over time.

Small children depend a great deal on their parents and other adults to interpret the world around them, and they don’t quickly outgrow this dependence on adults; however, as children age, the range of responses towards specific stimuli expands. A toddler might want to know whether or not a fear response is proper in a specific context and so will observe the parents before reacting to a new stimulus, to learn the proper response; but as the child ages and its cognitive abilities increase, the child might also have to make a decision, implicitly or explicitly, about e.g. whether or not to play with (how many of?) the toys on the floor. In one study on this topic, researchers manipulated the apparent curiosity of a child’s mother by asking her either to manipulate objects lying on a table, look towards the corner of the table, or talk to another adult elsewhere in the room, with the child observing through a one-way mirror – the child was then later let into the room, and it turned out that children who had observed their mother manipulating the objects were not only more likely to manipulate the toys in ways similar to how their mothers had done, but were also more likely to explore the toys in other ways. How parents (and other adults) behave will be noticed by children whether or not the parents know they’re being observed, and I think many parents might be surprised to learn how much observed behaviours, as opposed to verbally communicated behavioural norms, matter. A quote from the coverage:

“To sum up so far, from infancy until at least the elementary school years, children look to adults for cues about how to respond to objects and events, how to interpret the things they witness and experience, and how to interact with the world. The cues children take from adults are powerful in the moment, but have long-term impact as well. Moreover, the influence extends beyond problem solving. Children also learn from the adults around them what kind of stance they can or should take toward the objects and events they encounter as the day unfolds. This is particularly important when it comes to inquiry. Because, as should be clear by now, inquiry does not bubble up simply because a child is intrinsically curious. Nor does it simply erupt when something in the environment is particularly intriguing. Whether a child has the impulse, day in and day out, to find out more, ebbs and flows as a result of the adults who surround her.” [my emphasis].

Parents aren’t the only adults with whom children interact, and multiple studies have indicated that when preschoolers receive informative answers from their teachers, they ask significantly more questions. In a curiosity-box setup (basic setup: leave a box with lots of drawers, each one containing a small item, in a classroom and then observe how many children approach it, how fast they approach it, how often they do, etc.), “there was a direct link between how much the teacher smiled and talked in an encouraging manner and the level of curiosity [as measured by box-related behaviours] the children in the room expressed.” Even subtle adult behaviours like encouraging nods and smiles from a teacher may thus affect children’s curiosity.

A very important point in the context of social modelling is that many of the behaviours adults display are not necessarily geared towards the children, but that these behaviours still matter:

“Parents and teachers are not always gearing their behavior directly toward the children they are with. They are to a great degree just being themselves. They lift lids, tinker, look things up, watch things carefully, and ask questions. Or they don’t. In fact, many adults do not express much curiosity in their everyday lives. There are plenty of adults who rarely want to find out about something new, or probe beneath the surface. Why wouldn’t this have an impact on children? […] children watch and learn from adult behavior in the short run and in the long run. And now we have some evidence that the same is true when it comes to children’s interest in finding out more. When parents give their children some freedom to wander, explore, and tinker, it makes a difference. When parents express fear or disapproval of inquiry, that too has an effect. But parents are just the beginning. When it comes to their urge to know more, children at least as old as nine continue to be extremely susceptible to the behavior of adults. And here it’s worth remembering that children learn a lot at home from behaviors not directed toward them, and that at school the same is true.”

April 12, 2015 Posted by | books, Psychology | Leave a comment

An Introduction to Medical Diagnosis (4)

Here’s a previous post in the series covering this book. There’s a lot of material in these chapters, so the coverage below includes just some of the things I thought were interesting and worth being aware of. I’ve covered three chapters in this post: one about skin, nails and hair, one about the eye, and one about infectious and tropical diseases. I may write one more post about the book later on, but I’m not sure at this point whether I will, so this may be the last post in the series.

Okay, on to the book – skin, nails and hair (my coverage mostly deals with the skin):

“The skin is a highly specialized organ that covers the entire external surface of the body. Its various roles include protecting the body from trauma, infection and ultraviolet radiation. It provides waterproofing and is important for fluid and temperature regulation. It is essential for the detection of some sensory stimuli. […] Skin problems are extremely common and are responsible for 10–15 per cent of all consultations in general practice. […] Given that there are around 2000 dermatological conditions described, only common and important conditions, including some that might be especially relevant in the examination setting, can be covered here.”

“Urticaria is characterized by the development of red dermal swellings known as weals […]. Scaling is not seen and the lesions are typically very itchy. The lesions result from the release of histamine from mast cells. An important clue to the diagnosis is that individual lesions come and go within 24 hours, although new lesions may be appearing at other sites. Another associated feature is dermographism: a firm scratch of the skin with an orange stick will produce a linear weal within a few minutes. Urticaria is common, estimated to affect up to 20 per cent of the population at some point in their lives.”

“Stevens–Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN) are thought to be two ends of a spectrum of the same condition. They are usually attributable to drug hypersensitivity, though a precipitant is not always identified. The latent period following initiation of the drug tends to be longer than seen with a classical maculopapular drug eruption. The disease is termed:
*SJS when 10 per cent or less of the body surface area epidermis detaches
*TEN when greater than 30 per cent detachment occurs.
Anything in between is designated SJS/TEN overlap. Following a prodrome of fever, an erythematous eruption develops. Macules, papules, or plaques may be seen. Some or all of the affected areas become vesicular or bullous, followed by sloughing off of the dead epidermis. This leads to potentially widespread denudation of skin. […] The affected skin is typically painful rather than itchy. […] The risk of death relates to the extent of epidermal loss and can exceed 30 per cent. […] A widespread ‘drug rash’ that is very painful should ring alarm bells.”
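The quoted SJS/TEN distinction is a pure threshold rule, so it can be stated compactly; the sketch below is merely my restatement of the quoted cut-offs, not anything from the book:

```python
def classify_detachment(bsa_detached_pct: float) -> str:
    """Classify epidermal detachment by body surface area (BSA),
    using the cut-offs quoted above: <=10 per cent SJS,
    >30 per cent TEN, anything in between SJS/TEN overlap."""
    if bsa_detached_pct <= 10:
        return "SJS"
    if bsa_detached_pct > 30:
        return "TEN"
    return "SJS/TEN overlap"

print(classify_detachment(20))  # SJS/TEN overlap
```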

“Various skin problems arise in patients with diabetes mellitus. Bacterial and fungal infections are more common, due to impaired immunity. Vascular disease and neuropathy lead to ulceration on the feet, which can sometimes be very deep and there may be underlying osteomyelitis. Granuloma annulare […] and necrobiosis lipoidica have also been associated with diabetes, though many cases are seen in non-diabetic patients. The former produces smooth papules in an annular configuration, often coalescing into a ring. The latter usually occurs over the shins giving rise to yellow-brown discoloration, with marked atrophy and prominent telangiectasia. There is often an annular appearance, with a red or brown border. Acanthosis nigricans, velvety thickening of the flexural skin […], is seen with insulin resistance, with or without frank diabetes. […] Diabetic bullae are also occasionally seen and diabetic dermopathy produces hyperpigmented, atrophic plaques on the legs. The aetiology of these is unknown.”

“Malignant melanoma is one of the commonest cancers in young adults [and it] is responsible for almost three-quarters of skin cancer deaths, despite only accounting for around 4 per cent of skin cancers. Malignant melanoma can arise de novo or from a pre-existing naevus. Most are pigmented, but some are amelanotic. The most important prognostic factor for melanoma is the depth of the tumour when it is excised – Breslow’s thickness. As most malignant melanomas undergo a relatively prolonged radial (horizontal) growth phase prior to invading vertically, there is a window of opportunity for early detection and management, while the prognosis remains favourable. […] ‘Red flag’ findings […] in pigmented lesions are increasing size, darkening colour, irregular pigmentation, multiple colours within the same lesion, and itching or bleeding for no reason. […] In general, be suspicious if a lesion is rapidly changing.”

The eye:

“Most ocular surface diseases […] are bilateral, whereas most serious pathology (usually involving deeper structures) is unilateral […] Any significant reduction of vision suggests serious pathology [and] [s]udden visual loss always requires urgent investigation and referral to an ophthalmologist. […] Sudden loss of vision is commonly due to a vascular event. These may be vessel occlusions giving rise to ischaemia of vision-serving structures such as the retina, optic nerve or brain. Alternatively there may be vessel rupture and consequent bleeding which may either block transmission of light as in traumatic hyphaema (haemorrhage into the anterior chamber) and vitreous haemorrhage, or may distort the retina as in ‘wet’ age-related macular degeneration (AMD). […] Gradual loss of vision is commonly associated with degenerations or depositions. […] Transient loss of vision is commonly due to temporary or subcritical vascular insufficiency […] Persistent loss of vision suggests structural changes […] or irreversible damage”.

There are a lot of questions one might ask here, and I found it interesting to see how much can be learned simply by asking a few questions which help narrow things down – the above are just examples of variables to consider, and there are others as well, e.g. whether or not there is pain (“Painful blurring of vision is most commonly associated with diseases at the front of the eye”, whereas “Painless loss of vision usually arises from problems in the posterior part of the eye”), whether there’s discharge, just how the vision is affected (a blind spot, peripheral field loss, floaters, double vision, …), etc.
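To illustrate the narrowing-down idea, here’s a toy sketch combining some of the quoted heuristics; the specific pairings of onset and pain with likely localizations are my simplification of the quoted rules of thumb, not a scheme from the book (and certainly not a clinical tool):

```python
# Toy illustration of history-taking heuristics for visual loss,
# combining the rules of thumb quoted above.
HEURISTICS = {
    ("sudden", "painless"): "vascular event in the posterior eye "
                            "(e.g. retinal vessel occlusion, 'wet' AMD)",
    ("gradual", "painless"): "degeneration or deposition "
                             "(e.g. cataract, 'dry' AMD, open angle glaucoma)",
    ("sudden", "painful"): "serious anterior pathology "
                           "(e.g. acute angle closure glaucoma)",
}

def narrow_down(onset: str, pain: str) -> str:
    return HEURISTICS.get((onset, pain), "no simple heuristic - needs a fuller work-up")

print(narrow_down("sudden", "painless"))
```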

“Ptosis (i.e. drooping lid) and a dilated pupil suggest an ipsilateral cranial nerve III palsy. This is a neuro-ophthalmic emergency since it may represent an aneurysm of the posterior communicating artery. […] In such cases the palsy may be the only warning of impending aneurysmal rupture with subsequent subarachnoid haemorrhage. One helpful feature that warns that a cranial nerve III palsy may be compressive is pupil involvement (i.e. a dilated pupil).”

“Although some degree of cataract (loss of transparency of the lens) is almost universal in those >65 years of age, it is only a problem when it is restricting the patient’s activity. It is most commonly due to ageing, but it may be associated with ocular disease (e.g. uveitis), systemic disease (e.g. diabetes), drugs (e.g. systemic corticosteroids) or it may be inherited. It is the commonest cause of treatable blindness worldwide. […] Glaucoma describes a group of eye conditions characterized by a progressive optic neuropathy and visual field loss, in which the intraocular pressure is sufficiently raised to impair normal optic nerve function. Glaucoma may present insidiously or acutely. In the more common primary open angle glaucoma, there is an asymptomatic sustained elevation in intraocular pressure which may cause gradual unnoticed loss of visual field over years, and is a significant cause of blindness worldwide. […] Primary open angle glaucoma is asymptomatic until sufficiently advanced for field loss to be noticeable to the patient. […] Acute angle closure glaucoma is an ophthalmic emergency in which closure of the drainage angle causes a sudden symptomatic elevation of intraocular pressure which may rapidly damage the optic nerve.”

“Age-related macular degeneration is the commonest cause of blindness in the older population (>65 years) in the Western world. Since it is primarily the macula […] that is affected, patients retain their peripheral vision and with it a variable level of independence. There are two forms: ‘dry’ AMD accounts for 90 per cent of cases and the more dramatic ‘wet’ (also known as neovascular) AMD accounts for 10 per cent. […] Treatments for dry AMD do not alter the course of the disease but revolve around optimizing the patient’s remaining vision, such as using magnifiers. […] Treatments for wet AMD seek to reverse the neovascular process”.

“Diabetes is the commonest cause of blindness in the younger population (<65 years) in the Western world. Diabetic retinopathy is a microvascular disease of the retinal circulation. In both type 1 and type 2 diabetes glycaemic control and blood pressure should be optimized to reduce progression. Progression of retinopathy to the proliferative stage is most commonly seen in type 1 diabetes, whereas maculopathy is more commonly a feature of type 2 diabetes. […] Symptoms
*Bilateral.
*Usually asymptomatic until either maculopathy or vitreous haemorrhage. [This is part of why screening programs for diabetic eye disease are so common – the first sign of eye disease may well be catastrophic and irreversible vision loss, despite the fact that the disease process may take years or decades to develop to that point]
*Gradual loss of vision – suggests diabetic maculopathy (especially if distortion) or cataract.
*Sudden loss of vision – most commonly vitreous haemorrhage secondary to proliferative diabetic retinopathy.”

Recap of some key points made in the chapter:
“*For uncomfortable/red eyes, grittiness, itchiness or a foreign body sensation usually indicate ocular surface problems such as conjunctivitis.
*Severe ‘aching’ eye pain suggests serious eye pathology such as acute angle closure glaucoma or scleritis.
*Photophobia is most commonly seen with acute anterior uveitis or corneal disease (ulcers or trauma). [it’s also common in migraine]
*Sudden loss of vision is usually due to a vascular event (e.g. retinal vessel occlusions, anterior ischaemic optic neuropathy, ‘wet’ AMD).
*Gradual loss of vision is common in the ageing population. It is frequently due to cataract […], primary open angle glaucoma (peripheral field loss) or ‘dry’ AMD (central field loss).
*Recent-onset flashes and floaters should be presumed to be retinal tear/detachment.
*Double vision may be monocular (both images from the same eye) or binocular (different images from each eye). Binocular double vision is serious, commonly arising from a cranial nerve III, IV or VI palsy. […]
the following presentations are sufficiently serious to warrant urgent referral to an ophthalmologist: sudden loss of vision, severe ‘aching’ eye pain, new-onset flashes and floaters, [and] new-onset binocular diplopia.”

Infectious and tropical diseases:

“Patients with infection (and inflammatory conditions or, less commonly, malignancy) usually report fever […] Whatever the cause, body temperature generally rises in the evening and falls during the night […] Fever is often lower or absent in the morning […]. A sensation of ‘feeling hot’ or ‘feeling cold’ is unreliable – healthy individuals often feel these sensations, as may those with menopausal flushing, thyrotoxicosis, stress, panic, or migraine. The height and duration of fever are important. Rigors (chills or shivering, often uncontrollable and lasting for 20–30 minutes) are highly significant, and so is a documented temperature over 37.5 °C taken with a reliable oral thermometer. Drenching sweats are also highly significant. Rigors generally indicate serious bacterial infections […] or malaria. An oral temperature >39 °C has the same significance as rigors. Rigors generally do not occur in mild viral infections […] malignancy, connective tissue diseases, tuberculosis and other chronic infections. […] Anyone with fever lasting longer than a week should have lost weight – if a patient reports a prolonged fever but no weight loss, the ‘fever’ usually turns out to be of no consequence. […] untouched meals indicate ongoing illness; return of appetite is a reliable sign of recovery.”

“Bacterial infections are the most common cause of sepsis, but other serious infections (e.g. falciparum malaria) or inflammatory states (e.g. pancreatitis, pre-eclamptic toxaemia, burns) can cause the same features. Below are listed the indicators of sepsis – the more abnormal the result, the more severe is the patient’s condition.
Temperature
*Check if it is above 38 °C or below 36 °C.
*Simple viral infections seldom exceed 39 °C.
*Temperatures (from any cause) are generally higher in the evening than in the early morning.
*As noted above, rigors (uncontrollable shivering) are important indicators of severe bacterial infection or malaria. […] A heart rate greater than 90 beats/min is abnormal, and in severe sepsis a pulse of 140/min is not unusual. […] Peripheries (fingers, toes, nose) are often markedly cooler than central skin (trunk, forehead) with prolonged capillary refill time […] Blood pressure (BP) is low in the supine position (systolic BP <90 mmHg) and falls further when the patient is repositioned upright. In septic shock sometimes the BP is unrecordable on standing, and the patient may faint when they are helped to stand up […] The first sign [of respiratory disturbance] is a respiratory rate greater than 20 breaths/min. This is often a combination of two abnormalities: hypoxia caused by intrapulmonary shunts, and lactic acidosis. […] in hypoxia, the respiratory pattern is normal but rapid. Acidotic breathing has a deep, sighing character (also known as Kussmaul’s respiration). […] Also called toxic encephalopathy or delirium, confusion or drowsiness is often present in sepsis. […] Sepsis is always severe when it is accompanied by organ dysfunction. Septic shock is defined as severe sepsis with hypotension despite adequate fluid replacement.”
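The quoted indicators amount to a simple bedside checklist; below is a minimal sketch using only the thresholds quoted above (my illustration – formal severity scoring systems are considerably more involved):

```python
def sepsis_flags(temp_c, heart_rate, systolic_bp, resp_rate, confused):
    """Collect the warning indicators quoted above; per the text,
    the more abnormal the results, the more severe the condition."""
    flags = []
    if temp_c > 38 or temp_c < 36:
        flags.append("temperature above 38 or below 36 degrees C")
    if heart_rate > 90:
        flags.append("heart rate > 90 beats/min")
    if systolic_bp < 90:
        flags.append("supine systolic BP < 90 mmHg")
    if resp_rate > 20:
        flags.append("respiratory rate > 20 breaths/min")
    if confused:
        flags.append("confusion or drowsiness")
    return flags

print(sepsis_flags(temp_c=39.5, heart_rate=128, systolic_bp=85, resp_rate=26, confused=True))
```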

“Involuntary neck stiffness (‘nuchal rigidity’) is a characteristic sign of meningitis […] Patients with meningitis or subarachnoid haemorrhage characteristically lie still and do not move the head voluntarily. Patients who complain about a stiff neck are often worried about meningitis; patients with meningitis generally complain of a sore head, not a sore neck – thus neck stiffness is a sign, not a symptom, of meningitis.”

“General practitioners are generally correct when they say an infection is ‘a virus’, but the doctor needs to make an accurate assessment to be sure of not missing a serious bacterial infection masquerading as ‘flu’. […]
*Influenza is highly infectious, so friends, family, or colleagues should also be affected at the same time – the incubation period is short (1–3 days). If there are no other cases, question the diagnosis.
*The onset of viraemic symptoms is abrupt and often quite severe, with chills, headache, and myalgia. There may be mild rigors on the first day, but these are not sustained.
*As the next few days pass, the fever improves each day, and by day 3 the fever is settling or absent. A fever that continues for more than 3 days is not uncomplicated ’flu, and nor is an illness with rigors after the first day.
*As the viraemia subsides, so the upper respiratory symptoms become prominent […] The patient experiences a combination of: rasping sore throat, dry cough, hoarseness, coryza, red eyes, congested sinuses. These persist for a long time (10 days is not unusual) and the patient feels ‘miserable’ but the fever is no longer prominent.”
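The quoted rules of thumb for deciding whether an illness really is uncomplicated ’flu translate naturally into a short checklist; the sketch below is my paraphrase of the quoted points, not a clinical tool:

```python
def flu_diagnosis_doubts(day_of_illness, fever_settling, rigors_after_day1, other_cases_known):
    """Red flags against 'uncomplicated flu', per the rules of
    thumb quoted above (my paraphrase; not a clinical tool)."""
    doubts = []
    if not other_cases_known:
        doubts.append("no affected contacts despite a 1-3 day incubation period")
    if day_of_illness >= 3 and not fever_settling:
        doubts.append("fever not settling by day 3")
    if rigors_after_day1:
        doubts.append("rigors persisting after the first day")
    return doubts

# An empty list is consistent with uncomplicated flu; any entry should
# prompt a closer look for a serious infection masquerading as flu.
print(flu_diagnosis_doubts(4, fever_settling=False, rigors_after_day1=False, other_cases_known=True))
```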

“Several infections cause a similar picture to ‘glandular fever’. The commonest is EBV [Epstein–Barr Virus], with cytomegalovirus (CMV) a close second; HIV seroconversion may look clinically identical, and acute toxoplasmosis similar (except for the lack of sore throat). Glandular fever in the USA is called ‘infectious mononucleosis’ […] The illness starts with viraemic symptoms of fever (without marked rigors), myalgia, lassitude, and anorexia. A sore throat is characteristic, and the urine often darkens (indicating liver involvement). […] Be very alert for any sign of stridor, or if the tonsils meet in the middle or are threatening to obstruct (a clue is that the patient is unable to swallow their saliva and is drooling or spitting it out). If there are any of these signs of upper airway obstruction, give steroids, intravenous fluids, and call the ENT surgeons urgently – fatal obstruction occasionally occurs in the middle of the night. […] Be very alert for a painful or tender spleen, or any signs of peritonism. In glandular fever the spleen may rupture spontaneously; it is rare, but tragic. It usually begins as a subcapsular haematoma, with pain and tenderness in the left upper quadrant. A secondary rupture through the capsule then occurs at a later date, and this is often rapidly fatal.”

April 7, 2015 Posted by | books, diabetes, medicine | Leave a comment

A few lectures

(This was a review lecture for me, as a few months back I read a textbook on these topics which went into quite a lot more detail – the post I link to has some relevant links if you’re curious to explore this topic further.)

A few relevant links: Group (featured), symmetry group, Cayley table, Abelian group, Symmetry groups of Platonic solids, dual polyhedron, Lagrange’s theorem (group theory), Fermat’s little theorem. I think he was perhaps trying to cover a bit too much ground in too little time by bringing up the RSA algorithm towards the end, but I’m sort of surprised by how many people disliked the video; I don’t think it’s that bad.

The beginning of the lecture has a lot of remarks about Fourier’s life which are in some sense not ‘directly related’ to the mathematics, so if the mathematics is what you’re most interested in you can probably skip the first 11 minutes or so of the lecture without missing out on much. The lecture is very non-technical compared to coverage like this, this, and this (…or this).

I think one thing worth mentioning here is that the lecturer is the author of a rather amazing book on the topic he talks about in the lecture.

April 2, 2015 Posted by | history, Lectures, mathematics | Leave a comment

Personal Relationships (7)

I noted in my last post about the book that although I’d initially thought I’d cover the rest of the book in that post, in the end I found myself unable to do so because the post would have ended up being too long; this post will cover the remaining chapters and points of interest, and it will be the last post about the book.

The first of the remaining chapters is a chapter about ‘Maintaining Relationships’; as usual most of the coverage focuses on romantic relationships. Some quotes:

“The most frequent focus of maintenance research has been the identification of behaviors or interactions that relational partners can enact to sustain their relationship […]. Numerous typologies of such behaviors exist […] Stafford and Canary’s (1991) initial research on the topic generated five positive and proactive maintenance strategies, which have become widely used […] Positivity refers to attempts to make interactions pleasant. These include acting nice and cheerful when one does not feel that way, performing favors for the partner, and withholding complaints. Openness involves direct discussion about the relationship, including talk about the history of the involvement, rules made, and personal disclosure. Assurances involve support of the partner, comforting the partner, and making one’s commitment clear. Social networks refers to relying on friends and family to support the relationship (e.g., having dinner every Sunday at the in-laws). Finally, sharing tasks refers to doing one’s fair share of household chores […] Early on, Duck (1988) questioned the extent to which maintenance behaviors are intentionally enacted. This issue is central because it addresses whether maintenance as a process requires effort and planning or occurs as a by-product of relating. […] some behaviors might start as strategies but over time become routine […] Dainton and Aylor (2002) found that the same behaviors are used intentionally and unintentionally […] [They] speculated that maintenance might be performed routinely until something happens to disrupt the routine. At that point, relational partners might turn to strategic maintenance enactment. As such, routine maintenance might be used during times when preferred levels of satisfaction and commitment are experienced, and strategic maintenance might be enacted during times of perceived uncertainty.”

“One popular axiom is that relationships are easy to get into and hard to get out of, and evidence exists to support this axiom. Attridge (1994) reviewed various “barriers” to dissolving romantic relationships […] Attridge noted that both internal and external barriers prevent people from treating marriages like blind dates and that smart relational partners would make use of barriers to keep their relationships intact (e.g., remind the partner of religious premises of marriage). In terms of internal barriers that Attridge (1994) reviewed, the first is commitment. […] Next, one’s religious beliefs regarding the sanctity of marriage compel people to remain. Also, one’s self-identity – that is, viewing oneself in terms of the relationship – acts as a barrier to dissolution. Next, irretrievable personal investments (such as spending time with the partner) work against dissolution. Finally, Attridge argued that the presence of children acted as an internal barrier, especially for women; women who have children are more likely to remain in a marriage than are women without children.
In terms of external barriers, Attridge (1994) cited several. Not surprisingly, these include legal barriers, financial obligations, and social networks that promote the bond. In addition to these, we would add a perception of a lack of alternatives. Both Rusbult and Johnson’s models indicate that having no perceived alternatives increases one’s commitment to the partner. Both Johnson (2001) and Rusbult and Martz (1995) have shown that abused women remain in these marriages because they perceive that they have no alternative associations or resources that they can leverage to leave their unhappy state. Conversely, Heaton and Albrecht (1991) found that “social contact – whether having potential sources of help, receiving help, or spending social and recreational time away from home – is positively associated with instability” […] Relationships with barriers are probably stable, but they do not necessarily contain characteristics that demarcate a high-quality relationship. To ensure the continuation of such qualities, one needs to engage in individual and relational strategies that help create and sustain liking, love, commitment, and so forth.”

“research shows that maintenance strategies provide the bases for increases in intimacy […]. That is, the use of maintenance behaviors helps dating partners develop their involvements. Moreover, people who do not engage in maintenance behaviors are more likely to de-escalate or terminate their relationships […] Yet the functional utility of maintenance behaviors does not endure for long. […] Canary, Stafford, and Semic (2002) conducted a panel study examining married partners’ maintenance activity and relational characteristics (liking, commitment, and control mutuality) at three points in time, each a month apart. They found that maintenance behaviors are strongly associated with relational characteristics concurrently, but that the effects completely fade within a month’s time (when controlling for the previous months’ reports). Thus, it appears that maintenance strategies must be used continuously if they are to sustain desired relational characteristics. Being positive, assuring the partner of one’s love and commitment, sharing tasks, and so forth represent proactive relational behaviors to be sure, but they must be enacted on a regular basis to matter.”

“Rusbult (1987) identified variations in the way that people respond to their partners during troubled times. These tendencies to accommodate reflect two dimensions: passive versus active and constructive versus destructive. Exit is an active and destructive behavior that includes threats to leave the partner; Voice is an active and constructive strategy that involves discussing the problem without hostility; Loyalty is a passive and constructive approach that involves giving in to the partner; and Neglect is a passive and destructive approach that includes passive–aggressive reactions. Several studies have shown that committed individuals are more likely to engage in the more civil forms of accommodation – voice and loyalty – and that these behaviors have more positive associations with relational quality than do neglect or exit. […] Tests of Rusbult’s model have largely endorsed its basic tenets, as reported elsewhere (Canary & Zelley, 2000).”

“a longstanding assumption is that in established relationships much communication involves taken-for-granted presumptions and expectations, and “habits of adjustment to the other person become perfected and require less participation of the consciousness” (Waller, 1951, p. 311). This would imply that over time maintenance would be achieved routinely rather than strategically. […] Research supports these presuppositions.”

The next chapter is called ‘The Treatment of Relationship Distress: Theoretical Perspectives and Empirical Findings’ – a few observations from the chapter:

“distressed married couples are more prone than nondistressed couples to aversive, destructive patterns of communication […] distressed couples are more likely to engage in exchanges in which one person’s hurtful comment is reciprocated with greater intensity by the receiving partner. […] Studies of couples’ conversations have shown that distressed partners are more likely to respond negatively to each other’s expressions of negative affect than are members of nondistressed couples (negative reciprocity); furthermore, these expressions of negative affect are not as likely to be offset by high levels of positive affect as they are in nondistressed relationships […] social learning theory emphasizes that a spouse’s behavior is both learned and influenced by the other partner’s behavior. Over time, spouses’ influence on each other becomes a stronger predictor of current behavior than the influences of previous close relationships.”

“CBCT [Cognitive–Behavioral Couple Therapy] researchers have identified five major types of cognitions involved in couple relationship functioning […] The first three cognitions involve evaluations of specific events. Selective attention involves how each member of a couple idiosyncratically notices, or fails to notice, particular aspects of relationship events. Selective attention contributes to distressed couples’ low rates of agreement about the occurrence and quality of specific events, as well as negative biases in perceptions of each other’s messages […] Attributions are inferences made about the determinants of partners’ positive and negative behaviors. The tendency of distressed partners to attribute each other’s negative actions to global, stable traits has been referred to as “distress-maintaining attributions” because they leave little room for future optimism that one’s partner will behave in a more pleasing manner in other situations […] Expectancies, or predictions that each member of the couple makes about particular relationship events in the immediate or more distant future, are the last type of cognitions involving specific events. Negative relationship expectancies have been associated with lower [relationship] satisfaction […] The fourth and fifth categories of cognition are forms of what cognitive therapists have referred to as basic or core beliefs shaping one’s experience of the world. These include (a) assumptions, or beliefs that each individual holds about the characteristics of individuals and intimate relationships, and (b) standards, or each individual’s personal beliefs about the characteristics that an intimate relationship and its members “should” have […] Couples’ assumptions and standards are associated with current relationship distress, either when these beliefs are unrealistic or when the partners are not satisfied with how their personal standards are being met in their relationship […] many of the problematic behavioral interactions between spouses may evolve from the partners’ relatively stable cognitions about the relationship. Unless these cognitions are taken into account, successful intervention is likely to be compromised.” [The important point being that in a distressed relationship you can address: a) behaviours, b) how people in the relationship think about the behaviours, or c) both – and c seems, at least theoretically, to be superior to either of the other choices.]

“CBCT teaches partners to monitor and test the appropriateness of their cognitions. It incorporates some standard cognitive restructuring strategies, such as (a) considering alternative attributions for a partner’s negative behavior; (b) asking for behavioral data to test a negative perception concerning a partner (e.g., that the partner never complies with requests); and (c) evaluating extreme standards by generating lists of the advantages and disadvantages of expectations to live up to this standard. […] Overall, we propose that some of the common elements in the effective approaches that we have reviewed include (a) broadening partners’ perspectives on sources of their difficulties as a couple, as well as on their strengths as a couple; (b) increasing the partners’ abilities to differentiate between the strengths and problems within their current relationship, versus characteristics that occurred in prior relationships; (c) motivating and directing the couple to reduce behavioral patterns that maintain or worsen relationship distress; and (d) increasing the range of constructive strategies that partners have available for influencing each other. […] Although the quality of the therapeutic alliance in explaining treatment effects has not been investigated empirically in couple therapy, the therapeutic alliance has received considerable attention in psychotherapy research more generally. A recent meta-analysis of psychotherapy concluded that the therapeutic alliance explains between 38% and 77% of the variance in treatment outcome, whereas specific techniques account for only 0% to 8% of the variance (Wampold, 2001).”

The last chapter is a sort of ‘bringing it all together’-chapter with some key points to take away from the book. I thought I’d include a few of these here even if I’ve talked about them before:

“The ratio of positive and negative behaviors during conflict interactions is also critical to relationships as viewed from a social exchange perspective […]. The study of conflict communication in married couples, however, has shown that negative behavior tends to have a stronger impact on relationship satisfaction than positive behavior. […] In discussing social exchange processes and emotion, Planalp, Fitness, and Fehr debunk the idea that social exchange processes are cold and calculating and argue that “the basic concepts and processes of social exchange theory can be viewed as deeply emotional.” For example, they note that rewards and costs are often experienced as positive and negative feelings. In addition, our reactions to inequity and inequality in our relationships are likely to be highly emotional, and indeed such social exchange concepts as comparison levels and comparison levels for alternatives are basically about positive and negative feelings toward the partner and toward potential alternatives. […] Although there is some controversy about the extent to which social exchange processes are relevant to committed relationships that are going well, it is clear that people want their relationships to be fair and equitable, and exchange processes tend to become the focus when relationships are not going well.”

“Fincham and Beach suggest that the evidence for an association between attributions and relationship satisfaction is one of the most robust findings in the area of close relationships […] understanding a person’s interpretation of partner behavior may be as important as observing that behavior […] [However] many cognitive variables, apart from attributions, are associated with relationship satisfaction. Their list includes discrepancies between the partner’s behavior and one’s ideal standards, social comparison processes such as seeing one’s relationships as superior to the norm, memory processes that lead to the recall of positive versus negative memories, and self-evaluation maintenance processes that serve to maintain self-esteem even when one compares poorly with the partner.”

“Commitment seems to be the strongest predictor of relational stability, and other factors include religious beliefs about the sanctity of marriage, viewing one’s identity in terms of the relationship, personal investments in the relationship, and children. Le and Agnew (2003) conducted a meta-analysis to test Rusbult’s (1980) investment model of commitment. They found that Rusbult’s three variables of satisfaction with, alternatives to, and investment in the relationship were significantly related to commitment to that relationship and together accounted for two-thirds of the variance in commitment.”

“cognitive distortions in a positive direction tend to be characteristic of happy couples. Those who idealize their partners and who tend to see their partners in a more positive light than their partners view themselves are likely to be happier than other couples. The attributions of these couples are likely to be affected, and they are likely to blame themselves for negative events and give their partners the credit for positive events […] there is a lot of evidence in this volume supporting the powerful role that cognitions can play in personal relationships. Whether our focus is on cognitions at the cultural level or at the interpersonal level, they seem to have powerful effects on relationship behavior and satisfaction. Also, the effects are likely to be reciprocal, with cognitions affecting relationship satisfaction and satisfaction affecting cognitions.”

April 1, 2015 Posted by | books, Psychology | Leave a comment