Econstudentlog

A few diabetes papers of interest

i. Impact of Parental Socioeconomic Status on Excess Mortality in a Population-Based Cohort of Subjects With Childhood-Onset Type 1 Diabetes.

“Numerous reports have shown that individuals with lower SES during childhood have increased morbidity and all-cause mortality at all ages (10–14). Although recent epidemiological studies have shown that all-cause mortality in patients with T1D increases with lower SES in the individuals themselves (15,16), the association between parental SES and mortality among patients with childhood-onset T1D has not been reported to the best of our knowledge. Our hypothesis was that low parental SES additionally increases mortality in subjects with childhood-onset T1D. In this study, we used large population-based Swedish databases to 1) explore in a population-based study how parental SES affects mortality in a patient with childhood-onset T1D, 2) describe and compare how the effect differs among various age-at-death strata, and 3) assess whether the adult patient’s own SES affects mortality independently of parental SES.”

“The Swedish Childhood Diabetes Registry (SCDR) is a dynamic population-based cohort reporting incident cases of T1D since 1 July 1977, which to date has collected >16,000 prospective cases. […] All patients recorded in the SCDR from 1 January 1978 to 31 December 2008 were followed until death or 31 December 2010. The cohort was subjected to crude analyses and stratified analyses by age-at-death groups (0–17, 18–24, and ≥25 years). Time at risk was calculated from date of birth until death or 31 December 2010. Kaplan-Meier analyses and log-rank tests were performed to compare the effect of low maternal educational level, low paternal educational level, and family income support (any/none). Cox regression analyses were performed to estimate and compare the hazard ratios (HRs) for the socioeconomic variables and to adjust for the potential confounding variables age at onset and sex.”
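
As an aside, the analytical toolkit described above (Kaplan-Meier curves, log-rank tests, Cox regression with hazard ratios adjusted for age at onset and sex) is standard survival-analysis fare; below is a minimal sketch of what such an analysis might look like in Python using the lifelines package. The simulated data and variable names are hypothetical stand-ins, not the Swedish registry data.

```python
# Hedged sketch: the kind of survival analysis described above (Kaplan-Meier,
# log-rank test, Cox regression), run on simulated data with the lifelines package.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "time_at_risk": rng.exponential(30.0, n),       # years from birth to death/censoring
    "died": rng.binomial(1, 0.02, n),               # event indicator
    "income_support": rng.binomial(1, 0.15, n),     # hypothetical parental SES exposure
    "low_maternal_edu": rng.binomial(1, 0.30, n),
    "age_at_onset": rng.uniform(0, 15, n),
    "male": rng.binomial(1, 0.5, n),
})

# Kaplan-Meier curves by exposure group
km = KaplanMeierFitter()
for label, grp in df.groupby("income_support"):
    km.fit(grp["time_at_risk"], grp["died"], label=f"income_support={label}")
    # km.plot_survival_function()  # would overlay the survival curves

# Log-rank test comparing the two exposure groups
exposed, unexposed = df[df.income_support == 1], df[df.income_support == 0]
lr = logrank_test(exposed["time_at_risk"], unexposed["time_at_risk"],
                  exposed["died"], unexposed["died"])
print(f"log-rank p-value: {lr.p_value:.3f}")

# Cox model: SES variables adjusted for age at onset and sex; exp(coef) are the HRs
cph = CoxPHFitter()
cph.fit(df, duration_col="time_at_risk", event_col="died")
cph.print_summary()
```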

“The study included 14,647 patients with childhood-onset T1D. A total of 238 deaths (male 154, female 84) occurred in 349,762 person-years at risk. The majority of mortalities occurred among the oldest age-group (≥25 years of age), and most of the deceased subjects had onset of T1D at the ages of 10–14.99 years […]. Mean follow-up was 23.9 years and maximum 46.5 years. The overall standardized mortality ratio up to the age of 47 years was 2.3 (95% CI 1.35–3.63); for females, it was 2.6 (1.28–4.66) and for males, 2.1 (1.27–3.49). […] Analyses on the effect of low maternal educational level showed an increased mortality for male patients (HR 1.43 [95% CI 1.01–2.04], P = 0.048) and a nonsignificant increased mortality for female patients (1.21 [0.722–2.018], P = 0.472). Paternal educational level had no significant effect on mortality […] Having parents who ever received income support was associated with an increased risk of death in both males (HR 1.89 [95% CI 1.36–2.64], P < 0.001) and females (2.30 [1.43–3.67], P = 0.001) […] Excluding the 10% of patients with the highest accumulated income support to parents during follow-up showed that having parents who ever received income support still was a risk factor for mortality.”
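
A brief note on the standardized mortality ratio reported above: it is simply the number of observed deaths divided by the number of deaths that would have been expected had the cohort experienced the age- and sex-specific mortality rates of the reference population. A toy calculation with made-up person-years and rates, not the Swedish figures, is sketched below.

```python
# Toy SMR calculation: observed deaths vs. deaths expected under
# reference-population mortality rates (all numbers are made up).
person_years = {"male_0_17": 60000, "male_18_24": 40000,
                "female_0_17": 55000, "female_18_24": 35000}
reference_rate = {"male_0_17": 0.0002, "male_18_24": 0.0008,   # deaths per person-year
                  "female_0_17": 0.0001, "female_18_24": 0.0004}
observed_deaths = 95

expected_deaths = sum(person_years[k] * reference_rate[k] for k in person_years)
smr = observed_deaths / expected_deaths
print(f"expected = {expected_deaths:.1f}, SMR = {smr:.2f}")
```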

“A Cox model including maternal educational level together with parental income support, adjusting for age at onset and sex, showed that having parents who received income support was associated with a doubled mortality risk (HR 1.96 [95% CI 1.49–2.58], P < 0.001) […] In a Cox model including the adult patient’s own SES, having parents who received income support was still an independent risk factor in the younger age-at-death group (18–24 years). Among those who died at age ≥25 years of age, the patient’s own SES was a stronger predictor for mortality (HR 2.46 [95% CI 1.54–3.93], P < 0.001)”

“Despite a well-developed health-care system in Sweden, overall mortality up to the age of 47 years is doubled in both males and females with childhood-onset T1D. These results are in accordance with previous Swedish studies and reports from other comparable countries […] Previous studies indicated that low SES during childhood is associated with low glycemic control and diabetes-related morbidity in patients with T1D (8,9), and the current study implies that mortality in adulthood is also affected by parental SES. […] The findings, when stratified by age-at-death group, show that adult patients’ own need of income support independently predicted mortality in those who died at ≥25 years of age, whereas among those who died in the younger age-group (18–24 years), parental requirement of income support was still a strong independent risk factor. None of the present SES measures seem to predict mortality in the ages 0–17 years perhaps due to low numbers and, thus, power.”

ii. Exercise Training Improves but Does Not Normalize Left Ventricular Systolic and Diastolic Function in Adolescents With Type 1 Diabetes.

“Adults and adolescents with type 1 diabetes have reduced exercise capacity (8–10), which increases their risk for cardiovascular morbidity and mortality (11). The causes for this reduced exercise capacity are unclear. However, recent studies have shown that adolescents with type 1 diabetes have lower stroke volume during exercise, which has been attributed to alterations in left ventricular function (9,10). Reduced left ventricular compliance resulting in an inability to fill the left ventricle appropriately during exercise has been shown to contribute to the lower stroke volume during exercise in both adults and adolescents with type 1 diabetes (12).

Exercise training is recommended as part of the management of type 1 diabetes. However, the effects of exercise training on left ventricular function at rest and during exercise in adolescents with type 1 diabetes have not been investigated. In particular, it is unclear whether exercise training improves cardiac hemodynamics during exercise in adolescents with diabetes. Therefore, we aimed to assess left ventricular volumes at rest and during exercise in a group of adolescents with type 1 diabetes compared with adolescents without diabetes before and after a 20-week exercise-training program. We hypothesized that exercise training would improve exercise capacity and exercise stroke volume in adolescents with diabetes.”

RESEARCH DESIGN AND METHODS Fifty-three adolescents with type 1 diabetes (aged 15.6 years) were divided into two groups: exercise training (n = 38) and nontraining (n = 15). Twenty-two healthy adolescents without diabetes (aged 16.7 years) were included and, with the 38 participants with type 1 diabetes, participated in a 20-week exercise-training intervention. Assessments included VO2max and body composition. Left ventricular parameters were obtained at rest and during acute exercise using MRI.

RESULTS Exercise training improved aerobic capacity (10%) and stroke volume (6%) in both trained groups, but the increase in the group with type 1 diabetes remained lower than trained control subjects. […]

CONCLUSIONS These data demonstrate that in adolescents, the impairment in left ventricular function seen with type 1 diabetes can be improved, although not normalized, with regular intense physical activity. Importantly, diastolic dysfunction, a common mechanism causing heart failure in older subjects with diabetes, appears to be partially reversible in this age group.”

“This study confirms that aerobic capacity is reduced in [diabetic] adolescents and that this, at least in part, can be attributed to impaired left ventricular function and a blunted cardiac response to exercise (9). Importantly, although an aerobic exercise-training program improved the aerobic capacity and cardiac function in adolescents with type 1 diabetes, it did not normalize them to the levels seen in the training group without diabetes. Both left ventricular filling and contractility improved after exercise training in adolescents with diabetes, suggesting that aerobic fitness may prevent or delay the well-described impairment in left ventricular function in diabetes (9,10).

The increase in peak aerobic capacity (∼12%) seen in this study was consistent with previous exercise interventions in adults and adolescents with diabetes (14). However, the baseline peak aerobic capacity was lower in the participants with diabetes and improved with training to a level similar to the baseline observed in the participants without diabetes; therefore, trained adolescents with diabetes remained less fit than equally trained adolescents without diabetes. This suggests there are persistent differences in the cardiovascular function in adolescents with diabetes that are not overcome by exercise training.”

“Although regular exercise potentially could improve HbA1c, the majority of studies have failed to show this (31–34). Exercise training improved aerobic capacity in this study without affecting glucose control in the participants with diabetes, suggesting that the effects of glycemic status and exercise training may work independently to improve aerobic capacity.”

….

iii. Change in Medical Spending Attributable to Diabetes: National Data From 1987 to 2011.

“Diabetes care has changed substantially in the past 2 decades. We examined the change in medical spending and use related to diabetes between 1987 and 2011. […] Using the 1987 National Medical Expenditure Survey and the Medical Expenditure Panel Surveys in 2000–2001 and 2010–2011, we compared per person medical expenditures and uses among adults ≥18 years of age with or without diabetes at the three time points. Types of medical services included inpatient care, emergency room (ER) visits, outpatient visits, prescription drugs, and others. We also examined the changes in unit cost, defined by the expenditure per encounter for medical services.”

RESULTS The excess medical spending attributed to diabetes was $2,588 (95% CI, $2,265 to $3,104), $4,205 ($3,746 to $4,920), and $5,378 ($5,129 to $5,688) per person, respectively, in 1987, 2000–2001, and 2010–2011. Of the $2,790 increase, prescription medication accounted for 55%; inpatient visits accounted for 24%; outpatient visits accounted for 15%; and ER visits and other medical spending accounted for 6%. The growth in prescription medication spending was due to the increase in both the volume of use and unit cost, whereas the increase in outpatient expenditure was almost entirely driven by more visits. In contrast, the increase in inpatient and ER expenditures was caused by the rise of unit costs. […] The increase was observed across all components of medical spending, with the greatest absolute increase in the spending on prescription medications ($1,528 increase), followed by inpatient visits ($680 increase) and outpatient visits ($430 increase). The absolute change in the spending on ER and other medical services use was relatively small. In relative terms, the spending on ER visits grew more than five times, faster than that of prescription medication and other medical components. […] Among the total annual diabetes-attributable medical spending, the spending on inpatient and outpatient visits dropped from 40% and 23% to 31% and 19%, respectively, between 1987 and 2011, whereas spending on prescription medication increased from 27% to 41%.”
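
The decomposition underlying the results above treats spending per person as the product of the number of encounters and the expenditure per encounter (the "unit cost"), and splits the change over time into a volume component and a unit-cost component. A tiny illustration of that kind of split with made-up numbers, not the MEPS estimates and not necessarily the authors' exact method, is sketched below.

```python
# Hypothetical decomposition of a spending change into volume and unit-cost
# components (a simple Laspeyres-style split; all numbers are made up).
def decompose(q0, p0, q1, p1):
    """q = encounters per person, p = spending per encounter."""
    total_change = q1 * p1 - q0 * p0
    volume_effect = (q1 - q0) * p0          # more encounters at the old unit cost
    unit_cost_effect = q0 * (p1 - p0)       # old volume at the new unit cost
    interaction = (q1 - q0) * (p1 - p0)     # residual cross term
    return total_change, volume_effect, unit_cost_effect, interaction

# e.g. prescription fills: 10 -> 16 fills/year, $50 -> $95 per fill
print(decompose(q0=10, p0=50, q1=16, p1=95))
# -> (1020, 300, 450, 270): both volume and unit cost contribute to the increase
```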

“The unit costs rose universally in all five measures of medical care in adults with and without diabetes. For each hospital admission, diabetes patients spent significantly more than persons without diabetes. The gap increased from $1,028 to $1,605 per hospital admission between 1987 and 2001, and dropped slightly to $1,360 per hospital admission in 2011. Diabetes patients also had higher spending per ER visit and per purchase of prescription medications.”

“From 1999 to 2011, national data suggest that growth in the use and price of prescription medications in the general population is 2.6% and 3.6% per year, respectively; and the growth has decelerated in recent years (22). Our analysis suggests that the growth rates in the use and prices of prescription medications for diabetes patients are considerably higher. The higher rate of growth is likely, in part, due to the growing emphasis on achieving glycemic targets, the use of newer medications, and the use of multidrug treatment strategies in modern diabetes care practice (23,24). In addition, the growth of medication spending is fueled by the rising prices per drug, particularly the drugs that are newly introduced in the market. For example, the prices for newer drug classes such as glitazones, dipeptidyl peptidase-4 inhibitors, and incretins have been 8 to 10 times those of sulfonylureas and 5 to 7 times those of metformin (9).”

“Between 1987 and 2011, medical spending increased both in persons with and in persons without diabetes; and the increase was substantially greater among persons with diabetes. As a result, the medical spending associated with diabetes nearly doubled. The growth was primarily driven by the spending in prescription medications. Further studies are needed to assess the cost-effectiveness of increased spending on drugs.”

iv. Determinants of Adherence to Diabetes Medications: Findings From a Large Pharmacy Claims Database.

“Adults with type 2 diabetes are often prescribed multiple medications to treat hyperglycemia, diabetes-associated conditions such as hypertension and dyslipidemia, and other comorbidities. Medication adherence is an important determinant of outcomes in patients with chronic diseases. For those with diabetes, adherence to medications is associated with better control of intermediate risk factors (1–4), lower odds of hospitalization (3,5–7), lower health care costs (5,7–9), and lower mortality (3,7). Estimates of rates of adherence to diabetes medications vary widely depending on the population studied and how adherence is defined. One review found that adherence to oral antidiabetic agents ranged from 36 to 93% across studies and that adherence to insulin was ∼63% (10).”

“Using a large pharmacy claims database, we assessed determinants of adherence to oral antidiabetic medications in >200,000 U.S. adults with type 2 diabetes. […] We selected a cohort of members treated for diabetes with noninsulin medications (oral agents or GLP-1 agonists) in the second half of 2010 who had continuous prescription benefits eligibility through 2011. Each patient was followed for 12 months from their index diabetes claim date identified during the 6-month targeting period. From each patient’s prescription history, we collected the date the prescription was filled, how many days the supply would last, the National Drug Code number, and the drug name. […] Given the difficulty in assessing insulin adherence with measures such as medication possession ratio (MPR), we excluded patients using insulin when defining the cohort.”

“We looked at a wide range of variables […] Predictor variables were defined a priori and grouped into three categories: 1) patient factors including age, sex, education, income, region, past exposure to therapy (new to diabetes therapy vs. continuing therapy), and concurrent chronic conditions; 2) prescription factors including refill channel (retail vs. mail order), total pill burden per day, and out of pocket costs; and 3) prescriber factors including age, sex, and specialty. […] Our primary outcome of interest was adherence to noninsulin antidiabetic medications. To assess adherence, we calculated an MPR for each patient. The ratio captures how often patients refill their medications and is a standard metric that is consistent with the National Quality Forum’s measure of adherence to medications for chronic conditions. MPR was defined as the proportion of days a patient had a supply of medication during a calendar year or equivalent period. We considered patients to be adherent if their MPR was 0.8 or higher, implying that they had their medication supplies for at least 80% of the days. An MPR of 0.8 or above is a well-recognized index of adherence (11,12). Studies have suggested that patients with chronic diseases need to achieve at least 80% adherence to derive the full benefits of their medications (13). […] [W]e [also] determined whether a patient was persistent, that is whether they had not discontinued or had at least a 45-day gap in their targeted therapy.”
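
The MPR itself is straightforward to compute from fill dates and days of supply; here is a minimal sketch with a hypothetical fill history. This is a simplified version that ignores overlapping fills, not the exact claims-processing logic used in the paper.

```python
# Minimal MPR sketch: proportion of days covered by dispensed supply during
# a fixed observation window, with adherence defined as MPR >= 0.8.
from datetime import date

def mpr(fills, window_start, window_end):
    """fills: list of (fill_date, days_supply). Caps coverage at 100%."""
    observation_days = (window_end - window_start).days + 1
    supplied_days = sum(days for _, days in fills)
    return min(supplied_days / observation_days, 1.0)

fills = [(date(2011, 1, 3), 90), (date(2011, 4, 5), 90),
         (date(2011, 7, 20), 90), (date(2011, 11, 1), 30)]
ratio = mpr(fills, date(2011, 1, 1), date(2011, 12, 31))
print(f"MPR = {ratio:.2f}, adherent = {ratio >= 0.8}")  # 0.8 threshold as in the paper
```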

“Previous exposure to diabetes therapy had a significant impact on adherence. Patients new to therapy were 61% less likely to be adherent to their diabetes medication. There was also a clear age effect. Patients 25–44 years of age were 49% less likely to be adherent when compared with patients 45–64 years of age. Patients aged 65–74 years were 27% more likely to be adherent, and those aged 75 years and above were 41% more likely to be adherent when compared with the 45–64 year age-group. Men were significantly more likely to be adherent than women […I dislike the use of the word ‘significant’ in such contexts; there is a difference in the level of adherence, but it is not large in absolute terms; the male vs female OR is 1.14 (CI 1.12-1.16) – US]. Education level and household income were both associated with adherence. The higher the estimated academic achievement, the more likely the patient was to be adherent. Patients completing graduate school were 41% more likely to be adherent when compared with patients with a high school equivalent education. Patients with an annual income >$60,000 were also more likely to be adherent when compared with patients with a household income <$30,000.”

“The largest effect size was observed for patients obtaining their prescription antidiabetic medications by mail. Patients using the mail channel were more than twice as likely to be adherent to their antidiabetic medications when compared with patients filling their prescriptions at retail pharmacies. Total daily pill burden was positively associated with antidiabetic medication adherence. For each additional pill a patient took per day, adherence to antidiabetic medications increased by 22%. Patient out-of-pocket costs were negatively associated with adherence. For each additional $15 in out-of-pocket costs per month, diabetes medication adherence decreased by 11%. […] We found few meaningful differences in patient adherence according to prescriber factors.”

“In our study, characteristics that suggest a “healthier” patient (being younger, new to diabetes therapy, and taking few other medications) were all associated with lower odds of adherence to antidiabetic medications. This suggests that acceptance of a chronic illness diagnosis and the potential consequences may be an important, but perhaps overlooked, determinant of medication-taking behavior. […] Our findings regarding income and costs are important reminders that prescribers should consider the impact of medication costs on patients with diabetes. Out-of-pocket costs are an important determinant of adherence to statins (26) and a self-reported cause of underuse of medications in one in seven insured patients with diabetes (27). Lower income has previously been shown to be associated with poor adherence to diabetes medications (15) and a self-reported cause of cost-related medication underuse (27).”

v. The Effect of Alcohol Consumption on Insulin Sensitivity and Glycemic Status: A Systematic Review and Meta-analysis of Intervention Studies.

“Moderate alcohol consumption, compared with abstaining and heavy drinking, is related to a reduced risk of type 2 diabetes (1,2). Although the risk is reduced with moderate alcohol consumption in both men and women, the association may differ for men and women. In a meta-analysis, consumption of 24 g alcohol/day reduced the risk of type 2 diabetes by 40% among women, whereas consumption of 22 g alcohol/day reduced the risk by 13% among men (1).

The association of alcohol consumption with type 2 diabetes may be explained by increased insulin sensitivity, anti-inflammatory effects, or effects of adiponectin (3). Several intervention studies have examined the effect of moderate alcohol consumption on these potential underlying pathways. A meta-analysis of intervention studies by Brien et al. (4) showed that alcohol consumption significantly increased adiponectin levels but did not affect inflammatory factors. Unfortunately, the effect of alcohol consumption on insulin sensitivity has not been summarized quantitatively. A review of cross-sectional studies by Hulthe and Fagerberg (5) suggested a positive association between moderate alcohol consumption and insulin sensitivity, although the three intervention studies included in their review did not show an effect (6–8). Several other intervention studies also reported inconsistent results (9,10). Consequently, consensus is lacking about the effect of moderate alcohol consumption on insulin sensitivity. Therefore, we aimed to conduct a systematic review and meta-analysis of intervention studies investigating the effect of alcohol consumption on insulin sensitivity and other relevant glycemic measures.”

“22 articles met criteria for inclusion in the qualitative synthesis. […] Of the 22 studies, 15 used a crossover design and 7 a parallel design. The intervention duration of the studies ranged from 2 to 12 weeks […] Of the 22 studies, 2 were excluded from the meta-analysis because they did not include an alcohol-free control group (14,19), and 4 were excluded because they did not have a randomized design […] Overall, 14 studies were included in the meta-analysis”

“A random-effects model was used because heterogeneity was present (P < 0.01, I² = 91%). […] For HbA1c, a random-effects model was used because the I² statistic indicated evidence for some heterogeneity (I² = 30%).” [Cough, you’re not supposed to make these decisions that way, cough – US. This is not the first time I’ve seen this approach applied, and I don’t like it; it’s bad practice to allow the results of (frequently under-powered) heterogeneity tests to influence model selection decisions. As Borenstein and Hedges point out in their book, “A report should state the computational model used in the analysis and explain why this model was selected. A common mistake is to use the fixed-effect model on the basis that there is no evidence of heterogeneity. As [already] explained […], the decision to use one model or the other should depend on the nature of the studies, and not on the significance of this test”]
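
For readers who want to see what the random-effects machinery discussed above actually does, here is a short sketch of DerSimonian-Laird pooling together with Cochran’s Q and the I² statistic, run on made-up study estimates. Note that, per the Borenstein and Hedges point, the choice between fixed- and random-effects models should rest on how the studies relate to each other, not on whether Q or I² happens to come out ‘significant’.

```python
# Sketch of DerSimonian-Laird random-effects pooling with the I^2 statistic,
# on made-up study estimates (effect sizes y with within-study variances v).
import numpy as np

y = np.array([-0.30, 0.05, -0.15, -0.40, 0.10])   # hypothetical study effects
v = np.array([0.02, 0.04, 0.03, 0.05, 0.02])      # hypothetical variances

w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)                # fixed-effect (inverse-variance) pooled estimate
Q = np.sum(w * (y - y_fixed) ** 2)                 # Cochran's Q
df = len(y) - 1
I2 = max(0.0, (Q - df) / Q) * 100                  # heterogeneity, in percent

tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))  # between-study variance
w_star = 1.0 / (v + tau2)                          # random-effects weights
y_random = np.sum(w_star * y) / np.sum(w_star)
se_random = np.sqrt(1.0 / np.sum(w_star))

print(f"Q = {Q:.2f}, I^2 = {I2:.0f}%, tau^2 = {tau2:.3f}")
print(f"pooled effect = {y_random:.3f} (95% CI {y_random - 1.96*se_random:.3f} "
      f"to {y_random + 1.96*se_random:.3f})")
```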

“This meta-analysis shows that moderate alcohol consumption did not affect estimates of insulin sensitivity or fasting glucose levels, but it decreased fasting insulin concentrations and HbA1c. Sex-stratified analysis suggested that moderate alcohol consumption may improve insulin sensitivity and decrease fasting insulin concentrations in women but not in men. The meta-regression suggested no influence of dosage and duration on the results. However, the number of studies may have been too low to detect influences by dosage and duration. […] The primary finding that alcohol consumption does not influence insulin sensitivity concords with the intervention studies included in the review of Hulthe and Fagerberg (5). This is in contrast with observational studies suggesting a significant association between moderate alcohol consumption and improved insulin sensitivity (34,35). […] We observed lower levels of HbA1c in subjects consuming moderate amounts of alcohol compared with abstainers. This has also been shown in several observational studies (39,43,44). Alcohol may decrease HbA1c by suppressing the acute rise in blood glucose after a meal and increasing the early insulin response (45). This would result in lower glucose concentrations over time and, thus, lower HbA1c concentrations. Unfortunately, the underlying mechanism of glycemic control by alcohol is not clearly understood.”

vi. Predictors of Lower-Extremity Amputation in Patients With an Infected Diabetic Foot Ulcer.

“Infection is a frequent complication of diabetic foot ulcers, with up to 58% of ulcers being infected at initial presentation at a diabetic foot clinic, increasing to 82% in patients hospitalized for a diabetic foot ulcer (1). These diabetic foot infections (DFIs) are associated with poor clinical outcomes for the patient and high costs for both the patient and the health care system (2). Patients with a DFI have a 50-fold increased risk of hospitalization and 150-fold increased risk of lower-extremity amputation compared with patients with diabetes and no foot infection (3). Among patients with a DFI, ∼5% will undergo a major amputation and 20–30% a minor amputation, with the presence of peripheral arterial disease (PAD) greatly increasing amputation risk (4–6).”

“As infection of a diabetic foot wound heralds a poor outcome, early diagnosis and treatment are important. Unfortunately, systemic signs of inflammation such as fever and leukocytosis are often absent even with a serious foot infection (10,11). As local signs and symptoms of infection are also often diminished, because of concomitant peripheral neuropathy and ischemia (12), diagnosing and defining resolution of infection can be difficult.”

“The system developed by the International Working Group on the Diabetic Foot (IWGDF) and the Infectious Diseases Society of America (IDSA) provides criteria for the diagnosis of infection of ulcers and classifies it into three categories: mild, moderate, or severe. The system was validated in three relatively small cohorts of patients […] The European Study Group on Diabetes and the Lower Extremity (Eurodiale) prospectively studied a large cohort of patients with a diabetic foot ulcer (17), enabling us to determine the prognostic value of the IWGDF system for clinically relevant lower-extremity amputations. […] We prospectively studied 575 patients with an infected diabetic foot ulcer presenting to 1 of 14 diabetic foot clinics in 10 European countries. […] Among these patients, 159 (28%) underwent an amputation. […] Patients were followed monthly until healing of the foot ulcer(s), major amputation, or death — up to a maximum of 1 year.”

“One hundred and ninety-nine patients had a grade 2 (mild) infection, 338 a grade 3 (moderate), and 38 a grade 4 (severe). Amputations were performed on 159 (28%) patients (126 minor and 33 major) within the year of follow-up; 103 patients (18%) underwent amputations proximal to and including the hallux. […] The independent predictors of any amputation were as follows: periwound edema, HR 2.01 (95% CI 1.33–3.03); foul smell, HR 1.74 (1.17–2.57); purulent and nonpurulent exudate, HR 1.67 (1.17–2.37) and 1.49 (1.02–2.18), respectively; deep ulcer, HR 3.49 (1.84–6.60); positive probe-to-bone test, HR 6.78 (3.79–12.15); pretibial edema, HR 1.53 (1.02–2.31); fever, HR 2.00 (1.15–3.48); elevated CRP levels but less than three times the upper limit of normal, HR 2.74 (1.40–5.34); and elevated CRP levels more than three times the upper limit, HR 3.84 (2.07–7.12). […] In comparison with mild infection, the presence of a moderate infection increased the hazard for any amputation by a factor of 2.15 (95% CI 1.25–3.71) and 3.01 (1.51–6.01) for amputations excluding the lesser toes. For severe infection, the hazard for any amputation increased by a factor of 4.12 (1.99–8.51) and for amputations excluding the lesser toes by a factor of 5.40 (2.20–13.26). Larger ulcer size and presence of PAD were also independent predictors of both any amputation and amputations excluding the lesser toes, with HRs between 1.81 and 3 (and 95% CIs between 1.05 and 6.6).”

“Previously published studies that have aimed to identify independent risk factors for lower-extremity amputation in patients with a DFI have noted an association with older age (5,22), the presence of fever (5), elevated acute-phase reactants (5,22,23), higher HbA1c levels (24), and renal insufficiency (5,22).”

“The new risk scores we developed for any amputation, and amputations excluding the lesser toes had higher prognostic capability, based on the area under the ROC curve (0.80 and 0.78, respectively), than the IWGDF system (0.67) […] which is currently the only one in use for infected diabetic foot ulcers. […] these Eurodiale scores were developed based on the available data of our cohort, and they will need to be validated in other populations before any firm conclusions can be drawn. The advantage of these newly developed scores is that they are easier for clinicians to perform […] These newly developed risk scores can be readily used in daily clinical practice without the necessity of obtaining additional laboratory testing.”
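
The comparison of risk scores above is done via the area under the ROC curve; a hedged sketch of that kind of comparison on simulated data (not the Eurodiale cohort, and with arbitrary score constructions) is below.

```python
# Sketch: comparing two risk scores for a binary outcome (amputation yes/no)
# by area under the ROC curve, on simulated data (scikit-learn).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 575
outcome = rng.binomial(1, 0.28, n)                 # ~28% amputation rate, as in the cohort
# Hypothetical scores: the "new" score tracks the outcome more closely
new_score = outcome * 1.2 + rng.normal(0, 1, n)
old_score = outcome * 0.6 + rng.normal(0, 1, n)

print(f"new score AUC: {roc_auc_score(outcome, new_score):.2f}")
print(f"old score AUC: {roc_auc_score(outcome, old_score):.2f}")
```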


September 12, 2017 Posted by | Cardiology, Diabetes, Economics, Epidemiology, Health Economics, Infectious disease, Medicine, Microbiology, Statistics

Utility of Research Autopsies for Understanding the Dynamics of Cancer

A few links:
Pancreatic cancer.
Jaccard index.
Limited heterogeneity of known driver gene mutations among the metastases of individual patients with pancreatic cancer.
Epitope.
Tissue-specific mutation accumulation in human adult stem cells during life.
Epigenomic reprogramming during pancreatic cancer progression links anabolic glucose metabolism to distant metastasis.

August 25, 2017 Posted by | Cancer/oncology, Genetics, Immunology, Lectures, Medicine, Statistics

Quantifying tumor evolution through spatial computational modeling

Two general remarks: 1. She talks very fast, in my opinion unpleasantly fast – the lecture would have been at least slightly easier to follow if she’d slowed down a little. 2. A few of the lectures uploaded in this lecture series (from the IAS Mathematical Methods in Cancer Evolution and Heterogeneity Workshop) seem to have some sound issues; in this lecture there are multiple 1-2-second-long ‘chunks’ where the sound drops out and some words are lost. This is really annoying, and a similar problem (which was likely ‘the same problem’) previously led me to quit another lecture in the series; however, in this case I decided to give it a shot anyway, and I actually think it’s not a big deal; the sound losses are very short in duration, and usually no more than one or two words are lost, so you can usually figure out what was said. During this lecture there were incidentally also some issues with the monitor roughly 27 minutes in, but this isn’t a big deal as no information was lost, and unlike the people who originally attended the lecture you can just skip ahead approximately one minute (that was how long it took to solve that problem).

A few relevant links to stuff she talks about in the lecture:

A Big Bang model of human colorectal tumor growth.
Approximate Bayesian computation.
Site frequency spectrum.
Identification of neutral tumor evolution across cancer types.
Using tumour phylogenetics to identify the roots of metastasis in humans.

August 22, 2017 Posted by | Cancer/oncology, Evolutionary biology, Genetics, Lectures, Mathematics, Medicine, Statistics

A few diabetes papers of interest

i. Rates of Diabetic Ketoacidosis: International Comparison With 49,859 Pediatric Patients With Type 1 Diabetes From England, Wales, the U.S., Austria, and Germany.

“Rates of DKA in youth with type 1 diabetes vary widely nationally and internationally, from 15% to 70% at diagnosis (4) to 1% to 15% per established patient per year (9–11). However, data from systematic comparisons between countries are limited. To address this gap in the literature, we analyzed registry and audit data from three organizations: the Prospective Diabetes Follow-up Registry (DPV) in Germany and Austria, the National Paediatric Diabetes Audit (NPDA) in England and Wales, and the T1D Exchange (T1DX) in the U.S. These countries have similarly advanced, yet differing, health care systems in which data on DKA and associated factors are collected. Our goal was to identify indicators of risk for DKA admissions in pediatric patients with >1-year duration of disease with an aim to better understand where targeted preventive programs might lead to a reduction in the frequency of this complication of management of type 1 diabetes.”

RESULTS The frequency of DKA was 5.0% in DPV, 6.4% in NPDA, and 7.1% in T1DX […] Mean HbA1c was lowest in DPV (63 mmol/mol [7.9%]), intermediate in T1DX (69 mmol/mol [8.5%]), and highest in NPDA (75 mmol/mol [9.0%]). […] In multivariable analyses, higher odds of DKA were found in females (odds ratio [OR] 1.23, 99% CI 1.10–1.37), ethnic minorities (OR 1.27, 99% CI 1.11–1.44), and HbA1c ≥7.5% (≥58 mmol/mol) (OR 2.54, 99% CI 2.09–3.09 for HbA1c from 7.5 to <9% [58 to <75 mmol/mol] and OR 8.74, 99% CI 7.18–10.63 for HbA1c ≥9.0% [≥75 mmol/mol]).”

Poor metabolic control is obviously very important, but it’s important to remember that poor metabolic control is in itself an outcome that needs to be explained. I would note that the mean HbA1c values here, especially that 75 mmol/mol one, seem really high; this is not a very satisfactory level of glycemic control and corresponds to an average glucose level of 12 mmol/l. And that’s a population average, meaning that many individuals have values much higher than this. Actually, the most surprising thing to me about these data is that the DKA event rates are not much higher than they are, considering the level of metabolic control achieved. Another slightly surprising finding is that teenagers (13-17 yrs) were not actually all that much more likely to have experienced DKA than small children (0-6 yrs); the OR is only ~1.5. Of course this cannot be taken as an indication that DKA events in teenagers do not make up a substantial proportion of the total number of DKA events in pediatric samples, as the type 1 prevalence is much higher in teenagers than in small children (incidence peaks in adolescence).
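
For those wondering where the ‘≈12 mmol/l’ figure above comes from: it follows from the standard IFCC-to-NGSP HbA1c conversion plus the ADAG estimated-average-glucose regression. A quick check is sketched below; the coefficients are the commonly cited published ones, but treat the snippet as illustrative.

```python
# Quick check of the HbA1c -> estimated average glucose conversion used above.
def ifcc_to_ngsp(hba1c_mmol_mol):
    """IFCC units (mmol/mol) to NGSP/DCCT units (%)."""
    return 0.0915 * hba1c_mmol_mol + 2.15

def estimated_average_glucose(hba1c_percent):
    """ADAG regression: estimated average glucose in mmol/L from HbA1c in %."""
    return 1.59 * hba1c_percent - 2.59

a1c_pct = ifcc_to_ngsp(75)                        # ~9.0%
print(f"{a1c_pct:.1f}% -> eAG {estimated_average_glucose(a1c_pct):.1f} mmol/L")  # ~11.7, i.e. ~12
```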

“In 2004–2009 in the U.S., the mean hospital cost per pediatric DKA admission was $7,142 (range $4,125–11,916) (6), and insurance claims data from 2007 reported an excess of $5,837 in annual medical expenditures for youth with insulin-treated diabetes with DKA compared with those without DKA (7). In Germany, pediatric patients with diabetes with DKA had diabetes-related costs that were up to 3.6-fold higher compared with those without DKA (8).”

“DKA frequency was lower in pump users than in injection users (OR 0.84, 99% CI 0.76–0.93). Heterogeneity in the association with DKA between registries was seen for pump use and age category, and the overall rate should be interpreted accordingly. A lower rate of DKA in pump users was only found in T1DX, in contrast to no association of pump use with DKA in DPV or NPDA. […] In multivariable analyses […], age, type 1 diabetes duration, and pump use were not significantly associated with DKA in the fully adjusted model. […] pump use was associated with elevated odds of DKA in the <6-year-olds and in the 6- to <13-year-olds but with reduced odds of DKA in the 13- to <18-year-olds.”

Pump use should, all else equal, probably increase the risk of DKA, but all else is never equal, and in these data pump users actually had a lower DKA event rate than did diabetics treated with injections. One should not conclude from this finding that pump use decreases the risk of DKA; selection bias and unobserved heterogeneities are problems which it is almost impossible to correct for in an adequate way, and I find it highly unlikely that selection bias is only a potential problem in the US (see below). There are many different ways selection bias can be a relevant problem; financial- and insurance-related reasons (relevant particularly in the US and likely the main factors the authors are considering) are far from the only potential problems. I could thus easily imagine selection dynamics playing a major role even in a hypothetical setting where all newly diagnosed children were started on pump therapy as a matter of course. In such a setting you might have a situation where very poorly controlled individuals would have 10 DKA events in a short amount of time because they didn’t take the necessary number of blood glucose tests/disregarded alarms/forgot or postponed filling up the pump when it was near empty/failed to switch the battery in time/etc. What might then happen would be that the diabetologist/endocrinologist would proceed to recommend that these patients doing very poorly on pump treatment switch to injection therapy, and what you would end up with would be a compliant/motivated group of patients on pump therapy and a noncompliant/poorly motivated group on injection therapy. This would happen even if everybody started on pump therapy, i.e. even in a setting where initial pump therapy exposure was completely unrelated to outcomes. Pump therapy requires more of the patient than does injection therapy, and if the patient is unwilling/unable to put in the work required, that treatment option will fail. In my opinion the default here should be that these treatment groups are (‘significantly’) different, not that they are similar.

A few more quotes from the paper:

“The major finding of these analyses is high rates of pediatric DKA across the three registries, even though DKA events at the time of diagnosis were not included. In the prior 12 months, ∼1 in 20 (DPV), 1 in 16 (NPDA), and 1 in 14 (T1DX) pediatric patients with a duration of diabetes ≥1 year were diagnosed with DKA and required treatment in a health care facility. Female sex, ethnic minority status, and elevated HbA1c were consistent indicators of risk for DKA across all three registries. These indicators of increased risk for DKA are similar to previous reports (10,11,18,19), and our rates of DKA are within the range in the pediatric diabetes literature of 1–15% per established patient per year (10,11).

Compared with patients receiving injection therapy, insulin pump use was associated with a lower risk of DKA only in the U.S. in the T1DX, but no difference was seen in the DPV or NPDA. Country-specific factors on the associations of risk factors with DKA require further investigation. For pump use, selection bias may play a role in the U.S. The odds of DKA in pump users was not increased in any registry, which is a marked difference from some (10) but not all historic data (20).”

ii. Effect of Long-Acting Insulin Analogs on the Risk of Cancer: A Systematic Review of Observational Studies.

NPH insulin has been the mainstay treatment for type 1 diabetes and advanced type 2 diabetes since the 1950s. However, this insulin is associated with an increased risk of nocturnal hypoglycemia, and its relatively short half-life requires frequent administration (1,2). Consequently, structurally modified insulins, known as long-acting insulin analogs (glargine and detemir), were developed in the 1990s to circumvent these limitations. However, there are concerns that long-acting insulin analogs may be associated with an increased risk of cancer. Indeed, some laboratory studies showed long-acting insulin analogs were associated with cancer cell proliferation and protected against apoptosis via their higher binding affinity to IGF-I receptors (3,4).

In 2009, four observational studies associated the use of insulin glargine with an increased risk of cancer (5–8). These studies raised important concerns but were also criticized for important methodological shortcomings (9–13). Since then, several observational studies assessing the association between long-acting insulin analogs and cancer have been published but yielded inconsistent findings (14–28). […] Several meta-analyses of observational studies have investigated the association between insulin glargine and cancer risk (34–37). These meta-analyses assessed the quality of included studies, but the methodological issues particular to pharmacoepidemiologic research were not fully considered. In addition, given the presence of important heterogeneity in this literature, the appropriateness of pooling the results of these studies remains unclear. We therefore conducted a systematic review of observational studies examining the association between long-acting insulin analogs and cancer incidence, with a particular focus on methodological strengths and weaknesses of these studies.”

“[W]e assessed the quality of studies for key components, including time-related biases (immortal time, time-lag, and time-window), inclusion of prevalent users, inclusion of lag periods, and length of follow-up between insulin initiation and cancer incidence.

Immortal time bias is defined by a period of unexposed person-time that is misclassified as exposed person-time or excluded, resulting in the exposure of interest appearing more favorable (40,41). Time-lag bias occurs when treatments used later in the disease management process are compared with those used earlier for less advanced stages of the disease. Such comparisons can result in confounding by disease duration or severity of disease if duration and severity of disease are not adequately considered in the design or analysis of the study (29). This is particularly true for chronic disease with dynamic treatment processes such as type 2 diabetes. Currently, American and European clinical guidelines suggest using basal insulin (e.g., NPH, glargine, and detemir) as a last line of treatment if HbA1c targets are not achieved with other antidiabetic medications (42). Therefore, studies that compare long-acting insulin analogs to nonbasal insulin may introduce confounding by disease duration. Time-window bias occurs when the opportunity for exposure differs between case subjects and control subjects (29,43).

The importance of considering a lag period is necessary for latency considerations (i.e., a minimum time between treatment initiation and the development of cancer) and to minimize protopathic and detection bias. Protopathic bias, or reverse causation, is present when a medication (exposure) is prescribed for early symptoms related to the outcome of interest, which can lead to an overestimation of the association. Lagging the exposure by a predefined time window in cohort studies or excluding exposures in a predefined time window before the event in case-control studies is a means of minimizing this bias (44). Detection bias is present when the exposure leads to higher detection of the outcome of interest due to the increased frequency of clinic visits (e.g., newly diagnosed patients with type 2 diabetes or new users of another antidiabetic medication), which also results in an overestimation of risk (45). Thus, including a lag period, such as starting follow-up after 1 year of the initiation of a drug, simultaneously considers a latency period while also minimizing protopathic and detection bias.”
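
Mechanically, applying a lag period of the kind described above just means shifting the start of exposed follow-up forward by a fixed window after drug initiation, so that events occurring shortly after initiation are not attributed to the drug. A toy sketch (hypothetical cohort, one-year lag) is below.

```python
# Toy illustration of applying a 1-year lag to exposure in a cohort, so that
# events in the first year after drug initiation are not counted as drug-exposed
# (one way of addressing latency, protopathic bias, and detection bias).
import pandas as pd

cohort = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "drug_start": pd.to_datetime(["2005-03-01", "2006-07-15", "2007-01-10", "2005-11-30"]),
    "event_date": pd.to_datetime(["2005-09-01", "2010-02-01", "2008-06-01", "2012-05-20"]),
})

lag = pd.DateOffset(years=1)
cohort["follow_up_start"] = cohort["drug_start"] + lag
# Person-time and events before follow_up_start are excluded from the exposed analysis
cohort["event_counted"] = cohort["event_date"] >= cohort["follow_up_start"]
print(cohort[["id", "follow_up_start", "event_counted"]])
# Patient 1's event 6 months after initiation is not counted as drug-exposed
```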

“We systematically searched MEDLINE and EMBASE from 2000 to 2014 to identify all observational studies evaluating the relationship between the long-acting insulin analogs and the risk of any and site-specific cancers (breast, colorectal, prostate). […] 16 cohort and 3 case-control studies were included in this systematic review (5–8,14–28). All studies evaluated insulin glargine, with four studies also investigating insulin detemir (15,17,25,28). […] The study populations ranged from 1,340 to 275,164 patients […]. The mean or median durations of follow-up and age ranged from 0.9 to 7.0 years and from 52.3 to 77.4 years, respectively. […] Thirteen of 15 studies reported no association between insulin glargine and detemir and any cancer. Four of 13 studies reported an increased risk of breast cancer with insulin glargine. In the quality assessment, 7 studies included prevalent users, 11 did not consider a lag period, 6 had time-related biases, and 16 had short (<5 years) follow-up.”

“Of the 19 studies in this review, immortal time bias may have been introduced in one study based on the time-independent exposure and cohort entry definitions that were used in this cohort study […] Time-lag bias may have occurred in four studies […] A variation of time-lag bias was observed in a cohort study of new insulin users (28). For the exposure definition, highest duration since the start of insulin use was compared with the lowest. It is expected that the risk of cancer would increase with longer duration of insulin use; however, the opposite was reported (with RRs ranging from 0.50 to 0.90). The protective association observed could be due to competing risks (e.g., death from cardiovascular-related events) (47,48). Patients with diabetes have a higher risk of cardiovascular-related deaths compared with patients with no diabetes (49,50). Therefore, patients with diabetes who die of cardiovascular-related events do not have the opportunity to develop cancer, resulting in an underestimation of the risk of cancer. […] Time-window bias was observed in two studies (18,22). […] HbA1c and diabetes duration were not accounted for in 15 of the 19 studies, resulting in likely residual confounding (7,8,14–18,20–26,28). […] Seven studies included prevalent users of insulin (8,15,18,20,21,23,25), which is problematic because of the corresponding depletion of susceptible subjects in other insulin groups compared with long-acting insulin analogs. Protopathic or detection bias could have resulted in 11 of the 19 studies because a lag period was not incorporated in the study design (6,7,14–16,18–21,23,28).”

CONCLUSIONS The observational studies examining the risk of cancer associated with long-acting insulin analogs have important methodological shortcomings that limit the conclusions that can be drawn. Thus, uncertainty remains, particularly for breast cancer risk.”

iii. Impact of Socioeconomic Status on Cardiovascular Disease and Mortality in 24,947 Individuals With Type 1 Diabetes.

“Socioeconomic status (SES) is a powerful predictor of cardiovascular disease (CVD) and death. We examined the association in a large cohort of patients with type 1 diabetes. […] Clinical data from the Swedish National Diabetes Register were linked to national registers, whereby information on income, education, marital status, country of birth, comorbidities, and events was obtained. […] Type 1 diabetes was defined on the basis of epidemiologic data: treatment with insulin and a diagnosis at the age of 30 years or younger. This definition has been validated as accurate in 97% of the cases listed in the register (14).”

“We included 24,947 patients. Mean (SD) age and follow-up was 39.1 (13.9) and 6.0 (1.0) years. Death and fatal/nonfatal CVD occurred in 926 and 1378 individuals. Compared with being single, being married was associated with 50% lower risk of death, cardiovascular (CV) death, and diabetes-related death. Individuals in the two lowest quintiles had twice as great a risk of fatal/nonfatal CVD, coronary heart disease, and stroke and roughly three times as great a risk of death, diabetes-related death, and CV death as individuals in the highest income quintile. Compared with having ≤9 years of education, individuals with a college/university degree had 33% lower risk of fatal/nonfatal stroke.”

“Individuals with 10–12 years of education were comparable at baseline (considering distribution of age and sex) with those with a college/university degree […]. Individuals with a college/university degree had higher income, had 5 mmol/mol lower HbA1c, were more likely to be married/cohabiting, used insulin pump more frequently (17.5% vs. 14.5%), smoked less (5.8% vs. 13.1%), and had less albuminuria (10.8% vs. 14.2%). […] Women had substantially lower income and higher education, were more often married, used insulin pump more frequently, had less albuminuria, and smoked more frequently than men […] Individuals with high income were more likely to be married/cohabiting, had lower HbA1c, and had lower rates of smoking as well as albuminuria”.

CONCLUSIONS Low SES increases the risk of CVD and death by a factor of 2–3 in type 1 diabetes.”

“The effect of SES was striking despite rigorous adjustments for risk factors and confounders. Individuals in the two lowest income quintiles had two to three times higher risk of CV events and death than those in the highest income quintile. Compared with low educational level, having high education was associated with ∼30% lower risk of stroke. Compared with being single, individuals who were married/cohabiting had >50% lower risk of death, CV death, and diabetes-related death. Immigrants had 20–40% lower risk of fatal/nonfatal CVD, all-cause death, and diabetes-related death. Additionally, we show that males had 44%, 63%, and 29% higher risk of all-cause death, CV death, and diabetes-related death, respectively.

Despite rigorous adjustments for covariates and equitable access to health care at a negligible cost (20,21), SES and sex were robust predictors of CVD disease and mortality in type 1 diabetes; their effect was comparable with that of smoking, which represented an HR of 1.56 (95% CI 1.29–1.91) for all-cause death. […] Our study shows that men with type 1 diabetes are at greater risk of CV events and death compared with women. This should be viewed in the light of a recent meta-analysis of 26 studies, which showed higher excess risk in women compared with men. Overall, women had 40% greater excess risk of all-cause mortality, and twice the excess risk of fatal/nonfatal vascular events, compared with men (29). Thus, whereas the excess risk (i.e., the risk of patients with diabetes compared with the nondiabetic population) of vascular disease is higher in women with diabetes, we show that men with diabetes are still at substantially greater risk of all-cause death, CV death, and diabetes death compared with women with diabetes. Other studies are in line with our findings (10,11,13,30–32).”

iv. Interventions That Restore Awareness of Hypoglycemia in Adults With Type 1 Diabetes: A Systematic Review and Meta-analysis.

“Hypoglycemia remains the major limiting factor toward achieving good glycemic control (1). Recurrent hypoglycemia reduces symptomatic and hormone responses to subsequent hypoglycemia (2), associated with impaired awareness of hypoglycemia (IAH). IAH occurs in up to one-third of adults with type 1 diabetes (T1D) (3,4), increasing their risk of severe hypoglycemia (SH) sixfold (3) and contributing to substantial morbidity, with implications for employment (5), driving (6), and mortality. Distribution of risk of SH is skewed: one study showed that 5% of subjects accounted for 54% of all SH episodes, with IAH one of the main risk factors (7). “Dead-in-bed,” related to nocturnal hypoglycemia, is a leading cause of death in people with T1D <40 years of age (8).”

“This systematic review assessed the clinical effectiveness of treatment strategies for restoring hypoglycemia awareness (HA) and reducing SH risk in those with IAH and performed a meta-analysis, where possible, for different approaches in restoring awareness in T1D adults. Interventions to restore HA were broadly divided into three categories: educational (inclusive of behavioral), technological, and pharmacotherapeutic. […] Forty-three studies (18 randomized controlled trials, 25 before-and-after studies) met the inclusion criteria, comprising 27 educational, 11 technological, and 5 pharmacological interventions. […] A meta-analysis for educational interventions on change in mean SH rates per person per year was performed. Combining before-and-after and RCT studies, six studies (n = 1,010 people) were included in the meta-analysis […] A random-effects meta-analysis revealed an effect size of a reduction in SH rates of 0.44 per patient per year with 95% CI 0.253–0.628. [here’s the forest plot, US] […] Most of the educational interventions were observational and mostly retrospective, with few RCTs. The overall risk of bias is considered medium to high and the study quality moderate. Most, if not all, of the RCTs did not use double blinding and lacked information on concealment. The strength of association of the effect of educational interventions is moderate. The ability of educational interventions to restore IAH and reduce SH is consistent and direct with educational interventions showing a largely positive outcome. There is substantial heterogeneity between studies, and the estimate is imprecise, as reflected by the large CIs. The strength of evidence is moderate to high.”

v. Trends of Diagnosis-Specific Work Disability After Newly Diagnosed Diabetes: A 4-Year Nationwide Prospective Cohort Study.

“There is little evidence to show which specific diseases contribute to excess work disability among those with diabetes. […] In this study, we used a large nationwide register-based data set, which includes information on work disability for all working-age inhabitants of Sweden, in order to investigate trends of diagnosis-specific work disability (sickness absence and disability pension) among people with diabetes for 4 years directly after the recorded onset of diabetes. We compared work disability trends among people with diabetes with trends among those without diabetes. […] The register data of diabetes medication and in- and outpatient hospital visits were used to identify all recorded new diabetes cases among the population aged 25–59 years in Sweden in 2006 (n = 14,098). Data for a 4-year follow-up of ICD-10 physician-certified sickness absence and disability pension days (2007‒2010) were obtained […] Comparisons were made using a random sample of the population without recorded diabetes (n = 39,056).”

RESULTS The most common causes of work disability were mental and musculoskeletal disorders; diabetes as a reason for disability was rare. Most of the excess work disability among people with diabetes compared with those without diabetes was owing to mental disorders (mean difference adjusted for confounding factors 18.8‒19.8 compensated days/year), musculoskeletal diseases (12.1‒12.8 days/year), circulatory diseases (5.9‒6.5 days/year), diseases of the nervous system (1.8‒2.0 days/year), and injuries (1.0‒1.2 days/year).”

CONCLUSIONS The increased risk of work disability among those with diabetes is largely attributed to comorbid mental, musculoskeletal, and circulatory diseases. […] Diagnosis of diabetes as the cause of work disability was rare.”

August 19, 2017 Posted by | Cancer/oncology, Cardiology, Diabetes, Health Economics, Medicine, Statistics

Infectious Disease Surveillance (II)

Some more observations from the book below.

“There are three types of influenza viruses — A, B, and C — of which only types A and B cause widespread outbreaks in humans. Influenza A viruses are classified into subtypes based on antigenic differences between their two surface glycoproteins, hemagglutinin and neuraminidase. Seventeen hemagglutinin subtypes (H1–H17) and nine neuraminidase subtypes (N1–N9) have been identified. […] The internationally accepted naming convention for influenza viruses contains the following elements: the type (e.g., A, B, C), geographical origin (e.g., Perth, Victoria), strain number (e.g., 361), year of isolation (e.g., 2011), for influenza A the hemagglutinin and neuraminidase antigen description (e.g., H1N1), and for nonhuman origin viruses the host of origin (e.g., swine) [4].”

“Only two antiviral drug classes are licensed for chemoprophylaxis and treatment of influenza—the adamantanes (amantadine and rimantadine) and the neuraminidase inhibitors (oseltamivir and zanamivir). […] Antiviral resistant strains arise through selection pressure in individual patients during treatment [which can lead to treatment failure]. […] they usually do not transmit further (because of impaired virus fitness) and have limited public health implications. On the other hand, primarily resistant viruses have emerged in the past decade and in some cases have completely replaced the susceptible strains. […] Surveillance of severe influenza illness is challenging because most cases remain undiagnosed. […] In addition, most of the influenza burden on the healthcare system is because of complications such as secondary bacterial infections and exacerbations of pre-existing chronic diseases, and often influenza is not suspected as an underlying cause. Even if suspected, the virus could have been already cleared from the respiratory secretions when the testing is performed, making diagnostic confirmation impossible. […] Only a small proportion of all deaths caused by influenza are classified as influenza-related on death certificates. […] mortality surveillance based only on death certificates is not useful for the rapid assessment of an influenza epidemic or pandemic severity. Detection of excess mortality in real time can be done by establishing specific monitoring systems that overcome these delays [such as sentinel surveillance systems, US].”

“Influenza vaccination programs are extremely complex and costly. More than half a billion doses of influenza vaccines are produced annually in two separate vaccine production cycles, one for the Northern Hemisphere and one for the Southern Hemisphere [54]. Because the influenza virus evolves constantly and vaccines are reformulated yearly, both vaccine effectiveness and safety need to be monitored routinely. Vaccination campaigns are also organized annually and require continuous public health efforts to maintain an acceptable level of vaccination coverage in the targeted population. […] huge efforts are made and resources spent to produce and distribute influenza vaccines annually. Despite these efforts, vaccination coverage among those at risk in many parts of the world remains low.”

“The Active Bacterial Core surveillance (ABCs) network and its predecessor have been examples of using surveillance as information for action for over 20 years. ABCs has been used to measure disease burden, to provide data for vaccine composition and recommended-use policies, and to monitor the impact of interventions. […] sites represent wide geographic diversity and approximately reflect the race and urban-to-rural mix of the U.S. population [37]. Currently, the population under surveillance is 19–42 million and varies by pathogen and project. […] ABCs has continuously evolved to address challenging questions posed by the six pathogens (H. influenzae; GAS [Group A Streptococcus], GBS [Group B Streptococcus], S.  pneumoniae, N. meningitidis, and MRSA) and other emerging infections. […] For the six core pathogens, the objectives are (1) to determine the incidence and epidemiologic characteristics of invasive disease in geographically diverse populations in the United States through active, laboratory, and population-based surveillance; (2) to determine molecular epidemiologic patterns and microbiologic characteristics of isolates collected as part of routine surveillance in order to track antimicrobial resistance; (3) to detect the emergence of new strains with new resistance patterns and/or virulence and contribute to development and evaluation of new vaccines; and (4) to provide an infrastructure for surveillance of other emerging pathogens and for conducting studies aimed at identifying risk factors for disease and evaluating prevention policies.”

“Food may become contaminated by over 250 bacterial, viral, and parasitic pathogens. Many of these agents cause diarrhea and vomiting, but there is no single clinical syndrome common to all foodborne diseases. Most of these agents can also be transmitted by nonfoodborne routes, including contact with animals or contaminated water. Therefore, for a given illness, it is often unclear whether the source of infection is foodborne or not. […] Surveillance systems for foodborne diseases provide extremely important information for prevention and control.”

“Since 1995, the Centers for Disease Control and Prevention (CDC) has routinely used an automated statistical outbreak detection algorithm that compares current reports of each Salmonella serotype with the preceding 5-year mean number of cases for the same geographic area and week of the year to look for unusual clusters of infection [5]. The sensitivity of Salmonella serotyping to detect outbreaks is greatest for rare serotypes, because a small increase is more noticeable against a rare background. The utility of serotyping has led to its widespread adoption in surveillance for food pathogens in many countries around the world [6]. […] Today, a new generation of subtyping methods […] is increasing the specificity of laboratory-based surveillance and its power to detect outbreaks […] Molecular subtyping allows comparison of the molecular “fingerprint” of bacterial strains. In the United States, the CDC coordinates a network called PulseNet that captures data from standardized molecular subtyping by PFGE [pulsed field gel electrophoresis]. By comparing new submissions and past data, public health officials can rapidly identify geographically dispersed clusters of disease that would otherwise not be apparent and evaluate them as possible foodborne-disease outbreaks [8]. The ability to identify geographically dispersed outbreaks has become increasingly important as more foods are mass-produced and widely distributed. […] Similar networks have been developed in Canada, Europe, the Asia Pacific region, Latin America and the Caribbean region, the Middle Eastern region and, most recently, the African region”.
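The detection algorithm described above is, at its core, a comparison of this week’s count against a historical baseline for the same week and area. A minimal sketch of that idea is given below; the data layout, the function name, and the 2-SD decision rule are my own illustrative assumptions, not the CDC’s actual implementation.

```python
import statistics

def flag_unusual_cluster(current_count, historical_counts, threshold_sd=2.0):
    """Flag a serotype/area/week combination whose current count is unusually
    high relative to the preceding 5-year baseline for the same week of the
    year and the same geographic area. The 2-SD threshold is an illustrative
    choice, not the CDC's actual decision rule."""
    baseline_mean = statistics.mean(historical_counts)
    baseline_sd = statistics.stdev(historical_counts)
    return current_count > baseline_mean + threshold_sd * baseline_sd

# Example: counts for one Salmonella serotype, one state, same week in 5 prior years
history = [2, 1, 3, 2, 2]
print(flag_unusual_cluster(9, history))  # True  - worth investigating
print(flag_unusual_cluster(3, history))  # False - within the expected range
```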

“Food consumption and practices have changed during the past 20 years in the United States, resulting in a shift from readily detectable, point-source outbreaks (e.g., attendance at a wedding dinner), to widespread outbreaks that occur over many communities with only a few illnesses in each community. One of the changes has been establishment of large food-producing facilities that disseminate products throughout the country. If a food product is contaminated with a low level of pathogen, contaminated food products are distributed across many states; and only a few illnesses may occur in each community. This type of outbreak is often difficult to detect. PulseNet has been critical for the detection of widely dispersed outbreaks in the United States [17]. […] The growth of the PulseNet database […] and the use of increasingly sophisticated epidemiological approaches have led to a dramatic increase in the number of multistate outbreaks detected and investigated.”

“Each year, approximately 35 million people are hospitalized in the United States, accounting for 170 million inpatient days [1,2]. There are no recent estimates of the numbers of healthcare-associated infections (HAI). However, two decades ago, HAI were estimated to affect more than 2 million hospital patients annually […] The mortality attributed to these HAI was estimated at about 100,000 deaths annually. […] Almost 85% of HAI in the United States are associated with bacterial pathogens, and 33% are thought to be preventable [4]. […] The primary purpose of surveillance [in the context of HAI] is to alert clinicians, epidemiologists, and laboratories of the need for targeted prevention activities required to reduce HAI rates. HAI surveillance data help to establish baseline rates that may be used to determine the potential need to change public health policy, to act and intervene in clinical settings, and to assess the effectiveness of microbiology methods, appropriateness of tests, and allocation of resources. […] As less than 10% of HAI in the United States occur as recognized epidemics [18], HAI surveillance should not be embarked on merely for the detection of outbreaks.”

“There are two types of rate comparisons — intrahospital and interhospital. The primary goals of intrahospital comparison are to identify areas within the hospital where HAI are more likely to occur and to measure the efficacy of interventional efforts. […] Without external comparisons, hospital infection control departments may [however] not know if the endemic rates in their respective facilities are relatively high or where to focus the limited financial and human resources of the infection control program. […] The CDC has been the central aggregating institution for active HAI surveillance in the United States since the 1960s.”

“Low sensitivity (i.e., missed infections) in a surveillance system is usually more common than low specificity (i.e., patients reported to have infections who did not actually have infections).”

“Among the numerous analyses of CDC hospital data carried out over the years, characteristics consistently found to be associated with higher HAI rates include affiliation with a medical school (i.e., teaching vs. nonteaching), size of the hospital and ICU categorized by the number of beds (large hospitals and larger ICUs generally had higher infection rates), type of control or ownership of the hospital (municipal, nonprofit, investor owned), and region of the country [43,44]. […] Various analyses of SENIC and NNIS/NHSN data have shown that differences in patient risk factors are largely responsible for interhospital differences in HAI rates. After controlling for patients’ risk factors, average lengths of stay, and measures of the completeness of diagnostic workups for infection (e.g., culturing rates), the differences in the average HAI rates of the various hospital groups virtually disappeared. […] For all of these reasons, an overall HAI rate, per se, gives little insight into whether the facility’s infection control efforts are effective.”

“Although a hospital’s surveillance system might aggregate accurate data and generate appropriate risk-adjusted HAI rates for both internal and external comparison, comparison may be misleading for several reasons. First, the rates may not adjust for patients’ unmeasured intrinsic risks for infection, which vary from hospital to hospital. […] Second, if surveillance techniques are not uniform among hospitals or are used inconsistently over time, variations will occur in sensitivity and specificity for HAI case finding. Third, the sample size […] must be sufficient. This issue is of concern for hospitals with fewer than 200 beds, which represent about 10% of hospital admissions in the United States. In most CDC analyses, rates from hospitals with very small denominators tend to be excluded [37,46,49]. […] Although many healthcare facilities around the country aggregate HAI surveillance data for baseline establishment and interhospital comparison, the comparison of HAI rates is complex, and the value of the aggregated data must be balanced against the burden of their collection. […] If a hospital does not devote sufficient resources to data collection, the data will be of limited value, because they will be replete with inaccuracies. No national database has successfully dealt with all the problems in collecting HAI data and each varies in its ability to address these problems. […] While comparative data can be useful as a tool for the prevention of HAI, in some instances no data might be better than bad data.”

August 10, 2017 Posted by | Books, Data, Epidemiology, Infectious disease, Medicine, Statistics | Leave a comment

Beyond Significance Testing (IV)

Below I have added some quotes from chapters 5, 6, and 7 of the book.

“There are two broad classes of standardized effect sizes for analysis at the group or variable level, the d family, also known as group difference indexes, and the r family, or relationship indexes […] Both families are metric- (unit-) free effect sizes that can compare results across studies or variables measured in different original metrics. Effect sizes in the d family are standardized mean differences that describe mean contrasts in standard deviation units, which can exceed 1.0 in absolute value. Standardized mean differences are signed effect sizes, where the sign of the statistic indicates the direction of the corresponding contrast. Effect sizes in the r family are scaled in correlation units that generally range from −1.0 to +1.0, where the sign indicates the direction of the relation […] Measures of association are unsigned effect sizes and thus do not indicate directionality.”

“The correlation rpb is for designs with two unrelated samples. […] rpb […] is affected by base rate, or the proportion of cases in one group versus the other, p and q. It tends to be highest in balanced designs. As the design becomes more unbalanced holding all else constant, rpb approaches zero. […] rpb is not directly comparable across studies with dissimilar relative group sizes […]. The correlation rpb is also affected by the total variability (i.e., ST). If this variation is not constant over samples, values of rpb may not be directly comparable.”
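The dependence of rpb on the base rate is easy to demonstrate with a small simulation: below, the population standardized mean difference is held fixed while the relative group sizes are varied, and the point-biserial correlation shrinks as the design becomes more unbalanced. This is my own illustrative sketch (numpy assumed), not an example from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 0.8            # population standardized mean difference, held constant
N = 100_000        # large N, so sampling error is negligible

for p in (0.5, 0.3, 0.1, 0.02):           # proportion of cases in group 1 (base rate)
    n1 = int(p * N)
    n0 = N - n1
    scores = np.concatenate([rng.normal(d, 1, n1), rng.normal(0, 1, n0)])
    group = np.concatenate([np.ones(n1), np.zeros(n0)])
    r_pb = np.corrcoef(group, scores)[0, 1]  # point-biserial r = Pearson r with a 0/1 variable
    print(f"base rate {p:.2f}: r_pb = {r_pb:.3f}")

# r_pb falls from roughly .37 (balanced design) to roughly .11 (base rate .02),
# even though the group difference is 0.8 standard deviations throughout.
```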

“Too many researchers neglect to report reliability coefficients for scores analyzed. This is regrettable because effect sizes cannot be properly interpreted without knowing whether the scores are precise. The general effect of measurement error in comparative studies is to attenuate absolute standardized effect sizes and reduce the power of statistical tests. Measurement error also contributes to variation in observed results over studies. Of special concern is when both score reliabilities and sample sizes vary from study to study. If so, effects of sampling error are confounded with those due to measurement error. […] There are ways to correct some effect sizes for measurement error (e.g., Baguley, 2009), but corrected effect sizes are rarely reported. It is more surprising that measurement error is ignored in most meta-analyses, too. F. L. Schmidt (2010) found that corrected effect sizes were analyzed in only about 10% of the 199 meta-analytic articles published in Psychological Bulletin from 1978 to 2006. This implies that (a) estimates of mean effect sizes may be too low and (b) the wrong statistical model may be selected when attempting to explain between-studies variation in results. If a fixed effects model is mistakenly chosen over a random effects model, confidence intervals based on average effect sizes tend to be too narrow, which can make those results look more precise than they really are. Underestimating mean effect sizes while simultaneously overstating their precision is a potentially serious error.”

“[D]emonstration of an effect’s significance — whether theoretical, practical, or clinical — calls for more discipline-specific expertise than the estimation of its magnitude”.

“Some outcomes are categorical instead of continuous. The levels of a categorical outcome are mutually exclusive, and each case is classified into just one level. […] The risk difference (RD) is defined as pC − pT, and it estimates the parameter πC − πT. [Those ‘n-resembling letters’ are how wordpress displays pi; this is one of an almost infinite number of reasons why I detest blogging equations on this blog and usually do not do this – US] […] The risk ratio (RR) is the ratio of the risk rates […] which rate appears in the numerator versus the denominator is arbitrary, so one should always explain how RR is computed. […] The odds ratio (OR) is the ratio of the within-groups odds for the undesirable event. […] A convenient property of OR is that it can be converted to a kind of standardized mean difference known as logit d (Chinn, 2000). […] Reporting logit d may be of interest when the hypothetical variable that underlies the observed dichotomy is continuous.”

“The risk difference RD is easy to interpret but has a drawback: Its range depends on the values of the population proportions πC and πT. That is, the range of RD is greater when both πC and πT are closer to .50 than when they are closer to either 0 or 1.00. The implication is that RD values may not be comparable across different studies when the corresponding parameters πC and πT are quite different. The risk ratio RR is also easy to interpret. It has the shortcoming that only the finite interval from 0 to < 1.0 indicates lower risk in the group represented in the numerator, but the interval from > 1.00 to infinity is theoretically available for describing higher risk in the same group. The range of RR varies according to its denominator. This property limits the value of RR for comparing results across different studies. […] The odds ratio OR shares the limitation that the finite interval from 0 to < 1.0 indicates lower risk in the group represented in the numerator, but the interval from > 1.0 to infinity describes higher risk for the same group. Analyzing natural log transformations of OR and then taking antilogs of the results deals with this problem, just as for RR. The odds ratio may be the least intuitive of the comparative risk effect sizes, but it probably has the best overall statistical properties. This is because OR can be estimated in prospective studies, in studies that randomly sample from exposed and unexposed populations, and in retrospective studies where groups are first formed based on the presence or absence of a disease before their exposure to a putative risk factor is determined […]. Other effect sizes may not be valid in retrospective studies (RR) or in studies without random sampling ([Pearson correlations between dichotomous variables, US]).”
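To make the definitions in the two quotes above concrete, here is a small sketch computing RD, RR, OR, and logit d from an invented 2 × 2 table; the conversion to logit d divides ln(OR) by π/√3 ≈ 1.81, which is my reading of the Chinn (2000) approach mentioned in the quote.

```python
import math

# Hypothetical counts of an undesirable event (e.g., relapse) in two groups
events_C, n_C = 30, 100     # control group
events_T, n_T = 15, 100     # treatment group

p_C, p_T = events_C / n_C, events_T / n_T
odds_C = p_C / (1 - p_C)
odds_T = p_T / (1 - p_T)

RD = p_C - p_T                      # risk difference
RR = p_C / p_T                      # risk ratio (control in the numerator; say so when reporting)
OR = odds_C / odds_T                # odds ratio
logit_d = math.log(OR) / (math.pi / math.sqrt(3))   # ~ ln(OR)/1.81 (Chinn, 2000)

print(f"RD = {RD:.2f}, RR = {RR:.2f}, OR = {OR:.2f}, logit d = {logit_d:.2f}")
# RD = 0.15, RR = 2.00, OR = 2.43, logit d = 0.49
```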

“Sensitivity and specificity are determined by the threshold on a screening test. This means that different thresholds on the same test will generate different sets of sensitivity and specificity values in the same sample. But both sensitivity and specificity are independent of population base rate and sample size. […] Sensitivity and specificity affect predictive value, the proportion of test results that are correct […] In general, predictive values increase as sensitivity and specificity increase. […] Predictive value is also influenced by the base rate (BR), the proportion of all cases with the disorder […] In general, PPV [positive predictive value] decreases and NPV [negative…] increases as BR approaches zero. This means that screening tests tend to be more useful for ruling out rare disorders than correctly predicting their presence. It also means that most positive results may be false positives under low base rate conditions. This is why it is difficult for researchers or social policy makers to screen large populations for rare conditions without many false positives. […] The effect of BR on predictive values is striking but often overlooked, even by professionals […]. One misunderstanding involves confusing sensitivity and specificity, which are invariant to BR, with PPV and NPV, which are not. This means that diagnosticians fail to adjust their estimates of test accuracy for changes in base rates, which exemplifies the base rate fallacy. […] In general, test results have greater impact on changing the pretest odds when the base rate is moderate, neither extremely low (close to 0) nor extremely high (close to 1.0). But if the target disorder is either very rare or very common, only a result from a highly accurate screening test will change things much.”
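The effect of the base rate on predictive values follows directly from Bayes’ theorem, and a few lines of code make it vivid. The sensitivity and specificity values below are invented for illustration.

```python
def predictive_values(sensitivity, specificity, base_rate):
    """Positive and negative predictive value via Bayes' theorem."""
    tp = sensitivity * base_rate                 # true positives
    fp = (1 - specificity) * (1 - base_rate)     # false positives
    fn = (1 - sensitivity) * base_rate           # false negatives
    tn = specificity * (1 - base_rate)           # true negatives
    return tp / (tp + fp), tn / (tn + fn)        # PPV, NPV

# A fairly accurate screening test: 90% sensitivity, 90% specificity
for br in (0.50, 0.10, 0.01):
    ppv, npv = predictive_values(0.90, 0.90, br)
    print(f"base rate {br:.2f}: PPV = {ppv:.2f}, NPV = {npv:.3f}")

# base rate 0.50: PPV = 0.90, NPV = 0.900
# base rate 0.10: PPV = 0.50, NPV = 0.988
# base rate 0.01: PPV = 0.08, NPV = 0.999   <- most positives are false positives
```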

“The technique of ANCOVA [ANalysis of COVAriance, US] has two more assumptions than ANOVA does. One is homogeneity of regression, which requires equal within-populations unstandardized regression coefficients for predicting outcome from the covariate. In nonexperimental designs where groups differ systematically on the covariate […] the homogeneity of regression assumption is rather likely to be violated. The second assumption is that the covariate is measured without error […] Violation of either assumption may lead to inaccurate results. For example, an unreliable covariate in experimental designs causes loss of statistical power and in nonexperimental designs may also cause inaccurate adjustment of the means […]. In nonexperimental designs where groups differ systematically, these two extra assumptions are especially likely to be violated. An alternative to ANCOVA is propensity score analysis (PSA). It involves the use of logistic regression to estimate the probability for each case of belonging to different groups, such as treatment versus control, in designs without randomization, given the covariate(s). These probabilities are the propensities, and they can be used to match cases from nonequivalent groups.”
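A minimal sketch of the propensity score idea described above, using scikit-learn (my choice of tool, not the book’s): estimate each case’s probability of belonging to the treatment group from the covariate with logistic regression, then match treated and control cases on that estimated probability. The simulated data and the greedy matching rule are illustrative assumptions, not a recommended analysis pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Simulated nonequivalent groups: treated cases tend to have higher covariate values
n = 500
covariate = np.concatenate([rng.normal(0.5, 1, n), rng.normal(0.0, 1, n)])
treated = np.concatenate([np.ones(n, dtype=int), np.zeros(n, dtype=int)])

# Step 1: estimate propensity scores from the covariate(s)
X = covariate.reshape(-1, 1)
propensity = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: greedy 1:1 nearest-neighbor matching on the propensity score
controls = np.where(treated == 0)[0]
used = set()
pairs = []
for i in np.where(treated == 1)[0]:
    available = [j for j in controls if j not in used]
    j = min(available, key=lambda j: abs(propensity[i] - propensity[j]))
    used.add(j)
    pairs.append((i, j))

gap = np.mean([abs(propensity[i] - propensity[j]) for i, j in pairs])
print(f"{len(pairs)} matched pairs, mean propensity-score gap = {gap:.3f}")
```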

August 5, 2017 Posted by | Books, Epidemiology, Papers, Statistics | Leave a comment

A few diabetes papers of interest

i. Clinically Relevant Cognitive Impairment in Middle-Aged Adults With Childhood-Onset Type 1 Diabetes.

“Modest cognitive dysfunction is consistently reported in children and young adults with type 1 diabetes (T1D) (1). Mental efficiency, psychomotor speed, executive functioning, and intelligence quotient appear to be most affected (2); studies report effect sizes between 0.2 and 0.5 (small to modest) in children and adolescents (3) and between 0.4 and 0.8 (modest to large) in adults (2). Whether effect sizes continue to increase as those with T1D age, however, remains unknown.

A key issue not yet addressed is whether aging individuals with T1D have an increased risk of manifesting “clinically relevant cognitive impairment,” defined by comparing individual cognitive test scores to demographically appropriate normative means, as opposed to the more commonly investigated “cognitive dysfunction,” or between-group differences in cognitive test scores. Unlike the extensive literature examining cognitive impairment in type 2 diabetes, we know of only one prior study examining cognitive impairment in T1D (4). This early study reported a higher rate of clinically relevant cognitive impairment among children (10–18 years of age) diagnosed before compared with after age 6 years (24% vs. 6%, respectively) or a non-T1D cohort (6%).”

“This study tests the hypothesis that childhood-onset T1D is associated with an increased risk of developing clinically relevant cognitive impairment detectable by middle age. We compared cognitive test results between adults with and without T1D and used demographically appropriate published norms (10–12) to determine whether participants met criteria for impairment for each test; aging and dementia studies have selected a score ≥1.5 SD worse than the norm on that test, corresponding to performance at or below the seventh percentile (13).”
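The correspondence between a cutoff of 1.5 SD below the normative mean and roughly the seventh percentile is just a property of the normal distribution, as a one-liner confirms (scipy assumed):

```python
from scipy.stats import norm

# Proportion of a normal distribution falling 1.5 SD or more below the mean
print(norm.cdf(-1.5))   # ~0.067, i.e. roughly the 7th percentile
```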

“During 2010–2013, 97 adults diagnosed with T1D and aged <18 years (age and duration 49 ± 7 and 41 ± 6 years, respectively; 51% female) and 138 similarly aged adults without T1D (age 49 ± 7 years; 55% female) completed extensive neuropsychological testing. Biomedical data on participants with T1D were collected periodically since 1986–1988.  […] The prevalence of clinically relevant cognitive impairment was five times higher among participants with than without T1D (28% vs. 5%; P < 0.0001), independent of education, age, or blood pressure. Effect sizes were large (Cohen d 0.6–0.9; P < 0.0001) for psychomotor speed and visuoconstruction tasks and were modest (d 0.3–0.6; P < 0.05) for measures of executive function. Among participants with T1D, prevalent cognitive impairment was related to 14-year average A1c >7.5% (58 mmol/mol) (odds ratio [OR] 3.0; P = 0.009), proliferative retinopathy (OR 2.8; P = 0.01), and distal symmetric polyneuropathy (OR 2.6; P = 0.03) measured 5 years earlier; higher BMI (OR 1.1; P = 0.03); and ankle-brachial index ≥1.3 (OR 4.2; P = 0.01) measured 20 years earlier, independent of education.”

“Having T1D was the only factor significantly associated with the between-group difference in clinically relevant cognitive impairment in our sample. Traditional risk factors for age-related cognitive impairment, in particular older age and high blood pressure (24), were not related to the between-group difference we observed. […] Similar to previous studies of younger adults with T1D (14,26), we found no relationship between the number of severe hypoglycemic episodes and cognitive impairment. Rather, we found that chronic hyperglycemia, via its associated vascular and metabolic changes, may have triggered structural changes in the brain that disrupt normal cognitive function.”

Just to be absolutely clear about these results: The type 1 diabetics they recruited in this study were on average not yet fifty years old, yet more than one in four of them were cognitively impaired to a clinically relevant degree. This is a huge effect. As they note later in the paper:

“Unlike previous reports of mild/modest cognitive dysfunction in young adults with T1D (1,2), we detected clinically relevant cognitive impairment in 28% of our middle-aged participants with T1D. This prevalence rate in our T1D cohort is comparable to the prevalence of mild cognitive impairment typically reported among community-dwelling adults aged 85 years and older (29%) (20).”

The type 1 diabetics included in the study had had diabetes for roughly a decade more than I have. And the number of cognitively impaired individuals in that sample corresponds roughly to what you find when you test random 85+ year-olds. Having type 1 diabetes is not good for your brain.

ii. Comment on Nunley et al. Clinically Relevant Cognitive Impairment in Middle-Aged Adults With Childhood-Onset Type 1 Diabetes.

This one is a short comment on the above paper; below I’ve quoted ‘the meat’ of the comment:

“While the […] study provides us with important insights regarding cognitive impairment in adults with type 1 diabetes, we regret that depression has not been taken into account. A systematic review and meta-analysis published in 2014 identified significant objective cognitive impairment in adults and adolescents with depression regarding executive functioning, memory, and attention relative to control subjects (2). Moreover, depression is two times more common in adults with diabetes compared with those without this condition, regardless of type of diabetes (3). There is even evidence that the co-occurrence of diabetes and depression leads to additional health risks such as increased mortality and dementia (3,4); this might well apply to cognitive impairment as well. Furthermore, in people with diabetes, the presence of depression has been associated with the development of diabetes complications, such as retinopathy, and higher HbA1c values (3). These are exactly the diabetes-specific correlates that Nunley et al. (1) found.”

“We believe it is a missed opportunity that Nunley et al. (1) mainly focused on biological variables, such as hyperglycemia and microvascular disease, and did not take into account an emotional disorder widely represented among people with diabetes and closely linked to cognitive impairment. Even though severe or chronic cases of depression are likely to have been excluded in the group without type 1 diabetes based on exclusion criteria (1), data on the presence of depression (either measured through a diagnostic interview or by using a validated screening questionnaire) could have helped to interpret the present findings. […] Determining the role of depression in the relationship between cognitive impairment and type 1 diabetes is of significant importance. Treatment of depression might improve cognitive impairment both directly by alleviating cognitive depression symptoms and indirectly by improving treatment nonadherence and glycemic control, consequently lowering the risk of developing complications.”

iii. Prevalence of Diabetes and Diabetic Nephropathy in a Large U.S. Commercially Insured Pediatric Population, 2002–2013.

“[W]e identified 96,171 pediatric patients with diabetes and 3,161 pediatric patients with diabetic nephropathy during 2002–2013. We estimated prevalence of pediatric diabetes overall, by diabetes type, age, and sex, and prevalence of pediatric diabetic nephropathy overall, by age, sex, and diabetes type.”

“Although type 1 diabetes accounts for a majority of childhood and adolescent diabetes, type 2 diabetes is becoming more common with the increasing rate of childhood obesity and it is estimated that up to 45% of all new patients with diabetes in this age-group have type 2 diabetes (1,2). With the rising prevalence of diabetes in children, a rise in diabetes-related complications, such as nephropathy, is anticipated. Moreover, data suggest that the development of clinical macrovascular complications, neuropathy, and nephropathy may be especially rapid among patients with young-onset type 2 diabetes (age of onset <40 years) (3–6). However, the natural history of young patients with type 2 diabetes and resulting complications has not been well studied.”

I’m always interested in the identification mechanisms applied in papers like this one, and I’m a little confused about the high number of patients without prescriptions (almost one-third of patients); I assume these patients do take (or are given) prescription drugs but obtain them through channels not visible to the researchers (perhaps the prescriptions are issued to the parents, and the researchers don’t have access to those data), but the paper leaves this unclear. The mechanism they employ in the paper is not perfect (no mechanism is), but it probably works:

“Patients who had one or more prescription(s) for insulin and no prescriptions for another antidiabetes medication were classified as having type 1 diabetes, while those who filled prescriptions for noninsulin antidiabetes medications were considered to have type 2 diabetes.”
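As a rough sketch, the rule just quoted amounts to something like the following; the record format and field names are invented for illustration, and the actual study of course worked with pharmacy claims data rather than anything this simple:

```python
def classify_diabetes_type(prescriptions):
    """Classify a patient from filled prescriptions, following the rule quoted
    above: insulin only -> type 1; any noninsulin antidiabetes drug -> type 2.
    The record format and field names here are illustrative, not the study's."""
    drug_classes = {rx["drug_class"] for rx in prescriptions}
    if drug_classes == {"insulin"}:
        return "type 1"
    if drug_classes - {"insulin"}:      # at least one noninsulin antidiabetes drug
        return "type 2"
    return "unclassified"               # no antidiabetes prescriptions on record

print(classify_diabetes_type([{"drug_class": "insulin"}]))                               # type 1
print(classify_diabetes_type([{"drug_class": "insulin"}, {"drug_class": "metformin"}]))  # type 2
print(classify_diabetes_type([]))                                                        # unclassified
```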

When covering limitations of the paper, they observe incidentally in this context that:

“Klingensmith et al. (31) recently reported that in the initial month after diagnosis of type 2 diabetes around 30% of patients were treated with insulin only. Thus, we may have misclassified a small proportion of type 2 cases as type 1 diabetes or vice versa. Despite this, we found that 9% of patients had onset of type 2 diabetes at age <10 years, consistent with the findings of Klingensmith et al. (8%), but higher than reported by the SEARCH for Diabetes in Youth study (<3%) (31,32).”

Some more observations from the paper:

“There were 149,223 patients aged <18 years at first diagnosis of diabetes in the CCE database from 2002 through 2013. […] Type 1 diabetes accounted for a majority of the pediatric patients with diabetes (79%). Among these, 53% were male and 53% were aged 12 to <18 years at onset, while among patients with type 2 diabetes, 60% were female and 79% were aged 12 to <18 years at onset.”

“The overall annual prevalence of all diabetes increased from 1.86 to 2.82 per 1,000 during years 2002–2013; it increased on average by 9.5% per year from 2002 to 2006 and slowly increased by 0.6% after that […] The prevalence of type 1 diabetes increased from 1.48 to 2.32 per 1,000 during the study period (average increase of 8.5% per year from 2002 to 2006 and 1.4% after that; both P values <0.05). The prevalence of type 2 diabetes increased from 0.38 to 0.67 per 1,000 during 2002 through 2006 (average increase of 13.3% per year; P < 0.05) and then dropped from 0.56 to 0.49 per 1,000 during 2007 through 2013 (average decrease of 2.7% per year; P < 0.05). […] Prevalence of any diabetes increased by age, with the highest prevalence in patients aged 12 to <18 years (ranging from 3.47 to 5.71 per 1,000 from 2002 through 2013). […] The annual prevalence of diabetes increased over the study period mainly because of increases in type 1 diabetes.”

“Dabelea et al. (8) reported, based on data from the SEARCH for Diabetes in Youth study, that the annual prevalence of type 1 diabetes increased from 1.48 to 1.93 per 1,000 and from 0.34 to 0.46 per 1,000 for type 2 diabetes from 2001 to 2009 in U.S. youth. In our study, the annual prevalence of type 1 diabetes was 1.48 per 1,000 in 2002 and 2.10 per 1,000 in 2009, which is close to their reported prevalence.”

“We identified 3,161 diabetic nephropathy cases. Among these, 1,509 cases (47.7%) were of specific diabetic nephropathy and 2,253 (71.3%) were classified as probable cases. […] The annual prevalence of diabetic nephropathy in pediatric patients with diabetes increased from 1.16 to 3.44% between 2002 and 2013; it increased by on average 25.7% per year from 2002 to 2005 and slowly increased by 4.6% after that (both P values <0.05).”

Do note that the relationship between nephropathy prevalence and diabetes prevalence is complicated and that you cannot just explain an increase in the prevalence of nephropathy over time easily by simply referring to an increased prevalence of diabetes during the same time period. This would in fact be a very wrong thing to do, in part but not only on account of the data structure employed in this study. One problem which is probably easy to understand is that if more children got diabetes but the same proportion of those new diabetics got nephropathy, the diabetes prevalence would go up but the diabetic nephropathy prevalence would remain fixed; when you calculate the diabetic nephropathy prevalence you implicitly condition on diabetes status.

But this just scratches the surface of the issues you encounter when you try to link these variables, because the relationship between the two variables is complicated. There’s an age pattern to diabetes risk, with risk (incidence) increasing with age up to a point, after which it falls – in most samples I’ve seen in the past, peak incidence in pediatric populations is well below the age of 18. However, diabetes prevalence increases monotonically with age as long as the age-specific death rate of diabetics is lower than the age-specific incidence, because diabetes is chronic. On top of that you have nephropathy-related variables, which display diabetes-related duration-dependence (meaning that although nephropathy risk is also increasing with age when you look at that variable in isolation, that age-risk relationship is confounded by diabetes duration – a type 1 diabetic at the age of 12 who’s had diabetes for 10 years has a higher risk of nephropathy than a 16-year-old who developed diabetes the year before).

When a newly diagnosed pediatric patient is included in the diabetes sample here, this will actually decrease the nephropathy prevalence in the short run, but not in the long run, assuming no changes in diabetes treatment outcomes over time. This is because the probability that that individual has diabetes-related kidney problems as a newly diagnosed child is zero, so he or she will unquestionably only contribute to the denominator during the first years of illness (the situation in the middle-aged type 2 context is different; here you do sometimes have newly diagnosed patients who have developed complications already). This is one reason why it would be quite wrong to say that increased diabetes prevalence in this sample is the reason why diabetic nephropathy is increasing as well.

Unless the time period you look at is very long (e.g. you have a setting where you follow all individuals with a diagnosis until the age of 18), the impact of increasing prevalence of one condition may well be expected to have a negative impact on the estimated risk of associated conditions, if those associated conditions display duration-dependence (which all major diabetes complications do). A second factor supporting a default assumption of increasing incidence of diabetes leading to an expected decreasing rate of diabetes-related complications is of course the fact that treatment options have tended to increase over time, and especially if you take a long view (look back 30–40 years) the increase in treatment options and improved medical technology have led to improved metabolic control and better outcomes.
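To illustrate the denominator point with deliberately artificial numbers: suppose the number of newly diagnosed children doubles in a given year while the per-patient, duration-specific risk of nephropathy stays exactly the same. Diabetes prevalence rises, yet measured nephropathy prevalence among the diabetic children falls, simply because the sample is now weighted toward short-duration patients. A small sketch with made-up risk numbers:

```python
# Duration-dependent nephropathy risk (purely illustrative numbers):
# probability of prevalent nephropathy by years since diabetes onset
risk_by_duration = {0: 0.00, 1: 0.00, 2: 0.01, 3: 0.02, 4: 0.04, 5: 0.06}

def nephropathy_prevalence(counts_by_duration):
    cases = sum(n * risk_by_duration[dur] for dur, n in counts_by_duration.items())
    patients = sum(counts_by_duration.values())
    return cases / patients

# Scenario A: stable incidence -> 100 patients at every duration
stable = {dur: 100 for dur in risk_by_duration}
# Scenario B: incidence doubles -> 200 newly diagnosed, older cohorts unchanged
rising = {**stable, 0: 200}

print(f"stable incidence: {nephropathy_prevalence(stable):.3%}")   # 2.167%
print(f"rising incidence: {nephropathy_prevalence(rising):.3%}")   # 1.857%
# Per-patient risk is identical in both scenarios; only the duration mix changed.
```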

That both variables grew over time might be taken to indicate that both more children got diabetes and that a larger proportion of this increased number of children with diabetes developed kidney problems, but this stuff is a lot more complicated than it might look and it’s particularly important to keep in mind that, say, the 2005 sample and the 2010 sample do not include the same individuals, although there’ll of course be some overlap; in age-stratified samples like this you always have some level of implicit continuous replacement, with newly diagnosed patients entering and replacing the 18-year-olds who leave the sample. As long as prevalence is constant over time, associated outcome variables may be reasonably easy to interpret, but when you have dynamic samples as well as increasing prevalence over time it gets difficult to say much with any degree of certainty unless you crunch the numbers in a lot of detail (and it might be difficult even if you do that). A factor I didn’t mention above but which is of course also relevant is that you need to be careful about how to interpret prevalence rates when you look at complications with high mortality rates (and late-stage diabetic nephropathy is indeed a complication with high mortality); in such a situation improvements in treatment outcomes may have large effects on prevalence rates but no effect on incidence. Increased prevalence is not always bad news; sometimes it is good news indeed. Gleevec substantially increased the prevalence of CML.

In terms of the prevalence-outcomes (/complication risk) connection, there are also in my opinion reasons to assume that there may be multiple causal pathways between prevalence and outcomes. For example a very low prevalence of a condition in a given area may mean that fewer specialists are educated to take care of these patients than would be the case for an area with a higher prevalence, and this may translate into a more poorly developed care infrastructure. Greatly increasing prevalence may on the other hand lead to a lower level of care for all patients with the illness, not just the newly diagnosed ones, due to binding budget constraints and care rationing. And why might you have changes in prevalence in the first place? Might they not sometimes be related to changes in diagnostic practices, rather than changes in the True* prevalence? If that’s the case, you might not be comparing apples to apples when you’re comparing the evolving complication rates. There are in my opinion many reasons to believe that the relationship between chronic conditions and the complication rates of these conditions is far from simple to model.

All this said, kidney problems in children with diabetes are still rare, compared to the numbers you see when you look at adult samples with longer diabetes duration. It’s also worth distinguishing between microalbuminuria and overt nephropathy; children rarely proceed to develop diabetes-related kidney failure, although poor metabolic control may mean that they do develop this complication later, in early adulthood. As they note in the paper:

“It has been reported that overt diabetic nephropathy and kidney failure caused by either type 1 or type 2 diabetes are uncommon during childhood or adolescence (24). In this study, the annual prevalence of diabetic nephropathy for all cases ranged from 1.16 to 3.44% in pediatric patients with diabetes and was extremely low in the whole pediatric population (range 2.15 to 9.70 per 100,000), confirming that diabetic nephropathy is a very uncommon condition in youth aged <18 years. We observed that the prevalence of diabetic nephropathy increased in both specific and unspecific cases before 2006, with a leveling off of the specific nephropathy cases after 2005, while the unspecific cases continued to increase.”

iv. Adherence to Oral Glucose-Lowering Therapies and Associations With 1-Year HbA1c: A Retrospective Cohort Analysis in a Large Primary Care Database.

“Between a third and a half of medicines prescribed for type 2 diabetes (T2DM), a condition in which multiple medications are used to control cardiovascular risk factors and blood glucose (1,2), are not taken as prescribed (3–6). However, estimates vary widely depending on the population being studied and the way in which adherence to recommended treatment is defined.”

“A number of previous studies have used retrospective databases of electronic health records to examine factors that might predict adherence. A recent large cohort database examined overall adherence to oral therapy for T2DM, taking into account changes of therapy. It concluded that overall adherence was 69%, with individuals newly started on treatment being significantly less likely to adhere (19).”

“The impact of continuing to take glucose-lowering medicines intermittently, but not as recommended, is unknown. Medication possession (expressed as a ratio of actual possession to expected possession), derived from prescribing records, has been identified as a valid adherence measure for people with diabetes (7). Previous studies have been limited to small populations in managed-care systems in the U.S. and focused on metformin and sulfonylurea oral glucose-lowering treatments (8,9). Further studies need to be carried out in larger groups of people that are more representative of the general population.

The Clinical Practice Research Database (CPRD) is a long established repository of routine clinical data from more than 13 million patients registered with primary care services in England. […] The Genetics of Diabetes and Audit Research Tayside Study (GoDARTS) database is derived from integrated health records in Scotland with primary care, pharmacy, and hospital data on 9,400 patients with diabetes. […] We conducted a retrospective cohort study using [these databases] to examine the prevalence of nonadherence to treatment for type 2 diabetes and investigate its potential impact on HbA1c reduction stratified by type of glucose-lowering medication.”
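As an aside, the medication possession ratio mentioned above is conceptually simple: days of medication actually supplied divided by days the patient was expected to be covered, with adherence in the paper defined as a ratio of at least 80%. The toy version below uses an invented data layout and ignores the complications (overlapping fills, drug switches, hospitalizations) that real analyses have to handle:

```python
from datetime import date

def medication_possession_ratio(fills, period_start, period_end):
    """MPR over an observation window: days' supply dispensed / days in window.
    `fills` is a list of (fill_date, days_supplied) tuples; the ratio is capped
    at 1.0. Simplified toy version with an invented data layout."""
    window_days = (period_end - period_start).days
    supplied = sum(days for fill_date, days in fills
                   if period_start <= fill_date <= period_end)
    return min(supplied / window_days, 1.0)

fills = [(date(2009, m, 1), 28) for m in range(1, 11)]        # 10 x 28-day fills
mpr = medication_possession_ratio(fills, date(2009, 1, 1), date(2009, 12, 31))
print(f"MPR = {mpr:.2f}, adherent (>=0.80): {mpr >= 0.80}")   # MPR = 0.77, adherent (>=0.80): False
```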

“In CPRD and GoDARTS, 13% and 15% of patients, respectively, were nonadherent. Proportions of nonadherent patients varied by the oral glucose-lowering treatment prescribed (range 8.6% [thiazolidinedione] to 18.8% [metformin]). Nonadherent, compared with adherent, patients had a smaller HbA1c reduction (0.4% [4.4 mmol/mol] and 0.46% [5.0 mmol/mol] for CPRD and GoDARTS, respectively). Difference in HbA1c response for adherent compared with nonadherent patients varied by drug (range 0.38% [4.1 mmol/mol] to 0.75% [8.2 mmol/mol] lower in adherent group). Decreasing levels of adherence were consistently associated with a smaller reduction in HbA1c.”

“These findings show an association between adherence to oral glucose-lowering treatment, measured by the proportion of medication obtained on prescription over 1 year, and the corresponding decrement in HbA1c, in a population of patients newly starting treatment and continuing to collect prescriptions. The association is consistent across all commonly used oral glucose-lowering therapies, and the findings are consistent between the two data sets examined, CPRD and GoDARTS. Nonadherent patients, taking on average <80% of the intended medication, had about half the expected reduction in HbA1c. […] Reduced medication adherence for commonly used glucose-lowering therapies among patients persisting with treatment is associated with smaller HbA1c reductions compared with those taking treatment as recommended. Differences observed in HbA1c responses to glucose-lowering treatments may be explained in part by their intermittent use.”

“Low medication adherence is related to increased mortality (20). The mean difference in HbA1c between patients with MPR <80% and ≥80% is between 0.37% and 0.55% (4 mmol/mol and 6 mmol/mol), equivalent to up to a 10% reduction in death or an 18% reduction in diabetes complications (21).”

v. Health Care Transition in Young Adults With Type 1 Diabetes: Perspectives of Adult Endocrinologists in the U.S.

“Empiric data are limited on best practices in transition care, especially in the U.S. (10,13–16). Prior research, largely from the patient perspective, has highlighted challenges in the transition process, including gaps in care (13,17–19); suboptimal pediatric transition preparation (13,20); increased post-transition hospitalizations (21); and patient dissatisfaction with the transition experience (13,17–19). […] Young adults with type 1 diabetes transitioning from pediatric to adult care are at risk for adverse outcomes. Our objective was to describe experiences, resources, and barriers reported by a national sample of adult endocrinologists receiving and caring for young adults with type 1 diabetes.”

“We received responses from 536 of 4,214 endocrinologists (response rate 13%); 418 surveys met the eligibility criteria. Respondents (57% male, 79% Caucasian) represented 47 states; 64% had been practicing >10 years and 42% worked at an academic center. Only 36% of respondents reported often/always reviewing pediatric records and 11% reported receiving summaries for transitioning young adults with type 1 diabetes, although >70% felt that these activities were important for patient care.”

“A number of studies document deficiencies in provider hand-offs across other chronic conditions and point to the broader relevance of our findings. For example, in two studies of inflammatory bowel disease, adult gastroenterologists reported inadequacies in young adult transition preparation (31) and infrequent receipt of medical histories from pediatric providers (32). In a study of adult specialists caring for young adults with a variety of chronic diseases (33), more than half reported that they had no contact with the pediatric specialists.

Importantly, more than half of the endocrinologists in our study reported a need for increased access to mental health referrals for young adult patients with type 1 diabetes, particularly in nonacademic settings. Report of barriers to care was highest for patient scenarios involving mental health issues, and endocrinologists without easy access to mental health referrals were significantly more likely to report barriers to diabetes management for young adults with psychiatric comorbidities such as depression, substance abuse, and eating disorders.”

“Prior research (34,35) has uncovered the lack of mental health resources in diabetes care. In the large cross-national Diabetes Attitudes, Wishes and Needs (DAWN) study (36) […] diabetes providers often reported not having the resources to manage mental health problems; half of specialist diabetes physicians felt unable to provide psychiatric support for patients and one-third did not have ready access to outside expertise in emotional or psychiatric matters. Our results, which resonate with the DAWN findings, are particularly concerning in light of the vulnerability of young adults with type 1 diabetes for adverse medical and mental health outcomes (4,34,37,38). […] In a recent report from the Mental Health Issues of Diabetes conference (35), which focused on type 1 diabetes, a major observation included the lack of trained mental health professionals, both in academic centers and the community, who are knowledgeable about the mental health issues germane to diabetes.”

August 3, 2017 Posted by | Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Pharmacology, Psychiatry, Psychology, Statistics, Studies | Leave a comment

Beyond Significance Testing (III)

There are many ways to misinterpret significance tests, and this book spends quite a bit of time and effort on these kinds of issues. I decided to include in this post quite a few quotes from chapter 4 of the book, which deals with these topics in some detail. I also included some notes on effect sizes.

“[P] < .05 means that the likelihood of the data or results even more extreme given random sampling under the null hypothesis is < .05, assuming that all distributional requirements of the test statistic are satisfied and there are no other sources of error variance. […] the odds-against-chance fallacy […] [is] the false belief that p indicates the probability that a result happened by sampling error; thus, p < .05 says that there is less than a 5% likelihood that a particular finding is due to chance. There is a related misconception I call the filter myth, which says that p values sort results into two categories, those that are a result of “chance” (H0 not rejected) and others that are due to “real” effects (H0 rejected). These beliefs are wrong […] When p is calculated, it is already assumed that H0 is true, so the probability that sampling error is the only explanation is already taken to be 1.00. It is thus illogical to view p as measuring the likelihood of sampling error. […] There is no such thing as a statistical technique that determines the probability that various causal factors, including sampling error, acted on a particular result.

Most psychology students and professors may endorse the local Type I error fallacy [which is] the mistaken belief that p < .05 given α = .05 means that the likelihood that the decision just taken to reject H0 is a type I error is less than 5%. […] p values from statistical tests are conditional probabilities of data, so they do not apply to any specific decision to reject H0. This is because any particular decision to do so is either right or wrong, so no probability is associated with it (other than 0 or 1.0). Only with sufficient replication could one determine whether a decision to reject H0 in a particular study was correct. […] the valid research hypothesis fallacy […] refers to the false belief that the probability that H1 is true is > .95, given p < .05. The complement of p is a probability, but 1 – p is just the probability of getting a result even less extreme under H0 than the one actually found. This fallacy is endorsed by most psychology students and professors”.

“[S]everal different false conclusions may be reached after deciding to reject or fail to reject H0. […] the magnitude fallacy is the false belief that low p values indicate large effects. […] p values are confounded measures of effect size and sample size […]. Thus, effects of trivial magnitude need only a large enough sample to be statistically significant. […] the zero fallacy […] is the mistaken belief that the failure to reject a nil hypothesis means that the population effect size is zero. Maybe it is, but you cannot tell based on a result in one sample, especially if power is low. […] The equivalence fallacy occurs when the failure to reject H0: µ1 = µ2 is interpreted as saying that the populations are equivalent. This is wrong because even if µ1 = µ2, distributions can differ in other ways, such as variability or distribution shape.”
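The point that p is a confounded measure of effect size and sample size is easy to demonstrate: below, the same trivial population effect is tested at increasing sample sizes, and it eventually becomes ‘statistically significant’ purely because n has grown. This is my own illustrative simulation (scipy assumed), not an example from the book.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
d = 0.05          # a trivial population effect: 0.05 standard deviations

for n_per_group in (50, 500, 5_000, 50_000):
    a = rng.normal(d, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    t, p = stats.ttest_ind(a, b)
    print(f"n per group = {n_per_group:>6}: p = {p:.4f}")

# As the groups grow, the same trivial effect eventually drops below p = .05;
# nothing about its magnitude has changed, only the sample size has.
```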

“[T]he reification fallacy is the faulty belief that failure to replicate a result is the failure to make the same decision about H0 across studies […]. In this view, a result is not considered replicated if H0 is rejected in the first study but not in the second study. This sophism ignores sample size, effect size, and power across different studies. […] The sanctification fallacy refers to dichotomous thinking about continuous p values. […] Differences between results that are “significant” versus “not significant” by close margins, such as p = .03 versus p = .07 when α = .05, are themselves often not statistically significant. That is, relatively large changes in p can correspond to small, nonsignificant changes in the underlying variable (Gelman & Stern, 2006). […] Classical parametric statistical tests are not robust against outliers or violations of distributional assumptions, especially in small, unrepresentative samples. But many researchers believe just the opposite, which is the robustness fallacy. […] most researchers do not provide evidence about whether distributional or other assumptions are met”.

“Many [of the above] fallacies involve wishful thinking about things that researchers really want to know. These include the probability that H0 or H1 is true, the likelihood of replication, and the chance that a particular decision to reject H0 is wrong. Alas, statistical tests tell us only the conditional probability of the data. […] But there is [however] a method that can tell us what we want to know. It is not a statistical technique; rather, it is good, old-fashioned replication, which is also the best way to deal with the problem of sampling error. […] Statistical significance provides even in the best case nothing more than low-level support for the existence of an effect, relation, or difference. That best case occurs when researchers estimate a priori power, specify the correct construct definitions and operationalizations, work with random or at least representative samples, analyze highly reliable scores in distributions that respect test assumptions, control other major sources of imprecision besides sampling error, and test plausible null hypotheses. In this idyllic scenario, p values from statistical tests may be reasonably accurate and potentially meaningful, if they are not misinterpreted. […] The capability of significance tests to address the dichotomous question of whether effects, relations, or differences are greater than expected levels of sampling error may be useful in some new research areas. Due to the many limitations of statistical tests, this period of usefulness should be brief. Given evidence that an effect exists, the next steps should involve estimation of its magnitude and evaluation of its substantive significance, both of which are beyond what significance testing can tell us. […] It should be a hallmark of a maturing research area that significance testing is not the primary inference method.”

“[An] effect size [is] a quantitative reflection of the magnitude of some phenomenon used for the sake of addressing a specific research question. In this sense, an effect size is a statistic (in samples) or parameter (in populations) with a purpose, that of quantifying a phenomenon of interest. More specific definitions may depend on study design. […] cause size refers to the independent variable and specifically to the amount of change in it that produces a given effect on the dependent variable. A related idea is that of causal efficacy, or the ratio of effect size to the size of its cause. The greater the causal efficacy, the more that a given change on an independent variable results in proportionally bigger changes on the dependent variable. The idea of cause size is most relevant when the factor is experimental and its levels are quantitative. […] An effect size measure […] is a named expression that maps data, statistics, or parameters onto a quantity that represents the magnitude of the phenomenon of interest. This expression connects dimensions or generalized units that are abstractions of variables of interest with a specific operationalization of those units.”

“A good effect size measure has the [following properties:] […] 1. Its scale (metric) should be appropriate for the research question. […] 2. It should be independent of sample size. […] 3. As a point estimate, an effect size should have good statistical properties; that is, it should be unbiased, consistent […], and efficient […]. 4. The effect size [should be] reported with a confidence interval. […] Not all effect size measures […] have all the properties just listed. But it is possible to report multiple effect sizes that address the same question in order to improve the communication of the results.” 
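As an illustration of reporting a standardized effect size as a point estimate with a confidence interval (property 4 above), here is a sketch computing Cohen’s d with a simple percentile bootstrap interval; the simulated data and the bootstrap approach are my own choices, not recommendations from the book.

```python
import numpy as np

rng = np.random.default_rng(7)
treatment = rng.normal(0.5, 1.0, 60)    # simulated scores
control = rng.normal(0.0, 1.0, 60)

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Percentile bootstrap confidence interval for d
boot = [cohens_d(rng.choice(treatment, 60, replace=True),
                 rng.choice(control, 60, replace=True))
        for _ in range(5_000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"d = {cohens_d(treatment, control):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```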

“Examples of outcomes with meaningful metrics include salaries in dollars and post-treatment survival time in years. Means or contrasts for variables with meaningful units are unstandardized effect sizes that can be directly interpreted. […] In medical research, physical measurements with meaningful metrics are often available. […] But in psychological research there are typically no “natural” units for abstract, nonphysical constructs such as intelligence, scholastic achievement, or self-concept. […] Therefore, metrics in psychological research are often arbitrary instead of meaningful. An example is the total score for a set of true-false items. Because responses can be coded with any two different numbers, the total is arbitrary. Standard scores such as percentiles and normal deviates are arbitrary, too […] Standardized effect sizes can be computed for results expressed in arbitrary metrics. Such effect sizes can also be directly compared across studies where outcomes have different scales. This is because standardized effect sizes are based on units that have a common meaning regardless of the original metric.”

“1. It is better to report unstandardized effect sizes for outcomes with meaningful metrics. This is because the original scale is lost when results are standardized. 2. Unstandardized effect sizes are best for comparing results across different samples measured on the same outcomes. […] 3. Standardized effect sizes are better for comparing conceptually similar results based on different units of measure. […] 4. Standardized effect sizes are affected by the corresponding unstandardized effect sizes plus characteristics of the study, including its design […], whether factors are fixed or random, the extent of error variance, and sample base rates. This means that standardized effect sizes are less directly comparable over studies that differ in their designs or samples. […] 5. There is no such thing as T-shirt effect sizes (Lenth, 2006–2009) that classify standardized effect sizes as “small,” “medium,” or “large” and apply over all research areas. This is because what is considered a large effect in one area may be seen as small or trivial in another. […] 6. There is usually no way to directly translate standardized effect sizes into implications for substantive significance. […] It is standardized effect sizes from sets of related studies that are analyzed in most meta-analyses.”

July 16, 2017 Posted by | Books, Psychology, Statistics | Leave a comment

Beyond Significance Testing (II)

I have added some more quotes and observations from the book below.

“The least squares estimators M and s2 are not robust against the effects of extreme scores. […] Conventional methods to construct confidence intervals rely on sample standard deviations to estimate standard errors. These methods also rely on critical values in central test distributions, such as t and z, that assume normality or homoscedasticity […] Such distributional assumptions are not always plausible. […] One option to deal with outliers is to apply transformations, which convert original scores with a mathematical operation to new ones that may be more normally distributed. The effect of applying a monotonic transformation is to compress one part of the distribution more than another, thereby changing its shape but not the rank order of the scores. […] It can be difficult to find a transformation that works in a particular data set. Some distributions can be so severely nonnormal that basically no transformation will work. […] An alternative that also deals with departures from distributional assumptions is robust estimation. Robust (resistant) estimators are generally less affected than least squares estimators by outliers or nonnormality.”

“An estimator’s quantitative robustness can be described by its finite-sample breakdown point (BP), or the smallest proportion of scores that when made arbitrarily very large or small renders the statistic meaningless. The lower the value of BP, the less robust the estimator. For both M and s2, BP = 0, the lowest possible value. This is because the value of either statistic can be distorted by a single outlier, and the ratio 1/N approaches zero as sample size increases. In contrast, BP = .50 for the median because its value is not distorted by arbitrarily extreme scores unless they make up at least half the sample. But the median is not an optimal estimator because its value is determined by a single score, the one at the 50th percentile. In this sense, all the other scores are discarded by the median. A compromise between the sample mean and the median is the trimmed mean. A trimmed mean Mtr is calculated by (a) ordering the scores from lowest to highest, (b) deleting the same proportion of the most extreme scores from each tail of the distribution, and then (c) calculating the average of the scores that remain. […] A common practice is to trim 20% of the scores from each tail of the distribution when calculating trimmed estimators. This proportion tends to maintain the robustness of trimmed means while minimizing their standard errors when sampling from symmetrical distributions […] For 20% trimmed means, BP = .20, which says they are robust against arbitrarily extreme scores unless such outliers make up at least 20% of the sample.”
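
The 20% trimmed mean is straightforward to compute; here is a minimal Python sketch (with made-up scores) contrasting the ordinary mean, the median, and the 20% trimmed mean when a single extreme outlier is present:

```python
import numpy as np
from scipy import stats

scores = np.array([2, 3, 3, 4, 4, 5, 5, 6, 6, 90])   # one extreme outlier

print(np.mean(scores))               # the mean is dragged upward by the outlier
print(np.median(scores))             # the median ignores everything but the middle
print(stats.trim_mean(scores, 0.2))  # 20% trimmed mean: drop the two lowest and
                                     # two highest scores, then average the rest
```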

“The standard H0 is both a point hypothesis and a nil hypothesis. A point hypothesis specifies the numerical value of a parameter or the difference between two or more parameters, and a nil hypothesis states that this value is zero. The latter is usually a prediction that an effect, difference, or association is zero. […] Nil hypotheses as default explanations may be fine in new research areas when it is unknown whether effects exist at all. But they are less suitable in established areas when it is known that some effect is probably not zero. […] Nil hypotheses are tested much more often than non-nil hypotheses even when the former are implausible. […] If a nil hypothesis is implausible, estimated probabilities of data will be too low. This means that risk for Type I error is basically zero and a Type II error is the only possible kind when H0 is known in advance to be false.”

“Too many researchers treat the conventional levels of α, either .05 or .01, as golden rules. If other levels of α are specified, they tend to be even lower […]. Sanctification of .05 as the highest “acceptable” level is problematic. […] Instead of blindly accepting either .05 or .01, one does better to […] [s]pecify a level of α that reflects the desired relative seriousness (DRS) of Type I error versus Type II error. […] researchers should not rely on a mechanical ritual (i.e., automatically specify .05 or .01) to control risk for Type I error that ignores the consequences of Type II error.”

“Although p and α are derived in the same theoretical sampling distribution, p does not estimate the conditional probability of a Type I error […]. This is because p is based on a range of results under H0, but α has nothing to do with actual results and is supposed to be specified before any data are collected. Confusion between p and α is widespread […] To differentiate the two, Gigerenzer (1993) referred to p as the exact level of significance. If p = .032 and α = .05, H0 is rejected at the .05 level, but .032 is not the long-run probability of Type I error, which is .05 for this example. The exact level of significance is the conditional probability of the data (or any result even more extreme) assuming H0 is true, given all other assumptions about sampling, distributions, and scores. […] Because p values are estimated assuming that H0 is true, they do not somehow measure the likelihood that H0 is correct. […] The false belief that p is the probability that H0 is true, or the inverse probability error […] is widespread.”

“Probabilities from significance tests say little about effect size. This is because essentially any test statistic (TS) can be expressed as the product TS = ES × f(N) […] where ES is an effect size and f(N) is a function of sample size. This equation explains how it is possible that (a) trivial effects can be statistically significant in large samples or (b) large effects may not be statistically significant in small samples. So p is a confounded measure of effect size and sample size.”
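
To see the TS = ES × f(N) point in action, consider the two-sample t statistic with equal group sizes, where t = d·√(n/2): the same small standardized effect is nowhere near significant in a small sample but highly significant in a large one (the effect size and sample sizes below are arbitrary illustrative choices):

```python
import numpy as np
from scipy import stats

d = 0.1                      # a small standardized effect size, held constant

for n in (20, 2000):         # per-group sample size
    t = d * np.sqrt(n / 2)   # TS = ES * f(N) for the equal-n two-sample t test
    df = 2 * n - 2
    p = 2 * stats.t.sf(abs(t), df)   # two-tailed p value
    print(f"n per group = {n:4d}  t = {t:.2f}  p = {p:.4f}")
```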

“Power is the probability of getting statistical significance over many random replications when H1 is true. It varies directly with sample size and the magnitude of the population effect size. […] This combination leads to the greatest power: a large population effect size, a large sample, a higher level of α […], a within-subjects design, a parametric test rather than a nonparametric test (e.g., t instead of Mann–Whitney), and very reliable scores. […] Power .80 is generally desirable, but an even higher standard may be needed if consequences of Type II error are severe. […] Reviews from the 1970s and 1980s indicated that the typical power of behavioral science research is only about .50 […] and there is little evidence that power is any higher in more recent studies […] Ellis (2010) estimated that < 10% of studies have samples sufficiently large to detect smaller population effect sizes. Increasing sample size would address low power, but the number of additional cases necessary to reach even nominal power when studying smaller effects may be so great as to be practically impossible […] Too few researchers, generally < 20% (Osborne, 2008), bother to report prospective power despite admonitions to do so […] The concept of power does not stand without significance testing. As statistical tests play a smaller role in the analysis, the relevance of power also declines. If significance tests are not used, power is irrelevant. Cumming (2012) described an alternative called precision for research planning, where the researcher specifies a target margin of error for estimating the parameter of interest. […] The advantage over power analysis is that researchers must consider both effect size and precision in study planning.”
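
For reference, a minimal sketch of a prospective power calculation for a two-sample t test, assuming the statsmodels package is available (the effect size, α, and power target below are just illustrative inputs):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Cases per group needed to detect a population d = 0.5 with power .80 at alpha = .05
n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                alternative='two-sided')
print(round(n_needed))       # roughly 64 per group

# Power actually achieved with only 20 cases per group and the same effect size
achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=20,
                                alternative='two-sided')
print(round(achieved, 2))    # well below .80
```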

“Classical nonparametric tests are alternatives to the parametric t and F tests for means (e.g., the Mann–Whitney test is the nonparametric analogue to the t test). Nonparametric tests generally work by converting the original scores to ranks. They also make fewer assumptions about the distributions of those ranks than do parametric tests applied to the original scores. Nonparametric tests date to the 1950s–1960s, and they share some limitations. One is that they are not generally robust against heteroscedasticity, and another is that their application is typically limited to single-factor designs […] Modern robust tests are an alternative. They are generally more flexible than nonparametric tests and can be applied in designs with multiple factors. […] At the end of the day, robust statistical tests are subject to many of the same limitations as other statistical tests. For example, they assume random sampling albeit from population distributions that may be nonnormal or heteroscedastic; they also assume that sampling error is the only source of error variance. Alternative tests, such as the Welch–James and Yuen–Welch versions of a robust t test, do not always yield the same p value for the same data, and it is not always clear which alternative is best (Wilcox, 2003).”
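
A minimal sketch of the tests mentioned above as implemented in SciPy, on made-up heteroscedastic data; note that the trimmed (Yuen–Welch-style) variant relies on the trim argument, which is only available in reasonably recent SciPy versions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 40)          # two groups with unequal variances
b = rng.normal(0.5, 3.0, 40)

print(stats.mannwhitneyu(a, b))                          # classical rank-based test
print(stats.ttest_ind(a, b, equal_var=False))            # Welch t test (no equal-variance assumption)
print(stats.ttest_ind(a, b, equal_var=False, trim=0.2))  # Yuen-style test on 20% trimmed means
```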

July 11, 2017 Posted by | Books, Psychology, Statistics | Leave a comment

Beyond Significance Testing (I)

“This book introduces readers to the principles and practice of statistics reform in the behavioral sciences. It (a) reviews the now even larger literature about shortcomings of significance testing; (b) explains why these criticisms have sufficient merit to justify major changes in the ways researchers analyze their data and report the results; (c) helps readers acquire new skills concerning interval estimation and effect size estimation; and (d) reviews alternative ways to test hypotheses, including Bayesian estimation. […] I assume that the reader has had undergraduate courses in statistics that covered at least the basics of regression and factorial analysis of variance. […] This book is suitable as a textbook for an introductory course in behavioral science statistics at the graduate level.”

I’m currently reading this book. I have so far read 8 out of the 10 chapters, and I’m currently sort of hovering between a 3 and a 4 star goodreads rating; some parts of the book are really great, but there are also a few aspects I don’t like. Some parts of the coverage are rather technical, and I’m still debating to what extent I should cover the technical stuff in detail later here on the blog; there are quite a few equations in the book, and I find it annoying to cover math using the wordpress format of this blog. For now I’ll start out with a reasonably non-technical post with some quotes and key ideas from the first parts of the book.

“In studies of intervention outcomes, a statistically significant difference between treated and untreated cases […] has nothing to do with whether treatment leads to any tangible benefits in the real world. In the context of diagnostic criteria, clinical significance concerns whether treated cases can no longer be distinguished from control cases not meeting the same criteria. For example, does treatment typically prompt a return to normal levels of functioning? A treatment effect can be statistically significant yet trivial in terms of its clinical significance, and clinically meaningful results are not always statistically significant. Accordingly, the proper response to claims of statistical significance in any context should be “so what?” — or, more pointedly, “who cares?” — without more information.”

“There are free computer tools for estimating power, but most researchers — probably at least 80% (e.g., Ellis, 2010) — ignore the power of their analyses. […] Ignoring power is regrettable because the median power of published nonexperimental studies is only about .50 (e.g., Maxwell, 2004). This implies a 50% chance of correctly rejecting the null hypothesis based on the data. In this case the researcher may as well not collect any data but instead just toss a coin to decide whether or not to reject the null hypothesis. […] A consequence of low power is that the research literature is often difficult to interpret. Specifically, if there is a real effect but power is only .50, about half the studies will yield statistically significant results and the rest will yield no statistically significant findings. If all these studies were somehow published, the number of positive and negative results would be roughly equal. In an old-fashioned, narrative review, the research literature would appear to be ambiguous, given this balance. It may be concluded that “more research is needed,” but any new results will just reinforce the original ambiguity, if power remains low.”

“Statistical tests of a treatment effect that is actually clinically significant may fail to reject the null hypothesis of no difference when power is low. If the researcher in this case ignored whether the observed effect size is clinically significant, a potentially beneficial treatment may be overlooked. This is exactly what was found by Freiman, Chalmers, Smith, and Kuebler (1978), who reviewed 71 randomized clinical trials of mainly heart- and cancer-related treatments with “negative” results (i.e., not statistically significant). They found that if the authors of 50 of the 71 trials had considered the power of their tests along with the observed effect sizes, those authors should have concluded just the opposite, or that the treatments resulted in clinically meaningful improvements.”

“Even if researchers avoided the kinds of mistakes just described, there are grounds to suspect that p values from statistical tests are simply incorrect in most studies: 1. They (p values) are estimated in theoretical sampling distributions that assume random sampling from known populations. Very few samples in behavioral research are random samples. Instead, most are convenience samples collected under conditions that have little resemblance to true random sampling. […] 2. Results of more quantitative reviews suggest that, due to assumption violations, there are few actual data sets in which significance testing gives accurate results […] 3. Probabilities from statistical tests (p values) generally assume that all other sources of error besides sampling error are nil. This includes measurement error […] Other sources of error arise from failure to control for extraneous sources of variance or from flawed operational definitions of hypothetical constructs. It is absurd to assume in most studies that there is no error variance besides sampling error. Instead it is more practical to expect that sampling error makes up the small part of all possible kinds of error when the number of cases is reasonably large (Ziliak & McCloskey, 2008).”

“The p values from statistical tests do not tell researchers what they want to know, which often concerns whether the data support a particular hypothesis. This is because p values merely estimate the conditional probability of the data under a statistical hypothesis — the null hypothesis — that in most studies is an implausible, straw man argument. In fact, p values do not directly “test” any hypothesis at all, but they are often misinterpreted as though they describe hypotheses instead of data. Although p values ultimately provide a yes-or-no answer (i.e., reject or fail to reject the null hypothesis), the question — p < α?, where α is the criterion level of statistical significance, usually .05 or .01 — is typically uninteresting. The yes-or-no answer to this question says nothing about scientific relevance, clinical significance, or effect size. […] determining clinical significance is not just a matter of statistics; it also requires strong knowledge about the subject matter.”

“[M]any null hypotheses have little if any scientific value. For example, Anderson et al. (2000) reviewed null hypotheses tested in several hundred empirical studies published from 1978 to 1998 in two environmental sciences journals. They found many implausible null hypotheses that specified things such as equal survival probabilities for juvenile and adult members of a species or that growth rates did not differ across species, among other assumptions known to be false before collecting data. I am unaware of a similar survey of null hypotheses in the behavioral sciences, but I would be surprised if the results would be very different.”

“Hoekstra, Finch, Kiers, and Johnson (2006) examined a total of 266 articles published in Psychonomic Bulletin & Review during 2002–2004. Results of significance tests were reported in about 97% of the articles, but confidence intervals were reported in only about 6%. Sadly, p values were misinterpreted in about 60% of surveyed articles. Fidler, Burgman, Cumming, Buttrose, and Thomason (2006) sampled 200 articles published in two different biology journals. Results of significance testing were reported in 92% of articles published during 2001–2002, but this rate dropped to 78% in 2005. There were also corresponding increases in the reporting of confidence intervals, but power was estimated in only 8% and p values were misinterpreted in 63%. […] Sun, Pan, and Wang (2010) reviewed a total of 1,243 works published in 14 different psychology and education journals during 2005–2007. The percentage of articles reporting effect sizes was 49%, and 57% of these authors interpreted their effect sizes.”

“It is a myth that the larger the sample, the more closely it approximates a normal distribution. This idea probably stems from a misunderstanding of the central limit theorem, which applies to certain group statistics such as means. […] This theorem justifies approximating distributions of random means with normal curves, but it does not apply to distributions of scores in individual samples. […] larger samples do not generally have more normal distributions than smaller samples. If the population distribution is, say, positively skewed, this shape will tend to show up in the distributions of random samples that are either smaller or larger.”
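
A quick simulation sketch of this point, using an exponential population as an arbitrary example of positive skew: the skewness of individual samples does not shrink as n grows, whereas the distribution of sample means is much more nearly symmetric:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

for n in (20, 200, 2000):
    sample = rng.exponential(scale=1.0, size=n)   # positively skewed population
    print(n, round(stats.skew(sample), 2))        # sample skew does not shrink toward 0 as n grows

# The distribution of *means* of many samples of size 50 is much less skewed,
# which is what the central limit theorem actually implies.
means = rng.exponential(scale=1.0, size=(10000, 50)).mean(axis=1)
print(round(stats.skew(means), 2))
```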

“A standard error is the standard deviation in a sampling distribution, the probability distribution of a statistic across all random samples drawn from the same population(s) and with each sample based on the same number of cases. It estimates the amount of sampling error in standard deviation units. The square of a standard error is the error variance. […] Variability of the sampling distributions […] decreases as the sample size increases. […] The standard error sM, which estimates variability of the group statistic M, is often confused with the standard deviation s, which measures variability at the case level. This confusion is a source of misinterpretation of both statistical tests and confidence intervals […] Note that the standard error sM itself has a standard error (as do standard errors for all other kinds of statistics). This is because the value of sM varies over random samples. This explains why one should not overinterpret a confidence interval or p value from a significance test based on a single sample.”
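
For a sample of N cases with standard deviation s, the standard error of the mean and the corresponding error variance referred to above are just the standard textbook quantities:

```latex
s_M = \frac{s}{\sqrt{N}}, \qquad s_M^2 = \frac{s^2}{N}
```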

“Standard errors estimate sampling error under random sampling. What they measure when sampling is not random may not be clear. […] Standard errors also ignore […] other sources of error [:] 1. Measurement error [which] refers to the difference between an observed score X and the true score on the underlying construct. […] Measurement error reduces absolute effect sizes and the power of statistical tests. […] 2. Construct definition error [which] involves problems with how hypothetical constructs are defined or operationalized. […] 3. Specification error [which] refers to the omission from a regression equation of at least one predictor that covaries with the measured (included) predictors. […] 4. Treatment implementation error occurs when an intervention does not follow prescribed procedures. […] Gosset used the term real error to refer to all types of error besides sampling error […]. In reasonably large samples, the impact of real error may be greater than that of sampling error.”

“The technique of bootstrapping […] is a computer-based method of resampling that recombines the cases in a data set in different ways to estimate statistical precision, with fewer assumptions than traditional methods about population distributions. Perhaps the best known form is nonparametric bootstrapping, which generally makes no assumptions other than that the distribution in the sample reflects the basic shape of that in the population. It treats your data file as a pseudo-population in that cases are randomly selected with replacement to generate other data sets, usually of the same size as the original. […] The technique of nonparametric bootstrapping seems well suited for interval estimation when the researcher is either unwilling or unable to make a lot of assumptions about population distributions. […] potential limitations of nonparametric bootstrapping: 1. Nonparametric bootstrapping simulates random sampling, but true random sampling is rarely used in practice. […] 2. […] If the shape of the sample distribution is very different compared with that in the population, results of nonparametric bootstrapping may have poor external validity. 3. The “population” from which bootstrapped samples are drawn is merely the original data file. If this data set is small or the observations are not independent, resampling from it will not somehow fix these problems. In fact, resampling can magnify the effects of unusual features in a small data set […] 4. Results of bootstrap analyses are probably quite biased in small samples, but this is true of many traditional methods, too. […] [In] parametric bootstrapping […] the researcher specifies the numerical and distributional properties of a theoretical probability density function, and then the computer randomly samples from that distribution. When repeated many times by the computer, values of statistics in these synthesized samples vary randomly about the parameters specified by the researcher, which simulates sampling error.”
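
A minimal sketch of nonparametric bootstrapping for a percentile confidence interval around a mean, written directly in numpy on made-up data (recent SciPy versions also ship a ready-made scipy.stats.bootstrap helper):

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.lognormal(mean=0.0, sigma=1.0, size=60)   # a skewed, made-up sample

B = 5000
boot_means = np.empty(B)
for i in range(B):
    # Treat the sample as a pseudo-population: resample cases with replacement,
    # using the same sample size as the original data set
    resample = rng.choice(data, size=data.size, replace=True)
    boot_means[i] = resample.mean()

# Percentile bootstrap 95% confidence interval for the mean
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(round(data.mean(), 3), (round(lo, 3), round(hi, 3)))
```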

July 9, 2017 Posted by | Books, Psychology, Statistics | Leave a comment

Melanoma therapeutic strategies that select against resistance

A short lecture, but interesting:

If you’re not an oncologist, these two links in particular might be helpful to have a look at before you start out: BRAF (gene) & Myc. A very substantial proportion of the talk is devoted to math and stats methodology (which some people will find interesting and others …will not).

July 3, 2017 Posted by | Biology, Cancer/oncology, Genetics, Lectures, Mathematics, Medicine, Statistics | Leave a comment

The Personality Puzzle (I)

I don’t really like this book, which is a personality psychology introductory textbook by David Funder. I’ve read the first 400 pages (out of 700), but I’m still debating whether or not to finish it; it just isn’t very good. The level of coverage is low, it’s very fluffy, and the signal-to-noise ratio is nowhere near where I’d like it to be when I’m reading academic texts. Some parts of it frankly read like popular science. However, despite not feeling that the book is all that great, I can’t justify not blogging it; stuff I don’t blog I tend to forget, and if I’m reading a mediocre textbook anyway I should at least try to pick out some of the decent stuff that keeps me reading and try to make it easier for myself to recall that stuff later. Some parts of the book, and some of the arguments/observations included in it, are in my opinion just plain silly or stupid, but I won’t go into those things in this post because I don’t really see what the point of doing that would be.

The main reason why I decided to give the book a go was that I liked Funder’s book Personality Judgment, which I read a few years ago and which deals with some topics also covered superficially in this text. If you’re interested in these matters, that book is a much better choice, in my opinion, at least as far as I can remember (…though I have actually started to wonder whether it was really all that great, given that it was written by the same guy who wrote this book…). If you’re interested in a more ‘pure’ personality psychology text, a significantly better alternative is Leary et al.’s Handbook of Individual Differences in Social Behavior. Because of the multi-author format it also includes some very poor chapters, but those tend to be somewhat easy to identify and skip to get to the good stuff if you’re so inclined, and the general coverage is at a much higher level than that of this book.

Below I have added some quotes and observations from the first 150 pages of the book.

“A theory that accounts for certain things extremely well will probably not explain everything else so well. And a theory that tries to explain almost everything […] would probably not provide the best explanation for any one thing. […] different [personality psychology] basic approaches address different sets of questions […] each basic approach usually just ignores the topics it is not good at explaining.”

“Personality psychology tends to emphasize how individuals are different from one another. […] Other areas of psychology, by contrast, are more likely to treat people as if they were the same or nearly the same. Not only do the experimental subfields of psychology, such as cognitive and social psychology, tend to ignore how people are different from each other, but also the statistical analyses central to their research literally put individual differences into their “error” terms […] Although the emphasis of personality psychology often entails categorizing and labeling people, it also leads the field to be extraordinarily sensitive — more than any other area of psychology — to the fact that people really are different.”

“If you want to “look at” personality, what do you look at, exactly? Four different things. First, and perhaps most obviously, you can have the person describe herself. Personality psychologists often do exactly this. Second, you can ask people who know the person to describe her. Third, you can check on how the person is faring in life. And finally, you can observe what the person does and try to measure her behavior as directly and objectively as possible. These four types of clues can be called S [self-judgments], I [informants], L [life], and B [behavior] data […] The point of the four-way classification […] is not to place every kind of data neatly into one and only one category. Rather, the point is to illustrate the types of data that are relevant to personality and to show how they all have both advantages and disadvantages.”

“For cost-effectiveness, S data simply cannot be beat. […] According to one analysis, 70 percent of the articles in an important personality journal were based on self-report (Vazire, 2006).”

“I data are judgments by knowledgeable “informants” about general attributes of the individual’s personality. […] Usually, close acquaintanceship paired with common sense is enough to allow people to make judgments of each other’s attributes with impressive accuracy […]. Indeed, they may be more accurate than self-judgments, especially when the judgments concern traits that are extremely desirable or extremely undesirable […]. Only when the judgments are of a technical nature (e.g., the diagnosis of a mental disorder) does psychological education become relevant. Even then, acquaintances without professional training are typically well aware when someone has psychological problems […] psychologists often base their conclusions on contrived tests of one kind or another, or on observations in carefully constructed and controlled environments. Because I data derive from behaviors informants have seen in daily social interactions, they enjoy an extra chance of being relevant to aspects of personality that affect important life outcomes. […] I data reflect the opinions of people who interact with the person every day; they are the person’s reputation. […] personality judgments can [however] be [both] unfair as well as mistaken […] The most common problem that arises from letting people choose their own informants — the usual practice in research — may be the “letter of recommendation effect” […] research participants may tend to nominate informants who think well of them, leading to I data that provide a more positive picture than might have been obtained from more neutral parties.”

“L data […] are verifiable, concrete, real-life facts that may hold psychological significance. […] An advantage of using archival records is that they are not prone to the potential biases of self-report or the judgments of others. […] [However] L data have many causes, so trying to establish direct connections between specific attributes of personality and life outcomes is chancy. […] a psychologist can predict a particular outcome from psychological data only to the degree that the outcome is psychologically caused. L data often are psychologically caused only to a small degree.”

“The idea of B data is that participants are found, or put, in some sort of a situation, sometimes referred to as a testing situation, and then their behavior is directly observed. […] B data are expensive [and] are not used very often compared to the other types. Relatively few psychologists have the necessary resources.”

“Reliable data […] are measurements that reflect what you are trying to assess and are not affected by anything else. […] When trying to measure a stable attribute of personality—a trait rather than a state — the question of reliability reduces to this: Can you get the same result more than once? […] Validity is the degree to which a measurement actually reflects what one thinks or hopes it does. […] for a measure to be valid, it must be reliable. But a reliable measure is not necessarily valid. […] A measure that is reliable gives the same answer time after time. […] But even if a measure is the same time after time, that does not necessarily mean it is correct.”

“[M]ost personality tests provide S data. […] Other personality tests yield B data. […] IQ tests […] yield B data. Imagine trying to assess intelligence using an S-data test, asking questions such as “Are you an intelligent person?” and “Are you good at math?” Researchers have actually tried this, but simply asking people whether they are smart turns out to be a poor way to measure intelligence”.

“The answer an individual gives to any one question might not be particularly informative […] a single answer will tend to be unreliable. But if a group of similar questions is asked, the average of the answers ought to be much more stable, or reliable, because random fluctuations tend to cancel each other out. For this reason, one way to make a personality test more reliable is simply to make it longer.”
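
A small simulation sketch of the point about test length (made-up numbers): each simulated item is a noisy reflection of a true score, and the average of more items correlates more strongly with that true score:

```python
import numpy as np

rng = np.random.default_rng(4)
n_people = 1000
true_score = rng.normal(size=n_people)

def observed_total(k):
    # k items, each equal to the true score plus independent random noise
    items = true_score[:, None] + rng.normal(scale=2.0, size=(n_people, k))
    return items.mean(axis=1)

for k in (1, 5, 20, 80):
    r = np.corrcoef(true_score, observed_total(k))[0, 1]
    print(k, round(r, 2))   # the correlation with the true score rises with test length
```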

“The factor analytic method of test construction is based on a statistical technique. Factor analysis identifies groups of things […] that seem to have something in common. […] To use factor analysis to construct a personality test, researchers begin with a long list of […] items […] The next step is to administer these items to a large number of participants. […] The analysis is based on calculating correlation coefficients between each item and every other item. Many items […] will not correlate highly with anything and can be dropped. But the items that do correlate with each other can be assembled into groups. […] The next steps are to consider what the items have in common, and then name the factor. […] Factor analysis has been used not only to construct tests, but also to decide how many fundamental traits exist […] Various analysts have come up with different answers.”

[The Big Five were derived from factor analyses.]
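
A minimal sketch of the factor analytic idea using scikit-learn on simulated data: six hypothetical items are generated from two latent traits, and the estimated loadings should roughly separate items 1–3 from items 4–6 (real test construction would additionally involve item correlations, factor rotation, and judgment calls about how many factors to retain and what to name them):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
n = 500
latent = rng.normal(size=(n, 2))                            # two hypothetical latent traits
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],    # items 1-3 load on trait 1
                     [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])   # items 4-6 load on trait 2
items = latent @ loadings.T + rng.normal(scale=0.5, size=(n, 6))

fa = FactorAnalysis(n_components=2)
fa.fit(items)
# Estimated loadings (rows = factors, columns = items); signs and factor order
# are arbitrary, but the two groups of items should be clearly separated.
print(np.round(fa.components_, 2))
```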

“The empirical strategy of test construction is an attempt to allow reality to speak for itself. […] Like the factor analytic approach described earlier, the first step of the empirical approach is to gather lots of items. […] The second step, however, is quite different. For this step, you need to have a sample of participants who have already independently been divided into the groups you are interested in. Occupational groups and diagnostic categories are often used for this purpose. […] Then you are ready for the third step: administering your test to your participants. The fourth step is to compare the answers given by the different groups of participants. […] The basic assumption of the empirical approach […] is that certain kinds of people answer certain questions on personality inventories in distinctive ways. If you answer questions the same way as members of some occupational or diagnostic group did in the original derivation study, then you might belong to that group too. […] responses to empirically derived tests are difficult to fake. With a personality test of the straightforward, S-data variety, you can describe yourself the way you want to be seen, and that is indeed the score you will get. But because the items on empirically derived scales sometimes seem backward or absurd, it is difficult to know how to answer in such a way as to guarantee the score you want. This is often held up as one of the great advantages of the empirical approach […] [However] empirically derived tests are only as good as the criteria by which they are developed or against which they are cross-validated. […] the empirical correlates of item responses by which these tests are assembled are those found in one place, at one time, with one group of participants. If no attention is paid to item content, then there is no way to be confident that the test will work in a similar manner at another time, in another place, with different participants. […] A particular concern is that the empirical correlates of item response might change over time. The MMPI was developed decades ago and has undergone a major revision only once”.

“It is not correct, for example, that the significance level provides the probability that the substantive (non-null) hypothesis is true. […] the significance level gives the probability of getting the result one found if the null hypothesis were true. One statistical writer offered the following analogy (Dienes, 2011): The probability that a person is dead, given that a shark has bitten his head off, is 1.0. However, the probability that a person’s head was bitten off by a shark, given that he is dead, is much lower. The probability of the data given the hypothesis, and of the hypothesis given the data, is not the same thing. And the latter is what we really want to know. […] An effect size is more meaningful than a significance level. […] It is both facile and misleading to use the frequently taught method of squaring correlations if the intention is to evaluate effect size.”
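
The shark example is Bayes’ theorem in disguise: the probability of the hypothesis given the data depends not only on the probability of the data given the hypothesis (which is roughly what a p value addresses), but also on the prior probabilities and on how likely the data are under the alternative,

```latex
P(H_0 \mid D) \;=\; \frac{P(D \mid H_0)\,P(H_0)}{P(D \mid H_0)\,P(H_0) + P(D \mid H_1)\,P(H_1)}
```

so the two conditional probabilities can differ dramatically, just as in the shark example.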

June 30, 2017 Posted by | Books, Psychology, Statistics | Leave a comment

The Mathematical Challenge of Large Networks

This is another one of the aforementioned lectures I watched a while ago, but had never got around to blogging:

If I had to watch this one again, I’d probably skip most of the second half; it contains highly technical coverage of topics in graph theory, and it was very difficult for me to follow (but I did watch it to the end, just out of curiosity).

The lecturer has put up a ~500 page publication on these and related topics, which is available here, so if you want to know more that’s an obvious place to go have a look. A few other relevant links to stuff mentioned/covered in the lecture:
Szemerédi regularity lemma.
Graphon.
Turán’s theorem.
Quantum graph.

May 19, 2017 Posted by | Computer science, Lectures, Mathematics, Statistics | Leave a comment

A few diabetes papers of interest

i. Association Between Blood Pressure and Adverse Renal Events in Type 1 Diabetes.

“The Joint National Committee and American Diabetes Association guidelines currently recommend a blood pressure (BP) target of <140/90 mmHg for all adults with diabetes, regardless of type (1–3). However, evidence used to support this recommendation is primarily based on data from trials of type 2 diabetes (4–6). The relationship between BP and adverse outcomes in type 1 and type 2 diabetes may differ, given that the type 1 diabetes population is typically much younger at disease onset, hypertension is less frequently present at diagnosis (3), and the basis for the pathophysiology and disease complications may differ between the two populations.

“Prior prospective cohort studies (7,8) of patients with type 1 diabetes suggested that lower BP levels (<110–120/70–80 mmHg) at baseline entry were associated with a lower risk of adverse renal outcomes, including incident microalbuminuria. In one trial of antihypertensive treatment in type 1 diabetes (9), assignment to a lower mean arterial pressure (MAP) target of <92 mmHg (corresponding to ∼125/75 mmHg) led to a significant reduction in proteinuria compared with a MAP target of 100–107 mmHg (corresponding to ∼130–140/85–90 mmHg). Thus, it is possible that lower BP (<120/80 mmHg) reduces the risk of important renal outcomes, such as proteinuria, in patients with type 1 diabetes and may provide a synergistic benefit with intensive glycemic control on renal outcomes (10–12). However, fewer studies have examined the association between BP levels over time and the risk of more advanced renal outcomes, such as stage III chronic kidney disease (CKD) or end-stage renal disease (ESRD)”.

“The primary objective of this study was to determine whether there is an association between lower BP levels and the risk of more advanced diabetic nephropathy, defined as macroalbuminuria or stage III CKD, within a background of different glycemic control strategies […] We included 1,441 participants with type 1 diabetes between the ages of 13 and 39 years who had previously been randomized to receive intensive versus conventional glycemic control in the Diabetes Control and Complications Trial (DCCT). The exposures of interest were time-updated systolic BP (SBP) and diastolic BP (DBP) categories. Outcomes included macroalbuminuria (>300 mg/24 h) or stage III chronic kidney disease (CKD) […] During a median follow-up time of 24 years, there were 84 cases of stage III CKD and 169 cases of macroalbuminuria. In adjusted models, SBP […] (95% CI 1.05–1.21), and a 1.04 times higher risk of ESRD (95% CI 0.77–1.41) in adjusted Cox models. Every 10 mmHg increase in DBP was associated with a 1.17 times higher risk of microalbuminuria (95% CI 1.03–1.32), a 1.15 times higher risk of eGFR decline to <60 mL/min/1.73 m2 (95% CI 1.04–1.29), and a 0.80 times higher risk of ESRD (95% CI 0.47–1.38) in adjusted models. […] Because these data are observational, they cannot prove causation. It remains possible that subtle kidney disease may lead to early elevations in BP, and we cannot rule out the potential for reverse causation in our findings. However, we note similar trends in our data even when imposing a 7-year lag between BP and CKD ascertainment.”

“CONCLUSIONS A lower BP (<120/70 mmHg) was associated with a substantially lower risk of adverse renal outcomes, regardless of the prior assigned glycemic control strategy. Interventional trials may be useful to help determine whether the currently recommended BP target of 140/90 mmHg may be too high for optimal renal protection in type 1 diabetes.”

It’s important to keep in mind when interpreting these results that endpoints like ESRD and stage III CKD are not the only relevant outcomes in this setting; even mild-stage kidney disease in diabetics significantly increase the risk of death from cardiovascular disease, and a substantial proportion of patients may die from cardiovascular disease before reaching a late-stage kidney disease endpoint (here’s a relevant link).

ii. Identifying Causes for Excess Mortality in Patients With Diabetes: Closer but Not There Yet.

“A number of epidemiological studies have quantified the risk of death among patients with diabetes and assessed the causes of death (2–6), with highly varying results […] Overall, the studies to date have confirmed that diabetes is associated with an increased risk of all-cause mortality, but the magnitude of this excess risk is highly variable, with the relative risk ranging from 1.15 to 3.15. Nevertheless, all studies agree that mortality is mainly attributable to cardiovascular causes (2–6). On the other hand, studies of cancer-related death have generally been lacking despite the diabetes–cancer association and a number of plausible biological mechanisms identified to explain this link (8,9). In fact, studies assessing the specific causes of noncardiovascular death in diabetes have been sparse. […] In this issue of Diabetes Care, Baena-Díez et al. (10) report on an observational study of the association between diabetes and cause-specific death. This study involved 55,292 individuals from 12 Spanish population cohorts with no prior history of cardiovascular disease, aged 35 to 79 years, with a 10-year follow-up. […] This study found that individuals with diabetes compared with those without diabetes had a higher risk of cardiovascular death, cancer death, and noncardiovascular noncancer death with similar estimates obtained using the two statistical approaches. […] Baena-Díez et al. (10) showed that individuals with diabetes have an approximately threefold increased risk of cardiovascular mortality, which is much higher than what has been reported by recent studies (5,6). While this may be due to the lack of adjustment for important confounders in this study, there remains uncertainty regarding the magnitude of this increase.”

“[A]ll studies of excess mortality associated with diabetes, including the current one, have produced highly variable results. The reasons may be methodological. For instance, it may be that because of the wide range of age in these studies, comparing the rates of death between the patients with diabetes and those without diabetes using a measure based on the ratio of the rates may be misleading because the ratio can vary by age [it almost certainly does vary by age, US]. Instead, a measure based on the difference in rates may be more appropriate (16). Another issue relates to the fact that the studies include patients with longstanding diabetes of variable duration, resulting in so-called prevalent cohorts that can result in muddled mortality estimates since these are necessarily based on a mix of patients at different stages of disease (17). Thus, a paradigm change may be in order for future observational studies of diabetes and mortality, in the way they are both designed and analyzed. With respect to cancer, such studies will also need to tease out the independent contribution of antidiabetes treatments on cancer incidence and mortality (18–20). It is thus clear that the quantification of the excess mortality associated with diabetes per se will need more accurate tools.”

iii. Risk of Cause-Specific Death in Individuals With Diabetes: A Competing Risks Analysis. This is the paper some of whose results were discussed above. I’ll just include the highlights here:

RESULTS We included 55,292 individuals (15.6% with diabetes and overall mortality of 9.1%). The adjusted hazard ratios showed that diabetes increased mortality risk: 1) cardiovascular death, CSH = 2.03 (95% CI 1.63–2.52) and PSH = 1.99 (1.60–2.49) in men; and CSH = 2.28 (1.75–2.97) and PSH = 2.23 (1.70–2.91) in women; 2) cancer death, CSH = 1.37 (1.13–1.67) and PSH = 1.35 (1.10–1.65) in men; and CSH = 1.68 (1.29–2.20) and PSH = 1.66 (1.25–2.19) in women; and 3) noncardiovascular noncancer death, CSH = 1.53 (1.23–1.91) and PSH = 1.50 (1.20–1.89) in men; and CSH = 1.89 (1.43–2.48) and PSH = 1.84 (1.39–2.45) in women. In all instances, the cumulative mortality function was significantly higher in individuals with diabetes.

CONCLUSIONS Diabetes is associated with premature death from cardiovascular disease, cancer, and noncardiovascular noncancer causes.”

“Summary

Diabetes is associated with premature death from cardiovascular diseases (coronary heart disease, stroke, and heart failure), several cancers (liver, colorectal, and lung), and other diseases (chronic obstructive pulmonary disease and liver and kidney disease). In addition, the cause-specific cumulative mortality for cardiovascular, cancer, and noncardiovascular noncancer causes was significantly higher in individuals with diabetes, compared with the general population. The dual analysis with CSH and PSH methods provides a comprehensive view of mortality dynamics in the population with diabetes. This approach identifies the individuals with diabetes as a vulnerable population for several causes of death aside from the traditionally reported cardiovascular death.”

iv. Disability-Free Life-Years Lost Among Adults Aged ≥50 Years With and Without Diabetes.

“RESEARCH DESIGN AND METHODS Adults (n = 20,008) aged 50 years and older were followed from 1998 to 2012 in the Health and Retirement Study, a prospective biannual survey of a nationally representative sample of adults. Diabetes and disability status (defined by mobility loss, difficulty with instrumental activities of daily living [IADL], and/or difficulty with activities of daily living [ADL]) were self-reported. We estimated incidence of disability, remission to nondisability, and mortality. We developed a discrete-time Markov simulation model with a 1-year transition cycle to predict and compare lifetime disability-related outcomes between people with and without diabetes. Data represent the U.S. population in 1998.

RESULTS From age 50 years, adults with diabetes died 4.6 years earlier, developed disability 6–7 years earlier, and spent about 1–2 more years in a disabled state than adults without diabetes. With increasing baseline age, diabetes was associated with significant (P < 0.05) reductions in the number of total and disability-free life-years, but the absolute difference in years between those with and without diabetes was less than at younger baseline age. Men with diabetes spent about twice as many of their remaining years disabled (20–24% of remaining life across the three disability definitions) as men without diabetes (12–16% of remaining life across the three disability definitions). Similar associations between diabetes status and disability-free and disabled years were observed among women.

CONCLUSIONS Diabetes is associated with a substantial reduction in nondisabled years, to a greater extent than the reduction of longevity. […] Using a large, nationally representative cohort of Americans aged 50 years and older, we found that diabetes is associated with a substantial deterioration of nondisabled years and that this is a greater number of years than the loss of longevity associated with diabetes. On average, a middle-aged adult with diabetes has an onset of disability 6–7 years earlier than one without diabetes, spends 1–2 more years with disability, and loses 7 years of disability-free life to the condition. Although other nationally representative studies have reported large reductions in complications (9) and mortality among the population with diabetes in recent decades (1), these studies, akin to our results, suggest that diabetes continues to have a substantial impact on morbidity and quality of remaining years of life.”
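
The study’s discrete-time Markov simulation model is not described in detail in the quoted material, but the basic mechanics of such a model are simple. Here is a purely illustrative Python sketch with made-up 1-year transition probabilities (not the study’s estimates), showing how expected disability-free and disabled life-years can be accumulated over annual cycles:

```python
import numpy as np

# Hypothetical 1-year transition probabilities between the states
# nondisabled (0), disabled (1), and dead (2); rows sum to 1.
# These numbers are invented for illustration only.
P = np.array([[0.90, 0.07, 0.03],
              [0.15, 0.75, 0.10],
              [0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0])   # start the whole cohort nondisabled at age 50
disability_free_years = 0.0
disabled_years = 0.0
for year in range(50):              # follow the cohort over 50 one-year cycles
    disability_free_years += state[0]
    disabled_years += state[1]
    state = state @ P               # advance the cohort one transition cycle

print(round(disability_free_years, 1), round(disabled_years, 1))
```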

v. Association Between Use of Lipid-Lowering Therapy and Cardiovascular Diseases and Death in Individuals With Type 1 Diabetes.

“People with type 1 diabetes have a documented shorter life expectancy than the general population without diabetes (1). Cardiovascular disease (CVD) is the main cause of the excess morbidity and mortality, and despite advances in management and therapy, individuals with type 1 diabetes have a markedly elevated risk of cardiovascular events and death compared with the general population (2).

Lipid-lowering treatment with hydroxymethylglutaryl-CoA reductase inhibitors (statins) prevents major cardiovascular events and death in a broad spectrum of patients (3,4). […] We hypothesized that primary prevention with lipid-lowering therapy (LLT) can reduce the incidence of cardiovascular morbidity and mortality in individuals with type 1 diabetes. The aim of the study was to examine this in a nationwide longitudinal cohort study of patients with no history of CVD. […] A total of 24,230 individuals included in 2006–2008 NDR with type 1 diabetes without a history of CVD were followed until 31 December 2012; 18,843 were untreated and 5,387 treated with LLT [Lipid-Lowering Therapy] (97% statins). The mean follow-up was 6.0 years. […] Hazard ratios (HRs) for treated versus untreated were as follows: cardiovascular death 0.60 (95% CI 0.50–0.72), all-cause death 0.56 (0.48–0.64), fatal/nonfatal stroke 0.56 (0.46–0.70), fatal/nonfatal acute myocardial infarction 0.78 (0.66–0.92), fatal/nonfatal coronary heart disease 0.85 (0.74–0.97), and fatal/nonfatal CVD 0.77 (0.69–0.87).

CONCLUSIONS This observational study shows that LLT is associated with 22–44% reduction in the risk of CVD and cardiovascular death among individuals with type 1 diabetes without history of CVD and underlines the importance of primary prevention with LLT to reduce cardiovascular risk in type 1 diabetes.”

vi. Prognostic Classification Factors Associated With Development of Multiple Autoantibodies, Dysglycemia, and Type 1 Diabetes—A Recursive Partitioning Analysis.

“In many prognostic factor studies, multivariate analyses using the Cox proportional hazards model are applied to identify independent prognostic factors. However, the coefficient estimates derived from the Cox proportional hazards model may be biased as a result of violating assumptions of independence. […] RPA [Recursive Partitioning Analysis] classification is a useful tool that could prioritize the prognostic factors and divide the subjects into distinctive groups. RPA has an advantage over the proportional hazards model in identifying prognostic factors because it does not require risk factor independence and, as a nonparametric technique, makes no requirement on the underlying distributions of the variables considered. Hence, it relies on fewer modeling assumptions. Also, because the method is designed to divide subjects into groups based on the length of survival, it defines groupings for risk classification, whereas Cox regression models do not. Moreover, there is no need to explicitly include covariate interactions because of the recursive splitting structure of tree model construction.”
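
The paper applies recursive partitioning to time-to-event data (i.e., survival trees), which is not reproduced here; as a rough illustration of the recursive splitting idea itself, here is a sketch using an ordinary classification tree in scikit-learn on simulated subjects, with hypothetical age and titer variables and a binary progression outcome standing in for the survival endpoint:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(6)
n = 1000
age = rng.uniform(1, 40, n)
titer = rng.lognormal(0.0, 1.0, n)
# Hypothetical outcome: progression is more likely at younger ages and higher titers
p_prog = 1 / (1 + np.exp(0.15 * age - 0.8 * np.log(titer)))
progressed = (rng.random(n) < p_prog).astype(int)

X = np.column_stack([age, titer])
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=50)
tree.fit(X, progressed)
print(export_text(tree, feature_names=["age", "titer"]))   # the recursive splits
```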

“This is the first study that characterizes the risk factors associated with the transition from one preclinical stage to the next following a recommended staging classification system (9). The tree-structured prediction model reveals that the risk parameters are not the same across each transition. […] Based on the RPA classification, the subjects at younger age and with higher GAD65Ab [an important biomarker in the context of autoimmune forms of diabetes, US – here’s a relevant link] titer are at higher risk for progression to multiple positive autoantibodies from a single autoantibody (seroconversion). Approximately 70% of subjects with a single autoantibody were positive for GAD65Ab, much higher than for insulin autoantibody (24%) and IA-2A [here’s a relevant link – US] (5%). Our study results are consistent with those of others (22–24) in that seroconversion is age related. Previous studies in infants and children at an early age have shown that progression from single to two or more autoantibodies occurs more commonly in children […] (25). The subjects ≤16 years of age had almost triple the 5-year risk compared with subjects >16 years of age at the same GAD65Ab titer level. Hence, not all individuals with a single islet autoantibody can be thought of as being at low risk for disease progression.”

“This is the first study that identifies the risk factors associated with the timing of transitions from one preclinical stage to the next in the development of T1D. Based on RPA risk parameters, we identify the characteristics of groups with similar 5-year risks for advancing to the next preclinical stage. It is clear that individuals with one or more autoantibodies or with dysglycemia are not homogeneous with regard to the risk of disease progression. Also, there are differences in risk factors at each stage that are associated with increased risk of progression. The potential benefit of identifying these groups allows for a more informed discussion of diabetes risk and the selective enrollment of individuals into clinical trials whose risk more appropriately matches the potential benefit of an experimental intervention. Since the risk levels in these groups are substantial, their definition makes possible the design of more efficient trials with target sample sizes that are feasible, opening up the field of prevention to additional at-risk cohorts. […] Our results support the evidence that autoantibody titers are strong predictors at each transition leading to T1D development. The risk of the development of multiple autoantibodies was significantly increased when the GAD65Ab titer level was elevated, and the risk of the development of dysglycemia was increased when the IA-2A titer level increased. These indicate that better risk prediction on the timing of transitions can be obtained by evaluating autoantibody titers. The results also suggest that an autoantibody titer should be carefully considered in planning prevention trials for T1D in addition to the number of positive autoantibodies and the type of autoantibody.”

May 17, 2017 Posted by | Diabetes, Epidemiology, Health Economics, Immunology, Medicine, Nephrology, Statistics, Studies | Leave a comment

Biodemography of aging (IV)

My working assumption as I was reading part two of the book was that I would not be covering that part of the book in much detail here because it would simply be too much work to make such posts legible to the readership of this blog. However, I then later, while writing this post, had the thought that given that almost nobody reads along here anyway (I’m not complaining, mind you – this is how I like it these days), the main beneficiary of my blog posts will always be myself, which led to the related observation/notion that I should not be limiting my coverage of interesting stuff here simply because some hypothetical and probably nonexistent readership out there might not be able to follow the coverage. So when I started out writing this post I was working under the assumption that it would be my last post about the book, but I now feel sure that if I find the time I’ll add at least one more post about the book’s statistics coverage. On a related note I am explicitly making the observation here that this post was written for my benefit, not yours. You can read it if you like, or not, but it was not really written for you.

I have added bold in a few places to emphasize key concepts and observations from the quoted paragraphs and to make the post easier for me to navigate later (all the italics below are, on the other hand, those of the authors of the book).

Biodemography is a multidisciplinary branch of science that unites under its umbrella various analytic approaches aimed at integrating biological knowledge and methods and traditional demographic analyses to shed more light on variability in mortality and health across populations and between individuals. Biodemography of aging is a special subfield of biodemography that focuses on understanding the impact of processes related to aging on health and longevity.”

“Mortality rates as a function of age are a cornerstone of many demographic analyses. The longitudinal age trajectories of biomarkers add a new dimension to the traditional demographic analyses: the mortality rate becomes a function of not only age but also of these biomarkers (with additional dependence on a set of sociodemographic variables). Such analyses should incorporate dynamic characteristics of trajectories of biomarkers to evaluate their impact on mortality or other outcomes of interest. Traditional analyses using baseline values of biomarkers (e.g., Cox proportional hazards or logistic regression models) do not take into account these dynamics. One approach to the evaluation of the impact of biomarkers on mortality rates is to use the Cox proportional hazards model with time-dependent covariates; this approach is used extensively in various applications and is available in all popular statistical packages. In such a model, the biomarker is considered a time-dependent covariate of the hazard rate and the corresponding regression parameter is estimated along with standard errors to make statistical inference on the direction and the significance of the effect of the biomarker on the outcome of interest (e.g., mortality). However, the choice of the analytic approach should not be governed exclusively by its simplicity or convenience of application. It is essential to consider whether the method gives meaningful and interpretable results relevant to the research agenda. In the particular case of biodemographic analyses, the Cox proportional hazards model with time-dependent covariates is not the best choice.
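
For reference, the Cox model with time-dependent covariates that the authors caution against using with noisy biomarkers is indeed available in standard software; a minimal sketch of what fitting one looks like in Python’s lifelines package (assuming lifelines is installed, and using a long-format toy data set with hypothetical column names) might be:

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long format: one row per subject per interval, with the raw biomarker value
# observed at the start of the interval. The data below are toy values meant
# only to show the expected layout.
df = pd.DataFrame({
    "id":        [1, 1, 1, 2, 2, 3, 3, 4],
    "start":     [0, 2, 4, 0, 3, 0, 2, 0],
    "stop":      [2, 4, 6, 3, 5, 2, 4, 4],
    "biomarker": [5.1, 5.8, 6.4, 4.2, 4.0, 5.0, 5.5, 4.8],
    "event":     [0, 0, 1, 0, 0, 0, 1, 0],   # 1 only on the interval ending in the event
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```

As the passage explains, treating the raw, error-prone biomarker values as if they were the true ones is exactly what makes this convenient approach problematic in biodemographic applications.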

“Longitudinal studies of aging present special methodological challenges due to inherent characteristics of the data that need to be addressed in order to avoid biased inference. The challenges are related to the fact that the populations under study (aging individuals) experience substantial dropout rates related to death or poor health and often have co-morbid conditions related to the disease of interest. The standard assumption made in longitudinal analyses (although usually not explicitly mentioned in publications) is that dropout (e.g., death) is not associated with the outcome of interest. While this can be safely assumed in many general longitudinal studies (where, e.g., the main causes of dropout might be the administrative end of the study or moving out of the study area, which are presumably not related to the studied outcomes), the very nature of the longitudinal outcomes (e.g., measurements of some physiological biomarkers) analyzed in a longitudinal study of aging assumes that they are (at least hypothetically) related to the process of aging. Because the process of aging leads to the development of diseases and, eventually, death, in longitudinal studies of aging an assumption of non-association of the reason for dropout and the outcome of interest is, at best, risky, and usually is wrong. As an illustration, we found that the average trajectories of different physiological indices of individuals dying at earlier ages markedly deviate from those of long-lived individuals, both in the entire Framingham original cohort […] and also among carriers of specific alleles […] In such a situation, panel compositional changes due to attrition affect the averaging procedure and modify the averages in the total sample. Furthermore, biomarkers are subject to measurement error and random biological variability. They are usually collected intermittently at examination times which may be sparse and typically biomarkers are not observed at event times. It is well known in the statistical literature that ignoring measurement errors and biological variation in such variables and using their observed “raw” values as time-dependent covariates in a Cox regression model may lead to biased estimates and incorrect inferences […] Standard methods of survival analysis such as the Cox proportional hazards model (Cox 1972) with time-dependent covariates should be avoided in analyses of biomarkers measured with errors because they can lead to biased estimates.
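The bias alluded to at the end of the quote is easy to illustrate with a toy errors-in-variables simulation. The sketch below uses ordinary linear regression rather than a hazard model purely for simplicity, but the qualitative point – the estimated effect of an error-prone covariate is attenuated toward zero – is the same one the authors are making.

```python
# Toy illustration of the attenuation caused by plugging error-prone covariate
# values directly into a regression: the estimated coefficient shrinks toward
# zero as the measurement noise grows. Linear regression is used purely for
# simplicity; the qualitative point carries over to hazard models with
# error-prone time-dependent covariates.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 0.5
x_true = rng.normal(0.0, 1.0, n)              # "true" biomarker values
y = beta * x_true + rng.normal(0.0, 1.0, n)   # outcome driven by the true values

for noise_sd in (0.0, 0.5, 1.0, 2.0):
    x_obs = x_true + rng.normal(0.0, noise_sd, n)   # observed, error-prone values
    slope = np.polyfit(x_obs, y, 1)[0]              # naive OLS slope on observed x
    # Expected value of the naive slope here: beta / (1 + noise_sd**2)
    print(f"noise sd {noise_sd:.1f}: estimated effect {slope:.3f} (true {beta})")
```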

“Statistical methods aimed at analyses of time-to-event data jointly with longitudinal measurements have become known in the mainstream biostatistical literature as “joint models for longitudinal and time-to-event data” (“survival” or “failure time” are often used interchangeably with “time-to-event”) or simply “joint models.” This is an active and fruitful area of biostatistics with an explosive growth in recent years. […] The standard joint model consists of two parts, the first representing the dynamics of longitudinal data (which is referred to as the “longitudinal sub-model”) and the second one modeling survival or, generally, time-to-event data (which is referred to as the “survival sub-model”). […] Numerous extensions of this basic model have appeared in the joint modeling literature in recent decades, providing great flexibility in applications to a wide range of practical problems. […] The standard parameterization of the joint model (11.2) assumes that the risk of the event at age t depends on the current “true” value of the longitudinal biomarker at this age. While this is a reasonable assumption in general, it may be argued that additional dynamic characteristics of the longitudinal trajectory can also play a role in the risk of death or onset of a disease. For example, if two individuals at the same age have exactly the same level of some biomarker at this age, but the trajectory for the first individual increases faster with age than that of the second one, then the first individual can have worse survival chances for subsequent years. […] Therefore, extensions of the basic parameterization of joint models allowing for dependence of the risk of an event on such dynamic characteristics of the longitudinal trajectory can provide additional opportunities for comprehensive analyses of relationships between the risks and longitudinal trajectories. Several authors have considered such extended models. […] joint models are computationally intensive and are sometimes prone to convergence problems [however such] models provide more efficient estimates of the effect of a covariate […] on the time-to-event outcome in the case in which there is […] an effect of the covariate on the longitudinal trajectory of a biomarker. This means that analyses of longitudinal and time-to-event data in joint models may require smaller sample sizes to achieve comparable statistical power with analyses based on time-to-event data alone (Chen et al. 2011).”
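For my own later reference, here is a minimal sketch of the standard two-part specification described above, written in the notation commonly used in the joint-modeling literature (my notation, not a verbatim reproduction of the book’s equation (11.2)): a mixed-effects sub-model for the biomarker, and a proportional-hazards sub-model in which the hazard depends on the current ‘true’ biomarker value, optionally extended with its slope.

```latex
% Longitudinal sub-model: observed biomarker value = smooth subject-specific
% trajectory + measurement error
\[
y_i(t) = m_i(t) + \varepsilon_i(t), \qquad
m_i(t) = \mathbf{x}_i(t)^{\top}\boldsymbol{\beta} + \mathbf{z}_i(t)^{\top}\mathbf{b}_i,
\qquad \varepsilon_i(t) \sim N(0, \sigma^2)
\]
% Survival sub-model: the hazard depends on the current "true" value m_i(t) ...
\[
h_i(t) = h_0(t)\exp\{\boldsymbol{\gamma}^{\top}\mathbf{w}_i + \alpha_1 m_i(t)\}
\]
% ... and, in the extended parameterizations, also on dynamic characteristics
% of the trajectory such as its slope:
\[
h_i(t) = h_0(t)\exp\{\boldsymbol{\gamma}^{\top}\mathbf{w}_i + \alpha_1 m_i(t) + \alpha_2\, m_i'(t)\}
\]
```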

“To be useful as a tool for biodemographers and gerontologists who seek biological explanations for observed processes, models of longitudinal data should be based on realistic assumptions and reflect relevant knowledge accumulated in the field. An example is the shape of the risk functions. Epidemiological studies show that the conditional hazards of health and survival events considered as functions of risk factors often have U- or J-shapes […], so a model of aging-related changes should incorporate this information. In addition, risk variables, and, what is very important, their effects on the risks of corresponding health and survival events, experience aging-related changes and these can differ among individuals. […] An important class of models for joint analyses of longitudinal and time-to-event data incorporating a stochastic process for description of longitudinal measurements uses an epidemiologically-justified assumption of a quadratic hazard (i.e., U-shaped in general and J-shaped for variables that can take values only on one side of the U-curve) considered as a function of physiological variables. Quadratic hazard models have been developed and intensively applied in studies of human longitudinal data”.
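Again mostly for my own benefit, here is the general form of the quadratic hazard specification referred to above, written the way it usually appears in this literature (my notation): the mortality rate is a baseline term plus a quadratic penalty for deviations of the physiological state from an age-specific minimum-risk level.

```latex
% Quadratic (U-shaped) hazard: Y(t) is the vector of physiological variables at
% age t, f(t) the age-specific "optimal" (minimum-risk) levels, Q(t) a
% non-negative definite matrix of penalty weights, mu_0(t) a baseline hazard.
\[
\mu\bigl(t, Y(t)\bigr) = \mu_0(t)
  + \bigl(Y(t) - f(t)\bigr)^{\top} Q(t)\, \bigl(Y(t) - f(t)\bigr)
\]
% For a variable restricted to one side of the U (the J-shaped case mentioned
% above), the same form applies with f(t) near the boundary of the admissible range.
```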

“Various approaches to statistical model building and data analysis that incorporate unobserved heterogeneity are ubiquitous in different scientific disciplines. Unobserved heterogeneity in models of health and survival outcomes can arise because there may be relevant risk factors affecting an outcome of interest that are either unknown or not measured in the data. Frailty models introduce the concept of unobserved heterogeneity in survival analysis for time-to-event data. […] Individual age trajectories of biomarkers can differ due to various observed as well as unobserved (and unknown) factors and such individual differences propagate to differences in risks of related time-to-event outcomes such as the onset of a disease or death. […] The joint analysis of longitudinal and time-to-event data is the realm of a special area of biostatistics named “joint models for longitudinal and time-to-event data” or simply “joint models” […] Approaches that incorporate heterogeneity in populations through random variables with continuous distributions (as in the standard joint models and their extensions […]) assume that the risks of events and longitudinal trajectories follow similar patterns for all individuals in a population (e.g., that biomarkers change linearly with age for all individuals). Although such homogeneity in patterns can be justifiable for some applications, generally this is a rather strict assumption […] A population under study may consist of subpopulations with distinct patterns of longitudinal trajectories of biomarkers that can also have different effects on the time-to-event outcome in each subpopulation. When such subpopulations can be defined on the base of observed covariate(s), one can perform stratified analyses applying different models for each subpopulation. However, observed covariates may not capture the entire heterogeneity in the population in which case it may be useful to conceive of the population as consisting of latent subpopulations defined by unobserved characteristics. Special methodological approaches are necessary to accommodate such hidden heterogeneity. Within the joint modeling framework, a special class of models, joint latent class models, was developed to account for such heterogeneity […] The joint latent class model has three components. First, it is assumed that a population consists of a fixed number of (latent) subpopulations. The latent class indicator represents the latent class membership and the probability of belonging to the latent class is specified by a multinomial logistic regression function of observed covariates. It is assumed that individuals from different latent classes have different patterns of longitudinal trajectories of biomarkers and different risks of event. The key assumption of the model is conditional independence of the biomarker and the time-to-events given the latent classes. Then the class-specific models for the longitudinal and time-to-event outcomes constitute the second and third component of the model thus completing its specification. […] the latent class stochastic process model […] provides a useful tool for dealing with unobserved heterogeneity in joint analyses of longitudinal and time-to-event outcomes and taking into account hidden components of aging in their joint influence on health and longevity. This approach is also helpful for sensitivity analyses in applications of the original stochastic process model. 
We recommend starting the analyses with the original stochastic process model and estimating the model ignoring possible hidden heterogeneity in the population. Then the latent class stochastic process model can be applied to test hypotheses about the presence of hidden heterogeneity in the data in order to appropriately adjust the conclusions if a latent structure is revealed.”
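A sketch of the three components of the joint latent class model described above (my notation): a multinomial logistic model for class membership, plus class-specific longitudinal and survival sub-models which are assumed conditionally independent given the class.

```latex
% (1) Latent class membership for subject i (G latent classes, covariates w_i):
\[
P(c_i = g \mid \mathbf{w}_i) =
  \frac{\exp(\xi_{0g} + \mathbf{w}_i^{\top}\boldsymbol{\xi}_{1g})}
       {\sum_{l=1}^{G} \exp(\xi_{0l} + \mathbf{w}_i^{\top}\boldsymbol{\xi}_{1l})}
\]
% (2) Class-specific longitudinal sub-model and (3) class-specific hazard:
\[
y_i(t) \mid c_i = g \;=\; m_{ig}(t) + \varepsilon_i(t), \qquad
h_i(t \mid c_i = g) = h_{0g}(t)\exp\{\boldsymbol{\gamma}_g^{\top}\mathbf{w}_i\}
\]
% Key assumption: the biomarker process and the event time are conditionally
% independent given the latent class c_i.
```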

The longitudinal genetic-demographic model (or the genetic-demographic model for longitudinal data) […] combines three sources of information in the likelihood function: (1) follow-up data on survival (or, generally, on some time-to-event) for genotyped individuals; (2) (cross-sectional) information on ages at biospecimen collection for genotyped individuals; and (3) follow-up data on survival for non-genotyped individuals. […] Such joint analyses of genotyped and non-genotyped individuals can result in substantial improvements in statistical power and accuracy of estimates compared to analyses of the genotyped subsample alone if the proportion of non-genotyped participants is large. Situations in which genetic information cannot be collected for all participants of longitudinal studies are not uncommon. They can arise for several reasons: (1) the longitudinal study may have started some time before genotyping was added to the study design so that some initially participating individuals dropped out of the study (i.e., died or were lost to follow-up) by the time of genetic data collection; (2) budget constraints prohibit obtaining genetic information for the entire sample; (3) some participants refuse to provide samples for genetic analyses. Nevertheless, even when genotyped individuals constitute a majority of the sample or the entire sample, application of such an approach is still beneficial […] The genetic stochastic process model […] adds a new dimension to genetic biodemographic analyses, combining information on longitudinal measurements of biomarkers available for participants of a longitudinal study with follow-up data and genetic information. Such joint analyses of different sources of information collected in both genotyped and non-genotyped individuals allow for more efficient use of the research potential of longitudinal data which otherwise remains underused when only genotyped individuals or only subsets of available information (e.g., only follow-up data on genotyped individuals) are involved in analyses. Similar to the longitudinal genetic-demographic model […], the benefits of combining data on genotyped and non-genotyped individuals in the genetic SPM come from the presence of common parameters describing characteristics of the model for genotyped and non-genotyped subsamples of the data. This takes into account the knowledge that the non-genotyped subsample is a mixture of carriers and non-carriers of the same alleles or genotypes represented in the genotyped subsample and applies the ideas of heterogeneity analyses […] When the non-genotyped subsample is substantially larger than the genotyped subsample, these joint analyses can lead to a noticeable increase in the power of statistical estimates of genetic parameters compared to estimates based only on information from the genotyped subsample. This approach is applicable not only to genetic data but to any discrete time-independent variable that is observed only for a subsample of individuals in a longitudinal study.
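A simplified sketch of how the likelihood combines the genotyped and non-genotyped subsamples (my notation; the full model also conditions on survival up to the age of biospecimen collection, which I ignore here): genotyped individuals contribute terms conditional on their observed genotype, while non-genotyped individuals contribute mixtures over the possible genotypes, with the genotype-specific parameters shared across the two parts – which is where the gain in power comes from.

```latex
% Simplified sketch of the combined likelihood.
% L_i(theta | g): contribution of individual i's follow-up (and, in the genetic
% SPM, longitudinal) data given genotype g; p_g: population frequency of g.
\[
L(\theta) =
  \prod_{i \,\in\, \text{genotyped}} L_i(\theta \mid g_i)
  \;\times\;
  \prod_{j \,\in\, \text{non-genotyped}} \sum_{g} p_g \, L_j(\theta \mid g)
\]
% The parameters theta (and the frequencies p_g) are shared between the two
% products, which is what lets the non-genotyped subsample add information.
```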

“Despite an existing tradition of interpreting differences in the shapes or parameters of the mortality rates (survival functions) resulting from the effects of exposure to different conditions or other interventions in terms of characteristics of individual aging, this practice has to be used with care. This is because such characteristics are difficult to interpret in terms of properties of external and internal processes affecting the chances of death. An important question then is: What kind of mortality model has to be developed to obtain parameters that are biologically interpretable? The purpose of this chapter is to describe an approach to mortality modeling that represents mortality rates in terms of parameters of physiological changes and declining health status accompanying the process of aging in humans. […] A traditional (demographic) description of changes in individual health/survival status is performed using a continuous-time random Markov process with a finite number of states, and age-dependent transition intensity functions (transitions rates). Transitions to the absorbing state are associated with death, and the corresponding transition intensity is a mortality rate. Although such a description characterizes connections between health and mortality, it does not allow for studying factors and mechanisms involved in the aging-related health decline. Numerous epidemiological studies provide compelling evidence that health transition rates are influenced by a number of factors. Some of them are fixed at the time of birth […]. Others experience stochastic changes over the life course […] The presence of such randomly changing influential factors violates the Markov assumption, and makes the description of aging-related changes in health status more complicated. […] The age dynamics of influential factors (e.g., physiological variables) in connection with mortality risks has been described using a stochastic process model of human mortality and aging […]. Recent extensions of this model have been used in analyses of longitudinal data on aging, health, and longevity, collected in the Framingham Heart Study […] This model and its extensions are described in terms of a Markov stochastic process satisfying a diffusion-type stochastic differential equation. The stochastic process is stopped at random times associated with individuals’ deaths. […] When an individual’s health status is taken into account, the coefficients of the stochastic differential equations become dependent on values of the jumping process. This dependence violates the Markov assumption and renders the conditional Gaussian property invalid. So the description of this (continuously changing) component of aging-related changes in the body also becomes more complicated. Since studying age trajectories of physiological states in connection with changes in health status and mortality would provide more realistic scenarios for analyses of available longitudinal data, it would be a good idea to find an appropriate mathematical description of the joint evolution of these interdependent processes in aging organisms. For this purpose, we propose a comprehensive model of human aging, health, and mortality in which the Markov assumption is fulfilled by a two-component stochastic process consisting of jumping and continuously changing processes. 
The jumping component is used to describe relatively fast changes in health status occurring at random times, and the continuous component describes relatively slow stochastic age-related changes of individual physiological states. […] The use of stochastic differential equations for random continuously changing covariates has been studied intensively in the analysis of longitudinal data […] Such a description is convenient since it captures the feedback mechanism typical of biological systems reflecting regular aging-related changes and takes into account the presence of random noise affecting individual trajectories. It also captures the dynamic connections between aging-related changes in health and physiological states, which are important in many applications.”
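A rough sketch of the two-component process described above, modeled on the published stochastic process model literature (my notation): a continuous physiological state following a diffusion-type stochastic differential equation whose coefficients depend on a finite-state jumping health process, with transition intensities (including the intensity of the transition to death) that may depend on the physiological state, e.g. quadratically as in the earlier sketch.

```latex
% Y(t): vector of physiological variables (continuous component)
% Z(t): finite-state health-status process (jumping component)
% W(t): Wiener process; a(.), f_1(.), b(.) may depend on the current health state
\[
dY(t) = a\bigl(Z(t), t\bigr)\,\bigl(Y(t) - f_1(Z(t), t)\bigr)\,dt
        + b\bigl(Z(t), t\bigr)\,dW(t)
\]
% Transitions of Z(t) between health states (including the transition to death)
% occur with intensities depending on age and on the current physiological state:
\[
\lambda_{kl}\bigl(t, Y(t)\bigr) = \lambda_{kl}^{0}(t)
  + \bigl(Y(t) - f_{kl}(t)\bigr)^{\top} Q_{kl}(t)\,\bigl(Y(t) - f_{kl}(t)\bigr)
\]
```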

April 23, 2017 Posted by | Biology, Books, Demographics, Genetics, Mathematics, Statistics | Leave a comment

Biodemography of aging (III)

Latent class representation of the Grade of Membership model.
Singular value decomposition.
Affine space.
Lebesgue measure.
General linear position.

The links above are links to topics I looked up while reading the second half of the book. The first link is quite relevant to the book’s coverage as a comprehensive longitudinal Grade of Membership (-GoM) model is covered in chapter 17. Relatedly, chapter 18 covers linear latent structure (-LLS) models, and as observed in the book LLS is a generalization of GoM. As should be obvious from the nature of the links some of the stuff included in the second half of the text is highly technical, and I’ll readily admit I was not fully able to understand all the details included in the coverage of chapters 17 and 18 in particular. On account of the technical nature of the coverage in Part 2 I’m not sure I’ll cover the second half of the book in much detail, though I probably shall devote at least one more post to some of those topics, as they were quite interesting even if some of the details were difficult to follow.

I have almost finished the book at this point, and I have already decided to both give the book five stars and include it on my list of favorite books on goodreads; it’s really well written, and it provides consistently highly detailed coverage of very high quality. As I also noted in the first post about the book the authors have given readability aspects some thought, and I am sure most readers would learn quite a bit from this text even if they were to skip some of the more technical chapters. The main body of Part 2 of the book, the subtitle of which is ‘Statistical Modeling of Aging, Health, and Longevity’, is however probably in general not worth the effort of reading unless you have a solid background in statistics.

This post includes some observations and quotes from the last chapters of the book’s Part 1.

“The proportion of older adults in the U.S. population is growing. This raises important questions about the increasing prevalence of aging-related diseases, multimorbidity issues, and disability among the elderly population. […] In 2009, 46.3 million people were covered by Medicare: 38.7 million of them were aged 65 years and older, and 7.6 million were disabled […]. By 2031, when the baby-boomer generation will be completely enrolled, Medicare is expected to reach 77 million individuals […]. Because the Medicare program covers 95 % of the nation’s aged population […], the prediction of future Medicare costs based on these data can be an important source of health care planning.”

“Three essential components (which could be also referred as sub-models) need to be developed to construct a modern model of forecasting of population health and associated medical costs: (i) a model of medical cost projections conditional on each health state in the model, (ii) health state projections, and (iii) a description of the distribution of initial health states of a cohort to be projected […] In making medical cost projections, two major effects should be taken into account: the dynamics of the medical costs during the time periods comprising the date of onset of chronic diseases and the increase of medical costs during the last years of life. In this chapter, we investigate and model the first of these two effects. […] the approach developed in this chapter generalizes the approach known as “life tables with covariates” […], resulting in a new family of forecasting models with covariates such as comorbidity indexes or medical costs. In sum, this chapter develops a model of the relationships between individual cost trajectories following the onset of aging-related chronic diseases. […] The underlying methodological idea is to aggregate the health state information into a single (or several) covariate(s) that can be determinative in predicting the risk of a health event (e.g., disease incidence) and whose dynamics could be represented by the model assumptions. An advantage of such an approach is its substantial reduction of the degrees of freedom compared with existing forecasting models  (e.g., the FEM model, Goldman and RAND Corporation 2004). […] We found that the time patterns of medical cost trajectories were similar for all diseases considered and can be described in terms of four components having the meanings of (i) the pre-diagnosis cost associated with initial comorbidity represented by medical expenditures, (ii) the cost peak associated with the onset of each disease, (iii) the decline/reduction in medical expenditures after the disease onset, and (iv) the difference between post- and pre-diagnosis cost levels associated with an acquired comorbidity. The description of the trajectories was formalized by a model which explicitly involves four parameters reflecting these four components.”

As I noted earlier in my coverage of the book, I don’t think the model above fully captures all relevant cost contributions of the diseases included, as the follow-up period was too short to capture all of the costs relevant to component (iv) of the model. This is definitely a problem in the context of diabetes. But then again nothing in theory stops people from combining the model above with other models which are better at dealing with the excess costs associated with long-term complications of chronic diseases, and the model results were intriguing even if the model likely underperforms in a few specific disease contexts.
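To fix ideas about the four components mentioned in the quote above, here is a toy sketch of an individual cost trajectory around the date of disease onset – my own parameterization and made-up numbers, not the authors’ fitted model.

```python
# Toy sketch of a medical-cost trajectory around disease onset with the four
# components described in the quote above; the exponential-decay shape and the
# parameter values are illustrative, not the authors' fitted model.
import numpy as np

def cost_trajectory(months_since_onset, pre=500.0, peak=4000.0,
                    post=900.0, decline_rate=0.4):
    """Expected monthly cost as a function of time relative to disease onset.

    pre          -- (i)  pre-diagnosis cost level (initial comorbidity)
    peak         -- (ii) cost peak associated with the onset of the disease
    decline_rate -- (iii) speed of the decline in costs after onset
    post         -- (iv) post-diagnosis cost level (acquired comorbidity), > pre
    """
    t = np.asarray(months_since_onset, dtype=float)
    after = post + (peak - post) * np.exp(-decline_rate * t)
    return np.where(t < 0, pre, after)

# Pre-diagnosis level, peak at onset, decline, then a plateau above the pre level.
print(cost_trajectory([-6, 0, 3, 12, 36]))
```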

Moving on…

“Models of medical cost projections usually are based on regression models estimated with the majority of independent predictors describing demographic status of the individual, patient’s health state, and level of functional limitations, as well as their interactions […]. If the health states needs to be described by a number of simultaneously manifested diseases, then detailed stratification over the categorized variables or use of multivariate regression models allows for a better description of the health states. However, it can result in an abundance of model parameters to be estimated. One way to overcome these difficulties is to use an approach in which the model components are demographically-based aggregated characteristics that mimic the effects of specific states. The model developed in this chapter is an example of such an approach: the use of a comorbidity index rather than of a set of correlated categorical regressor variables to represent the health state allows for an essential reduction in the degrees of freedom of the problem.”

“Unlike mortality, the onset time of chronic disease is difficult to define with high precision due to the large variety of disease-specific criteria for onset/incident case identification […] there is always some arbitrariness in defining the date of chronic disease onset, and a unified definition of date of onset is necessary for population studies with a long-term follow-up.”

“Individual age trajectories of physiological indices are the product of a complicated interplay among genetic and non-genetic (environmental, behavioral, stochastic) factors that influence the human body during the course of aging. Accordingly, they may differ substantially among individuals in a cohort. Despite this fact, the average age trajectories for the same index follow remarkable regularities. […] some indices tend to change monotonically with age: the level of blood glucose (BG) increases almost monotonically; pulse pressure (PP) increases from age 40 until age 85, then levels off and shows a tendency to decline only at later ages. The age trajectories of other indices are non-monotonic: they tend to increase first and then decline. Body mass index (BMI) increases up to about age 70 and then declines, diastolic blood pressure (DBP) increases until age 55–60 and then declines, systolic blood pressure (SBP) increases until age 75 and then declines, serum cholesterol (SCH) increases until age 50 in males and age 70 in females and then declines, ventricular rate (VR) increases until age 55 in males and age 45 in females and then declines. With small variations, these general patterns are similar in males and females. The shapes of the age-trajectories of the physiological variables also appear to be similar for different genotypes. […] The effects of these physiological indices on mortality risk were studied in Yashin et al. (2006), who found that the effects are gender and age specific. They also found that the dynamic properties of the individual age trajectories of physiological indices may differ dramatically from one individual to the next.”

“An increase in the mortality rate with age is traditionally associated with the process of aging. This influence is mediated by aging-associated changes in thousands of biological and physiological variables, some of which have been measured in aging studies. The fact that the age trajectories of some of these variables differ among individuals with short and long life spans and healthy life spans indicates that dynamic properties of the indices affect life history traits. Our analyses of the FHS data clearly demonstrate that the values of physiological indices at age 40 are significant contributors both to life span and healthy life span […] suggesting that normalizing these variables around age 40 is important for preventing age-associated morbidity and mortality later in life. […] results [also] suggest that keeping physiological indices stable over the years of life could be as important as their normalizing around age 40.”

“The results […] indicate that, in the quest of identifying longevity genes, it may be important to look for candidate genes with pleiotropic effects on more than one dynamic characteristic of the age-trajectory of a physiological variable, such as genes that may influence both the initial value of a trait (intercept) and the rates of its changes over age (slopes). […] Our results indicate that the dynamic characteristics of age-related changes in physiological variables are important predictors of morbidity and mortality risks in aging individuals. […] We showed that the initial value (intercept), the rate of changes (slope), and the variability of a physiological index, in the age interval 40–60 years, significantly influenced both mortality risk and onset of unhealthy life at ages 60+ in our analyses of the Framingham Heart Study data. That is, these dynamic characteristics may serve as good predictors of late life morbidity and mortality risks. The results also suggest that physiological changes taking place in the organism in middle life may affect longevity through promoting or preventing diseases of old age. For non-monotonically changing indices, we found that having a later age at the peak value of the index […], a lower peak value […], a slower rate of decline in the index at older ages […], and less variability in the index over time, can be beneficial for longevity. Also, the dynamic characteristics of the physiological indices were, overall, associated with mortality risk more significantly than with onset of unhealthy life.”
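To make the ‘intercept, slope, and variability’ idea concrete, here is a crude two-stage sketch: per-individual least-squares fits of a biomarker over ages 40–60 produce the three summary features, which are then fed into a logistic regression for a late-life outcome. The data and the outcome model below are entirely made up, and the book’s own analyses use joint/stochastic-process models rather than this naive shortcut (which ignores measurement error and informative dropout), but the sketch shows what the three dynamic characteristics are.

```python
# Naive two-stage sketch: (1) summarize each individual's mid-life biomarker
# trajectory by its fitted intercept, slope and residual variability;
# (2) use those summaries as predictors of a late-life outcome.
# All data below are simulated; this is only meant to fix ideas.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_subjects, ages = 500, np.arange(40, 61, 2)

features, outcome = [], []
for _ in range(n_subjects):
    intercept = rng.normal(120, 10)          # e.g. systolic blood pressure at 40
    slope = rng.normal(0.5, 0.3)             # mmHg per year of age
    noise_sd = rng.uniform(2, 8)
    y = intercept + slope * (ages - 40) + rng.normal(0, noise_sd, ages.size)

    b1, b0 = np.polyfit(ages - 40, y, 1)     # fitted slope and intercept
    resid_sd = np.std(y - (b0 + b1 * (ages - 40)))
    features.append([b0, b1, resid_sd])

    # Made-up outcome model: higher level, steeper rise, more variability -> risk.
    lin = -4 + 0.02 * (intercept - 120) + 1.0 * slope + 0.1 * noise_sd
    outcome.append(rng.random() < 1 / (1 + np.exp(-lin)))

model = LogisticRegression().fit(np.array(features), np.array(outcome))
print(model.coef_)   # signs should roughly recover the made-up risk structure
```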

“Decades of studies of candidate genes show that they are not linked to aging-related traits in a straightforward manner […]. Recent genome-wide association studies (GWAS) have reached fundamentally the same conclusion by showing that the traits in late life likely are controlled by a relatively large number of common genetic variants […]. Further, GWAS often show that the detected associations are of tiny effect […] the weak effect of genes on traits in late life can be not only because they confer small risks having small penetrance but because they confer large risks but in a complex fashion […] In this chapter, we consider several examples of complex modes of gene actions, including genetic tradeoffs, antagonistic genetic effects on the same traits at different ages, and variable genetic effects on lifespan. The analyses focus on the APOE common polymorphism. […] The analyses reported in this chapter suggest that the e4 allele can be protective against cancer with a more pronounced role in men. This protective effect is more characteristic of cancers at older ages and it holds in both the parental and offspring generations of the FHS participants. Unlike cancer, the effect of the e4 allele on risks of CVD is more pronounced in women. […] [The] results […] explicitly show that the same allele can change its role on risks of CVD in an antagonistic fashion from detrimental in women with onsets at younger ages to protective in women with onsets at older ages. […] e4 allele carriers have worse survival compared to non-e4 carriers in each cohort. […] Sex stratification shows sexual dimorphism in the effect of the e4 allele on survival […] with the e4 female carriers, particularly, being more exposed to worse survival. […] The results of these analyses provide two important insights into the role of genes in lifespan. First, they provide evidence on the key role of aging-related processes in genetic susceptibility to lifespan. For example, taking into account the specifics of aging-related processes gains 18 % in estimates of the RRs and five orders of magnitude in significance in the same sample of women […] without additional investments in increasing sample sizes and new genotyping. The second is that a detailed study of the role of aging-related processes in estimates of the effects of genes on lifespan (and healthspan) helps in detecting more homogeneous [high risk] sub-samples”.

“The aging of populations in developed countries requires effective strategies to extend healthspan. A promising solution could be to yield insights into the genetic predispositions for endophenotypes, diseases, well-being, and survival. It was thought that genome-wide association studies (GWAS) would be a major breakthrough in this endeavor. Various genetic association studies including GWAS assume that there should be a deterministic (unconditional) genetic component in such complex phenotypes. However, the idea of unconditional contributions of genes to these phenotypes faces serious difficulties which stem from the lack of direct evolutionary selection against or in favor of such phenotypes. In fact, evolutionary constraints imply that genes should be linked to age-related phenotypes in a complex manner through different mechanisms specific for given periods of life. Accordingly, the linkage between genes and these traits should be strongly modulated by age-related processes in a changing environment, i.e., by the individuals’ life course. The inherent sensitivity of genetic mechanisms of complex health traits to the life course will be a key concern as long as genetic discoveries continue to be aimed at improving human health.”

“Despite the common understanding that age is a risk factor of not just one but a large portion of human diseases in late life, each specific disease is typically considered as a stand-alone trait. Independence of diseases was a plausible hypothesis in the era of infectious diseases caused by different strains of microbes. Unlike those diseases, the exact etiology and precursors of diseases in late life are still elusive. It is clear, however, that the origin of these diseases differs from that of infectious diseases and that age-related diseases reflect a complicated interplay among ontogenetic changes, senescence processes, and damages from exposures to environmental hazards. Studies of the determinants of diseases in late life provide insights into a number of risk factors, apart from age, that are common for the development of many health pathologies. The presence of such common risk factors makes chronic diseases and hence risks of their occurrence interdependent. This means that the results of many calculations using the assumption of disease independence should be used with care. Chapter 4 argued that disregarding potential dependence among diseases may seriously bias estimates of potential gains in life expectancy attributable to the control or elimination of a specific disease and that the results of the process of coping with a specific disease will depend on the disease elimination strategy, which may affect mortality risks from other diseases.”

April 17, 2017 Posted by | Biology, Books, Cancer/oncology, Demographics, Economics, Epidemiology, Genetics, Health Economics, Medicine, Statistics | Leave a comment

Diabetes and the brain (IV)

Here’s one of my previous posts in the series about the book. In this post I’ll cover material dealing with two acute hyperglycemia-related diabetic complications (DKA and HHS – see below…) as well as multiple topics related to diabetes and stroke. I’ll start out with a few quotes from the book about DKA and HHS:

“DKA [diabetic ketoacidosis] is defined by a triad of hyperglycemia, ketosis, and acidemia and occurs in the absolute or near-absolute absence of insulin. […] DKA accounts for the bulk of morbidity and mortality in children with T1DM. National population-based studies estimate DKA mortality at 0.15% in the United States (4), 0.18–0.25% in Canada (4, 5), and 0.31% in the United Kingdom (6). […] Rates reach 25–67% in those who are newly diagnosed (4, 8, 9). The rates are higher in younger children […] The risk of DKA among patients with pre-existing diabetes is 1–10% annual per person […] DKA can present with mild-to-severe symptoms. […] polyuria and polydipsia […] patients may present with signs of dehydration, such as tachycardia and dry mucus membranes. […] Vomiting, abdominal pain, malaise, and weight loss are common presenting symptoms […] Signs related to the ketoacidotic state include hyperventilation with deep breathing (Kussmaul’s respiration) which is a compensatory respiratory response to an underlying metabolic acidosis. Acetonemia may cause a fruity odor to the breath. […] Elevated glucose levels are almost always present; however, euglycemic DKA has been described (19). Anion-gap metabolic acidosis is the hallmark of this condition and is caused by elevated ketone bodies.”
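The anion gap mentioned above is simple arithmetic; the sketch below uses the standard Na⁺ − (Cl⁻ + HCO₃⁻) formula, and the ~12 mEq/L upper reference limit used for the flag is a commonly cited textbook value rather than something taken from this book.

```python
# Anion gap from basic serum chemistry (all values in mEq/L). The formula is the
# standard one; the ~12 mEq/L upper reference limit is a commonly cited textbook
# value and is not taken from the book quoted above.
def anion_gap(sodium, chloride, bicarbonate):
    return sodium - (chloride + bicarbonate)

# Example consistent with DKA: low bicarbonate driving a wide gap.
gap = anion_gap(sodium=138, chloride=100, bicarbonate=10)
print(gap, "mEq/L", "- elevated" if gap > 12 else "- within reference range")
```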

“Clinically significant cerebral edema occurs in approximately 1% of patients with diabetic ketoacidosis […] DKA-related cerebral edema may represent a continuum. Mild forms resulting in subtle edema may result in modest mental status abnormalities whereas the most severe manifestations result in overt cerebral injury. […] Cerebral edema typically presents 4–12 h after the treatment for DKA is started (28, 29), but can occur at any time. […] Increased intracranial pressure with cerebral edema has been recognized as the leading cause of morbidity and mortality in pediatric patients with DKA (59). Mortality from DKA-related cerebral edema in children is high, up to 90% […] and accounts for 60–90% of the mortality seen in DKA […] many patients are left with major neurological deficits (28, 31, 35).”

“The hyperosmolar hyperglycemic state (HHS) is also an acute complication that may occur in patients with diabetes mellitus. It is seen primarily in patients with T2DM and has previously been referred to as “hyperglycemic hyperosmolar non-ketotic coma” or “hyperglycemic hyperosmolar non-ketotic state” (13). HHS is marked by profound dehydration and hyperglycemia and often by some degree of neurological impairment. The term hyperglycemic hyperosmolar state is used because (1) ketosis may be present and (2) there may be varying degrees of altered sensorium besides coma (13). Like DKA, the basic underlying disorder is inadequate circulating insulin, but there is often enough insulin to inhibit free fatty acid mobilization and ketoacidosis. […] Up to 20% of patients diagnosed with HHS do not have a previous history of diabetes mellitus (14). […] Kitabchi et al. estimated the rate of hospital admissions due to HHS to be lower than DKA, accounting for less than 1% of all primary diabetic admissions (13). […] Glucose levels rise in the setting of relative insulin deficiency. The low levels of circulating insulin prevent lipolysis, ketogenesis, and ketoacidosis (62) but are unable to suppress hyperglycemia, glucosuria, and water losses. […] HHS typically presents with one or more precipitating factors, similar to DKA. […] Acute infections […] account for approximately 32–50% of precipitating causes (13). […] The mortality rates for HHS vary between 10 and 20% (14, 93).”
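Since HHS is defined by hyperglycemia and hyperosmolality, a quick sketch of the effective serum osmolality calculation seems worth including here; the 2×Na + glucose/18 formula (glucose in mg/dL) and the ~320 mOsm/kg threshold are general clinical reference values, not numbers from the book.

```python
# Effective serum osmolality, a key quantity in the HHS picture.
# The formula and the ~320 mOsm/kg threshold are general clinical reference
# values, not taken from the book. Sodium in mEq/L, glucose in mg/dL.
def effective_osmolality(sodium, glucose):
    return 2 * sodium + glucose / 18.0

osm = effective_osmolality(sodium=145, glucose=900)
print(f"{osm:.0f} mOsm/kg", "- consistent with HHS" if osm > 320 else "")
```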

It should perhaps be noted explicitly that the mortality rates for these complications are particularly high in the settings of either very young individuals (DKA) or elderly individuals (HHS), who might have multiple comorbidities. Relatedly, HHS often develops acutely in settings where the precipitating factor is something really unpleasant like pneumonia or a cardiovascular event, so a high-ish mortality rate is perhaps not that surprising. Nor is it surprising that very young brains are particularly vulnerable in the context of DKA (I already discussed some of the research on these matters in some detail in an earlier post about this book).

This post to some extent covered the topic of ‘stroke in general’; here, however, I also wanted to include some more data dealing specifically with the diabetes-related aspects of that topic. Here’s a quote to start off with:

“DM [Diabetes Mellitus] has been consistently shown to represent a strong independent risk factor of ischemic stroke. […] The contribution of hyperglycemia to increased stroke risk is not proven. […] the relationship between hyperglycemia and stroke remains subject of debate. In this respect, the association between hyperglycemia and cerebrovascular disease is established less strongly than the association between hyperglycemia and coronary heart disease. […] The course of stroke in patients with DM is characterized by higher mortality, more severe disability, and higher recurrence rate […] It is now well accepted that the risk of stroke in individuals with DM is equal to that of individuals with a history of myocardial infarction or stroke, but no DM (24–26). This was confirmed in a recently published large retrospective study which enrolled all inhabitants of Denmark (more than 3 million people out of whom 71,802 patients with DM) and were followed-up for 5 years. In men without DM the incidence of stroke was 2.5 in those without and 7.8% in those with prior myocardial infarction, whereas in patients with DM it was 9.6 in those without and 27.4% in those with history of myocardial infarction. In women the numbers were 2.5, 9.0, 10.0, and 14.2%, respectively (22).

That study incidentally is very nice for me in particular to know about, given that I am a Danish diabetic. I do not here face any of the usual tiresome questions about ‘external validity’ and issues pertaining to ‘extrapolating out of sample’ – not only is it quite likely I’ve actually looked at some of the data used in that analysis myself, I also know that I am almost certainly one of the people included in the analysis. Of course you need other data as well to assess risk (e.g. age, see the previously linked post), but this is pretty clean as far as it goes. Moving on…

“The number of deaths from stroke attributable to DM is highest in low-and-middle-income countries […] the relative risk conveyed by DM is greater in younger subjects […] It is not well known whether type 1 or type 2 DM affects stroke risk differently. […] In the large cohort of women enrolled in the Nurses’ Health Study (116,316 women followed for up to 26 years) it was shown that the incidence of total stroke was fourfold higher in women with type 1 DM and twofold higher among women with type 2 DM than for non-diabetic women (33). […] The impact of DM duration as a stroke risk factor has not been clearly defined. […] In this context it is important to note that the actual duration of type 2 DM is difficult to determine precisely […and more generally: “the date of onset of a certain chronic disease is a quantity which is not defined as precisely as mortality“, as Yashin et al. put it – I also talked about this topic in my previous post, but it’s important when you’re looking at these sorts of things and is worth reiterating – US]. […] Traditional risk factors for stroke such as arterial hypertension, dyslipidemia, atrial fibrillation, heart failure, and previous myocardial infarction are more common in people with DM […]. However, the impact of DM on stroke is not just due to the higher prevalence of these risk factors, as the risk of mortality and morbidity remains over twofold increased after correcting for these factors (4, 37). […] It is informative to distinguish between factors that are non-specific and specific to DM. DM-specific factors, including chronic hyperglycemia, DM duration, DM type and complications, and insulin resistance, may contribute to an elevated stroke risk either by amplification of the harmful effect of other “classical” non-specific risk factors, such as hypertension, or by acting independently.”

More than a few variables are known to impact stroke risk, but the fact that many of the risk factors are related to each other (‘fat people often also have high blood pressure’) makes it hard to figure out which variables are most important, how they interact with each other, etc., etc. One might in that context perhaps conceptualize the metabolic syndrome (-MS) as a sort of indicator variable indicating whether a relatively common set of such related potential risk factors of interest is present or not – it is worth noting in that context that the authors include in the text the observation that: "it is yet uncertain if the whole concept of the MS entails more than its individual components. The clustering of risk factors complicates the assessment of the contribution of individual components to the risk of vascular events, as well as assessment of synergistic or interacting effects." MS confers a two- to threefold increased stroke risk, depending on the definition and the population analyzed, so there’s definitely some relevant stuff included in that box, but in the context of developing new treatment options and better assessing risk it might be helpful to – to put it simplistically – know if variable X is significantly more important than variable Y (and how the variables interact, etc., etc.). But this sort of information is hard to get.

There’s more than one type of stroke, and the way diabetes modifies the risk of various stroke types is not completely clear:

“Most studies have consistently shown that DM is an important risk factor for ischemic stroke, while the incidence of hemorrhagic stroke in subjects with DM does not seem to be increased. Consequently, the ratio of ischemic to hemorrhagic stroke is higher in patients with DM than in those stroke patients without DM [recall the base rates I’ve mentioned before in the coverage of this book: 80% of strokes are ischemic strokes in Western countries, and 15 % hemorrhagic] […] The data regarding an association between DM and the risk of hemorrhagic stroke are quite conflicting. In the most series no increased risk of cerebral hemorrhage was found (10, 101), and in the Copenhagen Stroke Registry, hemorrhagic stroke was even six times less frequent in diabetic patients than in non-diabetic subjects (102). […] However, in another prospective population-based study DM was associated with an increased risk of primary intracerebral hemorrhage (103). […] The significance of DM as a risk factor of hemorrhagic stroke could differ depending on ethnicity of subjects or type of DM. In the large Nurses’ Health Study type 1 DM increased the risk of hemorrhagic stroke by 3.8 times while type 2 DM did not increase such a risk (96). […] It is yet unclear if DM predominantly predisposes to either large or small vessel ischemic stroke. Nevertheless, lacunar stroke (small, less than 15mm in diameter infarction, cyst-like, frequently multiple) is considered to be the typical type of stroke in diabetic subjects (105–107), and DM may be present in up to 28–43% of patients with cerebral lacunar infarction (108–110).”

The Danish results mentioned above might be less useful to me than they appeared at first if the type of diabetes is important, because the majority of the diabetics included in that study were type 2 diabetics. I know from personal experience that it is difficult to type-identify diabetics using the available Danish registry data if you want to work with population-level data, and any scheme attempting this will be subject to potentially large misidentification problems. Some subgroups can presumably be correctly identified using diagnostic codes, but a very large number of individuals will be left out of the analyses if you rely only on identification strategies where you are (at least reasonably?) certain about the type. I’ve worked on these identification problems during my graduate work, so perhaps a few more things are worth mentioning here. In the context of diabetic subgroup analyses, misidentification is in general a much larger problem for type 1 results than for type 2 results: unless the study design takes the large prevalence difference between the two conditions into account, the type 1 sample will be much smaller than the type 2 sample in pretty much all analytical contexts, so a small number of misidentified type 2 individuals can have a large impact on the type 1 results. Type 1 patients misidentified as type 2 patients are in general a much smaller problem for the validity of the type 2 analysis; misidentification of that kind will cause a loss of power in the type 1 subgroup analysis, which is already low to start with (and it will also make the type 1 subgroup analysis even more vulnerable to misidentified type 2s), but it will not change the results of the type 2 subgroup analysis in any significant way. Relatedly, even if enough type 2 patients are misidentified to cause problems with the interpretation of the type 1 subgroup analysis, this would not on its own be a good reason to doubt the results of the type 2 subgroup analysis. Another thing to note is that misidentification will tend to lead to ‘mixing’, i.e. it will make the subgroup results look similar; so when outcomes are not similar in type 1 and type 2 individuals, this might be taken as an indicator that something potentially interesting is going on, because most analyses will struggle with some level of misidentification, which will tend to reduce the power of tests of group differences.
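To illustrate the asymmetry with a toy simulation (all group sizes and outcome rates below are made up): with type 2 patients outnumbering type 1 patients by roughly 10 to 1, misclassifying even a few percent of the type 2 group into the type 1 group pulls the observed type 1 outcome rate a long way toward the type 2 rate, while misclassification in the other direction barely moves the type 2 estimate.

```python
# Toy illustration of how misclassification asymmetrically distorts subgroup
# estimates when one subgroup (type 2) is roughly ten times larger than the
# other (type 1). All prevalences and outcome rates are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n_t1, n_t2 = 10_000, 100_000          # true group sizes (made-up 1:10 ratio)
p_t1, p_t2 = 0.30, 0.10               # made-up true outcome rates per group

true_type = np.array([1] * n_t1 + [2] * n_t2)
outcome = np.where(true_type == 1,
                   rng.random(true_type.size) < p_t1,
                   rng.random(true_type.size) < p_t2)

for misclass_rate in (0.0, 0.02, 0.05):
    # Each patient is assigned the wrong type with probability misclass_rate.
    flip = rng.random(true_type.size) < misclass_rate
    observed_type = np.where(flip, 3 - true_type, true_type)
    est_t1 = outcome[observed_type == 1].mean()
    est_t2 = outcome[observed_type == 2].mean()
    print(f"misclassification {misclass_rate:.0%}: "
          f"observed type 1 rate {est_t1:.3f}, type 2 rate {est_t2:.3f}")
```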

What about stroke outcomes? A few observations on that topic were included above, but the book has a lot more to say about it:

“DM is an independent risk factor of death from stroke […]. Tuomilehto et al. (35) calculated that 16% of all stroke mortality in men and 33% in women could be directly attributed to DM. Patients with DM have higher hospital and long-term stroke mortality, more pronounced residual neurological deficits, and more severe disability after acute cerebrovascular accidents […]. The 1-year mortality rate, for example, was twofold higher in diabetic patients compared to non-diabetic subjects (50% vs. 25%) […]. Only 20% of people with DM survive over 5 years after the first stroke and half of these patients die within the first year (36, 128). […] The mechanisms underlying the worse outcome of stroke in diabetic subjects are not fully understood. […] Regarding prevention of stroke in patients with DM, it may be less relevant than in non-DM subjects to distinguish between primary and secondary prevention as all patients with DM are considered to be high-risk subjects regardless of the history of cerebrovascular accidents or the presence of clinical and subclinical vascular lesions. […] The influence of the mode of antihyperglycemic treatment on the risk of stroke is uncertain.

Control of blood pressure is very important in the diabetic setting:

“There are no doubts that there is a linear relation between elevated systolic blood pressure and the risk of stroke, both in people with or without DM. […] Although DM and arterial hypertension represent significant independent risk factors for stroke if they co-occur in the same patient the risk increases dramatically. A prospective study of almost 50 thousand subjects in Finland followed up for 19 years revealed that the hazard ratio for stroke incidence was 1.4, 2.0, 2.5, 3.5, and 4.5 and for stroke mortality was 1.5, 2.6, 3.1, 5.6, and 9.3, respectively, in subjects with an isolated modestly elevated blood pressure (systolic 140–159/diastolic 90–94 mmHg), isolated more severe hypertension (systolic >159 mmHg, diastolic >94 mmHg, or use of antihypertensive drugs), with isolated DM only, with both DM and modestly elevated blood pressure, and with both DM and more severe hypertension, relative to subjects without either of the risk factors (168). […] it remains unclear whether some classes of antihypertensive agents provide a stronger protection against stroke in diabetic patients than others. […] effective antihypertensive treatment is highly beneficial for reduction of stroke risk in diabetic patients, but the advantages of any particular class of antihypertensive medications are not substantially proven.”

Treatment of dyslipidemia is also very important, but here it does seem to matter how you treat it:

“It seems that the beneficial effect of statins is dose-dependent. The lower the LDL level that is achieved the stronger the cardiovascular protection. […] Recently, the results of the meta-analysis of 14 randomized trials of statins in 18,686 patients with DM had been published. It was calculated that statins use in diabetic patients can result in a 21% reduction of the risk of any stroke per 1 mmol/l reduction of LDL achieved […] There is no evidence from trials that supports efficacy of fibrates for stroke prevention in diabetic patients. […] No reduction of stroke risk by fibrates was shown also in a meta-analysis of eight trials enrolled 12,249 patients with type 2 DM (204).”

Antiplatelets?

“Significant reductions in stroke risk in diabetic patients receiving antiplatelet therapy were found in large-scale controlled trials (205). It appears that based on the high incidence of stroke and prevalence of stroke risk factors in the diabetic population the benefits of routine aspirin use for primary and secondary stroke prevention outweigh its potential risk of hemorrhagic stroke especially in patients older than 30 years having at least one additional risk factor (206). […] both guidelines issued by the AHA/ADA or the ESC/EASD on the prevention of cardiovascular disease in patients with DM support the use of aspirin in a dose of 50–325 mg daily for the primary prevention of stroke in subjects older than 40 years of age and additional risk factors, such as DM […] The newer antiplatelet agent, clopidogrel, was more efficacious in prevention of ischemic stroke than aspirin with greater risk reduction in the diabetic cohort especially in those treated with insulin compared to non-diabetics in CAPRIE trial (209). However, the combination of aspirin and clopidogrel does not appear to be more efficacious and safe compared to clopidogrel or aspirin alone”.

When you treat all risk factors aggressively, it turns out that the elevated stroke risk can be substantially reduced. Again the data on this stuff is from Denmark:

“Gaede et al. (216) have shown in the Steno 2 study that intensive multifactorial intervention aimed at correction of hyperglycemia, hypertension, dyslipidemia, and microalbuminuria along with aspirin use resulted in a reduction of cardiovascular morbidity including non-fatal stroke […] recently the results of the extended 13.3 years follow-up of this study were presented and the reduction of cardiovascular mortality by 57% and morbidity by 59% along with the reduction of the number of non-fatal stroke (6 vs. 30 events) in intensively treated group was convincingly demonstrated (217). Antihypertensive, hypolipidemic treatment, use of aspirin should thus be recommended as either primary or secondary prevention of stroke for patients with DM.”

March 3, 2017 Posted by | Books, Cardiology, Diabetes, Epidemiology, Medicine, Neurology, Pharmacology, Statistics | Leave a comment

Quotes

i. “The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data.” (John Tukey)

ii. “Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise.” (-ll-)

iii. “They who can no longer unlearn have lost the power to learn.” (John Lancaster Spalding)

iv. “If there are but few who interest thee, why shouldst thou be disappointed if but few find thee interesting?” (-ll-)

v. “Since the mass of mankind are too ignorant or too indolent to think seriously, if majorities are right it is by accident.” (-ll-)

vi. “As they are the bravest who require no witnesses to their deeds of daring, so they are the best who do right without thinking whether or not it shall be known.” (-ll-)

vii. “Perfection is beyond our reach, but they who earnestly strive to become perfect, acquire excellences and virtues of which the multitude have no conception.” (-ll-)

viii. “We are made ridiculous less by our defects than by the affectation of qualities which are not ours.” (-ll-)

ix. “If thy words are wise, they will not seem so to the foolish: if they are deep the shallow will not appreciate them. Think not highly of thyself, then, when thou art praised by many.” (-ll-)

x. “Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.” (George E. P. Box)

xi. “Intense ultraviolet (UV) radiation from the young Sun acted on the atmosphere to form small amounts of very many gases. Most of these dissolved easily in water, and fell out in rain, making Earth’s surface water rich in carbon compounds. […] the most important chemical of all may have been cyanide (HCN). It would have formed easily in the upper atmosphere from solar radiation and meteorite impact, then dissolved in raindrops. Today it is broken down almost at once by oxygen, but early in Earth’s history it built up at low concentrations in lakes and oceans. Cyanide is a basic building block for more complex organic molecules such as amino acids and nucleic acid bases. Life probably evolved in chemical conditions that would kill us instantly!” (Richard Cowen, History of Life, p.8)

xii. “Dinosaurs dominated land communities for 100 million years, and it was only after dinosaurs disappeared that mammals became dominant. It’s difficult to avoid the suspicion that dinosaurs were in some way competitively superior to mammals and confined them to small body size and ecological insignificance. […] Dinosaurs dominated many guilds in the Cretaceous, including that of large browsers. […] in terms of their reconstructed behavior […] dinosaurs should be compared not with living reptiles, but with living mammals and birds. […] By the end of the Cretaceous there were mammals with varied sets of genes but muted variation in morphology. […] All Mesozoic mammals were small. Mammals with small bodies can play only a limited number of ecological roles, mainly insectivores and omnivores. But when dinosaurs disappeared at the end of the Cretaceous, some of the Paleocene mammals quickly evolved to take over many of their ecological roles” (ibid., pp. 145, 154, 222, 227-228)

xiii. “To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.” (Ronald Fisher)

xiv. “Ideas are incestuous.” (Howard Raiffa)

xv. “Game theory […] deals only with the way in which ultrasmart, all knowing people should behave in competitive situations, and has little to say to Mr. X as he confronts the morass of his problem.” (-ll-)

xvi. “One of the principal objects of theoretical research is to find the point of view from which the subject appears in the greatest simplicity.” (Josiah Willard Gibbs)

xvii. “Nothing is as dangerous as an ignorant friend; a wise enemy is to be preferred.” (Jean de La Fontaine)

xviii. “Humility is a virtue all preach, none practice; and yet everybody is content to hear.” (John Selden)

xix. “Few men make themselves masters of the things they write or speak.” (-ll-)

xx. “Wise men say nothing in dangerous times.” (-ll-)


January 15, 2016 Posted by | Biology, Books, Paleontology, Quotes/aphorisms, Statistics

Principles of Applied Statistics

“Statistical considerations arise in virtually all areas of science and technology and, beyond these, in issues of public and private policy and in everyday life. While the detailed methods used vary greatly in the level of elaboration involved and often in the way they are described, there is a unity of ideas which gives statistics as a subject both its intellectual challenge and its importance […] In this book we have aimed to discuss the ideas involved in applying statistical methods to advance knowledge and understanding. It is a book not on statistical methods as such but, rather, on how these methods are to be deployed […] We are writing partly for those working as applied statisticians, partly for subject-matter specialists using statistical ideas extensively in their work and partly for masters and doctoral students of statistics concerned with the relationship between the detailed methods and theory they are studying and the effective application of these ideas. Our aim is to emphasize how statistical ideas may be deployed fruitfully rather than to describe the details of statistical techniques.”

I gave the book five stars, though as noted in my review on goodreads I’m not sure the word ‘amazing’ really fits – but the book had a lot of good stuff and very little for me to quibble about, so I figured it deserved a high rating. The book deals to a very large extent with topics which are in some sense common to pretty much all statistical analyses, regardless of the research context: formulation of research questions/hypotheses, data search, study designs, data analysis, and interpretation. The authors spend quite a few pages on hypothesis testing but none on statistical information criteria, a topic with which I’m at this point at least reasonably familiar; had I been slightly more critical I might have subtracted a star for that omission, but I have the impression that I’m at times too hard on non-fiction books on goodreads, so I decided not to punish the book for it. Part of the reason why I gave the book five stars is also that I’ve sort of wanted to read a book like this one for a while; I think in some sense it’s the first of its kind I’ve read. I liked the way the book was structured.

Below I have added some observations from the book, as well as a few comments (I should note that I have had to leave out a lot of good stuff).

“When the data are very extensive, precision estimates calculated from simple standard statistical methods are likely to underestimate error substantially owing to the neglect of hidden correlations. A large amount of data is in no way synonymous with a large amount of information. In some settings at least, if a modest amount of poor quality data is likely to be modestly misleading, an extremely large amount of poor quality data may be extremely misleading.”
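To make the ‘hidden correlations’ point concrete, here is a minimal simulation sketch of my own (not from the book), assuming a simple shared-cluster-effect setup: observations come in clusters that share a common random effect, and the naive standard error of the overall mean, computed as if all observations were independent, badly understates the true sampling variability – in this particular setup by roughly a factor of five.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_and_naive_se(n_clusters=100, cluster_size=50,
                      sigma_between=1.0, sigma_within=1.0):
    """One clustered dataset; return the sample mean and the naive SE
    computed as if all observations were independent."""
    cluster_effect = rng.normal(0.0, sigma_between, n_clusters)
    y = (cluster_effect[:, None]
         + rng.normal(0.0, sigma_within, (n_clusters, cluster_size))).ravel()
    return y.mean(), y.std(ddof=1) / np.sqrt(y.size)

means, naive_ses = zip(*(mean_and_naive_se() for _ in range(2000)))
print("empirical SD of the sample mean:", np.std(means))      # ~0.10 here
print("average naive standard error:  ", np.mean(naive_ses))  # ~0.02 here
```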

“For studies of a new phenomenon it will usually be best to examine situations in which the phenomenon is likely to appear in the most striking form, even if this is in some sense artificial or not representative. This is in line with the well-known precept in mathematical research: study the issue in the simplest possible context that is not entirely trivial, and later generalize.”

“It often […] aids the interpretation of an observational study to consider the question: what would have been done in a comparable experiment?”

“An important and perhaps sometimes underemphasized issue in empirical prediction is that of stability. Especially when repeated application of the same method is envisaged, it is unlikely that the situations to be encountered will exactly mirror those involved in setting up the method. It may well be wise to use a procedure that works well over a range of conditions even if it is sub-optimal in the data used to set up the method.”

“Many investigations have the broad form of collecting similar data repeatedly, for example on different individuals. In this connection the notion of a unit of analysis is often helpful in clarifying an approach to the detailed analysis. Although this notion is more generally applicable, it is clearest in the context of randomized experiments. Here the unit of analysis is that smallest subdivision of the experimental material such that two distinct units might be randomized (randomly allocated) to different treatments. […] In general the unit of analysis may not be the same as the unit of interpretation, that is to say, the unit about which conclusions are to be drawn. The most difficult situation is when the unit of analysis is an aggregate of several units of interpretation, leading to the possibility of ecological bias, that is, a systematic difference between, say, the impact of explanatory variables at different levels of aggregation. […] it is important to identify the unit of analysis, which may be different in different parts of the analysis […] on the whole, limited detail is needed in examining the variation within the unit of analysis in question.”
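The ecological-bias point is easy to illustrate with a small simulated example (mine, not the authors’): when a group-level confounder is at work, an analysis that takes the group aggregate as the unit of analysis can produce a slope of the opposite sign to the within-group, individual-level slope.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_per_group = 40, 200

mu = rng.normal(0.0, 2.0, n_groups)        # group-level means of x
intercept = -2.0 * mu                      # group-level confounding

x = mu[:, None] + rng.normal(0.0, 1.0, (n_groups, n_per_group))
y = intercept[:, None] + 1.0 * x + rng.normal(0.0, 1.0, (n_groups, n_per_group))

# Unit of analysis = individual (within groups): slope close to the true +1
within_slope = np.mean([np.polyfit(x[g], y[g], 1)[0] for g in range(n_groups)])

# Unit of analysis = group aggregate: slope close to -1, i.e. the opposite sign
aggregate_slope = np.polyfit(x.mean(axis=1), y.mean(axis=1), 1)[0]

print(within_slope, aggregate_slope)
```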

The book briefly discusses the scale of effort involved when thinking about appropriate study designs and how much/which data to gather for analysis, and notes that the associated costs are often not quantified explicitly – rather, a judgment call is made. An important related point is that e.g. in survey contexts response patterns will tend to depend on the quantity of information requested; if you ask for too much, few people may reply (…and perhaps it’s also ‘the wrong people’ who reply? The authors don’t touch upon the potential selection-bias issue, but it seems relevant). A few key observations from the book on this topic:

“the intrinsic quality of data, for example the response rates of surveys, may be degraded if too much is collected. […] sampling may give higher [data] quality than the study of a complete population of individuals. […] When researchers studied the effect of the expected length (10, 20 or 30 minutes) of a web-based questionnaire, they found that fewer potential respondents started and completed questionnaires expected to take longer (Galesic and Bosnjak, 2009). Furthermore, questions that appeared later in the questionnaire were given shorter and more uniform answers than questions that appeared near the start of the questionnaire.”

Not surprising, but certainly worth keeping in mind. Moving on…

“In general, while principal component analysis may be helpful in suggesting a base for interpretation and the formation of derived variables there is usually considerable arbitrariness involved in its use. This stems from the need to standardize the variables to comparable scales, typically by the use of correlation coefficients. This means that a variable that happens to have atypically small variability in the data will have a misleadingly depressed weight in the principal components.”
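Here is a small numpy sketch (my own illustration, using simulated data) of the arbitrariness the authors mention: the loadings of the first principal component depend strongly on whether the variables enter on their raw scales (covariance matrix) or are standardized to unit variance (correlation matrix), and a variable measured on a very small scale is weighted quite differently in the two analyses.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

x1 = rng.normal(0.0, 10.0, n)                 # large-scale variable
x2 = 0.5 * x1 + rng.normal(0.0, 5.0, n)       # related, moderate scale
x3 = 0.01 * x1 + rng.normal(0.0, 0.05, n)     # same signal, tiny scale
X = np.column_stack([x1, x2, x3])

def first_pc(matrix):
    """Loadings of the first principal component (leading eigenvector)."""
    _, eigvecs = np.linalg.eigh(matrix)       # eigenvalues in ascending order
    return eigvecs[:, -1]

print("covariance-based loadings: ", first_pc(np.cov(X, rowvar=False)))
print("correlation-based loadings:", first_pc(np.corrcoef(X, rowvar=False)))
# On the raw scales x3 barely registers; after standardization its weight is
# comparable to x1 and x2. The scaling choice drives the answer.
```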

The book includes a few pages about the Berkson error model, which I’d never heard of. Wikipedia doesn’t have much about it either – if the wikipedia article had covered the topic in any detail I probably wouldn’t have done more than include the link here, but it doesn’t – so it seemed important enough to write a few words about. The basic difference between the ‘classical’ error model, i.e. the one everybody knows about, and the Berkson error model is that in the former the measurement error is statistically independent of the true value of X, whereas in the latter the error is independent of the measured value; the authors note that this implies that under a Berkson model the true values are more variable than the measured values. Berkson errors can arise e.g. in experimental contexts where the levels of a variable are pre-set at some target, for example in a medical context where a drug is supposed to be administered every X hours; the pre-set levels are then the measured values, and the true values may differ, e.g. if the nurse was late. I thought this error model worth mentioning not only because it was a completely new idea to me that you might encounter this sort of error-generating process, but also because there is no statistical test you can use to figure out whether the classical error model or a Berkson error model is the appropriate one; you need to be aware of the difference and think about which model works best, based on the nature of the measuring process.
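A minimal simulation sketch of the two error models (again my own, just to fix ideas): under classical error the measured values scatter around the true values and are therefore the more variable of the two, whereas under Berkson error the true values scatter around the measured (pre-set) values and the variance relation is reversed.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
error_sd = 1.0

# Classical error: the measurement scatters around the true value,
# so the measured values are MORE variable than the true ones.
true_c = rng.normal(10.0, 2.0, n)
measured_c = true_c + rng.normal(0.0, error_sd, n)

# Berkson error: the true value scatters around the measured/pre-set value
# (e.g. a nominal dose or target level), so the TRUE values are more variable.
measured_b = rng.normal(10.0, 2.0, n)
true_b = measured_b + rng.normal(0.0, error_sd, n)

print("classical: var(true) =", true_c.var(), " var(measured) =", measured_c.var())
print("Berkson:   var(true) =", true_b.var(), " var(measured) =", measured_b.var())
```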

Let’s move on to some quotes dealing with modeling:

“while it is appealing to use methods that are in a reasonable sense fully efficient, that is, extract all relevant information in the data, nevertheless any such notion is within the framework of an assumed model. Ideally, methods should have this efficiency property while preserving good behaviour (especially stability of interpretation) when the model is perturbed. Essentially a model translates a subject-matter question into a mathematical or statistical one and, if that translation is seriously defective, the analysis will address a wrong or inappropriate question […] The greatest difficulty with quasi-realistic models [as opposed to ‘toy models’] is likely to be that they require numerical specification of features for some of which there is very little or no empirical information. Sensitivity analysis is then particularly important.”

“Parametric models typically represent some notion of smoothness; their danger is that particular representations of that smoothness may have strong and unfortunate implications. This difficulty is covered for the most part by informal checking that the primary conclusions do not depend critically on the precise form of parametric representation. To some extent such considerations can be formalized but in the last analysis some element of judgement cannot be avoided. One general consideration that is sometimes helpful is the following. If an issue can be addressed nonparametrically then it will often be better to tackle it parametrically; however, if it cannot be resolved nonparametrically then it is usually dangerous to resolve it parametrically.”

“Once a model is formulated two types of question arise. How can the unknown parameters in the model best be estimated? Is there evidence that the model needs modification or indeed should be abandoned in favour of some different representation? The second question is to be interpreted not as asking whether the model is true [this is the wrong question to ask, as also emphasized by Burnham & Anderson] but whether there is clear evidence of a specific kind of departure implying a need to change the model so as to avoid distortion of the final conclusions. […] it is important in applications to understand the circumstances under which different methods give similar or different conclusions. In particular, if a more elaborate method gives an apparent improvement in precision, what are the assumptions on which that improvement is based? Are they reasonable? […] the hierarchical principle implies, […] with very rare exceptions, that models with interaction terms should include also the corresponding main effects. […] When considering two families of models, it is important to consider the possibilities that both families are adequate, that one is adequate and not the other and that neither family fits the data.” [Do incidentally recall that in the context of interactions, “the term interaction […] is in some ways a misnomer. There is no necessary implication of interaction in the physical sense or synergy in a biological context. Rather, interaction means a departure from additivity […] This is expressed most explicitly by the requirement that, apart from random fluctuations, the difference in outcome between any two levels of one factor is the same at all levels of the other factor. […] The most directly interpretable form of interaction, certainly not removable by [variable] transformation, is effect reversal.”]
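A tiny worked example of the additivity point in the bracketed comment above (my own numbers, purely illustrative): in a 2×2 layout, ‘no interaction’ means that the difference between the two levels of one factor is the same at both levels of the other; an interaction, and in particular an effect reversal, breaks that.

```python
import numpy as np

# Cell means for a 2x2 layout: rows = levels of factor A, columns = levels of factor B.

additive = np.array([[10.0, 13.0],
                     [14.0, 17.0]])
# The A-difference is +4 at both levels of B: no interaction (additivity).
print(additive[1] - additive[0])       # [4. 4.]

crossed = np.array([[10.0, 13.0],
                    [14.0, 11.0]])
# The A-difference is +4 at one level of B and -2 at the other: a departure
# from additivity, and in fact an effect reversal, the kind the quoted passage
# notes cannot be removed by transforming the response.
print(crossed[1] - crossed[0])         # [ 4. -2.]
```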

“The p-value assesses the data […] via a comparison with that anticipated if H0 were true. If in two different situations the test of a relevant null hypothesis gives approximately the same p-value, it does not follow that the overall strengths of the evidence in favour of the relevant H0 are the same in the two cases.”

“There are […] two sources of uncertainty in observational studies that are not present in randomized experiments. The first is that the ordering of the variables may be inappropriate, a particular hazard in cross-sectional studies. […] if the data are tied to one time point then any presumption of causality relies on a working hypothesis as to whether the components are explanatory or responses. Any check on this can only be from sources external to the current data. […] The second source of uncertainty is that important explanatory variables affecting both the potential cause and the outcome may not be available. […] Retrospective explanations may be convincing if based on firmly established theory but otherwise need to be treated with special caution. It is well known in many fields that ingenious explanations can be constructed retrospectively for almost any finding.”

“The general issue of applying conclusions from aggregate data to specific individuals is essentially that of showing that the individual does not belong to a subaggregate for which a substantially different conclusion applies. In actuality this can at most be indirectly checked for specific subaggregates. […] It is not unknown in the literature to see conclusions such as that there are no treatment differences except for males aged over 80 years, living more than 50 km south of Birmingham and life-long supporters of Aston Villa football club, who show a dramatic improvement under some treatment T. Despite the undoubted importance of this particular subgroup, virtually always such conclusions would seem to be unjustified.” [I loved this example!]

The authors included a few interesting results from an undated Cochrane publication which I thought I should mention. The file-drawer effect is well known, but there are a few other interesting biases at play in a publication bias context. One is time-lag bias, which means that statistically significant results take less time to get published. Another is language bias; statistically significant results are more likely to be published in English publications. A third bias is multiple publication bias; it turns out that papers with statistically significant results are more likely to be published more than once. The last one mentioned is citation bias; papers with statistically significant results are more likely to be cited in the literature.

The authors include these observations in their concluding remarks: “The overriding general principle [in the context of applied statistics], difficult to achieve, is that there should be a seamless flow between statistical and subject-matter considerations. […] in principle seamlessness requires an individual statistician to have views on subject-matter interpretation and subject-matter specialists to be interested in issues of statistical analysis.”

As already mentioned this is a good book. It’s not long, and/but it’s worth reading if you’re in the target group.

November 22, 2015 Posted by | Books, Statistics

Quotes

i. “By all means think yourself big but don’t think everyone else small” (‘Notes on Flyleaf of Fresh ms. Book’, Scott’s Last Expedition. See also this).

ii. “The man who knows everyone’s job isn’t much good at his own.” (-ll-)

iii. “It is amazing what little harm doctors do when one considers all the opportunities they have” (Mark Twain, as quoted in the Oxford Handbook of Clinical Medicine, p.595).

iv. “A first-rate theory predicts; a second-rate theory forbids and a third-rate theory explains after the event.” (Aleksander Isaakovich Kitaigorodski)

v. “[S]ome of the most terrible things in the world are done by people who think, genuinely think, that they’re doing it for the best” (Terry Pratchett, Snuff).

vi. “That was excellently observ’d, say I, when I read a Passage in an Author, where his Opinion agrees with mine. When we differ, there I pronounce him to be mistaken.” (Jonathan Swift)

vii. “Death is nature’s master stroke, albeit a cruel one, because it allows genotypes space to try on new phenotypes.” (Quote from the Oxford Handbook of Clinical Medicine, p.6)

viii. “The purpose of models is not to fit the data but to sharpen the questions.” (Samuel Karlin)

ix. “We may […] view set theory, and mathematics generally, in much the way in which we view theoretical portions of the natural sciences themselves; as comprising truths or hypotheses which are to be vindicated less by the pure light of reason than by the indirect systematic contribution which they make to the organizing of empirical data in the natural sciences.” (Quine)

x. “At root what is needed for scientific inquiry is just receptivity to data, skill in reasoning, and yearning for truth. Admittedly, ingenuity can help too.” (-ll-)

xi. “A statistician carefully assembles facts and figures for others who carefully misinterpret them.” (Quote from Mathematically Speaking – A Dictionary of Quotations, p.329. Only source given in the book is: “Quoted in Evan Esar, 20,000 Quips and Quotes”)

xii. “A knowledge of statistics is like a knowledge of foreign languages or of algebra; it may prove of use at any time under any circumstances.” (Quote from Mathematically Speaking – A Dictionary of Quotations, p. 328. The source provided is: “Elements of Statistics, Part I, Chapter I (p.4)”).

xiii. “We own to small faults to persuade others that we have not great ones.” (Rochefoucauld)

xiv. “There is more self-love than love in jealousy.” (-ll-)

xv. “We should not judge of a man’s merit by his great abilities, but by the use he makes of them.” (-ll-)

xvi. “We should gain more by letting the world see what we are than by trying to seem what we are not.” (-ll-)

xvii. “Put succinctly, a prospective study looks for the effects of causes whereas a retrospective study examines the causes of effects.” (Quote from p.49 of Principles of Applied Statistics, by Cox & Donnelly)

xviii. “… he who seeks for methods without having a definite problem in mind seeks for the most part in vain.” (David Hilbert)

xix. “Give every man thy ear, but few thy voice” (Shakespeare).

xx. “Often the fear of one evil leads us into a worse.” (Nicolas Boileau-Despréaux)


November 22, 2015 Posted by | Books, Mathematics, Medicine, Philosophy, Quotes/aphorisms, Science, Statistics