Econstudentlog

Quantifying tumor evolution through spatial computational modeling

Two general remarks: 1. She talks very fast, in my opinion unpleasantly fast – the lecture would have been at least slightly easier to follow if she’d slowed down a little. 2. A few of the lectures uploaded in this lecture series (from the IAS Mathematical Methods in Cancer Evolution and Heterogeneity Workshop) seem to have some sound issues; in this lecture there are multiple 1-2 second long ‘chunks’ where the sound drops out and some words are lost. This is really annoying, and a similar problem (which was likely ‘the same problem’) previously led me to quit another lecture in the series; however in this case I decided to give it a shot anyway, and I actually think it’s not a big deal; the sound-losses are very short in duration, and usually no more than one or two words are lost, so you can usually figure out what was said. During this lecture there were incidentally also some issues with the monitor roughly 27 minutes in, but this isn’t a big deal as no information was lost, and unlike the people who originally attended the lecture you can just skip ahead approximately one minute (that was how long it took to solve that problem).

A few relevant links to stuff she talks about in the lecture:

A Big Bang model of human colorectal tumor growth.
Approximate Bayesian computation.
Site frequency spectrum.
Identification of neutral tumor evolution across cancer types.
Using tumour phylogenetics to identify the roots of metastasis in humans.


August 22, 2017 Posted by | Cancer/oncology, Evolutionary biology, Genetics, Lectures, Mathematics, Medicine, Statistics

A few diabetes papers of interest

i. Rates of Diabetic Ketoacidosis: International Comparison With 49,859 Pediatric Patients With Type 1 Diabetes From England, Wales, the U.S., Austria, and Germany.

“Rates of DKA in youth with type 1 diabetes vary widely nationally and internationally, from 15% to 70% at diagnosis (4) to 1% to 15% per established patient per year (9–11). However, data from systematic comparisons between countries are limited. To address this gap in the literature, we analyzed registry and audit data from three organizations: the Prospective Diabetes Follow-up Registry (DPV) in Germany and Austria, the National Paediatric Diabetes Audit (NPDA) in England and Wales, and the T1D Exchange (T1DX) in the U.S. These countries have similarly advanced, yet differing, health care systems in which data on DKA and associated factors are collected. Our goal was to identify indicators of risk for DKA admissions in pediatric patients with >1-year duration of disease with an aim to better understand where targeted preventive programs might lead to a reduction in the frequency of this complication of management of type 1 diabetes.”

“RESULTS The frequency of DKA was 5.0% in DPV, 6.4% in NPDA, and 7.1% in T1DX […] Mean HbA1c was lowest in DPV (63 mmol/mol [7.9%]), intermediate in T1DX (69 mmol/mol [8.5%]), and highest in NPDA (75 mmol/mol [9.0%]). […] In multivariable analyses, higher odds of DKA were found in females (odds ratio [OR] 1.23, 99% CI 1.10–1.37), ethnic minorities (OR 1.27, 99% CI 1.11–1.44), and HbA1c ≥7.5% (≥58 mmol/mol) (OR 2.54, 99% CI 2.09–3.09 for HbA1c from 7.5 to <9% [58 to <75 mmol/mol] and OR 8.74, 99% CI 7.18–10.63 for HbA1c ≥9.0% [≥75 mmol/mol]).”

Poor metabolic control is obviously very important, but it’s important to remember that poor metabolic control is in itself an outcome that needs to be explained. I would note that the mean HbA1c values here, especially that 75 mmol/mol one, seem really high; this is not a very satisfactory level of glycemic control and corresponds to an average glucose level of 12 mmol/l. And that’s a population average, meaning that many individuals have values much higher than this. Actually the most surprising thing to me about these data is that the DKA event rates are not much higher than they are, considering the level of metabolic control achieved. Another slightly surprising finding is that teenagers (13-17 yrs) were not actually all that much more likely to have experienced DKA than small children (0-6 yrs); the OR is only ~1.5. Of course this cannot be taken as an indication that DKA events in teenagers do not make up a substantial proportion of the total number of DKA events in pediatric samples, as the type 1 prevalence is much higher in teenagers than in small children (incidence peaks in adolescence).
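
For readers who want to double-check conversions like the ones above, here’s a minimal sketch in Python using the standard IFCC-to-NGSP master equation and the ADAG estimated-average-glucose regression – the constants below are the published conversion formulas, not anything taken from the paper:

    def ifcc_to_ngsp(hba1c_mmol_mol):
        """Convert IFCC HbA1c (mmol/mol) to NGSP/DCCT HbA1c (%)."""
        return 0.0915 * hba1c_mmol_mol + 2.15

    def estimated_average_glucose(hba1c_percent):
        """ADAG regression: estimated average glucose in mmol/l."""
        return 1.59 * hba1c_percent - 2.59

    for mmol_mol in (63, 69, 75):
        pct = ifcc_to_ngsp(mmol_mol)
        eag = estimated_average_glucose(pct)
        print(f"{mmol_mol} mmol/mol ~ {pct:.1f}% ~ {eag:.1f} mmol/l average glucose")
    # 75 mmol/mol works out to ~9.0% and an estimated average glucose of ~11.7 mmol/l,
    # i.e. roughly the 12 mmol/l mentioned above.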

“In 2004–2009 in the U.S., the mean hospital cost per pediatric DKA admission was $7,142 (range $4,125–11,916) (6), and insurance claims data from 2007 reported an excess of $5,837 in annual medical expenditures for youth with insulin-treated diabetes with DKA compared with those without DKA (7). In Germany, pediatric patients with diabetes with DKA had diabetes-related costs that were up to 3.6-fold higher compared with those without DKA (8).”

“DKA frequency was lower in pump users than in injection users (OR 0.84, 99% CI 0.76–0.93). Heterogeneity in the association with DKA between registries was seen for pump use and age category, and the overall rate should be interpreted accordingly. A lower rate of DKA in pump users was only found in T1DX, in contrast to no association of pump use with DKA in DPV or NPDA. […] In multivariable analyses […], age, type 1 diabetes duration, and pump use were not significantly associated with DKA in the fully adjusted model. […] pump use was associated with elevated odds of DKA in the <6-year-olds and in the 6- to <13-year-olds but with reduced odds of DKA in the 13- to <18-year-olds.”

Pump use should, all else equal, probably increase the risk of DKA, but all else is never equal, and in these data pump users actually had a lower DKA event rate than did diabetics treated with injections. One should not conclude from this finding that pump use decreases the risk of DKA; selection bias and unobserved heterogeneity are problems which are almost impossible to correct for adequately – and I find it highly unlikely that selection bias is only a potential problem in the US (see below). There are many different ways selection bias can become a relevant problem; financial- and insurance-related reasons (particularly relevant in the US, and likely the main factors the authors are considering) are far from the only ones. I could thus easily imagine selection dynamics playing a major role even in a hypothetical setting where all newly diagnosed children were started on pump therapy as a matter of course. In such a setting, very poorly controlled individuals might have 10 DKA events in a short amount of time because they didn’t take the necessary number of blood glucose tests/disregarded alarms/forgot or postponed refilling the pump when it was near-empty/failed to switch the battery in time/etc.; the diabetologist/endocrinologist might then recommend that these patients doing very poorly on pump treatment switch to injection therapy, and what you would end up with would be a compliant/motivated group of patients on pump therapy and a noncompliant/poorly motivated group on injection therapy. This could happen even though everybody started out on pump therapy, i.e. even though initial treatment assignment was completely unrelated to patient characteristics. Pump therapy requires more of the patient than injection therapy does, and if the patient is unwilling or unable to put in the work required, that treatment option will fail. In my opinion the default assumption here should be that these treatment groups are (‘significantly’) different, not that they are similar.

A few more quotes from the paper:

“The major finding of these analyses is high rates of pediatric DKA across the three registries, even though DKA events at the time of diagnosis were not included. In the prior 12 months, ∼1 in 20 (DPV), 1 in 16 (NPDA), and 1 in 14 (T1DX) pediatric patients with a duration of diabetes ≥1 year were diagnosed with DKA and required treatment in a health care facility. Female sex, ethnic minority status, and elevated HbA1c were consistent indicators of risk for DKA across all three registries. These indicators of increased risk for DKA are similar to previous reports (10,11,18,19), and our rates of DKA are within the range in the pediatric diabetes literature of 1–15% per established patient per year (10,11).

Compared with patients receiving injection therapy, insulin pump use was associated with a lower risk of DKA only in the U.S. in the T1DX, but no difference was seen in the DPV or NPDA. Country-specific factors on the associations of risk factors with DKA require further investigation. For pump use, selection bias may play a role in the U.S. The odds of DKA in pump users was not increased in any registry, which is a marked difference from some (10) but not all historic data (20).”

ii. Effect of Long-Acting Insulin Analogs on the Risk of Cancer: A Systematic Review of Observational Studies.

“NPH insulin has been the mainstay treatment for type 1 diabetes and advanced type 2 diabetes since the 1950s. However, this insulin is associated with an increased risk of nocturnal hypoglycemia, and its relatively short half-life requires frequent administration (1,2). Consequently, structurally modified insulins, known as long-acting insulin analogs (glargine and detemir), were developed in the 1990s to circumvent these limitations. However, there are concerns that long-acting insulin analogs may be associated with an increased risk of cancer. Indeed, some laboratory studies showed long-acting insulin analogs were associated with cancer cell proliferation and protected against apoptosis via their higher binding affinity to IGF-I receptors (3,4).

In 2009, four observational studies associated the use of insulin glargine with an increased risk of cancer (5–8). These studies raised important concerns but were also criticized for important methodological shortcomings (9–13). Since then, several observational studies assessing the association between long-acting insulin analogs and cancer have been published but yielded inconsistent findings (14–28). […] Several meta-analyses of observational studies have investigated the association between insulin glargine and cancer risk (34–37). These meta-analyses assessed the quality of included studies, but the methodological issues particular to pharmacoepidemiologic research were not fully considered. In addition, given the presence of important heterogeneity in this literature, the appropriateness of pooling the results of these studies remains unclear. We therefore conducted a systematic review of observational studies examining the association between long-acting insulin analogs and cancer incidence, with a particular focus on methodological strengths and weaknesses of these studies.”

“[W]e assessed the quality of studies for key components, including time-related biases (immortal time, time-lag, and time-window), inclusion of prevalent users, inclusion of lag periods, and length of follow-up between insulin initiation and cancer incidence.

Immortal time bias is defined by a period of unexposed person-time that is misclassified as exposed person-time or excluded, resulting in the exposure of interest appearing more favorable (40,41). Time-lag bias occurs when treatments used later in the disease management process are compared with those used earlier for less advanced stages of the disease. Such comparisons can result in confounding by disease duration or severity of disease if duration and severity of disease are not adequately considered in the design or analysis of the study (29). This is particularly true for chronic disease with dynamic treatment processes such as type 2 diabetes. Currently, American and European clinical guidelines suggest using basal insulin (e.g., NPH, glargine, and detemir) as a last line of treatment if HbA1c targets are not achieved with other antidiabetic medications (42). Therefore, studies that compare long-acting insulin analogs to nonbasal insulin may introduce confounding by disease duration. Time-window bias occurs when the opportunity for exposure differs between case subjects and control subjects (29,43).

The importance of considering a lag period is necessary for latency considerations (i.e., a minimum time between treatment initiation and the development of cancer) and to minimize protopathic and detection bias. Protopathic bias, or reverse causation, is present when a medication (exposure) is prescribed for early symptoms related to the outcome of interest, which can lead to an overestimation of the association. Lagging the exposure by a predefined time window in cohort studies or excluding exposures in a predefined time window before the event in case-control studies is a means of minimizing this bias (44). Detection bias is present when the exposure leads to higher detection of the outcome of interest due to the increased frequency of clinic visits (e.g., newly diagnosed patients with type 2 diabetes or new users of another antidiabetic medication), which also results in an overestimation of risk (45). Thus, including a lag period, such as starting follow-up after 1 year of the initiation of a drug, simultaneously considers a latency period while also minimizing protopathic and detection bias.”
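
To make the lag-period idea a bit more concrete, here’s a minimal, hypothetical sketch of how exposure lagging might be implemented for a cohort study in Python/pandas – the column names, dates and the 1-year lag are all illustrative assumptions of mine, not the design of any particular study in the review:

    import pandas as pd

    # Hypothetical cohort: one row per patient, with the insulin initiation date,
    # end of follow-up, and (possibly missing) cancer diagnosis date.
    cohort = pd.DataFrame({
        "patient_id": [1, 2, 3],
        "initiation_date": pd.to_datetime(["2005-03-01", "2006-07-15", "2007-01-10"]),
        "end_of_followup": pd.to_datetime(["2012-01-01", "2010-12-31", "2013-06-30"]),
        "cancer_date": pd.to_datetime([pd.NaT, "2006-11-01", "2011-05-20"]),
    })

    LAG = pd.DateOffset(years=1)  # latency window: ignore the first year after initiation

    # 'Exposed' follow-up only starts after the lag period, so events occurring
    # shortly after initiation (potential protopathic/detection bias) are not
    # attributed to the drug.
    cohort["followup_start"] = cohort["initiation_date"] + LAG

    # Stop follow-up at the event or at the end of follow-up, whichever comes first.
    stop = cohort["cancer_date"].fillna(cohort["end_of_followup"])
    stop = stop.where(stop < cohort["end_of_followup"], cohort["end_of_followup"])

    cohort["event"] = (
        cohort["cancer_date"].notna() & (cohort["cancer_date"] >= cohort["followup_start"])
    )
    cohort["person_years"] = (stop - cohort["followup_start"]).dt.days.clip(lower=0) / 365.25

    # Patient 2's cancer falls inside the lag window, so it is not counted as an
    # exposed-time event and contributes no lagged person-time in this toy example.
    print(cohort[["patient_id", "event", "person_years"]])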

“We systematically searched MEDLINE and EMBASE from 2000 to 2014 to identify all observational studies evaluating the relationship between the long-acting insulin analogs and the risk of any and site-specific cancers (breast, colorectal, prostate). […] 16 cohort and 3 case-control studies were included in this systematic review (5–8,14–28). All studies evaluated insulin glargine, with four studies also investigating insulin detemir (15,17,25,28). […] The study populations ranged from 1,340 to 275,164 patients […]. The mean or median durations of follow-up and age ranged from 0.9 to 7.0 years and from 52.3 to 77.4 years, respectively. […] Thirteen of 15 studies reported no association between insulin glargine and detemir and any cancer. Four of 13 studies reported an increased risk of breast cancer with insulin glargine. In the quality assessment, 7 studies included prevalent users, 11 did not consider a lag period, 6 had time-related biases, and 16 had short (<5 years) follow-up.”

“Of the 19 studies in this review, immortal time bias may have been introduced in one study based on the time-independent exposure and cohort entry definitions that were used in this cohort study […] Time-lag bias may have occurred in four studies […] A variation of time-lag bias was observed in a cohort study of new insulin users (28). For the exposure definition, highest duration since the start of insulin use was compared with the lowest. It is expected that the risk of cancer would increase with longer duration of insulin use; however, the opposite was reported (with RRs ranging from 0.50 to 0.90). The protective association observed could be due to competing risks (e.g., death from cardiovascular-related events) (47,48). Patients with diabetes have a higher risk of cardiovascular-related deaths compared with patients with no diabetes (49,50). Therefore, patients with diabetes who die of cardiovascular-related events do not have the opportunity to develop cancer, resulting in an underestimation of the risk of cancer. […] Time-window bias was observed in two studies (18,22). […] HbA1c and diabetes duration were not accounted for in 15 of the 19 studies, resulting in likely residual confounding (7,8,14–18,20–26,28). […] Seven studies included prevalent users of insulin (8,15,18,20,21,23,25), which is problematic because of the corresponding depletion of susceptible subjects in other insulin groups compared with long-acting insulin analogs. Protopathic or detection bias could have resulted in 11 of the 19 studies because a lag period was not incorporated in the study design (6,7,14–16,18–21,23,28).”

“CONCLUSIONS The observational studies examining the risk of cancer associated with long-acting insulin analogs have important methodological shortcomings that limit the conclusions that can be drawn. Thus, uncertainty remains, particularly for breast cancer risk.”

iii. Impact of Socioeconomic Status on Cardiovascular Disease and Mortality in 24,947 Individuals With Type 1 Diabetes.

“Socioeconomic status (SES) is a powerful predictor of cardiovascular disease (CVD) and death. We examined the association in a large cohort of patients with type 1 diabetes. […] Clinical data from the Swedish National Diabetes Register were linked to national registers, whereby information on income, education, marital status, country of birth, comorbidities, and events was obtained. […] Type 1 diabetes was defined on the basis of epidemiologic data: treatment with insulin and a diagnosis at the age of 30 years or younger. This definition has been validated as accurate in 97% of the cases listed in the register (14).”

“We included 24,947 patients. Mean (SD) age and follow-up was 39.1 (13.9) and 6.0 (1.0) years. Death and fatal/nonfatal CVD occurred in 926 and 1378 individuals. Compared with being single, being married was associated with 50% lower risk of death, cardiovascular (CV) death, and diabetes-related death. Individuals in the two lowest quintiles had twice as great a risk of fatal/nonfatal CVD, coronary heart disease, and stroke and roughly three times as great a risk of death, diabetes-related death, and CV death as individuals in the highest income quintile. Compared with having ≤9 years of education, individuals with a college/university degree had 33% lower risk of fatal/nonfatal stroke.”

“Individuals with 10–12 years of education were comparable at baseline (considering distribution of age and sex) with those with a college/university degree […]. Individuals with a college/university degree had higher income, had 5 mmol/mol lower HbA1c, were more likely to be married/cohabiting, used insulin pump more frequently (17.5% vs. 14.5%), smoked less (5.8% vs. 13.1%), and had less albuminuria (10.8% vs. 14.2%). […] Women had substantially lower income and higher education, were more often married, used insulin pump more frequently, had less albuminuria, and smoked more frequently than men […] Individuals with high income were more likely to be married/cohabiting, had lower HbA1c, and had lower rates of smoking as well as albuminuria”.

“CONCLUSIONS Low SES increases the risk of CVD and death by a factor of 2–3 in type 1 diabetes.”

“The effect of SES was striking despite rigorous adjustments for risk factors and confounders. Individuals in the two lowest income quintiles had two to three times higher risk of CV events and death than those in the highest income quintile. Compared with low educational level, having high education was associated with ∼30% lower risk of stroke. Compared with being single, individuals who were married/cohabiting had >50% lower risk of death, CV death, and diabetes-related death. Immigrants had 20–40% lower risk of fatal/nonfatal CVD, all-cause death, and diabetes-related death. Additionally, we show that males had 44%, 63%, and 29% higher risk of all-cause death, CV death, and diabetes-related death, respectively.

Despite rigorous adjustments for covariates and equitable access to health care at a negligible cost (20,21), SES and sex were robust predictors of CVD disease and mortality in type 1 diabetes; their effect was comparable with that of smoking, which represented an HR of 1.56 (95% CI 1.29–1.91) for all-cause death. […] Our study shows that men with type 1 diabetes are at greater risk of CV events and death compared with women. This should be viewed in the light of a recent meta-analysis of 26 studies, which showed higher excess risk in women compared with men. Overall, women had 40% greater excess risk of all-cause mortality, and twice the excess risk of fatal/nonfatal vascular events, compared with men (29). Thus, whereas the excess risk (i.e., the risk of patients with diabetes compared with the nondiabetic population) of vascular disease is higher in women with diabetes, we show that men with diabetes are still at substantially greater risk of all-cause death, CV death, and diabetes death compared with women with diabetes. Other studies are in line with our findings (10,11,13,30–32).”

iv. Interventions That Restore Awareness of Hypoglycemia in Adults With Type 1 Diabetes: A Systematic Review and Meta-analysis.

“Hypoglycemia remains the major limiting factor toward achieving good glycemic control (1). Recurrent hypoglycemia reduces symptomatic and hormone responses to subsequent hypoglycemia (2), associated with impaired awareness of hypoglycemia (IAH). IAH occurs in up to one-third of adults with type 1 diabetes (T1D) (3,4), increasing their risk of severe hypoglycemia (SH) sixfold (3) and contributing to substantial morbidity, with implications for employment (5), driving (6), and mortality. Distribution of risk of SH is skewed: one study showed that 5% of subjects accounted for 54% of all SH episodes, with IAH one of the main risk factors (7). “Dead-in-bed,” related to nocturnal hypoglycemia, is a leading cause of death in people with T1D <40 years of age (8).”

“This systematic review assessed the clinical effectiveness of treatment strategies for restoring hypoglycemia awareness (HA) and reducing SH risk in those with IAH and performed a meta-analysis, where possible, for different approaches in restoring awareness in T1D adults. Interventions to restore HA were broadly divided into three categories: educational (inclusive of behavioral), technological, and pharmacotherapeutic. […] Forty-three studies (18 randomized controlled trials, 25 before-and-after studies) met the inclusion criteria, comprising 27 educational, 11 technological, and 5 pharmacological interventions. […] A meta-analysis for educational interventions on change in mean SH rates per person per year was performed. Combining before-and-after and RCT studies, six studies (n = 1,010 people) were included in the meta-analysis […] A random-effects meta-analysis revealed an effect size of a reduction in SH rates of 0.44 per patient per year with 95% CI 0.253–0.628. [here’s the forest plot, US] […] Most of the educational interventions were observational and mostly retrospective, with few RCTs. The overall risk of bias is considered medium to high and the study quality moderate. Most, if not all, of the RCTs did not use double blinding and lacked information on concealment. The strength of association of the effect of educational interventions is moderate. The ability of educational interventions to restore IAH and reduce SH is consistent and direct with educational interventions showing a largely positive outcome. There is substantial heterogeneity between studies, and the estimate is imprecise, as reflected by the large CIs. The strength of evidence is moderate to high.”

v. Trends of Diagnosis-Specific Work Disability After Newly Diagnosed Diabetes: A 4-Year Nationwide Prospective Cohort Study.

“There is little evidence to show which specific diseases contribute to excess work disability among those with diabetes. […] In this study, we used a large nationwide register-based data set, which includes information on work disability for all working-age inhabitants of Sweden, in order to investigate trends of diagnosis-specific work disability (sickness absence and disability pension) among people with diabetes for 4 years directly after the recorded onset of diabetes. We compared work disability trends among people with diabetes with trends among those without diabetes. […] The register data of diabetes medication and in- and outpatient hospital visits were used to identify all recorded new diabetes cases among the population aged 25–59 years in Sweden in 2006 (n = 14,098). Data for a 4-year follow-up of ICD-10 physician-certified sickness absence and disability pension days (2007‒2010) were obtained […] Comparisons were made using a random sample of the population without recorded diabetes (n = 39,056).”

“RESULTS The most common causes of work disability were mental and musculoskeletal disorders; diabetes as a reason for disability was rare. Most of the excess work disability among people with diabetes compared with those without diabetes was owing to mental disorders (mean difference adjusted for confounding factors 18.8‒19.8 compensated days/year), musculoskeletal diseases (12.1‒12.8 days/year), circulatory diseases (5.9‒6.5 days/year), diseases of the nervous system (1.8‒2.0 days/year), and injuries (1.0‒1.2 days/year).”

“CONCLUSIONS The increased risk of work disability among those with diabetes is largely attributed to comorbid mental, musculoskeletal, and circulatory diseases. […] Diagnosis of diabetes as the cause of work disability was rare.”

August 19, 2017 Posted by | Cancer/oncology, Cardiology, Diabetes, Health Economics, Medicine, Statistics

Infectious Disease Surveillance (III)

I have added some more observations from the book below.

“Zoonotic diseases are infections transmitted between animals and humans […]. A recent survey identified more than 1,400 species of human disease–causing agents, over half (58%) of which were zoonotic [2]. Moreover, nearly three-quarters (73%) of infectious diseases considered to be emerging or reemerging were zoonotic [2]. […] In many countries there is minimal surveillance for live animal imports or imported wildlife products. Minimal surveillance prevents the identification of wildlife trade–related health risks to the public, agricultural industry, and native wildlife [36] and has led to outbreaks of zoonotic diseases […] Southeast Asia [is] a hotspot for emerging zoonotic diseases because of rapid population growth, high population density, and high biodiversity […] influenza virus in particular is of zoonotic importance as multiple human infections have resulted from animal exposure [77–79].”

“[R]abies is an important cause of death in many countries, particularly in Africa and Asia [85]. Rabies is still underreported throughout the developing world, and 100-fold underreporting of human rabies is estimated for most of Africa [44]. Reasons for underreporting include lack of public health personnel, difficulties in identifying suspect animals, and limited laboratory capacity for rabies testing. […] Brucellosis […] is transmissible to humans primarily through consumption of unpasteurized milk or dairy products […] Brucella is classified as a category B bioterrorism agent [90] because of its potential for aerosolization [I should perhaps here mention that the book coverage does overlap a bit with that of Fong & Alibek’s book – which I covered here – but that I decided against covering those topics in much detail here – US] […] The key to preventing brucellosis in humans is to control or eliminate infections in animals [91–93]; therefore, veterinarians are crucial to the identification, prevention, and control of brucellosis [89]. […] Since 1954 [there has been] an ongoing eradication program involving surveillance testing of cattle at slaughter, testing at livestock markets, and whole-herd testing on the farm [in the US] […] Except for endemic brucellosis in wildlife in the Greater Yellowstone Area, all 50 states and territories in the United States are free of bovine brucellosis [94].”

“Because of its high mortality rate in humans in the absence of early treatment, Y. pestis is viewed as one of the most pathogenic human bacteria [101]. In the United States, plague is most often found in the Southwest where it is transmitted by fleas and maintained in rodent populations [102]. Deer mice and voles typically serve as maintenance hosts [and] these animals are often resistant to plague [102]. In contrast, in amplifying host species such as prairie dogs, ground squirrels, chipmunks, and wood rats, plague spreads rapidly and results in high mortality [103]. […] Human infections with Y. pestis can result in bubonic, pneumonic, or septicemic plague, depending on the route of exposure. Bubonic plague is most common; however, pneumonic plague poses a more serious public health risk since it can be easily transmitted person-to-person through inhalation of aerosolized bacteria […] Septicemic plague is characterized by bloodstream infection with Y. pestis and can occur secondary to pneumonic or bubonic forms of infection or as a primary infection [6,60].
Plague outbreaks are often correlated with animal die-offs in the area [104], and rodent control near human residences is important to prevent disease [103]. […] household pets can be an important route of plague transmission and flea control in dogs and cats is an important prevention measure [105]. Plague surveillance involves monitoring three populations for infection: vectors (e.g., fleas), humans, and rodents [106]. In the past 20 years, the numbers of human cases of plague reported in the United States have varied from 1 to 17 cases per year [90]. […]
Since rodent species are the main reservoirs of the bacteria, these animals can be used for sentinel surveillance to provide an early warning of the public health risk to humans [106]. […] Rodent die-offs can often be an early indicator of a plague outbreak”.

“Zoonotic disease surveillance is crucial for protection of human and animal health. An integrated, sustainable system that collects data on incidence of disease in both animals and humans is necessary to ensure prompt detection of zoonotic disease outbreaks and a timely and focused response [34]. Currently, surveillance systems for animals and humans [operate] largely independently [34]. This results in an inability to rapidly detect zoonotic diseases, particularly novel emerging diseases, that are detected in the human population only after an outbreak occurs [109]. While most industrialized countries have robust disease surveillance systems, many developing countries currently lack the resources to conduct both ongoing and real-time surveillance [34,43].”

“Acute hepatitis of any cause has similar, usually indistinguishable, signs and symptoms. Acute illness is associated with fever, fatigue, nausea, abdominal pain, followed by signs of liver dysfunction, including jaundice, light to clay-colored stool, dark urine, and easy bruising. The jaundice, dark urine, and abnormal stool are because of the diminished capacity of the inflamed liver to handle the metabolism of bilirubin, which is a breakdown product of hemoglobin released as red blood cells are normally replaced. In severe hepatitis that is associated with fulminant liver disease, the liver’s capacity to produce clotting factors and to clear potential toxic metabolic products is severely impaired, with resultant bleeding and hepatic encephalopathy. […] An effective vaccine to prevent hepatitis A has been available for more than 15 years, and incidence rates of hepatitis A are dropping wherever it is used in routine childhood immunization programs. […] Currently, hepatitis A vaccine is part of the U.S. childhood immunization schedule recommended by the Advisory Committee on Immunization Practices (ACIP) [31].”

“Chronic hepatitis — persistent and ongoing inflammation that can result from chronic infection — usually has minimal to no signs or symptoms […] Hepatitis B and C viruses cause acute hepatitis as well as chronic hepatitis. The acute component is often not recognized as an episode of acute hepatitis, and the chronic infection may have little or no symptoms for many years. With hepatitis B, clearance of infection is age related, as is presentation with symptoms. Over 90% of infants exposed to HBV develop chronic infection, while <1% have symptoms; 5–10% of adults develop chronic infection, but 50% or more have symptoms associated with acute infection. Among those who acquire hepatitis C, 15–45% clear the infection; the remainder have lifelong infection unless treated specifically for hepatitis C.”

“[D]ata are only received on individuals accessing care. Asymptomatic acute infection and poor or unavailable measurements for high risk populations […] have resulted in questionable estimates of the prevalence and incidence of hepatitis B and C. Further, a lack of understanding of the different types of viral hepatitis by many medical providers [18] has led to many undiagnosed individuals living with chronic infection, who are not captured in disease surveillance systems. […] Evaluation of acute HBV and HCV surveillance has demonstrated a lack of sensitivity for identifying acute infection in injection drug users; it is likely that most cases in this population go undetected, even if they receive medical care [36]. […] Best practices for conducting surveillance for chronic hepatitis B and C are not well established. […] The role of health departments in responding to infectious diseases is typically responding to acute disease. Response to chronic HBV infection is targeted to prevention of transmission to contacts of those infected, especially in high risk situations. Because of the high risk of vertical transmission and likely development of chronic disease in exposed newborns, identification and case management of HBV-infected pregnant women and their infants is a high priority. […] For a number of reasons, states do not conduct uniform surveillance for chronic hepatitis C. There is not agreement as to the utility of surveillance for chronic HCV infection, as it is a measurement of prevalent rather than incident cases.”

“Among all nationally notifiable diseases, three STDs (chlamydia, gonorrhea, and syphilis) are consistently in the top five most commonly reported diseases annually. These three STDs made up more than 86% of all reported diseases in the United States in 2010 [2]. […] The true burden of STDs is likely to be higher, as most infections are asymptomatic [4] and are never diagnosed or reported. A synthesis of a variety of data sources estimated that in 2008 there were over 100 million prevalent STDs and nearly 20 million incident STDs in the United States [5]. […] Nationally, 72% of all reported STDs are among persons aged 15–24 years [3], and it is estimated that 1 in 4 females aged 14–19 has an STD [7]. […] In 2011, the rates of chlamydia, gonorrhea, and primary and secondary syphilis among African-Americans were, respectively, 7.5, 16.9, and 6.7 times the rates among whites [3]. Additionally, men who have sex with men (MSM) are disproportionately infected with STDs. […] several analyses have shown risk ratios above 100 for the associations between being an MSM and having syphilis or HIV [9,10]. […] Many STDs can be transmitted congenitally during pregnancy or birth. In 2008, over 400,000 neonatal deaths and stillbirths were associated with syphilis worldwide […] untreated chlamydia and gonorrhea can cause ophthalmia neonatorum in newborns, which can result in blindness [13]. The medical and societal costs for STDs are high. […] One estimate in 2008 put national costs at $15.6 billion [15].”

“A significant challenge in STD surveillance is that the term “STD” encompasses a variety of infections. Currently, there are over 35 pathogens that can be transmitted sexually, including bacteria […] protozoa […] and ectoparasites […]. Some infections can cause clinical syndromes shortly after exposure, whereas others result in no symptoms or have a long latency period. Some STDs can be easily diagnosed using self-collected swabs, while others require a sample of blood or a physical examination by a clinician. Consequently, no one particular surveillance strategy works for all STDs. […] The asymptomatic nature of most STDs limits inferences from case-based surveillance, since in order to be counted in this system an infection must be diagnosed and reported. Additionally, many infections never result in disease. For example, an estimated 90% of human papillomavirus (HPV) infections resolve on their own without sequelae [24]. As such, simply counting infections may not be appropriate, and sequelae must also be monitored. […] Strategies for STD surveillance include case reporting; sentinel surveillance; opportunistic surveillance, including use of administrative data and positivity in screened populations; and population-based studies […] the choice of strategy depends on the type of STD and the population of interest.”

“Determining which diseases and conditions should be included in mandatory case reporting requires balancing the benefits to the public health system (e.g., utility of the data) with the costs and burdens of case reporting. While many epidemiologists and public health practitioners follow the mantra “the more data, the better,” the costs (in both dollars and human resources) of developing and maintaining a robust case-based reporting system can be large. Case-based surveillance has been mandated for chlamydia, gonorrhea, syphilis, and chancroid nationally; but expansion of state-initiated mandatory reporting for other STDs is controversial.”

August 18, 2017 Posted by | Books, Epidemiology, Immunology, Infectious disease, Medicine

Type 1 Diabetes Is Associated With an Increased Risk of Fracture Across the Life Span

Type 1 Diabetes Is Associated With an Increased Risk of Fracture Across the Life Span: A Population-Based Cohort Study Using The Health Improvement Network (THIN).

I originally intended to include this paper in a standard diabetes post like this one, but the post gradually got more and more unwieldy as I added more stuff and so in the end I decided – like in this case – that it might be a better idea to just devote an entire post to the paper and then postpone my coverage of some of the other papers included in the post.

I’ve talked about this stuff before, but I’m almost certain the results of this paper were not included in Czernik and Fowlkes’ book as this paper was published at almost exactly the same time as was the book. It provides further support of some of the observations included in C&F’s publication. This is a very large and important study in the context of the relationship between type 1 diabetes and skeletal health. I have quoted extensively from the paper below, and also added some observations of my own along the way in order to provide a little bit of context where it might be needed:

“There is an emerging awareness that diabetes adversely affects skeletal health and that type 1 diabetes affects the skeleton more severely than type 2 diabetes (5). Studies in humans and animal models have identified a number of skeletal abnormalities associated with type 1 diabetes, including deficits in bone mineral density (BMD) (6,7) and bone structure (8), decreased markers of bone formation (9,10), and variable alterations in markers of bone resorption (10,11).

Previous studies and two large meta-analyses reported that type 1 diabetes is associated with an increased risk of fracture (12–19). However, most of these studies were conducted in older adults and focused on hip fractures. Importantly, most affected individuals develop type 1 diabetes in childhood, before the attainment of peak bone mass, and therefore may be at increased risk of fracture throughout their life span. Moreover, because hip fractures are rare in children and young adults, studies limited to this outcome may underestimate the overall fracture burden in type 1 diabetes.

We used The Health Improvement Network (THIN) database to conduct a population-based cohort study to determine whether type 1 diabetes is associated with increased fracture incidence, to delineate age and sex effects on fracture risk, and to determine whether fracture site distribution is altered in participants with type 1 diabetes compared with participants without diabetes. […] 30,394 participants aged 0–89 years with type 1 diabetes were compared with 303,872 randomly selected age-, sex-, and practice-matched participants without diabetes. Cox regression analysis was used to determine hazard ratios (HRs) for incident fracture in participants with type 1 diabetes. […] A total of 334,266 participants, median age 34 years, were monitored for 1.9 million person-years. HR were lowest in males and females age <20 years, with HR 1.14 (95% CI 1.01–1.29) and 1.35 (95% CI 1.12–1.63), respectively. Risk was highest in men 60–69 years (HR 2.18 [95% CI 1.79–2.65]), and in women 40–49 years (HR 2.03 [95% CI 1.73–2.39]). Lower extremity fractures comprised a higher proportion of incident fractures in participants with versus those without type 1 diabetes (31.1% vs. 25.1% in males, 39.3% vs. 32% in females; P < 0.001). Secondary analyses for incident hip fractures identified the highest HR of 5.64 (95% CI 3.55–8.97) in men 60–69 years and the highest HR of 5.63 (95% CI 2.25–14.11) in women 30–39 years.”

“Conditions identified by diagnosis codes as covariates of interest were hypothyroidism, hyperthyroidism, adrenal insufficiency, celiac disease, inflammatory bowel disease, vitamin D deficiency, fracture before the start of the follow-up period, diabetic retinopathy, and diabetic neuropathy. All variables, with the exception of prior fracture, were treated as time-varying covariates. […] Multivariable Cox regression analysis was used to assess confounding by covariates of interest. Final models were stratified by age category (<20, 20–29, 30–39, 40–49, 50–59, 60–69, and ≥70 years) after age was found to be a significant predictor of fracture and to violate the assumption of proportionality of hazards […] Within each age stratum, models were again assessed for proportionality of hazards and further stratified where appropriate.”
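
For what it’s worth, the kind of stratified Cox model described above is easy to sketch; here’s a minimal, hypothetical example using the lifelines package in Python (the file and column names are made up for illustration, and the paper’s time-varying covariates would additionally require a start–stop/counting-process data layout, which I’ve left out):

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical data frame: one row per participant, with follow-up time (years),
    # an incident-fracture indicator, the exposure, and covariates.
    df = pd.read_csv("thin_cohort.csv")  # hypothetical file

    cph = CoxPHFitter()
    cph.fit(
        df[["followup_years", "fracture", "type1_diabetes", "sex",
            "prior_fracture", "corticosteroid_use", "age_category"]],
        duration_col="followup_years",
        event_col="fracture",
        strata=["age_category"],   # separate baseline hazard within each age stratum
    )
    cph.print_summary()            # hazard ratios (exp(coef)) with 95% CIs

Stratifying rather than adjusting for age category is one way to deal with the violation of proportional hazards across age that the authors mention, since each stratum gets its own baseline hazard.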

A brief note on a few of those covariates. Some of them are obvious, others perhaps less so. Retinopathy is probably included mainly due to the associated vision issues, rather than some sort of direct pathophysiological linkage between the conditions; vision problems may increase the risk of falls, particularly in the elderly, and falls increase the fracture risk (they note this later on in the paper). Neuropathy could in my opinion affect risk in multiple ways, not only through an increased fall risk; either way it certainly makes a lot of sense to include that variable if it’s available. Thyroid disorders can cause bone problems, and the incidence of thyroid disorders is elevated in type 1 – to the extent that e.g. the Oxford Handbook of Clinical Medicine recommends screening people with diabetes mellitus for abnormalities in thyroid function at the annual review. Both Addison‘s (adrenal insufficiency) and thyroid disorders in type 1 diabetics may be specific components of a more systemic autoimmune disease (relevant link here, see the last paragraph), by some termed autoimmune polyendocrine syndromes. When you treat people with Addison’s you give them glucocorticoids, and this treatment can have deleterious effects on bone density, especially in the long run – they note in the paper that exposure to corticosteroids is a significant fracture predictor in their models, which is not surprising. In one of the chapters included in Horowitz & Samson‘s book (again, I hope to cover it in more detail later…) the authors note that the combination of coeliac disease and diabetes may lead to protein malabsorption (among other things), which can obviously affect bone health, and they also observe e.g. that common lab abnormalities found in patients with coeliac include “low levels of haemoglobin, albumin, calcium, potassium, magnesium and iron” and furthermore that “extra-intestinal symptoms [include] muscle cramps, bone pain due to osteoporotic fractures or osteomalacia” – coeliac is obviously relevant here, especially as the condition is much more common in type 1 diabetics than in non-diabetics (“The prevalence of coeliac disease in type 1 diabetic children varies from 1.0% to 3.5%, which is at least 15 times higher than the prevalence among children without diabetes” – also an observation from H&S’s book, chapter 5).

Moving on…

“During the study period, incident fractures occurred in 2,615 participants (8.6%) with type 1 diabetes compared with 18,624 participants (6.1%) without diabetes. […] The incidence in males was greatest in the 10- to 20-year age bracket, at 297.2 and 261.3 fractures per 10,000 person-years in participants with and without type 1 diabetes, respectively. The fracture incidence in women was greatest in the 80- to 90-year age bracket, at 549.1 and 333.9 fractures per 10,000 person-years in participants with and without type 1 diabetes, respectively.”

It’s important to note that the first percentages reported above (8.6% vs 6.1%) may be slightly misleading as the follow-up periods for the two groups were dissimilar; type 1s in the study were on average followed for a shorter amount of time than were the controls (4.7 years vs 3.89 years), meaning that raw incident fracture risk estimates like these cannot be translated directly into person-year estimates. The risk differential is thus at least slightly higher than these percentages would suggest. A good view of how the person-year risk difference evolves as a function of age/time is given in the paper’s figure 2.
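
To illustrate the point with a quick back-of-the-envelope calculation (the event counts and cohort sizes are the ones quoted above, but the mean follow-up times plugged in below are illustrative placeholders of mine, not the paper’s figures):

    def rate_per_10000_py(events, n, mean_followup_years):
        """Crude incidence rate per 10,000 person-years."""
        return events / (n * mean_followup_years) * 10_000

    # Hypothetical mean follow-up: shorter in the type 1 group than among controls.
    t1d = rate_per_10000_py(events=2_615, n=30_394, mean_followup_years=4.0)
    ctrl = rate_per_10000_py(events=18_624, n=303_872, mean_followup_years=5.0)
    print(f"type 1: {t1d:.0f}/10,000 py; controls: {ctrl:.0f}/10,000 py; ratio {t1d / ctrl:.2f}")
    # The cumulative percentages (8.6% vs. 6.1%) suggest a ratio of ~1.4; with shorter
    # follow-up in the type 1 group the person-year rate ratio comes out higher.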

“Hip fractures alone comprised 5.5% and 11.6% of all fractures in males and females with type 1 diabetes, compared with 4.1% and 8.6% in males and females without diabetes (P = 0.04 for males and P = 0.001 for females). Participants with type 1 diabetes with a lower extremity fracture were more likely to have retinopathy (30% vs. 22.5%, P < 0.001) and neuropathy (5.4% vs. 2.9%, P = 0.001) compared with those with fractures at other sites. The median average HbA1c did not differ between the two groups.”

I’ll reiterate this because it’s important: they care about lower-extremity fractures because some of those kinds of fractures, especially hip fractures, have a really poor prognosis. It’s not just that it’s annoying and you’ll need a cast; I’ve seen estimates suggesting that roughly one-third of diabetics who sustain a hip fracture die within a year – a prognosis much worse than that of many cancers. A few relevant observations from Czernik and Fowlkes:

“Together, [studies conducted during the last 15 years on type 1 diabetics] demonstrate an unequivocally increased fracture risk at the hip [compared to non-diabetic controls], with most demonstrating a six to ninefold increase in relative risk. […] type I DM patients have hip fractures at a younger age on average, with a mean of 43 for women and 41 for men in one study. Almost 7 % of people with type I DM can be expected to have sustained a hip fracture by age 65 [7] […] Patients with DM and hip fracture are at a higher risk of mortality than patients without DM, with 1-year rates as high as 32 % vs. 13 % of nondiabetic patients”.

Back to the paper:

“Incident hip fracture risk was increased in all age categories for female participants with type 1 diabetes, and in age categories >30 years in men. […] Type 1 diabetes remained significantly associated with fracture after adjustment for covariates in all previously significant sex and age strata, with the exception of women aged 40–49. […] Each 1% (11 mmol/mol) greater average HbA1c level was associated with a 5% greater risk of incident fracture in males and an 11% greater risk of fracture in females. Diabetic neuropathy was a significant risk factor for incident fracture in males (HR 1.33; 95% CI 1.03–1.72) and females (HR 1.52; 95% CI 1.19–1.92); however, diabetic retinopathy was significant only in males (HR 1.13; 95% CI 1.01–1.28). […] The presence of celiac disease was associated with an increased risk of fractures in females, with an HR of 1.8 (95% CI 1.18–2.76), but not in males. A higher BMI was protective against fracture. Smoking was a risk factor for fracture in males in the 13,763 participants with type 1 diabetes with smoking and BMI data available for analysis.”

The HbA1c link was interesting to me because the relationship between glycemic control and fracture risk has in other contexts been somewhat unclear; one problem is that HbA1c levels in the lower ranges increase the risk of hypoglycemic episodes, and such episodes may themselves increase the risk of fractures. So even if chronic hyperglycemia is bad for bone health, confounding may make a (very plausible) chronic hyperglycemia-fracture risk link harder to detect than it otherwise would have been if you don’t have access to event-level/-rate data on hypoglycemic episodes. It’s of note that these guys did not have access to data on hypoglycemic episodes. They observe later in the paper that: “If hypoglycemia was a major contributing factor, we might have expected a negative effect of HbA1c on fracture risk; our data indicated the opposite.” I don’t think you can throw out hypoglycemia as a contributing factor that easily.

Anyway, a few final observations from the paper:

“CONCLUSIONS Type 1 diabetes was associated with increased risk of incident fracture that began in childhood and extended across the life span. Participants with type 1 diabetes sustained a disproportionately greater number of lower extremity fractures. These findings have important public health implications, given the increasing prevalence of type 1 diabetes and the morbidity and mortality associated with hip fractures.”

“To our knowledge, this is the first study to show that the increased fracture risk in type 1 diabetes begins in childhood. This finding has important implications for researchers planning future studies and for clinicians caring for patients in this population. Although peak bone mass is attained by the end of the third decade of life, peak bone accrual occurs in adolescence in conjunction with the pubertal growth spurt (31). This critical time for bone accrual may represent a period of increased skeletal vulnerability and also a window of opportunity for the implementation of therapies to improve bone formation (32). This is an especially important consideration in the population with type 1 diabetes, because the incidence of this disease peaks in early adolescence. Three-quarters of individuals will develop the condition before 18 years of age, and therefore before attainment of peak bone mass (33). The development and evaluation of therapies aimed at increasing bone formation and strength in adolescence may lead to a lifelong reduction in fracture risk.”

“The underlying mechanism for the increased fracture risk in patients with type 1 diabetes is not fully understood. Current evidence suggests that bone quantity and quality may both be abnormal in this condition. Clinical studies using dual-energy X-ray absorptiometry and peripheral quantitative computed tomography have identified mild to modest deficits in BMD and bone structure in both pediatric and adult participants with type 1 diabetes (6,8,34). Deficits in BMD are unlikely to be the only factor contributing to skeletal fragility in type 1 diabetes, however, as evidenced by a recent meta-analysis that found that the increased fracture risk seen in type 1 diabetes could not be explained by deficits in BMD alone (16). Recent cellular and animal models have shown that insulin signaling in osteoblasts and osteoblast progenitor cells promotes postnatal bone acquisition, suggesting that the insulin deficiency inherent in type 1 diabetes is a significant contributor to the pathogenesis of skeletal disease (35). Other proposed mechanisms contributing to skeletal fragility in type 1 diabetes include chronic hyperglycemia (36), impaired production of IGF-1 (37), and the accumulation of advanced glycation end products in bone (38). Our results showed that a higher average HbA1c was associated with an increased risk of fracture in participants with type 1 diabetes, supporting the hypothesis that chronic hyperglycemia and its sequelae contribute to skeletal fragility.”

“In summary, our study found that participants of all ages with type 1 diabetes were at increased risk of fracture. The adverse effect of type 1 diabetes on the skeleton is an underrecognized complication that is likely to grow into a significant public health burden given the increasing incidence and prevalence of this disease. […] Our novel finding that children with type 1 diabetes were already at increased risk of fracture suggests that therapeutic interventions aimed at children and adolescents may have an important effect on reducing lifelong fracture risk.”

August 15, 2017 Posted by | Diabetes, Epidemiology, Medicine, Studies

Depression and Heart Disease (II)

Below I have added some more observations from the book, which I gave four stars on goodreads.

“A meta-analysis of twin (and family) studies estimated the heritability of adult MDD around 40% [16] and this estimate is strikingly stable across different countries [17, 18]. If measurement error due to unreliability is taken into account by analysing MDD assessed on two occasions, heritability estimates increase to 66% [19]. Twin studies in children further show that there is already a large genetic contribution to depressive symptoms in youth, with heritability estimates varying between 50% and 80% [20–22]. […] Cardiovascular research in twin samples has suggested a clear-cut genetic contribution to hypertension (h2 = 61%) [30], fatal stroke (h2 = 32%) [31] and CAD (h2 = 57% in males and 38% in females) [32]. […] A very important, and perhaps underestimated, source of pleiotropy in the association of MDD and CAD are the major behavioural risk factors for CAD: smoking and physical inactivity. These factors are sometimes considered ‘environmental’, but twin studies have shown that such behaviours have a strong genetic component [33–35]. Heritability estimates for [many] established risk factors [for CAD – e.g. BMI, smoking, physical inactivity – US] are 50% or higher in most adult twin samples and these estimates remain remarkably similar across the adult life span [41–43].”

“The crucial question is whether the genetic factors underlying MDD also play a role in CAD and CAD risk factors. To test for an overlap in the genetic factors, a bivariate extension of the structural equation model for twin data can be used [57]. […] If the depressive symptoms in a twin predict the IL-6 level in his/her co-twin, this can only be explained by an underlying factor that affects both depression and IL-6 levels and is shared by members of a family. If the prediction is much stronger in MZ than in DZ twins, this signals that the underlying factor is their shared genetic make-up, rather than their shared (family) environment. […] It is important to note clearly here that genetic correlations do not prove the existence of pleiotropy, because genes that influence MDD may, through causal effects of MDD on CAD risk, also become ‘CAD genes’. The absence of a genetic correlation, however, can be used to falsify the existence of genetic pleiotropy. For instance, the hypothesis that genetic pleiotropy explains part of the association between depressive symptoms and IL-6 requires the genetic correlation between these traits to be significantly different from zero. [Furthermore,] the genetic correlation should have a positive value. A negative genetic correlation would signal that genes that increase the risk for depression decrease the risk for higher IL-6 levels, which would go against the genetic pleiotropy hypothesis. […] Su et al. [26] […] tested pleiotropy as a possible source of the association of depressive symptoms with Il-6 in 188 twin pairs of the Vietnam Era Twin (VET) Registry. The genetic correlation between depressive symptoms and IL-6 was found to be positive and significant (RA = 0.22, p = 0.046)”
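
As an aside, the logic of the bivariate twin design can be illustrated with a rough Falconer-style back-of-the-envelope approximation of what the structural equation model estimates; all the correlations below are made-up illustrative values (not the VET Registry results), and the real analyses are of course done with full model fitting rather than these simple contrasts:

    import math

    # Within-trait twin correlations (trait X = depressive symptoms, Y = IL-6):
    r_mz_x, r_dz_x = 0.40, 0.20
    r_mz_y, r_dz_y = 0.50, 0.28

    # Cross-twin cross-trait correlations (twin 1's X with twin 2's Y):
    ct_mz, ct_dz = 0.12, 0.05

    h2_x = 2 * (r_mz_x - r_dz_x)            # Falconer heritability of X
    h2_y = 2 * (r_mz_y - r_dz_y)            # Falconer heritability of Y
    gen_cov = 2 * (ct_mz - ct_dz)           # genetic covariance (standardized traits)
    r_g = gen_cov / math.sqrt(h2_x * h2_y)  # genetic correlation

    print(f"h2(X) = {h2_x:.2f}, h2(Y) = {h2_y:.2f}, genetic correlation = {r_g:.2f}")
    # A significantly positive genetic correlation is consistent with pleiotropy, but
    # (as the quoted text notes) does not by itself rule out causal effects of X on Y.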

“For the association between MDD and physical inactivity, the dominant hypothesis has not been that MDD causes a reduction in regular exercise, but instead that regular exercise may act as a protective factor against mood disorders. […] we used the twin method to perform a rigorous test of this popular hypothesis [on] 8558 twins and their family members using their longitudinal data across 2-, 4-, 7-, 9- and 11-year follow-up periods. In spite of sufficient statistical power, we found only the genetic correlation to be significant (ranging between −0.16 and −0.44 for different symptom scales and different time-lags). The environmental correlations were essentially zero. This means that the environmental factors that cause a person to take up exercise do not cause lower anxiety or depressive symptoms in that person, currently or at any future time point. In contrast, the genetic factors that cause a person to take up exercise also cause lower anxiety or depressive symptoms in that person, at the present and all future time points. This pattern of results falsifies the causal hypothesis and leaves genetic pleiotropy as the most likely source for the association between exercise and lower levels of anxiety and depressive symptoms in the population at large. […] Taken together, [the] studies support the idea that genetic pleiotropy may be a factor contributing to the increased risk for CAD in subjects suffering from MDD or reporting high counts of depressive symptoms. The absence of environmental correlations in the presence of significant genetic correlations for a number of the CAD risk factors (CFR, cholesterol, inflammation and regular exercise) suggests that pleiotropy is the sole reason for the association between MDD and these CAD risk factors, whereas for other CAD risk factors (e.g. smoking) and CAD incidence itself, pleiotropy may coexist with causal effects.”

“By far the most tested polymorphism in psychiatric genetics is a 43-base pair insertion or deletion in the promoter region of the serotonin transporter gene (5HTT, renamed SLC6A4). About 55% of Caucasians carry a long allele (L) with 16 repeat units. The short allele (S, with 14 repeat units) of this length polymorphism repeat (LPR) reduces transcriptional efficiency, resulting in decreased serotonin transporter expression and function [83]. Because serotonin plays a key role in one of the major theories of MDD [84], and because the most prescribed antidepressants act directly on this transporter, 5HTT is an obvious candidate gene for this disorder. […] The wealth of studies attempting to associate the 5HTTLPR to MDD or related personality traits tells a revealing story about the fate of most candidate genes in psychiatric genetics. Many conflicting findings have been reported, and the two largest studies failed to link the 5HTTLPR to depressive symptoms or clinical MDD [85, 86]. Even at the level of reviews and meta-analyses, conflicting conclusions have been drawn about the role of this polymorphism in the development of MDD [87, 88]. The initially promising explanation for discrepant findings – potential interactive effects of the 5HTTLPR and stressful life events [89] – did not survive meta-analysis [90].”

“Across the board, overlooking the wealth of candidate gene studies on MDD, one is inclined to conclude that this approach has failed to unambiguously identify genetic variants involved in MDD […]. Hope is now focused on the newer GWA [genome wide association] approach. […] At the time of writing, only two GWA studies had been published on MDD [81, 95]. […] In theory, the strategy to identify potential pleiotropic genes in the MDD–CAD relationship is extremely straightforward. We simply select the genes that occur in the lists of confirmed genes from the GWA studies for both traits. In practice, this is hard to do, because genetics in psychiatry is clearly lagging behind genetics in cardiology and diabetes medicine. […] What is shown by the reviewed twin studies is that some genetic variants may influence MDD and CAD risk factors. This can occur through one of three mechanisms: (a) the genetic variants that increase the risk for MDD become part of the heritability of CAD through a causal effect of MDD on CAD risk factors (causality); (b) the genetic variants that increase the risk for CAD become part of the heritability of MDD through a direct causal effect of CAD on MDD (reverse causality); (c) the genetic variants influence shared risk factors that independently increase the risk for MDD as well as CAD (pleiotropy). I suggest that to fully explain the MDD–CAD association we need to be willing to be open to the possibility that these three mechanisms co-exist. Even in the presence of true pleiotropic effects, MDD may influence CAD risk factors, and having CAD in turn may worsen the course of MDD.”

“Patients with depression are more likely to exhibit several unhealthy behaviours or avoid other health-promoting ones than those without depression. […] Patients with depression are more likely to have sleep disturbances [6]. […] sleep deprivation has been linked with obesity, diabetes and the metabolic syndrome [13]. […] Physical inactivity and depression display a complex, bidirectional relationship. Depression leads to physical inactivity and physical inactivity exacerbates depression [19]. […] smoking rates among those with depression are about twice that of the general population [29]. […] Poor attention to self-care is often a problem among those with major depressive disorder. In the most severe cases, those with depression may become inattentive to their personal hygiene. One aspect of this relationship that deserves special attention with respect to cardiovascular disease is the association of depression and periodontal disease. […] depression is associated with poor adherence to medical treatment regimens in many chronic illnesses, including heart disease. […] There is some evidence that among patients with an acute coronary syndrome, improvement in depression is associated with improvement in adherence. […] Individuals with depression are often socially withdrawn or isolated. It has been shown that patients with heart disease who are depressed have less social support [64], and that social isolation or poor social support is associated with increased mortality in heart disease patients [65–68]. […] [C]linicians who make recommendations to patients recovering from a heart attack should be aware that low levels of social support and social isolation are particularly common among depressed individuals and that high levels of social support appear to protect patients from some of the negative effects of depression [78].”

“Self-efficacy describes an individual’s self-confidence in his/her ability to accomplish a particular task or behaviour. Self-efficacy is an important construct to consider when one examines the psychological mechanisms linking depression and heart disease, since it influences an individual’s engagement in behaviour and lifestyle changes that may be critical to improving cardiovascular risk. Many studies on individuals with chronic illness show that depression is often associated with low self-efficacy [95–97]. […] Low self-efficacy is associated with poor adherence behaviour in patients with heart failure [101]. […] Much of the interest in self-efficacy comes from the fact that it is modifiable. Self-efficacy-enhancing interventions have been shown to improve cardiac patients’ self-efficacy and thereby improve cardiac health outcomes [102]. […] One problem with targeting self-efficacy in depressed heart disease patients is [however] that depressive symptoms reduce the effects of self-efficacy-enhancing interventions [105, 106].”

“Taken together, [the] SADHART and ENRICHD [studies] suggest, but do not prove, that antidepressant drug therapy in general, and SSRI treatment in particular, improve cardiovascular outcomes in depressed post-acute coronary syndrome (ACS) patients. […] even large epidemiological studies of depression and antidepressant treatment are not usually informative, because they confound the effects of depression and antidepressant treatment. […] However, there is one Finnish cohort study in which all subjects […] were followed up through a nationwide computerised database [17]. The purpose of this study was not to examine the relationship between depression and cardiac mortality, but rather to look at the relationship between antidepressant use and suicide. […] unexpectedly, ‘antidepressant use, and especially SSRI use, was associated with a marked reduction in total mortality (−49%, p < 0.001), mostly attributable to a decrease in cardiovascular deaths’. The study involved 15 390 patients with a mean follow-up of 3.4 years […] One of the marked differences between the SSRIs and the earlier tricyclic antidepressants is that the SSRIs do not cause cardiac death in overdose as the tricyclics do [41]. There has been literature that suggested that tricyclics even at therapeutic doses could be cardiotoxic and more problematic than SSRIs [42, 43]. What has been surprising is that both in the clinical trial data from ENRICHD and the epidemiological data from Finland, tricyclic treatment has also been associated with a decreased risk of mortality. […] Given that SSRI treatment of depression in the post-ACS period is safe, effective in reducing depressed mood, able to improve health behaviours and may reduce subsequent cardiac morbidity and mortality, it would seem obvious that treating depression is strongly indicated. However, the vast majority of post-ACS patients will not see a psychiatrically trained professional and many cases are not identified [33].”

“That depression is associated with cardiovascular morbidity and mortality is no longer open to question. Similarly, there is no question that the risk of morbidity and mortality increases with increasing severity of depression. Questions remain about the mechanisms that underlie this association, whether all types of depression carry the same degree of risk and to what degree treating depression reduces that risk. There is no question that the benefits of treating depression associated with coronary artery disease far outweigh the risks.”

“Two competing trends are emerging in research on psychotherapy for depression in cardiac patients. First, the few rigorous RCTs that have been conducted so far have shown that even the most efficacious of the current generation of interventions produce relatively modest outcomes. […] Second, there is a growing recognition that, even if an intervention is highly efficacious, it may be difficult to translate into clinical practice if it requires intensive or extensive contacts with a highly trained, experienced, clinically sophisticated psychotherapist. It can even be difficult to implement such interventions in the setting of carefully controlled, randomised efficacy trials. Consequently, there are efforts to develop simpler, more efficient interventions that can be delivered by a wider variety of interventionists. […] Although much more work remains to be done in this area, enough is already known about psychotherapy for comorbid depression in heart disease to suggest that a higher priority should be placed on translation of this research into clinical practice. In many cases, cardiac patients do not receive any treatment for their depression.”

August 14, 2017 Posted by | Books, Cardiology, Diabetes, Genetics, Medicine, Pharmacology, Psychiatry, Psychology | Leave a comment

Promoting the unknown, a continuing series

August 12, 2017 Posted by | Music | Leave a comment

Depression and Heart Disease (I)

I’m currently reading this book. It’s a great book, with lots of interesting observations.

Below I’ve added some quotes from the book.

“Frasure-Smith et al. [1] demonstrated that patients diagnosed with depression post MI [myocardial infarction, US] were more than five times more likely to die from cardiac causes by 6 months than those without major depression. At 18 months, cardiac mortality had reached 20% in patients with major depression, compared with only 3% in non-depressed patients [5]. Recent work has confirmed and extended these findings. A meta-analysis of 22 studies of post-MI subjects found that post-MI depression was associated with a 2.0–2.5 increased risk of negative cardiovascular outcomes [6]. Another meta-analysis examining 20 studies of subjects with MI, coronary artery bypass graft (CABG), angioplasty or angiographically documented CAD found a twofold increased risk of death among depressed compared with non-depressed patients [7]. Though studies included in these meta-analyses had substantial methodological variability, the overall results were quite similar [8].”

“Blumenthal et al. [31] published the largest cohort study (N = 817) to date on depression in patients undergoing CABG and measured depression scores, using the CES-D, before and at 6 months after CABG. Of those patients, 26% had minor depression (CES-D score 16–26) and 12% had moderate to severe depression (CES-D score ≥27). Over a mean follow-up of 5.2 years, the risk of death, compared with those without depression, was 2.4 (HR adjusted; 95% CI 1.4, 4.0) in patients with moderate to severe depression and 2.2 (95% CI 1.2, 4.2) in those whose depression persisted from baseline to follow-up at 6 months. This is one of the few studies that found a dose response (in terms of severity and duration) between depression and death in CABG in particular and in CAD in general.”

“Of the patients with known CAD but no recent MI, 12–23% have major depressive disorder by DSM-III or DSM-IV criteria [20, 21]. Two studies have examined the prognostic association of depression in patients whose CAD was confirmed by angiography. […] In [Carney et al.], a diagnosis of major depression by DSM-III criteria was the best predictor of cardiac events (MI, bypass surgery or death) at 1 year, more potent than other clinical risk factors such as impaired left ventricular function, severity of coronary disease and smoking among the 52 patients. The relative risk of a cardiac event was 2.2 times higher in patients with major depression than those with no depression.[…] Barefoot et al. [23] provided a larger sample size and longer follow-up duration in their study of 1250 patients who had undergone their first angiogram. […] Compared with non-depressed patients, those who were moderately to severely depressed had 69% higher odds of cardiac death and 78% higher odds of all-cause mortality. The mildly depressed had a 38% higher risk of cardiac death and a 57% higher risk of all-cause mortality than non-depressed patients.”

“Ford et al. [43] prospectively followed all male medical students who entered the Johns Hopkins Medical School from 1948 to 1964. At entry, the participants completed questionnaires about their personal and family history, health status and health behaviour, and underwent a standard medical examination. The cohort was then followed after graduation by mailed, annual questionnaires. The incidence of depression in this study was based on the mailed surveys […] 1190 participants [were included in the] analysis. The cumulative incidence of clinical depression in this population at 40 years of follow-up was 12%, with no evidence of a temporal change in the incidence. […] In unadjusted analysis, clinical depression was associated with an almost twofold higher risk of subsequent CAD. This association remained after adjustment for time-dependent covariates […]. The relative risk ratio for CAD development with versus without clinical depression was 2.12 (95% CI 1.24, 3.63), as was their relative risk ratio for future MI (95% CI 1.11, 4.06), after adjustment for age, baseline serum cholesterol level, parental MI, physical activity, time-dependent smoking, hypertension and diabetes. The median time from the first episode of clinical depression to first CAD event was 15 years, with a range of 1–44 years.”

“In the Women’s Ischaemia Syndrome Evaluation (WISE) study, 505 women referred for coronary angiography were followed for a mean of 4.9 years and completed the BDI [46]. Significantly increased mortality and cardiovascular events were found among women with elevated BDI scores, even after adjustment for age, cholesterol, stenosis score on angiography, smoking, diabetes, education, hypertension and body mass index (RR 3.1; 95% CI 1.5, 6.3). […] Further compelling evidence comes from a meta-analysis of 28 studies comprising almost 80 000 subjects [47], which demonstrated that, despite heterogeneity and differences in study quality, depression was consistently associated with increased risk of cardiovascular diseases in general, including stroke.”

“The preponderance of evidence strongly suggests that depression is a risk factor for CAD [coronary artery disease, US] development. […] In summary, it is fair to conclude that depression plays a significant role in CAD development, independent of conventional risk factors, and its adverse impact endures over time. The impact of depression on the risk of MI is probably similar to that of smoking [52]. […] Results of longitudinal cohort studies suggest that depression occurs before the onset of clinically significant CAD […] Recent brain imaging studies have indicated that lesions resulting from cerebrovascular insufficiency may lead to clinical depression [54, 55]. Depression may be a clinical manifestation of atherosclerotic lesions in certain areas of the brain that cause circulatory deficits. The depression then exacerbates the onset of CAD. The exact aetiological mechanism of depression and CAD development remains to be clarified.”

“Rutledge et al. [65] conducted a meta-analysis in 2006 in order to better understand the prevalence of depression among patients with CHF and the magnitude of the relationship between depression and clinical outcomes in the CHF population. They found that clinically significant depression was present in 21.5% of CHF patients, varying by the use of questionnaires versus diagnostic interview (33.6% and 19.3%, respectively). The combined results suggested higher rates of death and secondary events (RR 2.1; 95% CI 1.7, 2.6), and trends toward increased health care use and higher rates of hospitalisation and emergency room visits among depressed patients.”

“In the past 15 years, evidence has been provided that physically healthy subjects who suffer from depression are at increased risk for cardiovascular morbidity and mortality [1, 2], and that the occurrence of depression in patients with either unstable angina [3] or myocardial infarction (MI) [4] increases the risk for subsequent cardiac death. Moreover, epidemiological studies have proved that cardiovascular disease is a risk factor for depression, since the prevalence of depression in individuals with a recent MI or with coronary artery disease (CAD) or congestive heart failure has been found to be significantly higher than in the general population [5, 6]. […] findings suggest a bidirectional association between depression and cardiovascular disease. The pathophysiological mechanisms underlying this association are, at present, largely unclear, but several candidate mechanisms have been proposed.”

“Autonomic nervous system dysregulation is one of the most plausible candidate mechanisms underlying the relationship between depression and ischaemic heart disease, since changes of autonomic tone have been detected in both depression and cardiovascular disease [7], and autonomic imbalance […] has been found to lower the threshold for ventricular tachycardia, ventricular fibrillation and sudden cardiac death in patients with CAD [8, 9]. […] Imbalance between prothrombotic and antithrombotic mechanisms and endothelial dysfunction have [also] been suggested to contribute to the increased risk of cardiac events in both medically well patients with depression and depressed patients with CAD. Depression has been consistently associated with enhanced platelet activation […] evidence has accumulated that selective serotonin reuptake inhibitors (SSRIs) reduce platelet hyperreactivity and hyperaggregation of depressed patients [39, 40] and reduce the release of the platelet/endothelial biomarkers β-thromboglobulin, P-selectin and E-selectin in depressed patients with acute CAD [41]. This may explain the efficacy of SSRIs in reducing the risk of mortality in depressed patients with CAD [42–44].”

“[S]everal studies have shown that reduced endothelium-dependent flow-mediated vasodilatation […] occurs in depressed adults with or without CAD [48–50]. Atherosclerosis with subsequent plaque rupture and thrombosis is the main determinant of ischaemic cardiovascular events, and atherosclerosis itself is now recognised to be fundamentally an inflammatory disease [56]. Since activation of inflammatory processes is common to both depression and cardiovascular disease, it would be reasonable to argue that the link between depression and ischaemic heart disease might be mediated by inflammation. Evidence has been provided that major depression is associated with a significant increase in circulating levels of both pro-inflammatory cytokines, such as IL-6 and TNF-a, and inflammatory acute phase proteins, especially the C-reactive protein (CRP) [57, 58], and that antidepressant treatment is able to normalise CRP levels irrespective of whether or not patients are clinically improved [59]. […] Vaccarino et al. [79] assessed specifically whether inflammation is the mechanism linking depression to ischaemic cardiac events and found that, in women with suspected coronary ischaemia, depression was associated with increased circulating levels of CRP and IL-6 and was a strong predictor of ischaemic cardiac events”

“Major depression has been consistently associated with hyperactivity of the HPA axis, with a consequent overstimulation of the sympathetic nervous system, which in turn results in increased circulating catecholamine levels and enhanced serum cortisol concentrations [68–70]. This may cause an imbalance in sympathetic and parasympathetic activity, which results in elevated heart rate and blood pressure, reduced HRV [heart rate variability], disruption of ventricular electrophysiology with increased risk of ventricular arrhythmias as well as an increased risk of atherosclerotic plaque rupture and acute coronary thrombosis. […] In addition, glucocorticoids mobilise free fatty acids, causing endothelial inflammation and excessive clotting, and are associated with hypertension, hypercholesterolaemia and glucose dysregulation [88, 89], which are risk factors for CAD.”

“Most of the literature on [the] comorbidity [between major depressive disorder (MDD) and coronary artery disease (CAD), US] has tended to favour the hypothesis of a causal effect of MDD on CAD, but reversed causality has also been suggested to contribute. Patients with severe CAD at baseline, and consequently a worse prognosis, may simply be more prone to report mood disturbances than less severely ill patients. Furthermore, in pre-morbid populations, incipient atherosclerosis in cerebral vessels may cause depressive symptoms before the onset of actual cardiac or cerebrovascular events, a variant of reverse causality known as the ‘vascular depression’ hypothesis [2]. To resolve causality, comorbidity between MDD and CAD has been addressed in longitudinal designs. Most prospective studies reported that clinical depression or depressive symptoms at baseline predicted higher incidence of heart disease at follow-up [1], which seems to favour the hypothesis of causal effects of MDD. We need to remind ourselves, however […] [that] [p]rospective associations do not necessarily equate causation. Higher incidence of CAD in depressed individuals may reflect the operation of common underlying factors on MDD and CAD that become manifest in mental health at an earlier stage than in cardiac health. […] [T]he association between MDD and CAD may be due to underlying genetic factors that lead to increased symptoms of anxiety and depression, but may also independently influence the atherosclerotic process. This phenomenon, where low-level biological variation has effects on multiple complex traits at the organ and behavioural level, is called genetic ‘pleiotropy’. If present in a time-lagged form, that is if genetic effects on MDD risk precede effects of the same genetic variants on CAD risk, this phenomenon can cause longitudinal correlations that mimic a causal effect of MDD.”

 

August 12, 2017 Posted by | Books, Cardiology, Genetics, Medicine, Neurology, Pharmacology, Psychiatry, Psychology | Leave a comment

Infectious Disease Surveillance (II)

Some more observation from the book below.

“There are three types of influenza viruses — A, B, and C — of which only types A and B cause widespread outbreaks in humans. Influenza A viruses are classified into subtypes based on antigenic differences between their two surface glycoproteins, hemagglutinin and neuraminidase. Seventeen hemagglutinin subtypes (H1–H17) and nine neuraminidase subtypes (N1–N9) have been identified. […] The internationally accepted naming convention for influenza viruses contains the following elements: the type (e.g., A, B, C), geographical origin (e.g., Perth, Victoria), strain number (e.g., 361), year of isolation (e.g., 2011), for influenza A the hemagglutinin and neuraminidase antigen description (e.g., H1N1), and for nonhuman origin viruses the host of origin (e.g., swine) [4].”

“Only two antiviral drug classes are licensed for chemoprophylaxis and treatment of influenza—the adamantanes (amantadine and rimantadine) and the neuraminidase inhibitors (oseltamivir and zanamivir). […] Antiviral resistant strains arise through selection pressure in individual patients during treatment [which can lead to treatment failure]. […] they usually do not transmit further (because of impaired virus fitness) and have limited public health implications. On the other hand, primarily resistant viruses have emerged in the past decade and in some cases have completely replaced the susceptible strains. […] Surveillance of severe influenza illness is challenging because most cases remain undiagnosed. […] In addition, most of the influenza burden on the healthcare system is because of complications such as secondary bacterial infections and exacerbations of pre-existing chronic diseases, and often influenza is not suspected as an underlying cause. Even if suspected, the virus could have been already cleared from the respiratory secretions when the testing is performed, making diagnostic confirmation impossible. […] Only a small proportion of all deaths caused by influenza are classified as influenza-related on death certificates. […] mortality surveillance based only on death certificates is not useful for the rapid assessment of an influenza epidemic or pandemic severity. Detection of excess mortality in real time can be done by establishing specific monitoring systems that overcome these delays [such as sentinel surveillance systems, US].”

“Influenza vaccination programs are extremely complex and costly. More than half a billion doses of influenza vaccines are produced annually in two separate vaccine production cycles, one for the Northern Hemisphere and one for the Southern Hemisphere [54]. Because the influenza virus evolves constantly and vaccines are reformulated yearly, both vaccine effectiveness and safety need to be monitored routinely. Vaccination campaigns are also organized annually and require continuous public health efforts to maintain an acceptable level of vaccination coverage in the targeted population. […] huge efforts are made and resources spent to produce and distribute influenza vaccines annually. Despite these efforts, vaccination coverage among those at risk in many parts of the world remains low.”

“The Active Bacterial Core surveillance (ABCs) network and its predecessor have been examples of using surveillance as information for action for over 20 years. ABCs has been used to measure disease burden, to provide data for vaccine composition and recommended-use policies, and to monitor the impact of interventions. […] sites represent wide geographic diversity and approximately reflect the race and urban-to-rural mix of the U.S. population [37]. Currently, the population under surveillance is 19–42 million and varies by pathogen and project. […] ABCs has continuously evolved to address challenging questions posed by the six pathogens (H. influenzae; GAS [Group A Streptococcus], GBS [Group B Streptococcus], S.  pneumoniae, N. meningitidis, and MRSA) and other emerging infections. […] For the six core pathogens, the objectives are (1) to determine the incidence and epidemiologic characteristics of invasive disease in geographically diverse populations in the United States through active, laboratory, and population-based surveillance; (2) to determine molecular epidemiologic patterns and microbiologic characteristics of isolates collected as part of routine surveillance in order to track antimicrobial resistance; (3) to detect the emergence of new strains with new resistance patterns and/or virulence and contribute to development and evaluation of new vaccines; and (4) to provide an infrastructure for surveillance of other emerging pathogens and for conducting studies aimed at identifying risk factors for disease and evaluating prevention policies.”

“Food may become contaminated by over 250 bacterial, viral, and parasitic pathogens. Many of these agents cause diarrhea and vomiting, but there is no single clinical syndrome common to all foodborne diseases. Most of these agents can also be transmitted by nonfoodborne routes, including contact with animals or contaminated water. Therefore, for a given illness, it is often unclear whether the source of infection is foodborne or not. […] Surveillance systems for foodborne diseases provide extremely important information for prevention and control.”

“Since 1995, the Centers for Disease Control and Prevention (CDC) has routinely used an automated statistical outbreak detection algorithm that compares current reports of each Salmonella serotype with the preceding 5-year mean number of cases for the same geographic area and week of the year to look for unusual clusters of infection [5]. The sensitivity of Salmonella serotyping to detect outbreaks is greatest for rare serotypes, because a small increase is more noticeable against a rare background. The utility of serotyping has led to its widespread adoption in surveillance for food pathogens in many countries around the world [6]. […] Today, a new generation of subtyping methods […] is increasing the specificity of laboratory-based surveillance and its power to detect outbreaks […] Molecular subtyping allows comparison of the molecular “fingerprint” of bacterial strains. In the United States, the CDC coordinates a network called PulseNet that captures data from standardized molecular subtyping by PFGE [pulsed field gel electrophoresis]. By comparing new submissions and past data, public health officials can rapidly identify geographically dispersed clusters of disease that would otherwise not be apparent and evaluate them as possible foodborne-disease outbreaks [8]. The ability to identify geographically dispersed outbreaks has become increasingly important as more foods are mass-produced and widely distributed. […] Similar networks have been developed in Canada, Europe, the Asia Pacific region, Latin America and the Caribbean region, the Middle Eastern region and, most recently, the African region”.
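
The kind of historical comparison described in the first sentence of that quote is easy to sketch. The toy function below is not the CDC's actual implementation (the real historical limits method is somewhat more elaborate, pooling counts from comparable periods across the preceding 5 years); it simply flags a serotype/area/week combination whose current count lies more than two standard deviations above the mean of the same week in the previous five years, which captures the basic idea.

```python
import numpy as np

def flag_unusual(current_count, historical_counts, threshold_sd=2.0):
    """Flag a count as unusual if it exceeds the historical mean by more than
    threshold_sd standard deviations. historical_counts holds the counts for
    the same week of the year (same area, same serotype) in previous years."""
    hist = np.asarray(historical_counts, dtype=float)
    mean, sd = hist.mean(), hist.std(ddof=1)
    if sd == 0:                       # flat history: flag any increase at all
        return current_count > mean
    return (current_count - mean) / sd > threshold_sd

# Made-up example: weekly counts of one Salmonella serotype in one state,
# same calendar week in each of the preceding five years, versus this year.
print(flag_unusual(current_count=14, historical_counts=[3, 5, 2, 4, 4]))  # True
print(flag_unusual(current_count=5,  historical_counts=[3, 5, 2, 4, 4]))  # False
```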

“Food consumption and practices have changed during the past 20 years in the United States, resulting in a shift from readily detectable, point-source outbreaks (e.g., attendance at a wedding dinner), to widespread outbreaks that occur over many communities with only a few illnesses in each community. One of the changes has been establishment of large food-producing facilities that disseminate products throughout the country. If a food product is contaminated with a low level of pathogen, contaminated food products are distributed across many states; and only a few illnesses may occur in each community. This type of outbreak is often difficult to detect. PulseNet has been critical for the detection of widely dispersed outbreaks in the United States [17]. […] The growth of the PulseNet database […] and the use of increasingly sophisticated epidemiological approaches have led to a dramatic increase in the number of multistate outbreaks detected and investigated.”

“Each year, approximately 35 million people are hospitalized in the United States, accounting for 170 million inpatient days [1,2]. There are no recent estimates of the numbers of healthcare-associated infections (HAI). However, two decades ago, HAI were estimated to affect more than 2 million hospital patients annually […] The mortality attributed to these HAI was estimated at about 100,000 deaths annually. […] Almost 85% of HAI in the United States are associated with bacterial pathogens, and 33% are thought to be preventable [4]. […] The primary purpose of surveillance [in the context of HAI] is to alert clinicians, epidemiologists, and laboratories of the need for targeted prevention activities required to reduce HAI rates. HAI surveillance data help to establish baseline rates that may be used to determine the potential need to change public health policy, to act and intervene in clinical settings, and to assess the effectiveness of microbiology methods, appropriateness of tests, and allocation of resources. […] As less than 10% of HAI in the United States occur as recognized epidemics [18], HAI surveillance should not be embarked on merely for the detection of outbreaks.”

“There are two types of rate comparisons — intrahospital and interhospital. The primary goals of intrahospital comparison are to identify areas within the hospital where HAI are more likely to occur and to measure the efficacy of interventional efforts. […] Without external comparisons, hospital infection control departments may [however] not know if the endemic rates in their respective facilities are relatively high or where to focus the limited financial and human resources of the infection control program. […] The CDC has been the central aggregating institution for active HAI surveillance in the United States since the 1960s.”

“Low sensitivity (i.e., missed infections) in a surveillance system is usually more common than low specificity (i.e., patients reported to have infections who did not actually have infections).”
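
A small made-up example of what those two terms mean in a surveillance context – suppose a chart review of 1,000 patient records is treated as the gold standard and compared against what the surveillance system reported:

```python
# Hypothetical validation of a surveillance system against chart review (gold standard).
true_positives  = 60   # infections present and reported
false_negatives = 40   # infections present but missed by the system
false_positives = 5    # reported as infections, but not actually infections
true_negatives  = 895  # correctly not reported

sensitivity = true_positives / (true_positives + false_negatives)   # 0.60
specificity = true_negatives / (true_negatives + false_positives)   # ~0.99
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
# The pattern described in the quote above: many missed infections (low
# sensitivity) but few spurious reports (high specificity).
```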

“Among the numerous analyses of CDC hospital data carried out over the years, characteristics consistently found to be associated with higher HAI rates include affiliation with a medical school (i.e., teaching vs. nonteaching), size of the hospital and ICU categorized by the number of beds (large hospitals and larger ICUs generally had higher infection rates), type of control or ownership of the hospital (municipal, nonprofit, investor owned), and region of the country [43,44]. […] Various analyses of SENIC and NNIS/NHSN data have shown that differences in patient risk factors are largely responsible for interhospital differences in HAI rates. After controlling for patients’ risk factors, average lengths of stay, and measures of the completeness of diagnostic workups for infection (e.g., culturing rates), the differences in the average HAI rates of the various hospital groups virtually disappeared. […] For all of these reasons, an overall HAI rate, per se, gives little insight into whether the facility’s infection control efforts are effective.”
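
One standard way of making the kind of risk-adjusted interhospital comparison discussed here is indirect standardization: apply reference stratum-specific infection rates to the hospital's own mix of patients/locations to get an expected number of infections, then compare the observed count with that. This is roughly the logic behind the standardized infection ratios used in present-day HAI reporting; the sketch below uses made-up strata and rates.

```python
# Hypothetical indirect standardization for one hospital.
# Each stratum: (patient-days in this hospital, reference infection rate per 1,000 patient-days)
strata = {
    "medical ICU":   (4000, 2.5),
    "surgical ward": (9000, 0.8),
    "medicine ward": (12000, 0.6),
}

observed_infections = 24
expected = sum(days * rate / 1000 for days, rate in strata.values())
sir = observed_infections / expected

print(f"expected = {expected:.1f}, SIR = {sir:.2f}")
# SIR > 1 means more infections than expected given the hospital's patient mix,
# SIR < 1 means fewer. Here expected = 24.4, so the SIR is about 0.98.
```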

“Although a hospital’s surveillance system might aggregate accurate data and generate appropriate risk-adjusted HAI rates for both internal and external comparison, comparison may be misleading for several reasons. First, the rates may not adjust for patients’ unmeasured intrinsic risks for infection, which vary from hospital to hospital. […] Second, if surveillance techniques are not uniform among hospitals or are used inconsistently over time, variations will occur in sensitivity and specificity for HAI case finding. Third, the sample size […] must be sufficient. This issue is of concern for hospitals with fewer than 200 beds, which represent about 10% of hospital admissions in the United States. In most CDC analyses, rates from hospitals with very small denominators tend to be excluded [37,46,49]. […] Although many healthcare facilities around the country aggregate HAI surveillance data for baseline establishment and interhospital comparison, the comparison of HAI rates is complex, and the value of the aggregated data must be balanced against the burden of their collection. […] If a hospital does not devote sufficient resources to data collection, the data will be of limited value, because they will be replete with inaccuracies. No national database has successfully dealt with all the problems in collecting HAI data and each varies in its ability to address these problems. […] While comparative data can be useful as a tool for the prevention of HAI, in some instances no data might be better than bad data.”

August 10, 2017 Posted by | Books, Data, Epidemiology, Infectious disease, Medicine, Statistics | Leave a comment

A few diabetes papers of interest

i. Long-term Glycemic Variability and Risk of Adverse Outcomes: A Systematic Review and Meta-analysis.

“This systematic review and meta-analysis evaluates the association between HbA1c variability and micro- and macrovascular complications and mortality in type 1 and type 2 diabetes. […] Seven studies evaluated HbA1c variability among patients with type 1 diabetes and showed an association of HbA1c variability with renal disease (risk ratio 1.56 [95% CI 1.08–2.25], two studies), cardiovascular events (1.98 [1.39–2.82]), and retinopathy (2.11 [1.54–2.89]). Thirteen studies evaluated HbA1c variability among patients with type 2 diabetes. Higher HbA1c variability was associated with higher risk of renal disease (1.34 [1.15–1.57], two studies), macrovascular events (1.21 [1.06–1.38]), ulceration/gangrene (1.50 [1.06–2.12]), cardiovascular disease (1.27 [1.15–1.40]), and mortality (1.34 [1.18–1.53]). Most studies were retrospective with lack of adjustment for potential confounders, and inconsistency existed in the definition of HbA1c variability.

CONCLUSIONS HbA1c variability was positively associated with micro- and macrovascular complications and mortality independently of the HbA1c level and might play a future role in clinical risk assessment.”

Two observations related to the paper: One, although only a relatively small number of studies were included in the review, the number of patients included in some of the included studies was rather large – the 7 type 1 studies thus included 44,021 participants, and the 13 type 2 studies included in total 43,620 participants. Two, it’s noteworthy that some of the associations already look at least reasonably strong, despite interest in HbA1c variability being a relatively recent phenomenon. Confounding might be an issue, but then again it almost always might be, and to give an example, out of 11 studies analyzing the association between renal disease and HbA1c variability included in the review, ten of them supported a link, and the only one which did not was a small study on pediatric patients which was almost certainly underpowered to investigate such a link in the first place (the base rate of renal complications is, as mentioned before here on this blog quite recently (link 3), quite low in pediatric samples).
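
A side note on measurement: the review points out that the included studies did not define HbA1c variability consistently. The most common choices are the within-patient standard deviation or the coefficient of variation of serial HbA1c measurements (some studies additionally adjust for the number of measurements per patient); a minimal illustration with made-up values:

```python
import statistics

hba1c = [58, 64, 61, 70, 66, 60]   # one patient's serial HbA1c values, mmol/mol (made-up data)

mean = statistics.mean(hba1c)
sd = statistics.stdev(hba1c)     # within-patient SD, the most common variability metric
cv = sd / mean                   # coefficient of variation, scales the SD to the mean level

print(f"mean = {mean:.1f} mmol/mol, SD = {sd:.1f}, CV = {cv:.1%}")
```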

ii. Risk of Severe Hypoglycemia in Type 1 Diabetes Over 30 Years of Follow-up in the DCCT/EDIC Study.

(I should perhaps note here that I’m already quite familiar with the context of the DCCT/EDIC study/studies, and although readers may not be, and although background details are included in the paper, I decided not to cover such details here although they would make my coverage of the paper easier to understand. I instead decided to limit my coverage of the paper to a few observations which I myself found to be of interest.)

“During the DCCT, the rates of SH [Severe Hypoglycemia, US], including episodes with seizure or coma, were approximately threefold greater in the intensive treatment group than in the conventional treatment group […] During EDIC, the frequency of SH increased in the former conventional group and decreased in the former intensive group so that the difference in SH event rates between the two groups was no longer significant (36.6 vs. 40.8 episodes per 100 patient-years, respectively […] By the end of DCCT, with an average of 6.5 years of follow-up, 65% of the intensive group versus 35% of the conventional group experienced at least one episode of SH. In contrast, ∼50% of participants within each group reported an episode of SH during the 20 years of EDIC.”

“Of [the] participants reporting episodes of SH, during the DCCT, 54% of the intensive group and 30% of the conventional group experienced four or more episodes, whereas in EDIC, 37% of the intensive group and 33% of the conventional group experienced four or more events […]. Moreover, a subset of participants (14% [99 of 714]) experienced nearly one-half of all SH episodes (1,765 of 3,788) in DCCT, and a subset of 7% (52 of 709) in EDIC experienced almost one-third of all SH episodes (888 of 2,813) […] Fifty-one major accidents occurred during the 6.5 years of DCCT and 143 during the 20 years of EDIC […] The most frequent type of major accident was that involving a motor vehicle […] Hypoglycemia played a role as a possible, probable, or principal cause in 18 of 28 operator-caused motor vehicle accidents (MVAs) during DCCT […] and in 23 of 54 operator-caused MVAs during EDIC”.

“The T1D Exchange Clinic Registry recently reported that 8% of 4,831 adults with T1D living in the U.S. had a seizure or coma event during the 3 months before their most recent annual visit (11). During EDIC, we observed that 27% of the cohort experienced a coma or seizure event over the 20 years of 3-month reporting intervals (∼1.4% per year), a much lower annual risk than in the T1D Exchange Clinic Registry. In part, the open enrollment of patients into the T1D Exchange may be reflected without the exclusion of participants with a history of SH as in the DCCT and other clinical trials. The current data support the clinical perception that a small subset of individuals is more susceptible to SH (7% of patients with 11 or more SH episodes during EDIC, which represents 32% of all SH episodes in EDIC) […] a history of SH during DCCT and lower current HbA1c levels were the two major factors associated with an increased risk of SH during EDIC. Safety concerns were the reason why a history of frequent SH events was an exclusion criterion for enrollment in DCCT. […] Of note, we found that participants who entered the DCCT as adolescents were more likely to experience SH during EDIC.”

“In summary, although event rates in the DCCT/EDIC cohort seem to have fallen and stabilized over time, SH remains an ever-present threat for patients with T1D who use current technology, occurring at a rate of ∼36–41 episodes per 100 patient-years, even among those with longer diabetes duration. Having experienced one or more such prior events is the strongest predictor of a future SH episode.”

I didn’t actually like that summary. If a history of severe hypoglycemia was an exclusion criterion in the DCCT trial, which it was, then the event rate you’d get from this data set is highly likely to provide a biased estimator of the true event rate, as the Exchange Clinic Registry data illustrate. The true population event rate in unselected samples is higher.

Another note which may also be important to add is that many diabetics who do not have a ‘severe event’ during a specific time period might still experience a substantial number of hypoglycemic episodes; ‘severe events’ (which require the assistance of another individual) is a somewhat blunt instrument in particular for assessing quality-of-life aspects of hypoglycemia.

iii. The Presence and Consequence of Nonalbuminuric Chronic Kidney Disease in Patients With Type 1 Diabetes.

“This study investigated the prevalence of nonalbuminuric chronic kidney disease in type 1 diabetes to assess whether it increases the risk of cardiovascular and renal outcomes as well as all-cause mortality. […] This was an observational follow-up of 3,809 patients with type 1 diabetes from the Finnish Diabetic Nephropathy Study. […] mean age was 37.6 ± 11.8 years and duration of diabetes 21.2 ± 12.1 years. […] During 13 years of median follow-up, 378 developed end-stage renal disease, 415 suffered an incident cardiovascular event, and 406 died. […] At baseline, 78 (2.0%) had nonalbuminuric chronic kidney disease. […] Nonalbuminuric chronic kidney disease did not increase the risk of albuminuria (hazard ratio [HR] 2.0 [95% CI 0.9–4.4]) or end-stage renal disease (HR 6.4 [0.8–53.0]) but did increase the risk of cardiovascular events (HR 2.0 [1.4–3.5]) and all-cause mortality (HR 2.4 [1.4–3.9]). […] ESRD [End-Stage Renal Disease] developed during follow-up in 0.3% of patients with nonalbuminuric non-CKD [CKD: Chronic Kidney Disease], in 1.3% of patients with nonalbuminuric CKD, in 13.9% of patients with albuminuric non-CKD, and in 63.0% of patients with albuminuric CKD (P < 0.001).”

CONCLUSIONS Nonalbuminuric chronic kidney disease is not a frequent finding in patients with type 1 diabetes, but when present, it is associated with an increased risk of cardiovascular morbidity and all-cause mortality but not with renal outcomes.”

iv. Use of an α-Glucosidase Inhibitor and the Risk of Colorectal Cancer in Patients With Diabetes: A Nationwide, Population-Based Cohort Study.

This one relates closely to stuff covered in Horowitz & Samsom’s book about Gastrointestinal Function in Diabetes Mellitus which I just finished (and which I liked very much). Here’s a relevant quote from chapter 7 of that book (which is about ‘Hepato-biliary and Pancreatic Function’):

“Several studies have provided evidence that the risk of pancreatic cancer is increased in patients with type 1 and type 2 diabetes mellitus [136,137]. In fact, diabetes has been associated with an increased risk of several cancers, including those of the pancreas, liver, endometrium and kidney [136]. The pooled relative risk of pancreatic cancer for diabetics vs. non-diabetics in a meta-analysis was 2.1 (95% confidence interval 1.6–2.8). Patients presenting with diabetes mellitus within a period of 12 months of the diagnosis of pancreatic cancer were excluded because in these cases diabetes may be an early presenting sign of pancreatic cancer rather than a risk factor [137]”.

They don’t mention colon cancer there, but it’s obvious from the research which has been done – and which is covered extensively in that book – that diabetes has the potential to cause functional changes in a large number of components of the digestive system (and I hope to cover this kind of stuff in a lot more detail later on) so the fact that some of these changes may lead to neoplastic changes should hardly be surprising. However evaluating causal pathways is more complicated here than it might have been, because e.g. pancreatic diseases may also themselves cause secondary diabetes in some patients. Liver pathologies like hepatitis B and C also display positive associations with diabetes, although again causal pathways here are not completely clear; treatments used may be a contributing factor (interferon-treatment may induce diabetes), but there are also suggestions that diabetes should be considered one of the extrahepatic manifestations of hepatitis. This stuff is complicated.

The drug mentioned in the paper, acarbose, is incidentally a drug also discussed in some detail in the book. It belongs to a group of drugs called alpha glucosidase inhibitors, and it is ‘the first antidiabetic medication designed to act through an influence on intestinal functions.’ Anyway, some quotes from the paper:

“We conducted a nationwide, population-based study using a large cohort with diabetes in the Taiwan National Health Insurance Research Database. Patients with newly diagnosed diabetes (n = 1,343,484) were enrolled between 1998 and 2010. One control subject not using acarbose was randomly selected for each subject using acarbose after matching for age, sex, diabetes onset, and comorbidities. […] There were 1,332 incident cases of colorectal cancer in the cohort with diabetes during the follow-up period of 1,487,136 person-years. The overall incidence rate was 89.6 cases per 100,000 person-years. Patients treated with acarbose had a 27% reduction in the risk of colorectal cancer compared with control subjects. The adjusted HRs were 0.73 (95% CI 0.63–0.83), 0.69 (0.59–0.82), and 0.46 (0.37–0.58) for patients using >0 to <90, 90 to 364, and ≥365 cumulative defined daily doses of acarbose, respectively, compared with subjects who did not use acarbose (P for trend < 0.001).

CONCLUSIONS Acarbose use reduced the risk of incident colorectal cancer in patients with diabetes in a dose-dependent manner.”
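
For readers unfamiliar with the exposure metric: a ‘defined daily dose’ (DDD) is the WHO’s assumed average maintenance dose per day of a drug, and cumulative DDDs are what you get when you divide the total amount dispensed to a patient (from pharmacy claims) by that figure. A rough illustration – the DDD value and the prescriptions below are assumptions for illustration only, not taken from the paper:

```python
# How an exposure measure like "cumulative defined daily doses" (cDDD) is typically
# computed in claims-database studies: total amount dispensed divided by the defined
# daily dose. The DDD of 0.3 g/day for acarbose and the prescriptions below are
# assumptions for illustration.
ACARBOSE_DDD_G = 0.3

prescriptions = [            # (tablets dispensed, mg per tablet) for one patient
    (90, 50),
    (90, 100),
    (90, 100),
]

total_g = sum(n * mg for n, mg in prescriptions) / 1000
cddd = total_g / ACARBOSE_DDD_G
print(f"cumulative DDD = {cddd:.0f}")   # 75 here, i.e. the '>0 to <90' exposure category
```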

It’s perhaps worth mentioning that the prevalence of type 1 is relatively low in East Asian populations and that most of the patients included were type 2 (this is also clearly indicated by this observation from the paper: “The median age at the time of the initial diabetes diagnosis was 54.1 years, and the median diabetes duration was 8.9 years.”). Another thing worth mentioning is that colon cancer is a very common type of cancer, and so even moderate risk reductions here at the individual level may translate into a substantial risk reduction at the population level. A third thing, noted in Horowitz & Samsom’s coverage, is that the side effects of acarbose are quite mild, so widespread use of the drug is not out of the question, at least poor tolerance is not likely to be an obstacle; the drug may cause e.g. excessive flatulence and something like 10% of patients may have to stop treatment because of gastrointestinal side effects, but although the side effects are annoying and may be unacceptable to some patients, they are not dangerous; it’s a safe drug which can be used even in patients with renal failure (a context where some of the other oral antidiabetic treatments available are contraindicated).

v. Diabetes, Lower-Extremity Amputation, and Death.

“Worldwide, every 30 s, a limb is lost to diabetes (1,2). Nearly 2 million people living in the U.S. are living with limb loss (1). According to the World Health Organization, lower-extremity amputations (LEAs) are 10 times more common in people with diabetes than in persons who do not have diabetes. In the U.S. Medicare population, the incidence of diabetic foot ulcers is ∼6 per 100 individuals with diabetes per year and the incidence of LEA is 4 per 1,000 persons with diabetes per year (3). LEA in those with diabetes generally carries yearly costs between $30,000 and $60,000 and lifetime costs of half a million dollars (4). In 2012, it was estimated that those with diabetes and lower-extremity wounds in the U.S. Medicare program accounted for $41 billion in cost, which is ∼1.6% of all Medicare health care spending (4–7). In 2012, in the U.K., it was estimated that the National Health Service spent between £639 and 662 million on foot ulcers and LEA, which was approximately £1 in every £150 spent by the National Health Service (8).”

“LEA does not represent a traditional medical complication of diabetes like myocardial infarction (MI), renal failure, or retinopathy in which organ failure is directly associated with diabetes (2). An LEA occurs because of a disease complication, usually a foot ulcer that is not healing (e.g., organ failure of the skin, failure of the biomechanics of the foot as a unit, nerve sensory loss, and/or impaired arterial vascular supply), but it also occurs at least in part as a consequence of a medical plan to amputate based on a decision between health care providers and patients (9,10). […] 30-day postoperative mortality can approach 10% […]. Previous reports have estimated that the 1-year post-LEA mortality rate in people with diabetes is between 10 and 50%, and the 5-year mortality rate post-LEA is between 30 and 80% (4,13–15). More specifically, in the U.S. Medicare population mortality within a year after an incident LEA was 23.1% in 2006, 21.8% in 2007, and 20.6% in 2008 (4). In the U.K., up to 80% will die within 5 years of an LEA (8). In general, those with diabetes with an LEA are two to three times more likely to die at any given time point than those with diabetes who have not had an LEA (5). For perspective, the 5-year death rate after diagnosis of malignancy in the U.S. was 32% in 2010 (16).”

“Evidence on why individuals with diabetes and an LEA die is based on a few mainly small (e.g., <300 subjects) and often single center–based (13,17–20) studies or <1 year duration of evaluation (11). In these studies, death is primarily associated with a previous history of cardiovascular disease and renal insufficiency, which are also major complications of diabetes; these complications are also associated with an increased risk of LEA. The goal of our study was to determine whether complications of diabetes well-known to be associated with death in those with diabetes such as cardiovascular disease and renal failure fully explain the higher rate of death in those who have undergone an LEA.”

“This is the largest and longest evaluation of the risk of death among those with diabetes and LEA […] Between 2003 and 2012, 416,434 individuals met the entrance criteria for the study. This cohort accrued an average of 9.0 years of follow-up and a total of 3.7 million diabetes person-years of follow-up. During this period of time, 6,566 (1.6%) patients had an LEA and 77,215 patients died (18.5%). […] The percentage of individuals who died within 30 days, 1 year, and by year 5 of their initial code for an LEA was 1.0%, 9.9%, and 27.2%, respectively. For those >65 years of age, the rates were 12.2% and 31.7%, respectively. For the full cohort of those with diabetes, the rate of death was 2.0% after 1 year of follow up and 7.3% after 5 years of follow up. In general, those with an LEA were more than three times more likely to die during a year of follow-up than an individual with diabetes who had not had an LEA. […] In any given year, >5% of those with diabetes and an LEA will die.”

“From 2003 to 2012, the HR [hazard ratio, US] for death after an LEA was 3.02 (95% CI 2.90, 3.14). […] our a priori assumption was that the HR associating LEA with death would be fully diminished (i.e., it would become 1) when adjusted for the other risk factor variables. However, the fully adjusted LEA HR was diminished only ∼22% to 2.37 (95% CI 2.27, 2.48). With the exception of age >65 years, individual risk factors, in general, had minimal effect (<10%) on the HR of the association between LEA and death […] We conducted sensitivity analyses to determine the general statistical parameters of an unmeasured risk factor that could remove the association of LEA with death. We found that even if there existed a very strong risk factor with an HR of death of three, a prevalence of 10% in the general diabetes population, and a prevalence of 60% in those who had an LEA, LEA would still be associated with a statistically significant and clinically important risk of 1.30. These findings are describing a variable that would seem to be so common and so highly associated with death that it should already be clinically apparent. […] In summary, individuals with diabetes and an LEA are more likely to die at any given point in time than those who have diabetes but no LEA. While some of this variation can be explained by other known complications of diabetes, the amount that can be explained is small. Based on the results of this study, including a sensitivity analysis, it is highly unlikely that a “new” major risk factor for death exists. […] LEA is often performed because of an end-stage disease process like chronic nonhealing foot ulcer. By the time a patient has a foot ulcer and an LEA is offered, they are likely suffering from the end-stage consequence of diabetes. […] We would […] suggest that patients who have had an LEA require […] vigilant follow-up and evaluation to assure that their medical care is optimized. It is also important that GPs communicate to their patients about the risk of death to assure that patients have proper expectations about the severity of their disease.”
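
The paper doesn’t spell out the algebra behind that last sensitivity analysis, but the quoted numbers are consistent with the standard external-adjustment approach for a binary unmeasured confounder, in which the observed estimate is divided by a bias factor determined by the confounder’s strength and its prevalence in the two comparison groups. A rough reconstruction (my sketch, not necessarily their exact method, and strictly speaking the formula is for risk ratios rather than hazard ratios):

```python
def bias_factor(rr_confounder_death, prev_in_lea, prev_in_no_lea):
    """Bross-type bias factor for a binary unmeasured confounder."""
    num = prev_in_lea * (rr_confounder_death - 1) + 1
    den = prev_in_no_lea * (rr_confounder_death - 1) + 1
    return num / den

observed_hr = 2.37                       # fully adjusted HR for death after LEA (from the paper)
b = bias_factor(rr_confounder_death=3.0, # hypothetical strong unmeasured risk factor
                prev_in_lea=0.60,        # present in 60% of those with an LEA
                prev_in_no_lea=0.10)     # present in 10% of the general diabetes population
print(round(observed_hr / b, 2))         # ~1.29, close to the 1.30 quoted above
```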

vi. Trends in Health Care Expenditure in U.S. Adults With Diabetes: 2002–2011.

Before quoting from the paper, I’ll remind people reading along here that ‘total medical expenditures’ != ‘total medical costs’. Lots of relevant medical costs are not included when you focus only on direct medical expenditures (sick days, early retirement, premature mortality and productivity losses associated therewith, etc., etc.). With that out of the way…

“This study examines trends in health care expenditures by expenditure category in U.S. adults with diabetes between 2002 and 2011. […] We analyzed 10 years of data representing a weighted population of 189,013,514 U.S. adults aged ≥18 years from the Medical Expenditure Panel Survey. […] Relative to individuals without diabetes ($5,058 [95% CI 4,949–5,166]), individuals with diabetes ($12,180 [11,775–12,586]) had more than double the unadjusted mean direct expenditures over the 10-year period. After adjustment for confounders, individuals with diabetes had $2,558 (2,266–2,849) significantly higher direct incremental expenditures compared with those without diabetes. For individuals with diabetes, inpatient expenditures rose initially from $4,014 in 2002/2003 to $4,183 in 2004/2005 and then decreased continuously to $3,443 in 2010/2011, while rising steadily for individuals without diabetes. The estimated unadjusted total direct expenditures for individuals with diabetes were $218.6 billion/year and adjusted total incremental expenditures were approximately $46 billion/year. […] in the U.S., direct medical costs associated with diabetes were $176 billion in 2012 (1,3). This is almost double to eight times the direct medical cost of other chronic diseases: $32 billion for COPD in 2010 (10), $93 billion for all cancers in 2008 (11), $21 billion for heart failure in 2012 (12), and $43 billion for hypertension in 2010 (13). In the U.S., total economic cost of diabetes rose by 41% from 2007 to 2012 (2). […] Our findings show that compared with individuals without diabetes, individuals with diabetes had significantly higher health expenditures from 2002 to 2011 and the bulk of the expenditures came from hospital inpatient and prescription expenditures.”

 

August 10, 2017 Posted by | Books, Cancer/oncology, Cardiology, Diabetes, Economics, Epidemiology, Gastroenterology, Health Economics, Medicine, Nephrology, Pharmacology | Leave a comment

Infectious Disease Surveillance (I)

Concepts and Methods in Infectious Disease Surveillance […] familiarizes the reader with basic surveillance concepts; the legal basis for surveillance in the United States and abroad; and the purposes, structures, and intended uses of surveillance at the local, state, national, and international level. […] A desire for a readily accessible, concise resource that detailed current methods and challenges in disease surveillance inspired the collaborations that resulted in this volume. […] The book covers major topics at an introductory-to-intermediate level and was designed to serve as a resource or class text for instructors. It can be used in graduate level courses in public health, human and veterinary medicine, as well as in undergraduate programs in public health–oriented disciplines. We hope that the book will be a useful primer for frontline public health practitioners, hospital epidemiologists, infection-control practitioners, laboratorians in public health settings, infectious disease researchers, and medical informatics specialists interested in a concise overview of infectious disease surveillance.”

I thought the book was sort of okay, but not really all that great. I assume part of the reason I didn’t like it as much as I might have is that someone like me doesn’t really need to know all the details about, say, the issues encountered in Florida while they were trying to implement electronic patient records, or whether or not the mandated reporting requirements for brucellosis in, say, Texas are different from those of, say, Florida – but the book has a lot of that kind of information. Useful knowledge if you work with this stuff, but if you don’t and you’re just curious about the topic ‘in a general way’ those kinds of details can subtract a bit from the experience. A lot of chapters cover similar topics and don’t seem all that well coordinated, in the sense that details which could easily have been left out of specific chapters without any significant information loss (because those details were covered elsewhere in the publication) are included anyway; we are probably told at least ten times what the difference is between active and passive surveillance. It probably means that the various chapters can be read more or less independently (you don’t need to read chapter 5 to understand the coverage in chapter 11), but if you’re reading the book from cover to cover the way I was, that sort of approach is not ideal. However in terms of the coverage included in the individual chapters and the content in general, I feel reasonably confident that if you’re actually working in public health or related fields and so a lot of this stuff might be ‘work-relevant’ (especially if you’re from the US), it’s probably a very useful book to keep around/know about. I didn’t need to know how many ‘NBS-states’ there are, and whether or not South Carolina is such a state, but some people might.

As I’ve pointed out before, a two star goodreads rating on my part (which is the rating I gave this publication) is not an indication that I think a book is terrible, it’s an indication that the book is ‘okay’.

Below I’ve added some quotes and observations from the book. The book is an academic publication but it is not a ‘classic textbook’ with key items in bold etc.; I decided to use bold to highlight key concepts and observations below, to make the post easier to navigate later on (none of the bolded words below were in bold in the original text), but aside from that I have made no changes to the quotes included in this post. I would note that given that many of the chapters included in the book are not covered by copyright (many chapters include this observation: “Materials appearing in this chapter are prepared by individuals as part of their official duties as United States government employees and are not covered by the copyright of the book, and any views expressed herein do not necessarily represent the views of the United States government.”) I may decide to cover the book in a bit more detail than I otherwise would have.

“The methods used for infectious disease surveillance depend on the type of disease. Part of the rationale for this is that there are fundamental differences in etiology, mode of transmission, and control measures between different types of infections. […] Despite the fact that much of surveillance is practiced on a disease-specific basis, it is worth remembering that surveillance is a general tool used across all types of infectious and, noninfectious conditions, and, as such, all surveillance methods share certain core elements. We advocate the view that surveillance should not be regarded as a public health “specialty,” but rather that all public health practitioners should understand the general principles underlying surveillance.”

“Control of disease spread is achieved through public health actions. Public health actions resulting from information gained during the investigation usually go beyond what an individual physician can provide to his or her patients presenting in a clinical setting. Examples of public health actions include identifying the source of infection […] identifying persons who were in contact with the index case or any infected person who may need vaccines or antiinfectives to prevent them from developing the infection; closure of facilities implicated in disease spread; or isolation of sick individuals or, in rare circumstances, quarantining those exposed to an infected person. […] Monitoring surveillance data enables public health authorities to detect sudden changes in disease occurrence and distribution, identify changes in agents or host factors, and detect changes in healthcare practices […] The primary use of surveillance data at the local and state public health level is to identify cases or outbreaks in order to implement immediate disease control and prevention activities. […] Surveillance data are also used by states and CDC to monitor disease trends, demonstrate the need for public health interventions such as vaccines and vaccine policy, evaluate public health activities, and identify future research priorities. […] The final and most-important link in the surveillance chain is the application of […] data to disease prevention and control. A surveillance system includes a functional capacity for data collection, analysis, and dissemination linked to public health programs [6].

“The majority of reportable disease surveillance is conducted through passive surveillance methods. Passive surveillance means that public health agencies inform healthcare providers and other entities of their reporting requirements, but they do not usually conduct intensive efforts to solicit all cases; instead, the public health agency waits for the healthcare entities to submit case reports. Because passive surveillance is often incomplete, public health agencies may use hospital discharge data, laboratory testing records, mortality data, or other sources of information as checks on completeness of reporting and to identify additional cases. This is called active surveillance. Active surveillance usually includes intensive activities on the part of the public health agency to identify all cases of a specific reportable disease or group of diseases. […] Because it can be very labor intensive, active surveillance is usually conducted for a subset of reportable conditions, in a defined geographic locale and for a defined period of time.”

“Active surveillance may be conducted on a routine basis or in response to an outbreak […]. When an outbreak is suspected or identified, another type of surveillance known as enhanced passive surveillance may also be initiated. In enhanced passive surveillance methods, public health may improve communication with the healthcare community, schools, daycare centers, and other facilities and request that all suspected cases be reported to public health. […] Case-based surveillance is supplemented through laboratory-based surveillance activities. As opposed to case-based surveillance, the focus is on laboratory results themselves, independent of whether or not an individual’s result is associated with a “case” of illness meeting the surveillance case definition. Laboratory-based surveillance is conducted by state public health laboratories as well as the healthcare community (e.g., hospital, private medical office, and commercial laboratories). […] State and local public health entities participate in sentinel surveillance activities. With sentinel methods, surveillance is conducted in a sample of reporting entities, such as healthcare providers or hospitals, or in a specific population known to be an early indicator of disease activity (e.g., pediatric). However, because the goal of sentinel surveillance is not to identify every case, it is not necessarily representative of the underlying population of interest; and results should be interpreted accordingly.”

Syndromic surveillance identifies unexpected changes in prediagnostic information from a variety of sources to detect potential outbreaks [56]. Sources include work- or school-absenteeism records, pharmacy sales for over-the-counter pharmaceuticals, or emergency room admission data [51]. During the 2009 H1N1 pandemic, syndromic surveillance of emergency room visits for influenza-like illness correlated well with laboratory diagnosed cases of influenza [57]. […] According to a 2008 survey of U.S. health departments, 88% of respondents reported that they employ syndromic-based approaches as part of routine surveillance [21].
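To make the idea of detecting ‘unexpected changes in prediagnostic information’ a little more concrete, here’s a small Python sketch of a simplified aberration-detection rule of the kind syndromic systems use (loosely modeled on the EARS C1 approach of comparing today’s count to a short recent baseline). The visit counts, the 7-day window, and the 3-SD threshold are all illustrative choices of mine, not taken from the book.

```python
import numpy as np

def c1_like_alerts(counts, baseline_days=7, z=3.0):
    """Flag days whose count exceeds the mean of the preceding `baseline_days`
    days by more than `z` standard deviations (a simplified, EARS-C1-like rule)."""
    counts = np.asarray(counts, dtype=float)
    alerts = []
    for t in range(baseline_days, len(counts)):
        window = counts[t - baseline_days:t]
        mean, sd = window.mean(), window.std(ddof=1)
        if counts[t] > mean + z * max(sd, 1e-9):  # guard against a zero-variance baseline
            alerts.append(t)
    return alerts

# Invented daily emergency-department visit counts for influenza-like illness;
# the jump on day 9 is the kind of aberration such a rule is meant to catch.
visits = [12, 10, 11, 13, 9, 12, 11, 10, 12, 30, 11, 12]
print(c1_like_alerts(visits))  # -> [9]
```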

“Public health operated for many decades (and still does to some extent) using stand-alone, case-based information systems for collection of surveillance data that do not allow information sharing between systems and do not permit the ability to track the occurrences of different diseases in a specific person over time. One of the primary objectives of NEDSS [National Electronic Disease Surveillance System] is to promote person-based surveillance and integrated and interoperable surveillance systems. In an integrated person-based system, information is collected to create a public health record for a given person for different diseases over time. This enables public health to track public health conditions associated with a person over time, allowing analyses of public health events and comorbidities, as well as more robust public health interventions. An interoperable system can exchange information with other systems. For example, data are shared between surveillance systems or between other public health or clinical systems, such as an electronic health record or outbreak management system. Achieving the goal of establishing a public health record for an individual over time does not require one monolithic system that supports all needs; this can, instead, be achieved through integration and/or interoperability of systems.

“For over a decade, public health has focused on automation of reporting of laboratory results to public health from clinical laboratories and healthcare providers. Paper-based submission of laboratory results to public health for reportable conditions results in delays in receipt of information, incomplete ascertainment of possible cases, and missing information on individual reports. All of these aspects are improved through automation of the process [39–43].”

“During the pre-vaccine era, rotavirus infected nearly every unvaccinated child before their fifth birthday. In the absence of vaccine, multiple rotavirus infections may occur during infancy and childhood. Rotavirus causes severe diarrhea and vomiting (acute gastroenteritis [AGE]), which can lead to dehydration, electrolyte depletion, complications of viremia, shock, and death. Nearly one-half million children around the world die of rotavirus infections each year […] [In the US] this virus was responsible for 40–50% of hospitalizations because of acute gastroenteritis during the winter months in the era before vaccines were introduced. […] Because first infections have been shown to induce strong immunity against severe rotavirus reinfections [3] and because vaccination mimics such first infections without causing illness, vaccination was identified as the optimal strategy for decreasing the burden associated with severe and fatal rotavirus diarrhea. Any changes that may be later attributed to vaccination effects require knowledge of the pre-licensure (i.e., baseline) rates and trends in the target disease as a reference […] Efforts to obtain baseline data are necessary before a vaccine is licensed and introduced [13]. […] After the first year of widespread rotavirus vaccination coverage in 2008, very large and consistent decreases in rotavirus hospitalizations were noted around the country. Many of the decreases in childhood hospitalizations resulting from rotavirus were 90% or more, compared with the pre-licensure, baseline period.”

There is no single perfect data source for assessing any VPD [Vaccine-Preventable Disease, US]. Meaningful surveillance is achieved by the much broader approach of employing diverse datasets. The true impact of a vaccine or the accurate assessment of disease trends in a population is more likely the result of evaluating many datasets having different strengths and weaknesses. Only by understanding these strengths and weaknesses can a public health practitioner give the appropriate consideration to the findings derived from these data. […] In a Phase III clinical trial, the vaccine is typically administered to large numbers of people who have met certain inclusionary and exclusionary criteria and are then randomly selected to receive either the vaccine or a placebo. […] Phase III trials represent the “best case scenario” of vaccine protection […] Once the Phase III trials show adequate protection and safety, the vaccine may be licensed by the FDA […] When the vaccine is used in routine clinical practice, Phase IV trials (called post-licensure studies or post-marketing studies) are initiated. These are the evaluations conducted during the course of VPD surveillance that delineate additional performance information in settings where strict controls on who receives the vaccine are not present. […] Often, measuring vaccine performance in the broader population yields slightly lower protective results compared to Phase III clinical trials […] During these post-licensure Phase IV studies, it is not the vaccine’s efficacy but its effectiveness that is assessed. […] Administrative datasets may be created by research institutions, managed-care organizations, or national healthcare utilization repositories. They are not specifically created for VPD surveillance and may contain coded data […] on health events. They often do not provide laboratory confirmation of specific diseases, unlike passive and active VPD surveillance. […] administrative datasets offer huge sample sizes, which allow for powerful inferences within the confines of any data limitations.”
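Both efficacy (Phase III) and effectiveness (post-licensure, Phase IV) are usually expressed the same way, as one minus the relative risk of disease in vaccinated versus unvaccinated people; what differs is the setting in which the attack rates are measured. A minimal Python illustration with made-up attack rates:

```python
def vaccine_effect(attack_rate_vaccinated, attack_rate_unvaccinated):
    """1 minus the relative risk: the usual estimator for vaccine efficacy in a
    trial and for vaccine effectiveness in post-licensure (Phase IV) settings."""
    return 1 - attack_rate_vaccinated / attack_rate_unvaccinated

# Invented attack rates: 0.5% of vaccinated vs. 5% of unvaccinated children
# develop rotavirus gastroenteritis over a season.
print(round(vaccine_effect(0.005, 0.05), 2))  # 0.9, i.e., 90% protection
```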

August 6, 2017 Posted by | Books, Epidemiology, Infectious disease, Medicine, Pharmacology | Leave a comment

Beyond Significance Testing (IV)

Below I have added some quotes from chapters 5, 6, and 7 of the book.

“There are two broad classes of standardized effect sizes for analysis at the group or variable level, the d family, also known as group difference indexes, and the r family, or relationship indexes […] Both families are metric- (unit-) free effect sizes that can compare results across studies or variables measured in different original metrics. Effect sizes in the d family are standardized mean differences that describe mean contrasts in standard deviation units, which can exceed 1.0 in absolute value. Standardized mean differences are signed effect sizes, where the sign of the statistic indicates the direction of the corresponding contrast. Effect sizes in the r family are scaled in correlation units that generally range from −1.0 to +1.0, where the sign indicates the direction of the relation […] Measures of association are unsigned effect sizes and thus do not indicate directionality.”
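As a concrete (and entirely made-up) illustration of the two families, the Python snippet below computes a standardized mean difference (Cohen’s d with a pooled standard deviation) and the corresponding point-biserial correlation for two simulated groups; the group means, sample sizes, and random seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, 200)   # invented scores, e.g. a control group
group_b = rng.normal(0.5, 1.0, 200)   # invented scores, e.g. a treated group

# d family: standardized mean difference (Cohen's d with a pooled SD)
n_a, n_b = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n_a - 1) * group_a.var(ddof=1) +
                     (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2))
d = (group_b.mean() - group_a.mean()) / pooled_sd

# r family: point-biserial correlation between group membership and score
scores = np.concatenate([group_a, group_b])
group_dummy = np.concatenate([np.zeros(n_a), np.ones(n_b)])
r_pb = np.corrcoef(group_dummy, scores)[0, 1]

print(round(d, 2), round(r_pb, 2))  # d is signed and unbounded; r_pb lies between -1 and +1
```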

“The correlation rpb is for designs with two unrelated samples. […] rpb […] is affected by base rate, or the proportion of cases in one group versus the other, p and q. It tends to be highest in balanced designs. As the design becomes more unbalanced holding all else constant, rpb approaches zero. […] rpb is not directly comparable across studies with dissimilar relative group sizes […]. The correlation rpb is also affected by the total variability (i.e., ST). If this variation is not constant over samples, values of rpb may not be directly comparable.”

“Too many researchers neglect to report reliability coefficients for scores analyzed. This is regrettable because effect sizes cannot be properly interpreted without knowing whether the scores are precise. The general effect of measurement error in comparative studies is to attenuate absolute standardized effect sizes and reduce the power of statistical tests. Measurement error also contributes to variation in observed results over studies. Of special concern is when both score reliabilities and sample sizes vary from study to study. If so, effects of sampling error are confounded with those due to measurement error. […] There are ways to correct some effect sizes for measurement error (e.g., Baguley, 2009), but corrected effect sizes are rarely reported. It is more surprising that measurement error is ignored in most meta-analyses, too. F. L. Schmidt (2010) found that corrected effect sizes were analyzed in only about 10% of the 199 meta-analytic articles published in Psychological Bulletin from 1978 to 2006. This implies that (a) estimates of mean effect sizes may be too low and (b) the wrong statistical model may be selected when attempting to explain between-studies variation in results. If a fixed effects model is mistakenly chosen over a random effects model, confidence intervals based on average effect sizes tend to be too narrow, which can make those results look more precise than they really are. Underestimating mean effect sizes while simultaneously overstating their precision is a potentially serious error.”
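One of the standard corrections alluded to here is Spearman’s correction for attenuation, which divides an observed correlation by the square root of the product of the two score reliabilities. I picked this particular formula as an example; it is not necessarily the specific correction Baguley (2009) discusses. A minimal sketch with made-up numbers:

```python
def disattenuated_r(observed_r, reliability_x, reliability_y):
    """Spearman's correction for attenuation: estimated correlation between
    true scores, given the reliabilities of the two observed scores."""
    return observed_r / (reliability_x * reliability_y) ** 0.5

# Invented numbers: an observed correlation of .30 with reliabilities of .70 and .80.
print(round(disattenuated_r(0.30, 0.70, 0.80), 2))  # ~0.40
```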

“[D]emonstration of an effect’s significance — whether theoretical, practical, or clinical — calls for more discipline-specific expertise than the estimation of its magnitude”.

“Some outcomes are categorical instead of continuous. The levels of a categorical outcome are mutually exclusive, and each case is classified into just one level. […] The risk difference (RD) is defined as pC − pT, and it estimates the parameter πC − πT. [Those ‘n-resembling letters’ are how wordpress displays pi; this is one of an almost infinite number of reasons why I detest blogging equations on this blog and usually do not do this – US] […] The risk ratio (RR) is the ratio of the risk rates […] which rate appears in the numerator versus the denominator is arbitrary, so one should always explain how RR is computed. […] The odds ratio (OR) is the ratio of the within-groups odds for the undesirable event. […] A convenient property of OR is that it can be converted to a kind of standardized mean difference known as logit d (Chinn, 2000). […] Reporting logit d may be of interest when the hypothetical variable that underlies the observed dichotomy is continuous.”
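To keep these definitions straight, here’s a small Python example computing all four quantities from a hypothetical 2×2 table; the counts are invented, and the last line uses the ln(OR)·√3/π conversion attributed to Chinn (2000) in the quote above.

```python
import math

# Invented 2x2 table: counts of the undesirable event in each group.
events_control, n_control = 30, 100
events_treated, n_treated = 15, 100
p_c, p_t = events_control / n_control, events_treated / n_treated

rd = p_c - p_t                                       # risk difference
rr = p_c / p_t                                       # risk ratio (control rate in the numerator here)
odds_c, odds_t = p_c / (1 - p_c), p_t / (1 - p_t)
odds_ratio = odds_c / odds_t                         # odds ratio
logit_d = math.log(odds_ratio) * math.sqrt(3) / math.pi   # Chinn's (2000) logit d conversion

print(round(rd, 2), round(rr, 2), round(odds_ratio, 2), round(logit_d, 2))
```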

“The risk difference RD is easy to interpret but has a drawback: Its range depends on the values of the population proportions πC and πT. That is, the range of RD is greater when both πC and πT are closer to .50 than when they are closer to either 0 or 1.00. The implication is that RD values may not be comparable across different studies when the corresponding parameters πC and πT are quite different. The risk ratio RR is also easy to interpret. It has the shortcoming that only the finite interval from 0 to < 1.0 indicates lower risk in the group represented in the numerator, but the interval from > 1.00 to infinity is theoretically available for describing higher risk in the same group. The range of RR varies according to its denominator. This property limits the value of RR for comparing results across different studies. […] The odds ratio OR shares the limitation that the finite interval from 0 to < 1.0 indicates lower risk in the group represented in the numerator, but the interval from > 1.0 to infinity describes higher risk for the same group. Analyzing natural log transformations of OR and then taking antilogs of the results deals with this problem, just as for RR. The odds ratio may be the least intuitive of the comparative risk effect sizes, but it probably has the best overall statistical properties. This is because OR can be estimated in prospective studies, in studies that randomly sample from exposed and unexposed populations, and in retrospective studies where groups are first formed based on the presence or absence of a disease before their exposure to a putative risk factor is determined […]. Other effect sizes may not be valid in retrospective studies (RR) or in studies without random sampling ([Pearson correlations between dichotomous variables, US]).”

“Sensitivity and specificity are determined by the threshold on a screening test. This means that different thresholds on the same test will generate different sets of sensitivity and specificity values in the same sample. But both sensitivity and specificity are independent of population base rate and sample size. […] Sensitivity and specificity affect predictive value, the proportion of test results that are correct […] In general, predictive values increase as sensitivity and specificity increase. […] Predictive value is also influenced by the base rate (BR), the proportion of all cases with the disorder […] In general, PPV [positive predictive value] decreases and NPV [negative…] increases as BR approaches zero. This means that screening tests tend to be more useful for ruling out rare disorders than correctly predicting their presence. It also means that most positive results may be false positives under low base rate conditions. This is why it is difficult for researchers or social policy makers to screen large populations for rare conditions without many false positives. […] The effect of BR on predictive values is striking but often overlooked, even by professionals […]. One misunderstanding involves confusing sensitivity and specificity, which are invariant to BR, with PPV and NPV, which are not. This means that diagnosticians fail to adjust their estimates of test accuracy for changes in base rates, which exemplifies the base rate fallacy. […] In general, test results have greater impact on changing the pretest odds when the base rate is moderate, neither extremely low (close to 0) nor extremely high (close to 1.0). But if the target disorder is either very rare or very common, only a result from a highly accurate screening test will change things much.”
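The base-rate point is easy to demonstrate numerically. The small Python function below applies Bayes’ rule to get PPV and NPV from sensitivity, specificity, and base rate; the 90%/90% test characteristics and the two base rates are arbitrary values I picked to make the contrast visible.

```python
def predictive_values(sensitivity, specificity, base_rate):
    """PPV and NPV from sensitivity, specificity, and the disorder's base rate (Bayes' rule)."""
    tp = sensitivity * base_rate
    fp = (1 - specificity) * (1 - base_rate)
    tn = specificity * (1 - base_rate)
    fn = (1 - sensitivity) * base_rate
    return tp / (tp + fp), tn / (tn + fn)

# The same invented test (90% sensitive, 90% specific) at two different base rates:
for base_rate in (0.01, 0.30):
    ppv, npv = predictive_values(0.90, 0.90, base_rate)
    print(base_rate, round(ppv, 2), round(npv, 3))
# At a 1% base rate most positives are false positives (PPV ~0.08, NPV ~0.999);
# at a 30% base rate PPV rises to ~0.79 while NPV is still ~0.955.
```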

“The technique of ANCOVA [ANalysis of COVAriance, US] has two more assumptions than ANOVA does. One is homogeneity of regression, which requires equal within-populations unstandardized regression coefficients for predicting outcome from the covariate. In nonexperimental designs where groups differ systematically on the covariate […] the homogeneity of regression assumption is rather likely to be violated. The second assumption is that the covariate is measured without error […] Violation of either assumption may lead to inaccurate results. For example, an unreliable covariate in experimental designs causes loss of statistical power and in nonexperimental designs may also cause inaccurate adjustment of the means […]. In nonexperimental designs where groups differ systematically, these two extra assumptions are especially likely to be violated. An alternative to ANCOVA is propensity score analysis (PSA). It involves the use of logistic regression to estimate the probability for each case of belonging to different groups, such as treatment versus control, in designs without randomization, given the covariate(s). These probabilities are the propensities, and they can be used to match cases from nonequivalent groups.”
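For readers unfamiliar with propensity score analysis, here is a deliberately minimal Python sketch of the two basic steps described in the quote: estimate each case’s probability of treatment from the covariate with logistic regression, then match treated cases to controls on that probability. Everything here is synthetic and simplified – one covariate, greedy matching with replacement, no balance checks – so it is an illustration of the idea, not a recipe for real analyses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic nonequivalent-groups data: one covariate that also drives the
# probability of ending up in the "treated" group (so the groups differ systematically).
x = rng.normal(size=500).reshape(-1, 1)
treated = rng.binomial(1, 1 / (1 + np.exp(-1.5 * x[:, 0])))

# Step 1: estimate each case's propensity, i.e. P(treated | covariate), by logistic regression.
propensity = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# Step 2: greedy nearest-neighbour matching (with replacement) of each treated
# case to the control case with the closest propensity score.
treated_idx = np.flatnonzero(treated == 1)
control_idx = np.flatnonzero(treated == 0)
matches = {i: control_idx[np.argmin(np.abs(propensity[control_idx] - propensity[i]))]
           for i in treated_idx}
print(len(matches), "treated cases matched to controls")
```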

August 5, 2017 Posted by | Books, Epidemiology, Papers, Statistics | Leave a comment

A few diabetes papers of interest

i. Clinically Relevant Cognitive Impairment in Middle-Aged Adults With Childhood-Onset Type 1 Diabetes.

“Modest cognitive dysfunction is consistently reported in children and young adults with type 1 diabetes (T1D) (1). Mental efficiency, psychomotor speed, executive functioning, and intelligence quotient appear to be most affected (2); studies report effect sizes between 0.2 and 0.5 (small to modest) in children and adolescents (3) and between 0.4 and 0.8 (modest to large) in adults (2). Whether effect sizes continue to increase as those with T1D age, however, remains unknown.

A key issue not yet addressed is whether aging individuals with T1D have an increased risk of manifesting “clinically relevant cognitive impairment,” defined by comparing individual cognitive test scores to demographically appropriate normative means, as opposed to the more commonly investigated “cognitive dysfunction,” or between-group differences in cognitive test scores. Unlike the extensive literature examining cognitive impairment in type 2 diabetes, we know of only one prior study examining cognitive impairment in T1D (4). This early study reported a higher rate of clinically relevant cognitive impairment among children (10–18 years of age) diagnosed before compared with after age 6 years (24% vs. 6%, respectively) or a non-T1D cohort (6%).”

“This study tests the hypothesis that childhood-onset T1D is associated with an increased risk of developing clinically relevant cognitive impairment detectable by middle age. We compared cognitive test results between adults with and without T1D and used demographically appropriate published norms (1012) to determine whether participants met criteria for impairment for each test; aging and dementia studies have selected a score ≥1.5 SD worse than the norm on that test, corresponding to performance at or below the seventh percentile (13).”
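The ‘≥1.5 SD worse than the norm’ criterion maps onto the seventh percentile simply via the normal cumulative distribution function, which can be checked in one line (using scipy here purely for convenience):

```python
from scipy.stats import norm

# A score 1.5 SD below the demographically adjusted normative mean sits at
# roughly the 7th percentile of a normal distribution of normative scores.
print(round(norm.cdf(-1.5) * 100, 1))  # ~6.7
```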

“During 2010–2013, 97 adults diagnosed with T1D and aged <18 years (age and duration 49 ± 7 and 41 ± 6 years, respectively; 51% female) and 138 similarly aged adults without T1D (age 49 ± 7 years; 55% female) completed extensive neuropsychological testing. Biomedical data on participants with T1D were collected periodically since 1986–1988.  […] The prevalence of clinically relevant cognitive impairment was five times higher among participants with than without T1D (28% vs. 5%; P < 0.0001), independent of education, age, or blood pressure. Effect sizes were large (Cohen d 0.6–0.9; P < 0.0001) for psychomotor speed and visuoconstruction tasks and were modest (d 0.3–0.6; P < 0.05) for measures of executive function. Among participants with T1D, prevalent cognitive impairment was related to 14-year average A1c >7.5% (58 mmol/mol) (odds ratio [OR] 3.0; P = 0.009), proliferative retinopathy (OR 2.8; P = 0.01), and distal symmetric polyneuropathy (OR 2.6; P = 0.03) measured 5 years earlier; higher BMI (OR 1.1; P = 0.03); and ankle-brachial index ≥1.3 (OR 4.2; P = 0.01) measured 20 years earlier, independent of education.”

“Having T1D was the only factor significantly associated with the between-group difference in clinically relevant cognitive impairment in our sample. Traditional risk factors for age-related cognitive impairment, in particular older age and high blood pressure (24), were not related to the between-group difference we observed. […] Similar to previous studies of younger adults with T1D (14,26), we found no relationship between the number of severe hypoglycemic episodes and cognitive impairment. Rather, we found that chronic hyperglycemia, via its associated vascular and metabolic changes, may have triggered structural changes in the brain that disrupt normal cognitive function.”

Just to be absolutely clear about these results: The type 1 diabetics they recruited in this study were on average not yet fifty years old, yet more than one in four of them were cognitively impaired to a clinically relevant degree. This is a huge effect. As they note later in the paper:

“Unlike previous reports of mild/modest cognitive dysfunction in young adults with T1D (1,2), we detected clinically relevant cognitive impairment in 28% of our middle-aged participants with T1D. This prevalence rate in our T1D cohort is comparable to the prevalence of mild cognitive impairment typically reported among community-dwelling adults aged 85 years and older (29%) (20).”

The type 1 diabetics included in the study had had diabetes for roughly a decade more than I have. And the number of cognitively impaired individuals in that sample corresponds roughly to what you find when you test random 85+ year-olds. Having type 1 diabetes is not good for your brain.

ii. Comment on Nunley et al. Clinically Relevant Cognitive Impairment in Middle-Aged Adults With Childhood-Onset Type 1 Diabetes.

This one is a short comment to the above paper, below I’ve quoted ‘the meat’ of the comment:

“While the […] study provides us with important insights regarding cognitive impairment in adults with type 1 diabetes, we regret that depression has not been taken into account. A systematic review and meta-analysis published in 2014 identified significant objective cognitive impairment in adults and adolescents with depression regarding executive functioning, memory, and attention relative to control subjects (2). Moreover, depression is two times more common in adults with diabetes compared with those without this condition, regardless of type of diabetes (3). There is even evidence that the co-occurrence of diabetes and depression leads to additional health risks such as increased mortality and dementia (3,4); this might well apply to cognitive impairment as well. Furthermore, in people with diabetes, the presence of depression has been associated with the development of diabetes complications, such as retinopathy, and higher HbA1c values (3). These are exactly the diabetes-specific correlates that Nunley et al. (1) found.”

“We believe it is a missed opportunity that Nunley et al. (1) mainly focused on biological variables, such as hyperglycemia and microvascular disease, and did not take into account an emotional disorder widely represented among people with diabetes and closely linked to cognitive impairment. Even though severe or chronic cases of depression are likely to have been excluded in the group without type 1 diabetes based on exclusion criteria (1), data on the presence of depression (either measured through a diagnostic interview or by using a validated screening questionnaire) could have helped to interpret the present findings. […] Determining the role of depression in the relationship between cognitive impairment and type 1 diabetes is of significant importance. Treatment of depression might improve cognitive impairment both directly by alleviating cognitive depression symptoms and indirectly by improving treatment nonadherence and glycemic control, consequently lowering the risk of developing complications.”

iii. Prevalence of Diabetes and Diabetic Nephropathy in a Large U.S. Commercially Insured Pediatric Population, 2002–2013.

“[W]e identified 96,171 pediatric patients with diabetes and 3,161 pediatric patients with diabetic nephropathy during 2002–2013. We estimated prevalence of pediatric diabetes overall, by diabetes type, age, and sex, and prevalence of pediatric diabetic nephropathy overall, by age, sex, and diabetes type.”

“Although type 1 diabetes accounts for a majority of childhood and adolescent diabetes, type 2 diabetes is becoming more common with the increasing rate of childhood obesity and it is estimated that up to 45% of all new patients with diabetes in this age-group have type 2 diabetes (1,2). With the rising prevalence of diabetes in children, a rise in diabetes-related complications, such as nephropathy, is anticipated. Moreover, data suggest that the development of clinical macrovascular complications, neuropathy, and nephropathy may be especially rapid among patients with young-onset type 2 diabetes (age of onset <40 years) (36). However, the natural history of young patients with type 2 diabetes and resulting complications has not been well studied.”

I’m always interested in the identification mechanisms applied in papers like this one, and I’m a little confused about the high number of patients without prescriptions (almost one-third of patients); I sort of assume these patients do take (/are given) prescription drugs, but get them from sources not available to the researchers (parents get prescriptions for the antidiabetic drugs, and the researchers don’t have access to these data? Something like this..) but this is a bit unclear. The mechanism they employ in the paper is not perfect (no mechanism is), but it probably works:

“Patients who had one or more prescription(s) for insulin and no prescriptions for another antidiabetes medication were classified as having type 1 diabetes, while those who filled prescriptions for noninsulin antidiabetes medications were considered to have type 2 diabetes.”
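The rule itself is simple enough to write down as code. The sketch below is my own paraphrase of it, operating on an invented representation of a patient’s dispensed drug classes; the real study of course works with claims data and drug codes rather than strings.

```python
def classify_diabetes_type(drug_classes):
    """Paraphrase of the paper's rule. `drug_classes` is a (hypothetical) set of
    strings describing a patient's filled antidiabetes prescriptions, e.g.
    {"insulin"} or {"insulin", "metformin"}; real claims data would use drug codes."""
    noninsulin = drug_classes - {"insulin"}
    if noninsulin:
        return "type 2"      # any noninsulin antidiabetes medication -> type 2
    if "insulin" in drug_classes:
        return "type 1"      # insulin and nothing else -> type 1
    return "unclassified"    # no antidiabetes prescriptions observed

print(classify_diabetes_type({"insulin"}))               # type 1
print(classify_diabetes_type({"metformin"}))             # type 2
print(classify_diabetes_type({"insulin", "metformin"}))  # type 2 under this rule
```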

When covering limitations of the paper, they observe incidentally in this context that:

“Klingensmith et al. (31) recently reported that in the initial month after diagnosis of type 2 diabetes around 30% of patients were treated with insulin only. Thus, we may have misclassified a small proportion of type 2 cases as type 1 diabetes or vice versa. Despite this, we found that 9% of patients had onset of type 2 diabetes at age <10 years, consistent with the findings of Klingensmith et al. (8%), but higher than reported by the SEARCH for Diabetes in Youth study (<3%) (31,32).”

Some more observations from the paper:

“There were 149,223 patients aged <18 years at first diagnosis of diabetes in the CCE database from 2002 through 2013. […] Type 1 diabetes accounted for a majority of the pediatric patients with diabetes (79%). Among these, 53% were male and 53% were aged 12 to <18 years at onset, while among patients with type 2 diabetes, 60% were female and 79% were aged 12 to <18 years at onset.”

“The overall annual prevalence of all diabetes increased from 1.86 to 2.82 per 1,000 during years 2002–2013; it increased on average by 9.5% per year from 2002 to 2006 and slowly increased by 0.6% after that […] The prevalence of type 1 diabetes increased from 1.48 to 2.32 per 1,000 during the study period (average increase of 8.5% per year from 2002 to 2006 and 1.4% after that; both P values <0.05). The prevalence of type 2 diabetes increased from 0.38 to 0.67 per 1,000 during 2002 through 2006 (average increase of 13.3% per year; P < 0.05) and then dropped from 0.56 to 0.49 per 1,000 during 2007 through 2013 (average decrease of 2.7% per year; P < 0.05). […] Prevalence of any diabetes increased by age, with the highest prevalence in patients aged 12 to <18 years (ranging from 3.47 to 5.71 per 1,000 from 2002 through 2013).” […] The annual prevalence of diabetes increased over the study period mainly because of increases in type 1 diabetes.”
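The paper reports average annual percent changes for sub-periods (I don’t know the exact estimation method they used), but a crude endpoint-based compound annual growth rate already conveys the overall pace of the increase. The calculation below is my own back-of-the-envelope illustration of that arithmetic, not the paper’s method.

```python
def compound_annual_percent_change(first, last, n_years):
    """Crude endpoint-based average annual percent change between two prevalence estimates."""
    return ((last / first) ** (1 / n_years) - 1) * 100

# Overall diabetes prevalence per 1,000 went from 1.86 (2002) to 2.82 (2013), i.e. 11 year-steps:
print(round(compound_annual_percent_change(1.86, 2.82, 11), 1))  # ~3.9% per year overall
```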

“Dabelea et al. (8) reported, based on data from the SEARCH for Diabetes in Youth study, that the annual prevalence of type 1 diabetes increased from 1.48 to 1.93 per 1,000 and from 0.34 to 0.46 per 1,000 for type 2 diabetes from 2001 to 2009 in U.S. youth. In our study, the annual prevalence of type 1 diabetes was 1.48 per 1,000 in 2002 and 2.10 per 1,000 in 2009, which is close to their reported prevalence.”

“We identified 3,161 diabetic nephropathy cases. Among these, 1,509 cases (47.7%) were of specific diabetic nephropathy and 2,253 (71.3%) were classified as probable cases. […] The annual prevalence of diabetic nephropathy in pediatric patients with diabetes increased from 1.16 to 3.44% between 2002 and 2013; it increased by on average 25.7% per year from 2002 to 2005 and slowly increased by 4.6% after that (both P values <0.05).”

Do note that the relationship between nephropathy prevalence and diabetes prevalence is complicated and that you cannot just explain an increase in the prevalence of nephropathy over time easily by simply referring to an increased prevalence of diabetes during the same time period. This would in fact be a very wrong thing to do, in part but not only on account of the data structure employed in this study. One problem which is probably easy to understand is that if more children got diabetes but the same proportion of those new diabetics got nephropathy, the diabetes prevalence would go up but the diabetic nephropathy prevalence would remain fixed; when you calculate the diabetic nephropathy prevalence you implicitly condition on diabetes status. But this just scratches the surface of the issues you encounter when you try to link these variables, because the relationship between the two variables is complicated; there’s an age pattern to diabetes risk, with risk (incidence) increasing with age (up to a point, after which it falls – in most samples I’ve seen in the past peak incidence in pediatric populations is well below the age of 18). However diabetes prevalence increases monotonically with age as long as the age-specific death rate of diabetics is lower than the age-specific incidence, because diabetes is chronic, and then on top of that you have nephropathy-related variables, which display diabetes-related duration-dependence (meaning that although nephropathy risk is also increasing with age when you look at that variable in isolation, that age-risk relationship is confounded by diabetes duration – a type 1 diabetic at the age of 12 who’s had diabetes for 10 years has a higher risk of nephropathy than a 16-year old who developed diabetes the year before). When a newly diagnosed pediatric patient is included in the diabetes sample here this will actually decrease the nephropathy prevalence in the short run, but not in the long run, assuming no changes in diabetes treatment outcomes over time. This is because the probability that that individual has diabetes-related kidney problems as a newly diagnosed child is zero, so he or she will unquestionably only contribute to the denominator during the first years of illness (the situation in the middle-aged type 2 context is different; here you do sometimes have newly-diagnosed patients who have developed complications already). This is one reason why it would be quite wrong to say that increased diabetes prevalence in this sample is the reason why diabetic nephropathy is increasing as well. Unless the time period you look at is very long (e.g. you have a setting where you follow all individuals with a diagnosis until the age of 18), increasing prevalence of one condition may well be expected to have a negative impact on the estimated risk of associated conditions, if those associated conditions display duration-dependence (which all major diabetes complications do). A second factor supporting a default assumption of increasing incidence of diabetes leading to an expected decreasing rate of diabetes-related complications is of course the fact that treatment options have tended to increase over time, and especially if you take a long view (look back 30-40 years) the increase in treatment options and improved medical technology have led to improved metabolic control and better outcomes.
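To make the conditioning/duration point concrete, here is a deliberately stylized toy calculation of my own, with made-up incidence and risk numbers and ignoring mortality and ageing out of the pediatric age range: if incidence rises while duration-specific complication risk stays fixed, the cross-sectional complication prevalence among prevalent cases goes down, simply because the duration distribution shifts toward recently diagnosed patients.

```python
import numpy as np

def complication_prevalence(new_cases_per_year, risk_given_duration):
    """Cross-sectional complication prevalence among prevalent cases in the final
    year, given yearly counts of newly diagnosed patients (earliest year first).
    Deliberately ignores mortality and ageing out of the pediatric age range."""
    cases = np.asarray(new_cases_per_year, dtype=float)
    durations = (len(cases) - 1) - np.arange(len(cases))   # cohort diagnosed in year t has duration T - t
    risks = np.array([risk_given_duration(d) for d in durations])
    return (cases * risks).sum() / cases.sum()

risk = lambda d: min(0.02 * d, 0.5)                      # invented duration-dependent complication risk
constant_incidence = [100] * 10
rising_incidence = [100 * 1.1 ** t for t in range(10)]   # incidence growing 10% per year

print(round(complication_prevalence(constant_incidence, risk), 3))  # 0.09
print(round(complication_prevalence(rising_incidence, risk), 3))    # ~0.075: lower, despite identical duration-specific risks
```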

That both variables grew over time might be taken to indicate that both more children got diabetes and that a larger proportion of this increased number of children with diabetes developed kidney problems, but this stuff is a lot more complicated than it might look and it’s in particular important to keep in mind that, say, the 2005 sample and the 2010 sample do not include the same individuals, although there’ll of course be some overlap; in age-stratified samples like this you always have some level of implicit continuous replacement, with newly diagnosed patients entering and replacing the 18-year olds who leave the sample. As long as prevalence is constant over time, associated outcome variables may be reasonably easy to interpret, but when you have dynamic samples as well as increasing prevalence over time it gets difficult to say much with any degree of certainty unless you crunch the numbers in a lot of detail (and it might be difficult even if you do that). A factor I didn’t mention above but which is of course also relevant is that you need to be careful about how to interpret prevalence rates when you look at complications with high mortality rates (and late-stage diabetic nephropathy is indeed a complication with high mortality); in such a situation improvements in treatment outcomes may have large effects on prevalence rates but no effect on incidence. Increased prevalence is not always bad news, sometimes it is good news indeed. Gleevec substantially increased the prevalence of CML.

In terms of the prevalence-outcomes (/complication risk) connection, there are also in my opinion reasons to assume that there may be multiple causal pathways between prevalence and outcomes. For example a very low prevalence of a condition in a given area may mean that fewer specialists are educated to take care of these patients than would be the case for an area with a higher prevalence, and this may translate into a more poorly developed care infrastructure. Greatly increasing prevalence may on the other hand lead to a lower level of care for all patients with the illness, not just the newly diagnosed ones, due to binding budget constraints and care rationing. And why might you have changes in prevalence; might they not sometimes rather be related to changes in diagnostic practices, rather than changes in the True* prevalence? If that’s the case, you might not be comparing apples to apples when you’re comparing the evolving complication rates. There are in my opinion many reasons to believe that the relationship between chronic conditions and the complication rates of these conditions is far from simple to model.

All this said, kidney problems in children with diabetes is still rare, compared to the numbers you see when you look at adult samples with longer diabetes duration. It’s also worth distinguishing between microalbuminuria and overt nephropathy; children rarely proceed to develop diabetes-related kidney failure, although poor metabolic control may mean that they do develop this complication later, in early adulthood. As they note in the paper:

“It has been reported that overt diabetic nephropathy and kidney failure caused by either type 1 or type 2 diabetes are uncommon during childhood or adolescence (24). In this study, the annual prevalence of diabetic nephropathy for all cases ranged from 1.16 to 3.44% in pediatric patients with diabetes and was extremely low in the whole pediatric population (range 2.15 to 9.70 per 100,000), confirming that diabetic nephropathy is a very uncommon condition in youth aged <18 years. We observed that the prevalence of diabetic nephropathy increased in both specific and unspecific cases before 2006, with a leveling off of the specific nephropathy cases after 2005, while the unspecific cases continued to increase.”

iv. Adherence to Oral Glucose-Lowering Therapies and Associations With 1-Year HbA1c: A Retrospective Cohort Analysis in a Large Primary Care Database.

“Between a third and a half of medicines prescribed for type 2 diabetes (T2DM), a condition in which multiple medications are used to control cardiovascular risk factors and blood glucose (1,2), are not taken as prescribed (36). However, estimates vary widely depending on the population being studied and the way in which adherence to recommended treatment is defined.”

“A number of previous studies have used retrospective databases of electronic health records to examine factors that might predict adherence. A recent large cohort database examined overall adherence to oral therapy for T2DM, taking into account changes of therapy. It concluded that overall adherence was 69%, with individuals newly started on treatment being significantly less likely to adhere (19).”

“The impact of continuing to take glucose-lowering medicines intermittently, but not as recommended, is unknown. Medication possession (expressed as a ratio of actual possession to expected possession), derived from prescribing records, has been identified as a valid adherence measure for people with diabetes (7). Previous studies have been limited to small populations in managed-care systems in the U.S. and focused on metformin and sulfonylurea oral glucose-lowering treatments (8,9). Further studies need to be carried out in larger groups of people that are more representative of the general population.

The Clinical Practice Research Database (CPRD) is a long established repository of routine clinical data from more than 13 million patients registered with primary care services in England. […] The Genetics of Diabetes and Audit Research Tayside Study (GoDARTS) database is derived from integrated health records in Scotland with primary care, pharmacy, and hospital data on 9,400 patients with diabetes. […] We conducted a retrospective cohort study using [these databases] to examine the prevalence of nonadherence to treatment for type 2 diabetes and investigate its potential impact on HbA1c reduction stratified by type of glucose-lowering medication.”

“In CPRD and GoDARTS, 13% and 15% of patients, respectively, were nonadherent. Proportions of nonadherent patients varied by the oral glucose-lowering treatment prescribed (range 8.6% [thiazolidinedione] to 18.8% [metformin]). Nonadherent, compared with adherent, patients had a smaller HbA1c reduction (0.4% [4.4 mmol/mol] and 0.46% [5.0 mmol/mol] for CPRD and GoDARTs, respectively). Difference in HbA1c response for adherent compared with nonadherent patients varied by drug (range 0.38% [4.1 mmol/mol] to 0.75% [8.2 mmol/mol] lower in adherent group). Decreasing levels of adherence were consistently associated with a smaller reduction in HbA1c.”

“These findings show an association between adherence to oral glucose-lowering treatment, measured by the proportion of medication obtained on prescription over 1 year, and the corresponding decrement in HbA1c, in a population of patients newly starting treatment and continuing to collect prescriptions. The association is consistent across all commonly used oral glucose-lowering therapies, and the findings are consistent between the two data sets examined, CPRD and GoDARTS. Nonadherent patients, taking on average <80% of the intended medication, had about half the expected reduction in HbA1c. […] Reduced medication adherence for commonly used glucose-lowering therapies among patients persisting with treatment is associated with smaller HbA1c reductions compared with those taking treatment as recommended. Differences observed in HbA1c responses to glucose-lowering treatments may be explained in part by their intermittent use.”

“Low medication adherence is related to increased mortality (20). The mean difference in HbA1c between patients with MPR <80% and ≥80% is between 0.37% and 0.55% (4 mmol/mol and 6 mmol/mol), equivalent to up to a 10% reduction in death or an 18% reduction in diabetes complications (21).”
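For readers unfamiliar with the adherence measure used here: the medication possession ratio is essentially dispensed days of supply divided by days of follow-up, with nonadherence in these studies defined as MPR < 80%. The sketch below is my own illustration; the capping at 1.0 is a common (though not universal) convention, and the numbers are invented.

```python
def medication_possession_ratio(days_supplied_per_fill, observation_days=365):
    """MPR: total days of medication supplied divided by days of follow-up,
    capped at 1.0 (a common, though not universal, convention)."""
    return min(sum(days_supplied_per_fill) / observation_days, 1.0)

# Invented patient: ten 28-day prescriptions collected over one year of follow-up.
mpr = medication_possession_ratio([28] * 10)
print(round(mpr, 2), "nonadherent" if mpr < 0.80 else "adherent")  # 0.77 nonadherent
```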

v. Health Care Transition in Young Adults With Type 1 Diabetes: Perspectives of Adult Endocrinologists in the U.S.

“Empiric data are limited on best practices in transition care, especially in the U.S. (10,1316). Prior research, largely from the patient perspective, has highlighted challenges in the transition process, including gaps in care (13,1719); suboptimal pediatric transition preparation (13,20); increased post-transition hospitalizations (21); and patient dissatisfaction with the transition experience (13,1719). […] Young adults with type 1 diabetes transitioning from pediatric to adult care are at risk for adverse outcomes. Our objective was to describe experiences, resources, and barriers reported by a national sample of adult endocrinologists receiving and caring for young adults with type 1 diabetes.”

“We received responses from 536 of 4,214 endocrinologists (response rate 13%); 418 surveys met the eligibility criteria. Respondents (57% male, 79% Caucasian) represented 47 states; 64% had been practicing >10 years and 42% worked at an academic center. Only 36% of respondents reported often/always reviewing pediatric records and 11% reported receiving summaries for transitioning young adults with type 1 diabetes, although >70% felt that these activities were important for patient care.”

“A number of studies document deficiencies in provider hand-offs across other chronic conditions and point to the broader relevance of our findings. For example, in two studies of inflammatory bowel disease, adult gastroenterologists reported inadequacies in young adult transition preparation (31) and infrequent receipt of medical histories from pediatric providers (32). In a study of adult specialists caring for young adults with a variety of chronic diseases (33), more than half reported that they had no contact with the pediatric specialists.

Importantly, more than half of the endocrinologists in our study reported a need for increased access to mental health referrals for young adult patients with type 1 diabetes, particularly in nonacademic settings. Report of barriers to care was highest for patient scenarios involving mental health issues, and endocrinologists without easy access to mental health referrals were significantly more likely to report barriers to diabetes management for young adults with psychiatric comorbidities such as depression, substance abuse, and eating disorders.”

“Prior research (34,35) has uncovered the lack of mental health resources in diabetes care. In the large cross-national Diabetes Attitudes, Wishes and Needs (DAWN) study (36) […] diabetes providers often reported not having the resources to manage mental health problems; half of specialist diabetes physicians felt unable to provide psychiatric support for patients and one-third did not have ready access to outside expertise in emotional or psychiatric matters. Our results, which resonate with the DAWN findings, are particularly concerning in light of the vulnerability of young adults with type 1 diabetes for adverse medical and mental health outcomes (4,34,37,38). […] In a recent report from the Mental Health Issues of Diabetes conference (35), which focused on type 1 diabetes, a major observation included the lack of trained mental health professionals, both in academic centers and the community, who are knowledgeable about the mental health issues germane to diabetes.”

August 3, 2017 Posted by | Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Pharmacology, Psychiatry, Psychology, Statistics, Studies | Leave a comment

How Species Interact

There are multiple reasons why I have not covered Arditi and Ginzburg’s book before, but none of them are related to the quality of the book’s coverage. It’s a really nice book. However the coverage is somewhat technical and model-focused, which makes it harder to blog than other kinds of books. Also, the version of the book I read was a hardcover ‘paper book’ version, and ‘paper books’ take a lot more work for me to cover than do e-books.

I should probably get it out of the way here at the start of the post that if you’re interested in ecology, predator-prey dynamics, etc., this is a book you would be well advised to read; or, if you don’t read the book, you should at least familiarize yourself with the ideas therein e.g. through having a look at some of Arditi & Ginzburg’s articles on these topics. I should however note that I don’t actually think skipping the book and having a look at some articles instead will necessarily be a labour-saving strategy; the book is not particularly long and it’s to the point, so although it’s not a particularly easy read their case for ratio dependence is actually somewhat easy to follow – if you make the effort – in the sense that I believe how different related ideas and observations are linked is quite likely better expounded upon in the book than in their articles. They presumably wrote the book precisely in order to provide a concise yet coherent overview.

I have had some trouble figuring out how to cover this book, and I’m still not quite sure what might be/have been the best approach; when covering technical books I’ll often skip a lot of detail and math and try to stick to what might be termed ‘the main ideas’ when quoting from such books, but there’s a clear limit as to how many of the technical details included in a book like this it is possible to skip if you still want to actually talk about the stuff covered in the work, and this sometimes makes blogging such books awkward. These authors spend a lot of effort talking about how different ecological models work and which sort of conclusions these different models may lead to in different contexts, and this kind of stuff is a very big part of the book.

I’m not sure if you strictly need to have read an ecology textbook or two before you read this one in order to be able to follow the coverage, but I know that I personally derived some benefit from having read Gurney & Nisbet’s ecology text in the past and I did look up stuff in that book a few times along the way, e.g. when reminding myself what a Holling type 2 functional response is and how models with such a functional response pattern behave. ‘In theory’ I assume one might argue that you could look up all the relevant concepts along the way without any background knowledge of ecology – assuming you have a decent understanding of basic calculus/differential equations, linear algebra, equilibrium dynamics, etc. (…systems analysis? It’s hard for me to know and outline exactly which sources I’ve read in the past which helped make this book easier to read than it otherwise would have been, but suffice it to say that if you look at the page count and think that this will be a quick/easy read, it will be that only if you’ve read more than a few books on ‘related topics’, broadly defined, in the past), but I wouldn’t advise reading the book if all you know is high school math – the book will be incomprehensible to you, and you won’t make it. I ended up concluding that it would simply be too much work to try to make this post ‘easy’ to read for people who are unfamiliar with these topics and have not read the book, so although I’ve hardly gone out of my way to make the coverage hard to follow, the blog coverage that is to follow is mainly for my own benefit.

First a few relevant links, then some quotes and comments.

Lotka–Volterra equations.
Ecosystem model.
Arditi–Ginzburg equations. (Yep, these equations are named after the authors of this book).
Nicholson–Bailey model.
Functional response.
Monod equation.
Rosenzweig-MacArthur predator-prey model.
Trophic cascade.
Underestimation of mutual interference of predators.
Coupling in predator-prey dynamics: Ratio Dependence.
Michaelis–Menten kinetics.
Trophic level.
Advection–diffusion equation.
Paradox of enrichment. [Two quotes from the book: “actual systems do not behave as Rosensweig’s model predict” + “When ecologists have looked for evidence of the paradox of enrichment in natural and laboratory systems, they often find none and typically present arguments about why it was not observed”]
Predator interference emerging from trophotaxis in predator–prey systems: An individual-based approach.
Directed movement of predators and the emergence of density dependence in predator-prey models.

“Ratio-dependent predation is now covered in major textbooks as an alternative to the standard prey-dependent view […]. One of this book’s messages is that the two simple extreme theories, prey dependence and ratio dependence, are not the only alternatives: they are the ends of a spectrum. There are ecological domains in which one view works better than the other, with an intermediate view also being a possible case. […] Our years of work spent on the subject have led us to the conclusion that, although prey dependence might conceivably be obtained in laboratory settings, the common case occurring in nature lies close to the ratio-dependent end. We believe that the latter, instead of the prey-dependent end, can be viewed as the “null model of predation.” […] we propose the gradual interference model, a specific form of predator-dependent functional response that is approximately prey dependent (as in the standard theory) at low consumer abundances and approximately ratio dependent at high abundances. […] When density is low, consumers do not interfere and prey dependence works (as in the standard theory). When consumer density is sufficiently high, interference causes ratio dependence to emerge. In the intermediate densities, predator-dependent models describe partial interference.”

“Studies of food chains are on the edge of two domains of ecology: population and community ecology. The properties of food chains are determined by the nature of their basic link, the interaction of two species, a consumer and its resource, a predator and its prey.1 The study of this basic link of the chain is part of population ecology while the more complex food webs belong to community ecology. This is one of the main reasons why understanding the dynamics of predation is important for many ecologists working at different scales.”

“We have named predator-dependent the functional responses of the form g = g(N,P), where the predator density P acts (in addition to N [prey abundance, US]) as an independent variable to determine the per capita kill rate […] predator-dependent functional response models have one more parameter than the prey-dependent or the ratio-dependent models. […] The main interest that we see in these intermediate models is that the additional parameter can provide a way to quantify the position of a specific predator-prey pair of species along a spectrum with prey dependence at one end and ratio dependence at the other end:

g(N) <- g(N,P) -> g(N/P) (1.21)

In the Hassell-Varley and Arditi-Akçakaya models […] the mutual interference parameter m plays the role of a cursor along this spectrum, from m = 0 for prey dependence to m = 1 for ratio dependence. Note that this theory does not exclude that strong interference goes “beyond ratio dependence,” with m > 1.2 This is also called overcompensation. […] In this book, rather than being interested in the interference parameters per se, we use predator-dependent models to determine, either parametrically or nonparametrically, which of the ends of the spectrum (1.21) better describes predator-prey systems in general.”
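
To make the role of the interference parameter m a bit more concrete, here is a small illustration (my own sketch, not from the book) using the Hassell–Varley/Arditi–Akçakaya form of the functional response, in which the searching efficiency scales as P^(−m); all parameter values are purely illustrative:

```python
def kill_rate(N, P, a=1.0, h=0.1, m=0.0):
    """Per-predator consumption rate in the Hassell-Varley/Arditi-Akcakaya
    form, where the searching efficiency scales as a * P**(-m).
    m = 0 gives the prey-dependent Holling type 2 response,
    m = 1 gives the ratio-dependent Arditi-Ginzburg response."""
    return a * N / (P**m + a * h * N)

N = 50.0                      # prey density (arbitrary illustrative units)
for P in (1.0, 5.0, 25.0):    # increasing predator density
    rates = [kill_rate(N, P, m=m) for m in (0.0, 0.5, 1.0)]
    print(f"P = {P:5.1f}   g(m=0) = {rates[0]:5.2f}   "
          f"g(m=0.5) = {rates[1]:5.2f}   g(m=1) = {rates[2]:5.2f}")
```

With m = 0 the per-predator kill rate is unaffected by how many predators are present; with m = 1 it depends only on the ratio N/P, so that adding predators dilutes each individual’s share; intermediate values of m correspond to the partial interference mentioned in the quote above.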

“[T]he fundamental problem of the Lotka-Volterra and the Rosenzweig-MacArthur dynamic models lies in the functional response and in the fact that this mathematical function is assumed not to depend on consumer density. Since this function measures the number of prey captured per consumer per unit time, it is a quantity that should be accessible to observation. This variable could be apprehended either on the fast behavioral time scale or on the slow demographic time scale. These two approaches need not necessarily reveal the same properties: […] a given species could display a prey-dependent response on the fast scale and a predator-dependent response on the slow scale. The reason is that, on a very short scale, each predator individually may “feel” virtually alone in the environment and react only to the prey that it encounters. On the long scale, the predators are more likely to be affected by the presence of conspecifics, even without direct encounters. In the demographic context of this book, it is the long time scale that is relevant. […] if predator dependence is detected on the fast scale, then it can be inferred that it must be present on the slow scale; if predator dependence is not detected on the fast scale, it cannot be inferred that it is absent on the slow scale.”

Some related thoughts. A different way to think about this – which they don’t mention in the book, but which sprang to mind as I was reading it – is to think about this stuff in terms of a formal predator territorial overlap model and then ask yourself this question: Assume there’s zero territorial overlap – does this mean that the existence of conspecifics does not matter? The answer is of course no. The sizes of the individual patches/territories may be greatly influenced by the predator density even in such a context. Also, the territorial area available to potential offspring (certainly a fitness-relevant parameter) may be greatly influenced by the number of competitors inhabiting the surrounding territories. In relation to the last part of the quote it’s easy to see that in a model with significant territorial overlap you don’t need direct behavioural interaction among predators for the overlap to be relevant; even if two bears never meet, if one of them eats a fawn that the other one would have come across two days later, such indirect influences may be important for prey availability. Of course as prey tend to be mobile, even if predator territories are static and non-overlapping in a geographic sense, they might not be in a functional sense. Moving on…

“In [chapter 2 we] attempted to assess the presence and the intensity of interference in all functional response data sets that we could gather in the literature. Each set must be trivariate, with estimates of the prey consumed at different values of prey density and different values of predator densities. Such data sets are not very abundant because most functional response experiments present in the literature are simply bivariate, with variations of the prey density only, often with a single predator individual, ignoring the fact that predator density can have an influence. This results from the usual presentation of functional responses in textbooks, which […] focus only on the influence of prey density.
Among the data sets that we analyzed, we did not find a single one in which the predator density did not have a significant effect. This is a powerful empirical argument against prey dependence. Most systems lie somewhere on the continuum between prey dependence (m=0) and ratio dependence (m=1). However, they do not appear to be equally distributed. The empirical evidence provided in this chapter suggests that they tend to accumulate closer to the ratio-dependent end than to the prey-dependent end.”
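
To illustrate what this kind of estimation exercise involves, here is a minimal sketch of my own (not the authors’ actual procedure – they use both parametric and nonparametric approaches in the book): I generate a synthetic trivariate data set from the Hassell–Varley/Arditi–Akçakaya response with a known interference parameter and then recover it by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def g(X, a, h, m):
    """Hassell-Varley/Arditi-Akcakaya functional response; X = (N, P)."""
    N, P = X
    return a * N / (P**m + a * h * N)

# Synthetic 'trivariate' data: consumption measured at several prey and
# predator densities, generated with known parameters plus lognormal noise.
N = np.tile([5.0, 10.0, 20.0, 40.0, 80.0], 4)
P = np.repeat([1.0, 2.0, 4.0, 8.0], 5)
true = dict(a=1.0, h=0.1, m=0.7)
y = g((N, P), **true) * rng.lognormal(sigma=0.1, size=N.size)

(a_hat, h_hat, m_hat), _ = curve_fit(g, (N, P), y, p0=(0.5, 0.05, 0.5))
print(f"estimated interference m = {m_hat:.2f} (true value {true['m']})")
```

The point of the exercise is that a bivariate experiment which only varies N (often with a single predator individual) simply cannot identify m – which is precisely the authors’ complaint about most of the published functional response experiments.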

“Equilibrium properties result from the balanced predator-prey equations and contain elements of the underlying dynamic model. For this reason, the response of equilibria to a change in model parameters can inform us about the structure of the underlying equations. To check the appropriateness of the ratio-dependent versus prey-dependent views, we consider the theoretical equilibrium consequences of the two contrasting assumptions and compare them with the evidence from nature. […] According to the standard prey-dependent theory, in reference to [an] increase in primary production, the responses of the populations strongly depend on their level and on the total number of trophic levels. The last, top level always responds proportionally to F [primary input]. The next to the last level always remains constant: it is insensitive to enrichment at the bottom because it is perfectly controled [sic] by the last level. The first, primary producer level increases if the chain length has an odd number of levels, but declines (or stays constant with a Lotka-Volterra model) in the case of an even number of levels. According to the ratio-dependent theory, all levels increase proportionally, independently of how many levels are present. The present purpose of this chapter is to show that the second alternative is confirmed by natural data and that the strange predictions of the prey-dependent theory are unsupported.”
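
The contrast is easy to see by computing the interior equilibria of the two standard two-level models as a function of the enrichment level K (a sketch of my own, with made-up parameter values and standard logistic prey growth; note that I am only looking at where the equilibria are located – their stability, cf. the paradox of enrichment, is a separate matter):

```python
def rm_equilibrium(K, r=1.0, a=1.0, h=0.1, e=0.5, mu=1.0):
    """Interior equilibrium of the prey-dependent Rosenzweig-MacArthur model
    dN/dt = r N (1 - N/K) - a N P / (1 + a h N),
    dP/dt = e a N P / (1 + a h N) - mu P.
    The prey equilibrium is pinned by the predator's parameters alone."""
    N = mu / (a * (e - h * mu))
    P = r * (1.0 - N / K) * (1.0 + a * h * N) / a
    return N, P

def ag_equilibrium(K, r=1.0, a=1.0, h=0.1, e=0.5, mu=1.0):
    """Interior equilibrium of the ratio-dependent Arditi-Ginzburg model
    dN/dt = r N (1 - N/K) - a N P / (P + a h N),
    dP/dt = e a N P / (P + a h N) - mu P.
    Both equilibrium densities scale linearly with the enrichment level K."""
    N = K * (1.0 - a * (e - h * mu) / (r * e))
    P = a * (e / mu - h) * N
    return N, P

for K in (5.0, 10.0, 20.0, 40.0):   # increasing enrichment at the bottom
    n1, p1 = rm_equilibrium(K)
    n2, p2 = ag_equilibrium(K)
    print(f"K = {K:4.0f}   prey-dependent: N* = {n1:4.2f}, P* = {p1:4.2f}   "
          f"ratio-dependent: N* = {n2:4.2f}, P* = {p2:4.2f}")
```

With these (made-up) numbers the prey equilibrium stays stuck at N* = 2.5 in the prey-dependent model no matter how much the system is enriched, and only the top level gains, whereas in the ratio-dependent model both N* and P* grow in proportion to K – which is the pattern the authors argue the natural data actually show.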

“If top predators are eliminated or reduced in abundance, models predict that the sequential lower trophic levels must respond by changes of alternating signs. For example, in a three-level system of plants-herbivores-predators, the reduction of predators leads to the increase of herbivores and the consequential reduction in plant abundance. This response is commonly called the trophic cascade. In a four-level system, the bottom level will increase in response to harvesting at the top. These predicted responses are quite intuitive and are, in fact, true for both short-term and long-term responses, irrespective of the theory one employs. […] A number of excellent reviews have summarized and meta-analyzed large amounts of data on trophic cascades in food chains […] In general, the cascading reaction is strongest in lakes, followed by marine systems, and weakest in terrestrial systems. […] Any theory that claims to describe the trophic chain equilibria has to produce such cascading when top predators are reduced or eliminated. It is well known that the standard prey-dependent theory supports this view of top-down cascading. It is not widely appreciated that top-down cascading is likewise a property of ratio-dependent trophic chains. […] It is [only] for equilibrial responses to enrichment at the bottom that predictions are strikingly different according to the two theories”.

As the book does spend a little time on this I should perhaps briefly interject here that the above paragraph should not be taken to indicate that the two types of models provide identical predictions in the top-down cascading context in all cases; both predict cascading, but there are nonetheless some subtle differences between the models here as well. Some of these differences are however quite hard to test.

“[T]he traditional Lotka-Volterra interaction term […] is nothing other than the law of mass action of chemistry. It assumes that predator and prey individuals encounter each other randomly in the same way that molecules interact in a chemical solution. Other prey-dependent models, like Holling’s, derive from the same idea. […] an ecological system can only be described by such a model if conspecifics do not interfere with each other and if the system is sufficiently homogeneous […] we will demonstrate that spatial heterogeneity, be it in the form of a prey refuge or in the form of predator clusters, leads to emergence of gradual interference or of ratio dependence when the functional response is observed at the population level. […] We present two mechanistic individual-based models that illustrate how, with gradually increasing predator density and gradually increasing predator clustering, interference can become gradually stronger. Thus, a given biological system, prey dependent at low predator density, can gradually become ratio dependent at high predator density. […] ratio dependence is a simple way of summarizing the effects induced by spatial heterogeneity, while the prey dependent [models] (e.g., Lotka-Volterra) is more appropriate in homogeneous environments.”

“[W]e consider that a good model of interacting species must be fundamentally invariant to a proportional change of all abundances in the system. […] Allowing interacting populations to expand in balanced exponential growth makes the laws of ecology invariant with respect to multiplying interacting abundances by the same constant, so that only ratios matter. […] scaling invariance is required if we wish to preserve the possibility of joint exponential growth of an interacting pair. […] a ratio-dependent model allows for joint exponential growth. […] Neither the standard prey-dependent models nor the more general predator-dependent models allow for balanced growth. […] In our view, communities must be expected to expand exponentially in the presence of unlimited resources. Of course, limiting factors ultimately stop this expansion just as they do for a single species. With our view, it is the limiting resources that stop the joint expansion of the interacting populations; it is not directly due to the interactions themselves. This partitioning of the causes is a major simplification that traditional theory implies only in the case of a single species.”
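
The invariance argument can be checked numerically: write down the per-capita growth rates of an interacting pair with unlimited resources (no carrying capacity), multiply both abundances by the same constant, and see whether anything changes. A small sketch of my own, with arbitrary parameter values:

```python
def per_capita_rates(N, P, ratio_dependent, r=1.0, a=1.0, h=0.1, e=0.5, mu=0.3):
    """Per-capita growth rates of prey and predator for dN/dt = r N - g P and
    dP/dt = (e g - mu) P, with g either prey dependent or ratio dependent."""
    g = a * N / (P + a * h * N) if ratio_dependent else a * N / (1.0 + a * h * N)
    return r - g * P / N, e * g - mu

for scale in (1, 10, 100):   # multiply both abundances by the same factor
    N, P = 20.0 * scale, 5.0 * scale
    prey_dep = per_capita_rates(N, P, ratio_dependent=False)
    ratio_dep = per_capita_rates(N, P, ratio_dependent=True)
    print(f"scale {scale:3d}:  prey-dependent rates = "
          f"({prey_dep[0]:6.3f}, {prey_dep[1]:6.3f})   "
          f"ratio-dependent rates = ({ratio_dep[0]:6.3f}, {ratio_dep[1]:6.3f})")
```

Under ratio dependence the per-capita rates only ‘see’ the ratio N/P, so scaling both populations up by the same factor changes nothing and balanced exponential growth is possible; under prey dependence the rates shift as the system is scaled, so the interaction term itself rules out joint proportional growth.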

August 1, 2017 Posted by | Biology, Books, Chemistry, Ecology, Mathematics, Studies | Leave a comment

Words

Most of these words are words I encountered while reading Rex Stout novels; to be more specific, perhaps 70 of these 80 words came from the following Stout novels: And Be a Villain, Trouble in Triplicate, The Second Confession, Three Doors to Death, In the Best Families, Curtains for Three, Murder by the Book, Triple Jeopardy, Prisoner’s Base, and The Golden Spiders.

A few of the words have also been included in previous posts of this kind, but the great majority of them are words which I have not previously blogged.

Percipient. Mantlet. Crick. Sepal. Shad. Lam. Gruff. Desist. Arachnology. Raffia. Electroplate. Runt. Temerarious. Temerity. Grump. Chousing. Gyp. Percale. Piddling. Dubiety.

Consommé. Pentathlon. Glower. Divvy. Styptic. Pattycake. Sagacity. Folderol. Glisten. Tassel. Bruit. Petiole. Zwieback. Hock. Flub. Shamus. Concessionaire. Pleat. Echelon. Colleen.

Apodictical. Glisten. Tortfeasor. Arytenoid. Cricoid. Splenetic. Zany. Tint. Boorish. Shuttlecock. Rangy. Gangly. Kilter. Caracul. Adventitious. Malefic. Rancor. Seersucker. Stooge. Frontispiece.

Flange. Avocation. Kobold. Platen. Forlorn. Sourpuss. Celadon. Griddle. Malum. Moot. Albacore. Gaff. Exigency. Cartado. Witling. Flounce. Glom. Pennant. Vernier. Blat.

July 28, 2017 Posted by | Books, Language | 2 Comments

A New Classification System for Diabetes: Rationale and Implications of the β-Cell–Centric Classification Schema

When I started writing this post I intended to write a standard diabetes post covering a variety of different papers, but while working on one of the papers I intended to include I realized that I felt I had to cover it in a lot of detail, so I figured I might as well devote a separate post to it. Here’s a link to the paper: The Time Is Right for a New Classification System for Diabetes: Rationale and Implications of the β-Cell–Centric Classification Schema.

I have frequently discussed the problem of how best to think about and categorize the various disorders of glucose homeostasis which are currently lumped together into the various discrete diabetes categories, both online and offline – see e.g. the last few paragraphs of this recent post. I have often noted in such contexts that simplistic and very large ‘boxes’ like ‘type 1’ and ‘type 2’ leave out a lot of details, and that some of the details that are lost by employing such a categorization scheme might well be treatment-relevant in some contexts. Individualized medicine is however expensive, so I still consider it an open question to what extent valuable information – which is to say, information that could potentially be used cost-effectively in the treatment context – is lost on account of the current diagnostic practices, but information is certainly lost and treatment options potentially neglected. Relatedly, what’s not cost-effective today may well be tomorrow.

Given that I decided to devote an entire post to this paper, it should go without saying that I consider it a must-read if you’re interested in these topics. I have quoted extensively from the paper below:

“The current classification system presents challenges to the diagnosis and treatment of patients with diabetes mellitus (DM), in part due to its conflicting and confounding definitions of type 1 DM, type 2 DM, and latent autoimmune diabetes of adults (LADA). The current schema also lacks a foundation that readily incorporates advances in our understanding of the disease and its treatment. For appropriate and coherent therapy, we propose an alternate classification system. The β-cell–centric classification of DM is a new approach that obviates the inherent and unintended confusions of the current system. The β-cell–centric model presupposes that all DM originates from a final common denominator — the abnormal pancreatic β-cell. It recognizes that interactions between genetically predisposed β-cells with a number of factors, including insulin resistance (IR), susceptibility to environmental influences, and immune dysregulation/inflammation, lead to the range of hyperglycemic phenotypes within the spectrum of DM. Individually or in concert, and often self-perpetuating, these factors contribute to β-cell stress, dysfunction, or loss through at least 11 distinct pathways. Available, yet underutilized, treatments provide rational choices for personalized therapies that target the individual mediating pathways of hyperglycemia at work in any given patient, without the risk of drug-related hypoglycemia or weight gain or imposing further burden on the β-cells.”

“The essential function of a classification system is as a navigation tool that helps direct research, evaluate outcomes, establish guidelines for best practices for prevention and care, and educate on all of the above. Diabetes mellitus (DM) subtypes as currently categorized, however, do not fit into our contemporary understanding of the phenotypes of diabetes (16). The inherent challenges of the current system, together with the limited knowledge that existed at the time of the crafting of the current system, yielded definitions for type 1 DM, type 2 DM, and latent autoimmune diabetes in adults (LADA) that are not distinct and are ambiguous and imprecise.”

“Discovery of the role played by autoimmunity in the pathogenesis of type 1 DM created the assumption that type 1 DM and type 2 DM possess unique etiologies, disease courses, and, consequently, treatment approaches. There exists, however, overlap among even the most “typical” patient cases. Patients presenting with otherwise classic insulin resistance (IR)-associated type 2 DM may display hallmarks of type 1 DM. Similarly, obesity-related IR may be observed in patients presenting with “textbook” type 1 DM (7). The late presentation of type 1 DM provides a particular challenge for the current classification system, in which this subtype of DM is generally termed LADA. Leading diabetes organizations have not arrived at a common definition for LADA (5). There has been little consensus as to whether this phenotype constitutes a form of type 2 DM with early or fast destruction of β-cells, a late manifestation of type 1 DM (8), or a distinct entity with its own genetic footprint (5). Indeed, current parameters are inadequate to clearly distinguish any of the subforms of DM (Fig. 1).

Figure 1 from the paper: https://i2.wp.com/care.diabetesjournals.org/content/diacare/39/2/179/F1.medium.gif

The use of IR to define type 2 DM similarly needs consideration. The fact that many obese patients with IR do not develop DM indicates that IR is insufficient to cause type 2 DM without predisposing factors that affect β-cell function (9).”

“The current classification schema imposes unintended constraints on individualized medicine. Patients diagnosed with LADA who retain endogenous insulin production may receive “default” insulin therapy as treatment of choice. This decision is guided largely by the categorization of LADA within type 1 DM, despite the capacity for endogenous insulin production. Treatment options that do not pose the risks of hypoglycemia or weight gain might be both useful and preferable for LADA but are typically not considered beyond use in type 2 DM (10). […] We believe that there is little rationale for limiting choice of therapy solely on the current definitions of type 1 DM, type 2 DM, and LADA. We propose that choice of therapy should be based on the particular mediating pathway(s) of hyperglycemia present in each individual patient […] the issue is not “what is LADA” or any clinical presentation of DM under the current system. The issue is the mechanisms and rate of destruction of β-cells at work in all DM. We present a model that provides a more logical approach to classifying DM: the β-cell–centric classification of DM. In this schema, the abnormal β-cell is recognized as the primary defect in DM. The β-cell–centric classification system recognizes the interplay of genetics, IR, environmental factors, and inflammation/immune system on the function and mass of β-cells […]. Importantly, this model is universal for the characterization of DM. The β-cell–centric concept can be applied to DM arising in genetically predisposed β-cells, as well as in strongly genetic IR syndromes, such as the Rabson-Mendenhall syndrome (28), which may exhaust nongenetically predisposed β-cells. Finally, the β-cell–centric classification of all DM supports best practices in the management of DM by identifying mediating pathways of hyperglycemia that are operative in each patient and directing treatment to those specific dysfunctions.”

“A key premise is that the mediating pathways of hyperglycemia are common across prediabetes, type 1 DM, type 2 DM, and other currently defined forms of DM. Accordingly, we believe that the current antidiabetes armamentarium has broader applicability across the spectrum of DM than is currently utilized.

The ideal treatment paradigm would be one that uses the least number of agents possible to target the greatest number of mediating pathways of hyperglycemia operative in the given patient. It is prudent to use agents that will help patients reach target A1C levels without introducing drug-related hypoglycemia or weight gain. Despite the capacity of insulin therapy to manage glucotoxicity, there is a concern for β-cell damage due to IR that has been exacerbated by exogenous insulin-induced hyperinsulinemia and weight gain (41).”

“We propose that the β-cell–centric model is a conceptual framework that could help optimize processes of care for DM. A1C, fasting blood glucose, and postprandial glucose testing remain the basis of DM diagnosis and monitoring. Precision medicine in the treatment of DM could be realized by additional diagnostic testing that could include C-peptide (1), islet cell antibodies or other markers of inflammation (1,65), measures of IR, improved assays for β-cell mass, and markers of environmental damage and by the development of markers for the various mediating pathways of hyperglycemia.

“We uphold that there is, and will increasingly be, a place for genotyping in DM standard of care. Pharmacogenomics could help direct patient-level care (66–69) and holds the potential to spur on research through the development of DM gene banks for analyzing genetic distinctions between type 1 DM, LADA, type 2 DM, and maturity-onset diabetes of the young. The cost for genotyping has become increasingly affordable.”

“The ideal treatment regimens should not be potentially detrimental to the long-term integrity of the β-cells. Specifically, sulfonylureas and glinides should be ardently avoided. Any benefits associated with sulfonylureas and glinides (including low cost) are not enduring and are far outweighed by their attendant risks (and associated treatment costs) of hypoglycemia and weight gain, high rate of treatment failure and subsequent enhanced requirements for antihyperglycemic management, potential for β-cell exhaustion (42), increased risk of cardiovascular events (74), and potential for increased risk of mortality (75,76). Fortunately, there are a large number of classes now available that do not pose these risks.”

“Newer agents present alternatives to insulin therapy, including in patients with “advanced” type 2 DM with residual insulin production. Insulin therapy induces hypoglycemia, weight gain, and a range of adverse consequences of hyperinsulinemia with both short- and long-term outcomes (77–85). Newer antidiabetes classes may be used to delay insulin therapy in candidate patients with endogenous insulin production (19). […] When insulin therapy is needed, we suggest it be incorporated as add-on therapy rather than as substitution for noninsulin antidiabetes agents. Outcomes research is needed to fully evaluate various combination therapeutic approaches, as well as the potential of newer agents to address drivers of β-cell dysfunction and loss.

The principles of the β-cell–centric model provide a rationale for adjunctive therapy with noninsulin regimens in patients with type 1 DM (7,12–16). Thiazolidinedione (TZD) therapy in patients with type 1 DM presenting with IR, for example, is appropriate and can be beneficial (17). Clinical trials in type 1 DM show that incretins (20) or SGLT-2 inhibitors (25,88) as adjunctive therapy to exogenous insulin appear to reduce plasma glucose variability.”

July 24, 2017 Posted by | Diabetes, Medicine, Papers | Leave a comment

Epilepsy Diagnosis & Treatment – 5 New Things Every Physician Should Know

Links to related stuff:
i. Sudden unexpected death in epilepsy (SUDEP).
ii. Status epilepticus.
iii. Epilepsy surgery.
iv. Temporal lobe epilepsy.
v. Lesional epilepsy surgery.
vi. Nonlesional neocortical epilepsy.
vii. A Randomized, Controlled Trial of Surgery for Temporal-Lobe Epilepsy.
viii. Stereoelectroencephalography.
ix. Accuracy of intracranial electrode placement for stereoencephalography: A systematic review and meta-analysis. (The results of the review are not discussed in the lecture, for obvious reasons – the lecture is a few years old, whereas the review is brand new – but they seemed relevant to me.)
x. MRI-guided laser ablation in epilepsy treatment.
xi. Laser thermal therapy: real-time MRI-guided and computer-controlled procedures for metastatic brain tumors.
xii. Critical review of the responsive neurostimulator system for epilepsy (Again, not covered but relevant).
xiii. A Multicenter, Prospective Pilot Study of Gamma Knife Radiosurgery for Mesial Temporal Lobe Epilepsy: Seizure Response, Adverse Events, and Verbal Memory.
xiv. Gamma Knife radiosurgery for recurrent or residual seizures after anterior temporal lobectomy in mesial temporal lobe epilepsy patients with hippocampal sclerosis: long-term follow-up results of more than 4 years (Not covered but relevant).

July 19, 2017 Posted by | Lectures, Medicine, Neurology, Studies | Leave a comment

Detecting Cosmic Neutrinos with IceCube at the Earth’s South Pole

I thought there were a bit too many questions/interruptions for my taste, mainly because you can’t really hear the questions posed by the members of the audience, but aside from that it’s a decent lecture. I’ve added a few links below which cover some of the topics discussed in the lecture.

Neutrino astronomy.
Antarctic Impulsive Transient Antenna (ANITA).
Hydrophone.
Neutral pion decays.
IceCube Neutrino Observatory.
Evidence for High-Energy Extraterrestrial Neutrinos at the IceCube Detector (Science).
Atmospheric and astrophysical neutrinos above 1 TeV interacting in IceCube.
Notes on isotropy.
Measuring the flavor ratio of astrophysical neutrinos.
Blazar.
Supernova 1987A neutrino emissions.

July 18, 2017 Posted by | Astronomy, Lectures, Physics, Studies | Leave a comment

Beyond Significance Testing (III)

There are many ways to misinterpret significance tests, and this book spends quite a bit of time and effort on these kinds of issues. I decided to include in this post quite a few quotes from chapter 4 of the book, which deals with these topics in some detail. I also included some notes on effect sizes.

“[P] < .05 means that the likelihood of the data or results even more extreme given random sampling under the null hypothesis is < .05, assuming that all distributional requirements of the test statistic are satisfied and there are no other sources of error variance. […] the odds-against-chance fallacy […] [is] the false belief that p indicates the probability that a result happened by sampling error; thus, p < .05 says that there is less than a 5% likelihood that a particular finding is due to chance. There is a related misconception I call the filter myth, which says that p values sort results into two categories, those that are a result of “chance” (H0 not rejected) and others that are due to “real” effects (H0 rejected). These beliefs are wrong […] When p is calculated, it is already assumed that H0 is true, so the probability that sampling error is the only explanation is already taken to be 1.00. It is thus illogical to view p as measuring the likelihood of sampling error. […] There is no such thing as a statistical technique that determines the probability that various causal factors, including sampling error, acted on a particular result.

Most psychology students and professors may endorse the local Type I error fallacy [which is] the mistaken belief that p < .05 given α = .05 means that the likelihood that the decision just taken to reject H0 is a type I error is less than 5%. […] p values from statistical tests are conditional probabilities of data, so they do not apply to any specific decision to reject H0. This is because any particular decision to do so is either right or wrong, so no probability is associated with it (other than 0 or 1.0). Only with sufficient replication could one determine whether a decision to reject H0 in a particular study was correct. […] the valid research hypothesis fallacy […] refers to the false belief that the probability that H1 is true is > .95, given p < .05. The complement of p is a probability, but 1 – p is just the probability of getting a result even less extreme under H0 than the one actually found. This fallacy is endorsed by most psychology students and professors”.

“[S]everal different false conclusions may be reached after deciding to reject or fail to reject H0. […] the magnitude fallacy is the false belief that low p values indicate large effects. […] p values are confounded measures of effect size and sample size […]. Thus, effects of trivial magnitude need only a large enough sample to be statistically significant. […] the zero fallacy […] is the mistaken belief that the failure to reject a nil hypothesis means that the population effect size is zero. Maybe it is, but you cannot tell based on a result in one sample, especially if power is low. […] The equivalence fallacy occurs when the failure to reject H0: µ1 = µ2 is interpreted as saying that the populations are equivalent. This is wrong because even if µ1 = µ2, distributions can differ in other ways, such as variability or distribution shape.”
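
The point that p values confound effect size and sample size is easy to demonstrate with a quick simulation (my own sketch, not from the book; d below is the true standardized mean difference between two normal populations):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_study(d, n):
    """Simulate one two-group study with true standardized mean difference d
    and n observations per group; return the two-sided p value."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(d, 1.0, n)
    return stats.ttest_ind(a, b).pvalue

# Share of 'statistically significant' results over 200 simulated studies:
print("trivial effect (d = 0.03), n = 50,000 per group:",
      np.mean([one_study(0.03, 50_000) < 0.05 for _ in range(200)]))
print("large effect   (d = 0.80), n = 10 per group:    ",
      np.mean([one_study(0.80, 10) < 0.05 for _ in range(200)]))
```

With numbers like these the trivial effect comes out ‘significant’ in practically every simulated study, while the much larger effect frequently fails to reach significance – the p value on its own tells you very little about magnitude.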

“[T]he reification fallacy is the faulty belief that failure to replicate a result is the failure to make the same decision about H0 across studies […]. In this view, a result is not considered replicated if H0 is rejected in the first study but not in the second study. This sophism ignores sample size, effect size, and power across different studies. […] The sanctification fallacy refers to dichotomous thinking about continuous p values. […] Differences between results that are “significant” versus “not significant” by close margins, such as p = .03 versus p = .07 when α = .05, are themselves often not statistically significant. That is, relatively large changes in p can correspond to small, nonsignificant changes in the underlying variable (Gelman & Stern, 2006). […] Classical parametric statistical tests are not robust against outliers or violations of distributional assumptions, especially in small, unrepresentative samples. But many researchers believe just the opposite, which is the robustness fallacy. […] most researchers do not provide evidence about whether distributional or other assumptions are met”.

“Many [of the above] fallacies involve wishful thinking about things that researchers really want to know. These include the probability that H0 or H1 is true, the likelihood of replication, and the chance that a particular decision to reject H0 is wrong. Alas, statistical tests tell us only the conditional probability of the data. […] But there is [however] a method that can tell us what we want to know. It is not a statistical technique; rather, it is good, old-fashioned replication, which is also the best way to deal with the problem of sampling error. […] Statistical significance provides even in the best case nothing more than low-level support for the existence of an effect, relation, or difference. That best case occurs when researchers estimate a priori power, specify the correct construct definitions and operationalizations, work with random or at least representative samples, analyze highly reliable scores in distributions that respect test assumptions, control other major sources of imprecision besides sampling error, and test plausible null hypotheses. In this idyllic scenario, p values from statistical tests may be reasonably accurate and potentially meaningful, if they are not misinterpreted. […] The capability of significance tests to address the dichotomous question of whether effects, relations, or differences are greater than expected levels of sampling error may be useful in some new research areas. Due to the many limitations of statistical tests, this period of usefulness should be brief. Given evidence that an effect exists, the next steps should involve estimation of its magnitude and evaluation of its substantive significance, both of which are beyond what significance testing can tell us. […] It should be a hallmark of a maturing research area that significance testing is not the primary inference method.”

“[An] effect size [is] a quantitative reflection of the magnitude of some phenomenon used for the sake of addressing a specific research question. In this sense, an effect size is a statistic (in samples) or parameter (in populations) with a purpose, that of quantifying a phenomenon of interest. More specific definitions may depend on study design. […] cause size refers to the independent variable and specifically to the amount of change in it that produces a given effect on the dependent variable. A related idea is that of causal efficacy, or the ratio of effect size to the size of its cause. The greater the causal efficacy, the more that a given change on an independent variable results in proportionally bigger changes on the dependent variable. The idea of cause size is most relevant when the factor is experimental and its levels are quantitative. […] An effect size measure […] is a named expression that maps data, statistics, or parameters onto a quantity that represents the magnitude of the phenomenon of interest. This expression connects dimensions or generalized units that are abstractions of variables of interest with a specific operationalization of those units.”

“A good effect size measure has the [following properties:] […] 1. Its scale (metric) should be appropriate for the research question. […] 2. It should be independent of sample size. […] 3. As a point estimate, an effect size should have good statistical properties; that is, it should be unbiased, consistent […], and efficient […]. 4. The effect size [should be] reported with a confidence interval. […] Not all effect size measures […] have all the properties just listed. But it is possible to report multiple effect sizes that address the same question in order to improve the communication of the results.” 

“Examples of outcomes with meaningful metrics include salaries in dollars and post-treatment survival time in years. Means or contrasts for variables with meaningful units are unstandardized effect sizes that can be directly interpreted. […] In medical research, physical measurements with meaningful metrics are often available. […] But in psychological research there are typically no “natural” units for abstract, nonphysical constructs such as intelligence, scholastic achievement, or self-concept. […] Therefore, metrics in psychological research are often arbitrary instead of meaningful. An example is the total score for a set of true-false items. Because responses can be coded with any two different numbers, the total is arbitrary. Standard scores such as percentiles and normal deviates are arbitrary, too […] Standardized effect sizes can be computed for results expressed in arbitrary metrics. Such effect sizes can also be directly compared across studies where outcomes have different scales. This is because standardized effect sizes are based on units that have a common meaning regardless of the original metric.”

“1. It is better to report unstandardized effect sizes for outcomes with meaningful metrics. This is because the original scale is lost when results are standardized. 2. Unstandardized effect sizes are best for comparing results across different samples measured on the same outcomes. […] 3. Standardized effect sizes are better for comparing conceptually similar results based on different units of measure. […] 4. Standardized effect sizes are affected by the corresponding unstandardized effect sizes plus characteristics of the study, including its design […], whether factors are fixed or random, the extent of error variance, and sample base rates. This means that standardized effect sizes are less directly comparable over studies that differ in their designs or samples. […] 5. There is no such thing as T-shirt effect sizes (Lenth, 2006–2009) that classify standardized effect sizes as “small,” “medium,” or “large” and apply over all research areas. This is because what is considered a large effect in one area may be seen as small or trivial in another. […] 6. There is usually no way to directly translate standardized effect sizes into implications for substantive significance. […] It is standardized effect sizes from sets of related studies that are analyzed in most meta-analyses.”
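
As a small illustration of the distinction between unstandardized and standardized effect sizes (and of reporting an effect size with a confidence interval), here is a sketch of my own; the data are made up, and the standard error formula for d is a common large-sample approximation rather than anything taken from the book:

```python
import math
import statistics as st

def effect_sizes(x, y, z=1.96):
    """Unstandardized mean difference and Cohen's d (pooled SD), with an
    approximate large-sample 95% confidence interval for d."""
    n1, n2 = len(x), len(y)
    diff = st.mean(x) - st.mean(y)                        # unstandardized effect size
    pooled_sd = math.sqrt(((n1 - 1) * st.variance(x) + (n2 - 1) * st.variance(y))
                          / (n1 + n2 - 2))
    d = diff / pooled_sd                                  # standardized effect size
    se_d = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return diff, d, (d - z * se_d, d + z * se_d)

# Made-up scores for two small groups, measured in some meaningful unit:
treated = [12.1, 13.4, 11.8, 14.0, 12.9, 13.7]
control = [11.0, 12.2, 10.9, 11.8, 12.0, 11.4]
diff, d, ci = effect_sizes(treated, control)
print(f"mean difference = {diff:.2f} units, d = {d:.2f}, "
      f"approx. 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

If the outcome scale is meaningful, the unstandardized difference is the more interpretable number; the standardized d (with its interval) is what you would carry into a comparison with conceptually similar studies that used different scales.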

July 16, 2017 Posted by | Books, Psychology, Statistics | Leave a comment

Quotes

i. “Mathematics is a tool which ideally permits mediocre minds to solve complicated problems expeditiously.” (Floyd Alburn Firestone)

ii. “Growing old’s like being increasingly penalized for a crime you haven’t committed.” (Anthony Dymoke Powell)

iii. “To make a discovery is not necessarily the same as to understand a discovery.” (Abraham Pais)

iv. “People usually take for granted that the way things are is the way things must be.” (Poul William Anderson)

v. “Space isn’t remote at all. It’s only an hour’s drive away if your car could go straight upwards.” (Fred Hoyle)

vi. “One can never pay in gratitude; one can only pay “in kind” somewhere else in life.” (Anne Morrow Lindbergh)

vii. “When a nice quote comes to mind, I always attribute it to Montesquieu, or to La Rochefoucauld. They’ve never complained.” (Indro Montanelli)

viii. “Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence.” (Edsger Wybe Dijkstra)

ix. “History teaches us that men and nations behave wisely once they have exhausted all other alternatives.” (Abba Eban)

x. “Scientific research is not conducted in a social vacuum.” (Robert K. Merton)

xi. “No man knows fully what has shaped his own thinking” (-ll-)

xii. “I write as clearly as I am able to. I sometimes tackle ideas and notions that are relatively complex, and it is very difficult to be sure that I am conveying them in the best way. Anyone who goes beyond cliche phrases and cliche ideas will have this trouble.” (Raphael Aloysius Lafferty)

xiii. “Change should be a friend. It should happen by plan, not by accident.” (Philip B. Crosby)

xiv. “The universe of all things that exist may be understood as a universe of systems where a system is defined as any set of related and interacting elements. This concept is primitive and powerful and has been used increasingly over the last half-century to organize knowledge in virtually all domains of interest to investigators. As human inventions and social interactions grow more complex, general conceptual frameworks that integrate knowledge among different disciplines studying those emerging systems grow more important.” (Gale Alden Swanson & James Grier Miller, Living Systems Theory)

xv. “When I die it’s not me that will be affected. It’s the ones I leave behind.” (Cameron Troy Duncan)

xvi. “I was always deeply uncertain about my own intellectual capacity; I thought I was unintelligent. And it is true that I was, and still am, rather slow. I need time to seize things because I always need to understand them fully. […] At the end of the eleventh grade, I […] came to the conclusion that rapidity doesn’t have a precise relation to intelligence. What is important is to deeply understand things and their relations to each other. This is where intelligence lies. The fact of being quick or slow isn’t really relevant. Naturally, it’s helpful to be quick, like it is to have a good memory. But it’s neither necessary nor sufficient for intellectual success.” (Laurent-Moïse Schwartz)

xvii. “A slowly moving queue does not move uniformly. Rather, waves of motion pass down the queue. The frequency and amplitude of these waves is inversely related to the speed at which the queue is served.” (Anthony Stafford Beer)

xviii. “It is terribly important to appreciate that some things remain obscure to the bitter end.” (-ll-)

xix. “Definitions, like questions and metaphors, are instruments for thinking. Their authority rests entirely on their usefulness, not their correctness. We use definitions in order to delineate problems we wish to investigate, or to further interests we wish to promote. In other words, we invent definitions and discard them as suits our purposes. […] definitions are hypotheses, and […] embedded in them is a particular philosophical, sociological, or epistemological point of view.” (Neil Postman)

xx. “There’s no system foolproof enough to defeat a sufficiently great fool.” (Edward Teller)

July 15, 2017 Posted by | Quotes/aphorisms | Leave a comment

Gravity

“The purpose of this book is to give the reader a very brief introduction to various different aspects of gravity. We start by looking at the way in which the theory of gravity developed historically, before moving on to an outline of how it is understood by scientists today. We will then consider the consequences of gravitational physics on the Earth, in the Solar System, and in the Universe as a whole. The final chapter describes some of the frontiers of current research in theoretical gravitational physics.”

I was not super impressed by this book, mainly because the level of coverage was not quite as high as that of some of the other physics books in the OUP ‘A Very Short Introduction’ series. But it’s definitely an okay book about this topic – I was much closer to a three star rating on goodreads than a one star rating, and I did learn some new things from it. I might still change my mind about my two-star rating of the book.

I’ll cover the book the same way I’ve covered some of the other books in the series; I’ll post some quotes with some observations of interest, and then I’ll add some supplementary links towards the end of the post. ‘As usual’ (see e.g. also the introductory remarks to this post) I’ll add links to topics even if I have previously, perhaps on multiple occasions, added the same links when covering other books – the idea behind the links is to remind me – and indicate to you – which kinds of topics are covered in the book.

“[O]ver large distances it is gravity that dominates. This is because gravity is only ever attractive and because it can never be screened. So while most large objects are electrically neutral, they can never be gravitationally neutral. The gravitational force between objects with mass always acts to pull those objects together, and always increases as they become more massive.”

“The challenges involved in testing Newton’s law of gravity in the laboratory arise principally due to the weakness of the gravitational force compared to the other forces of nature. This weakness means that even the smallest residual electric charges on a piece of experimental equipment can totally overwhelm the gravitational force, making it impossible to measure. All experimental equipment therefore needs to be prepared with the greatest of care, and the inevitable electric charges that sneak through have to be screened by introducing metal shields that reduce their influence. This makes the construction of laboratory experiments to test gravity extremely difficult, and explains why we have so far only probed gravity down to scales a little below 1mm (this can be compared to around a billionth of a billionth of a millimetre for the electric force).”

“There are a large number of effects that result from Einstein’s theory. […] [T]he anomalous orbit of the planet Mercury; the bending of starlight around the Sun; the time delay of radio signals as they pass by the Sun; and the behaviour of gyroscopes in orbit around the Earth […] are four of the most prominent relativistic gravitational effects that can be observed in the Solar System.” [As an aside, I only yesterday watched the first ~20 minutes of the first of Nima Arkani-Hamed’s lectures on the topic of ‘Robustness of GR. Attempts to Modify Gravity’, which was recently uploaded on the IAS youtube channel, before I concluded that I was probably not going to be able to follow the lecture – I would have been able to tell Arkani-Hamed, on account of having read this book, that the ‘American’ astronomer whose name eluded him early on in the lecture (5 minutes in or so) was John Couch Adams (who was in fact British, not American)].

“[T]he overall picture we are left with is very encouraging for Einstein’s theory of gravity. The foundational assumptions of this theory, such as the constancy of mass and the Universality of Free Fall, have been tested to extremely high accuracy. The inverse square law that formed the basis of Newton’s theory, and which is a good first approximation to Einstein’s theory, has been tested from the sub-millimetre scale all the way up to astrophysical scales. […] We […] have very good evidence that Newton’s inverse square law is a good approximation to gravity over a wide range of distance scales. These scales range from a fraction of a millimetre, to hundreds of millions of metres. […] We are also now in possession of a number of accurate experimental results that probe the tiny, subtle effects that result from Einstein’s theory specifically. This data allows us direct experimental insight into the relationship between matter and the curvature of space-time, and all of it is so far in good agreement with Einstein’s predictions.”

“[A]ll of the objects in the Solar System are, relatively speaking, rather slow moving and not very dense. […] If we set our sights a little further though, we can find objects that are much more extreme than anything we have available nearby. […] observations of them have allowed us to explore gravity in ways that are simply impossible in our own Solar System. The extreme nature of these objects amplifies the effects of Einstein’s theory […] Just as the orbit of Mercury precesses around the Sun so too the neutron stars in the Hulse–Taylor binary system precess around each other. To compare with similar effects in our Solar System, the orbit of the Hulse–Taylor pulsar precesses as much in a day as Mercury does in a century.”

“[I]n Einstein’s theory, gravity is due to the curvature of space-time. Massive objects like stars and planets deform the shape of the space-time in which they exist, so that other bodies that move through it appear to have their trajectories bent. It is the mistaken interpretation of the motion of these bodies as occurring in a flat space that leads us to infer that there is a force called gravity. In fact, it is just the curvature of space-time that is at work. […] The relevance of this for gravitational waves is that if a group of massive bodies are in relative motion […], then the curvature of the space-time in which they exist is not usually fixed in time. The curvature of the space-time is set by the massive bodies, so if the bodies are in motion, the curvature of space-time should be expected to be constantly changing. […] in Einstein’s theory, space-time is a dynamical entity. As an example of this, consider the supernovae […] Before their cores collapse, leading to catastrophic explosion, they are relatively stable objects […] After they explode they settle down to a neutron star or a black hole, and once again return to a relatively stable state, with a gravitational field that doesn’t change much with time. During the explosion, however, they eject huge amounts of mass and energy. Their gravitational field changes rapidly throughout this process, and therefore so does the curvature of the space-time around them.

Like any system that is pushed out of equilibrium and made to change rapidly, this causes disturbances in the form of waves. A more down-to-earth example of a wave is what happens when you throw a stone into a previously still pond. The water in the pond was initially in a steady state, but the stone causes a rapid change in the amount of water at one point. The water in the pond tries to return to its tranquil initial state, which results in the propagation of the disturbance, in the form of ripples that move away from the point where the stone landed. Likewise, a loud noise in a previously quiet room originates from a change in air pressure at a point (e.g. a stereo speaker). The disturbance in the air pressure propagates outwards as a pressure wave as the air tries to return to a stable state, and we perceive these pressure waves as sound. So it is with gravity. If the curvature of space-time is pushed out of equilibrium, by the motion of mass or energy, then this disturbance travels outwards as waves. This is exactly what occurs when a star collapses and its outer envelope is ejected by the subsequent explosion. […] The speed with which waves propagate usually depends on the medium through which they travel. […] The medium for gravitational waves is space-time itself, and according to Einstein’s theory, they propagate at exactly the same speed as light. […] [If a gravitational wave passes through a cloud of gas,] the gravitational wave is not a wave in the gas, but rather a propagating disturbance in the space-time in which the gas exists. […] although the atoms in the gas might be closer together (or further apart) than they were before the wave passed through them, it is not because the atoms have moved, but because the amount of space between them has been decreased (or increased) by the wave. The gravitational wave changes the distance between objects by altering how much space there is in between them, not by moving them within a fixed space.”

“If we look at the right galaxies, or collect enough data, […] we can use it to determine the gravitational fields that exist in space. […] we find that there is more gravity than we expected there to be, from the astrophysical bodies that we can see directly. There appears to be a lot of mass, which bends light via its gravitational field, but that does not interact with the light in any other way. […] Moving to even smaller scales, we can look at how individual galaxies behave. It has been known since the 1970s that the rate at which galaxies rotate is too high. What I mean is that if the only source of gravity in a galaxy was the visible matter within it (mostly stars and gas), then any galaxy that rotated as fast as those we see around us would tear itself apart. […] That they do not fly apart, despite their rapid rotation, strongly suggests that the gravitational fields within them are larger than we initially suspected. Again, the logical conclusion is that there appears to be matter in galaxies that we cannot see but which contributes to the gravitational field. […] Many of the different physical processes that occur in the Universe lead to the same surprising conclusion: the gravitational fields we infer, by looking at the Universe around us, require there to be more matter than we can see with our telescopes. Beyond this, in order for the largest structures in the Universe to have evolved into their current state, and in order for the seeds of these structures to look the way they do in the CMB, this new matter cannot be allowed to interact with light at all (or, at most, interact only very weakly). This means that not only do we not see this matter, but that it cannot be seen at all using light, because light is required to pass straight through it. […] The substance that gravitates in this way but cannot be seen is referred to as dark matter. […] There needs to be approximately five times as much dark matter as there is ordinary matter. […] the evidence for the existence of dark matter comes from so many different sources that it is hard to argue with it.”
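
To get a feel for the size of the discrepancy, here is a back-of-the-envelope sketch of my own with round, roughly Milky-Way-like numbers (all assumed for illustration): if the circular speed of stars and gas stays flat at a value v out to radius r, Newtonian gravity requires an enclosed mass of about v²r/G inside that radius.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass in kg
KPC = 3.086e19     # one kiloparsec in metres

v_flat = 220e3     # m/s -- an assumed, roughly Milky-Way-like flat rotation speed
for r_kpc in (5, 10, 20, 50):
    r = r_kpc * KPC
    m_enclosed = v_flat**2 * r / G      # Newtonian mass required inside radius r
    print(f"r = {r_kpc:2d} kpc  ->  M(<r) ~ {m_enclosed / M_SUN:.1e} solar masses")
```

The implied mass keeps growing linearly with radius, and out at tens of kiloparsecs it is several times larger than what the visible stars and gas can plausibly account for – which is essentially the rotation-curve argument for dark matter sketched in the quote.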

“[T]here seems to be a type of anti-gravity at work when we look at how the Universe expands. This anti-gravity is required in order to force matter apart, rather than pull it together, so that the expansion of the Universe can accelerate. […] The source of this repulsive gravity is referred to by scientists as dark energy […] our current overall picture of the Universe is as follows: only around 5 per cent of the energy in the Universe is in the form of normal matter; about 25 per cent is thought to be in the form of the gravitationally attractive dark matter; and the remaining 70 per cent is thought to be in the form of the gravitationally repulsive dark energy. These proportions, give or take a few percentage points here and there, seem sufficient to explain all astronomical observations that have been made to date. The total of all three of these types of energy, added together, also seems to be just the right amount to make space flat […] The flat Universe, filled with mostly dark energy and dark matter, is usually referred to as the Concordance Model of the Universe. Among astronomers, it is now the consensus view that this is the model of the Universe that best fits their data.”

 

The universality of free fall.
Galileo’s Leaning Tower of Pisa experiment.
Isaac Newton/Philosophiæ Naturalis Principia Mathematica/Newton’s law of universal gravitation.
Kepler’s laws of planetary motion.
Luminiferous aether.
Special relativity.
Spacetime.
General relativity.
Spacetime curvature.
Pound–Rebka experiment.
Gravitational time dilation.
Gravitational redshift space-probe experiment (Vessot & Levine).
Michelson–Morley experiment.
Hughes–Drever experiment.
Tests of special relativity.
Eötvös experiment.
Torsion balance.
Cavendish experiment.
LAGEOS.
Interferometry.
Geodetic precession.
Frame-dragging.
Gravity Probe B.
White dwarf/neutron star/supernova/gravitational collapse/black hole.
Hulse–Taylor binary.
Arecibo Observatory.
PSR J1738+0333.
Gravitational wave.
Square Kilometre Array.
PSR J0337+1715.
LIGO.
Weber bar.
MiniGrail.
Laser Interferometer Space Antenna.
Edwin Hubble/Hubble’s Law.
Physical cosmology.
Alexander Friedmann/Friedmann equations.
Cosmological constant.
Georges Lemaître.
Ralph Asher Alpher/Robert Herman/CMB/Arno Penzias/Robert Wilson.
Cosmic Background Explorer.
The BOOMERanG experiment.
Millimeter Anisotropy eXperiment IMaging Array.
Wilkinson Microwave Anisotropy Probe.
High-Z Supernova Search Team.
CfA Redshift Survey/CfA2 Great Wall/2dF Galaxy Redshift Survey/Sloan Digital Sky Survey/Sloan Great Wall.
Gravitational lensing.
Inflation (cosmology).
Lambda-CDM model.
BICEP2.
Large Synoptic Survey Telescope.
Grand Unified Theory.
Renormalization (quantum theory).
String theory.
Loop quantum gravity.
Unruh effect.
Hawking radiation.
Anthropic principle.

July 15, 2017 Posted by | Astronomy, Books, cosmology, Physics | Leave a comment