A few (more) diabetes papers of interest

Earlier this week I covered a couple of papers, but the second turned out to include so much interesting material that I cut that post short and postponed my coverage of the other papers I’d intended to include. This post covers some of those papers.

i. TCF7L2 Genetic Variants Contribute to Phenotypic Heterogeneity of Type 1 Diabetes.

“Although the autoimmune destruction of β-cells has a major role in the development of type 1 diabetes, there is growing evidence that the differences in clinical, metabolic, immunologic, and genetic characteristics among patients (1) likely reflect diverse etiology and pathogenesis (2). Factors that govern this heterogeneity are poorly understood, yet these may have important implications for prognosis, therapy, and prevention.

“The transcription factor 7 like 2 (TCF7L2) locus contains the single nucleotide polymorphism (SNP) most strongly associated with type 2 diabetes risk, with an ∼30% increase per risk allele (3). In a U.S. cohort, heterozygous and homozygous carriers of the at-risk alleles comprised 40.6% and 7.9%, respectively, of the control subjects and 44.3% and 18.3%, respectively, of the individuals with type 2 diabetes (3). The locus has no known association with type 1 diabetes overall (4–8), with conflicting reports in latent autoimmune diabetes in adults (8–16). […] Our studies in two separate cohorts have shown that the type 2 diabetes–associated TCF7L2 genetic variant is more frequent among specific subsets of individuals with autoimmune type 1 diabetes, specifically those with fewer markers of islet autoimmunity (22,23). These observations support a role of this genetic variant in the pathogenesis of diabetes at least in a subset of individuals with autoimmune diabetes. However, whether individuals with type 1 diabetes and this genetic variant have distinct metabolic abnormalities has not been investigated. We aimed to study the immunologic and metabolic characteristics of individuals with type 1 diabetes who carry a type 2 diabetes–associated allele of the TCF7L2 locus.”

“We studied 810 TrialNet participants with newly diagnosed type 1 diabetes and found that among individuals 12 years and older, the type 2 diabetes–associated TCF7L2 genetic variant is more frequent in those presenting with a single autoantibody than in participants who had multiple autoantibodies. These TCF7L2 variants were also associated with higher mean C-peptide AUC and lower mean glucose AUC levels at the onset of type 1 diabetes. […] These findings suggest that, besides the well-known link with type 2 diabetes, the TCF7L2 locus may play a role in the development of type 1 diabetes. The type 2 diabetes–associated TCF7L2 genetic variant identifies a subset of individuals with autoimmune type 1 diabetes and fewer markers of islet autoimmunity, lower glucose, and higher C-peptide at diagnosis. […] A possible interpretation of these data is that TCF7L2-encoded diabetogenic mechanisms may contribute to diabetes development in individuals with limited autoimmunity […]. Because the risk of progression to type 1 diabetes is lower in individuals with single compared with multiple autoantibodies, it is possible that in the absence of this type 2 diabetes–associated TCF7L2 variant, these individuals may have not manifested diabetes. If that is the case, we would postulate that disease development in these patients may have a type 2 diabetes–like pathogenesis in which islet autoimmunity is a significant component but not necessarily the primary driver.”

“The association between this genetic variant and single autoantibody positivity was present in individuals 12 years or older but not in children younger than 12 years. […] The results in the current study suggest that the type 2 diabetes–associated TCF7L2 genetic variant plays a larger role in older individuals. There is mounting evidence that the pathogenesis of type 1 diabetes varies by age (31). Younger individuals appear to have a more aggressive form of disease, with faster decline of β-cell function before and after onset of disease, higher frequency and severity of diabetic ketoacidosis, which is a clinical correlate of severe insulin deficiency, and lower C-peptide at presentation (31–35). Furthermore, older patients are less likely to have type 1 diabetes–associated HLA alleles and islet autoantibodies (28). […] Taken together, we have demonstrated that individuals with autoimmune type 1 diabetes who carry the type 2 diabetes–associated TCF7L2 genetic variant have a distinct phenotype characterized by milder immunologic and metabolic characteristics than noncarriers, closer to those of type 2 diabetes, with an important effect of age.”

ii. Heart Failure: The Most Important, Preventable, and Treatable Cardiovascular Complication of Type 2 Diabetes.

“Concerns about cardiovascular disease in type 2 diabetes have traditionally focused on atherosclerotic vasculo-occlusive events, such as myocardial infarction, stroke, and limb ischemia. However, one of the earliest, most common, and most serious cardiovascular disorders in patients with diabetes is heart failure (1). Following its onset, patients experience a striking deterioration in their clinical course, which is marked by frequent hospitalizations and eventually death. Many sudden deaths in diabetes are related to underlying ventricular dysfunction rather than a new ischemic event. […] Heart failure and diabetes are linked pathophysiologically. Type 2 diabetes and heart failure are each characterized by insulin resistance and are accompanied by the activation of neurohormonal systems (norepinephrine, angiotensin II, aldosterone, and neprilysin) (3). The two disorders overlap; diabetes is present in 35–45% of patients with chronic heart failure, whether they have a reduced or preserved ejection fraction.”

“Treatments that lower blood glucose do not exert any consistently favorable effect on the risk of heart failure in patients with diabetes (6). In contrast, treatments that increase insulin signaling are accompanied by an increased risk of heart failure. Insulin use is independently associated with an enhanced likelihood of heart failure (7). Thiazolidinediones promote insulin signaling and have increased the risk of heart failure in controlled clinical trials (6). With respect to incretin-based secretagogues, liraglutide increases the clinical instability of patients with existing heart failure (8,9), and the dipeptidyl peptidase 4 inhibitors saxagliptin and alogliptin are associated with an increased risk of heart failure in diabetes (10). The likelihood of heart failure with the use of sulfonylureas may be comparable to that with thiazolidinediones (11). Interestingly, the only two classes of drugs that ameliorate hyperinsulinemia (metformin and sodium–glucose cotransporter 2 inhibitors) are also the only two classes of antidiabetes drugs that appear to reduce the risk of heart failure and its adverse consequences (12,13). These findings are consistent with experimental evidence that insulin exerts adverse effects on the heart and kidneys that can contribute to heart failure (14). Therefore, physicians can prevent many cases of heart failure in type 2 diabetes by careful consideration of the choice of agents used to achieve glycemic control. Importantly, these decisions have an immediate effect; changes in risk are seen within the first few months of changes in treatment. This immediacy stands in contrast to the years of therapy required to see a benefit of antidiabetes drugs on microvascular risk.”

“As reported by van den Berge et al. (4), the prognosis of patients with heart failure has improved over the past two decades; heart failure with a reduced ejection fraction is a treatable disease. Inhibitors of the renin-angiotensin system are a cornerstone of the management of both disorders; they prevent the onset of heart failure and the progression of nephropathy in patients with diabetes, and they reduce the risk of cardiovascular death and hospitalization in those with established heart failure (3,15). Diabetes does not influence the magnitude of the relative benefit of ACE inhibitors in patients with heart failure, but patients with diabetes experience a greater absolute benefit from treatment (16).”

“The totality of evidence from randomized trials […] demonstrates that in patients with diabetes, heart failure is not only common and clinically important, but it can also be prevented and treated. This conclusion is particularly significant because physicians have long ignored heart failure in their focus on glycemic control and their concerns about the ischemic macrovascular complications of diabetes (1).”

iii. Closely related to the above study: Mortality Reduction Associated With β-Adrenoceptor Inhibition in Chronic Heart Failure Is Greater in Patients With Diabetes.

“Diabetes increases mortality in patients with chronic heart failure (CHF) and reduced left ventricular ejection fraction. Studies have questioned the safety of β-adrenoceptor blockers (β-blockers) in some patients with diabetes and reduced left ventricular ejection fraction. We examined whether β-blockers and ACE inhibitors (ACEIs) are associated with differential effects on mortality in CHF patients with and without diabetes. […] We conducted a prospective cohort study of 1,797 patients with CHF recruited between 2006 and 2014, with mean follow-up of 4 years.”

“RESULTS Patients with diabetes were prescribed larger doses of β-blockers and ACEIs than were patients without diabetes. Increasing β-blocker dose was associated with lower mortality in patients with diabetes (8.9% per mg/day; 95% CI 5–12.6) and without diabetes (3.5% per mg/day; 95% CI 0.7–6.3), although the effect was larger in people with diabetes (interaction P = 0.027). Increasing ACEI dose was associated with lower mortality in patients with diabetes (5.9% per mg/day; 95% CI 2.5–9.2) and without diabetes (5.1% per mg/day; 95% CI 2.6–7.6), with similar effect size in these groups (interaction P = 0.76).”
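To get a feel for what these “X% per mg/day” figures imply across a dose range, here is a small illustration. This is my own back-of-the-envelope arithmetic, not the authors’ statistical model: it simply compounds a constant proportional hazard reduction per mg/day, and the 10 mg/day dose is a hypothetical example.

```python
def hazard_ratio(per_mg_reduction: float, dose_mg_per_day: float) -> float:
    """Compound a constant per-mg/day proportional hazard reduction over a dose.

    per_mg_reduction: e.g. 0.089 for the reported 8.9% per mg/day point estimate.
    """
    return (1.0 - per_mg_reduction) ** dose_mg_per_day

# Reported point estimates: 8.9%/mg/day (diabetes), 3.5%/mg/day (no diabetes)
for label, reduction in [("diabetes", 0.089), ("no diabetes", 0.035)]:
    hr = hazard_ratio(reduction, 10)  # hypothetical 10 mg/day dose
    print(f"{label}: hazard ratio at 10 mg/day ~ {hr:.2f}")
```

Under this simplistic reading, the same 10 mg/day uptitration corresponds to a substantially larger relative mortality reduction in the diabetes group, which is the interaction the authors report.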

“Our most important findings are:

  • Higher-dose β-blockers are associated with lower mortality in patients with CHF and LVSD [left ventricular systolic dysfunction], but patients with diabetes may derive more benefit from higher-dose β-blockers.

  • Higher-dose ACEIs were associated with comparable mortality reduction in people with and without diabetes.

  • The association between higher β-blocker dose and reduced mortality is most pronounced in patients with diabetes who have more severely impaired left ventricular function.

  • Among patients with diabetes, the relationship between β-blocker dose and mortality was not associated with glycemic control or insulin therapy.”

“We make the important observation that patients with diabetes may derive more prognostic benefit from higher β-blocker doses than patients without diabetes. These data should provide reassurance to patients and health care providers and encourage careful but determined uptitration of β-blockers in this high-risk group of patients.”

iv. Diabetes, Prediabetes, and Brain Volumes and Subclinical Cerebrovascular Disease on MRI: The Atherosclerosis Risk in Communities Neurocognitive Study (ARIC-NCS).

“Diabetes and prediabetes are associated with accelerated cognitive decline (1), and diabetes is associated with an approximately twofold increased risk of dementia (2). Subclinical brain pathology, as defined by small vessel disease (lacunar infarcts, white matter hyperintensities [WMH], and microhemorrhages), large vessel disease (cortical infarcts), and smaller brain volumes also are associated with an increased risk of cognitive decline and dementia (3–7). The mechanisms by which diabetes contributes to accelerated cognitive decline and dementia are not fully understood, but contributions of hyperglycemia to both cerebrovascular disease and primary neurodegenerative disease have been suggested in the literature, although results are inconsistent (2,8). Given that diabetes is a vascular risk factor, brain atrophy among individuals with diabetes may be driven by increased cerebrovascular disease. Brain magnetic resonance imaging (MRI) provides a noninvasive opportunity to study associations of hyperglycemia with small vessel disease (lacunar infarcts, WMH, microhemorrhages), large vessel disease (cortical infarcts), and brain volumes (9).”

“Overall, the mean age of participants [(n = 1,713)] was 75 years, 60% were women, 27% were black, 30% had prediabetes (HbA1c 5.7 to <6.5%), and 35% had diabetes. Compared with participants without diabetes and HbA1c <5.7%, those with prediabetes (HbA1c 5.7 to <6.5%) were of similar age (75.2 vs. 75.0 years; P = 0.551), were more likely to be black (24% vs. 11%; P < 0.001), have less than a high school education (11% vs. 7%; P = 0.017), and have hypertension (71% vs. 63%; P = 0.012) (Table 1). Among participants with diabetes, those with HbA1c <7.0% versus ≥7.0% were of similar age (75.4 vs. 75.1 years; P = 0.481), but those with diabetes and HbA1c ≥7.0% were more likely to be black (39% vs. 28%; P = 0.020) and to have less than a high school education (23% vs. 16%; P = 0.031) and were more likely to have a longer duration of diabetes (12 vs. 8 years; P < 0.001).”

“Compared with participants without diabetes and HbA1c <5.7%, those with diabetes and HbA1c ≥7.0% had smaller total brain volume (β −0.20 SDs; 95% CI −0.31, −0.09) and smaller regional brain volumes, including frontal, temporal, occipital, and parietal lobes; deep gray matter; Alzheimer disease signature region; and hippocampus (all P < 0.05) […]. Compared with participants with diabetes and HbA1c <7.0%, those with diabetes and HbA1c ≥7.0% had smaller total brain volume (P < 0.001), frontal lobe volume (P = 0.012), temporal lobe volume (P = 0.012), occipital lobe volume (P = 0.008), parietal lobe volume (P = 0.015), deep gray matter volume (P < 0.001), Alzheimer disease signature region volume (P = 0.031), and hippocampal volume (P = 0.016). Both participants with diabetes and HbA1c <7.0% and those with prediabetes (HbA1c 5.7 to <6.5%) had similar total and regional brain volumes compared with participants without diabetes and HbA1c <5.7% (all P > 0.05). […] No differences in the presence of lobar microhemorrhages, subcortical microhemorrhages, cortical infarcts, and lacunar infarcts were observed among the diabetes-HbA1c categories (all P > 0.05) […]. Compared with participants without diabetes and HbA1c <5.7%, those with diabetes and HbA1c ≥7.0% had increased WMH volume (P = 0.016). The WMH volume among participants with diabetes and HbA1c ≥7.0% was also significantly greater than among those with diabetes and HbA1c <7.0% (P = 0.017).”

“Those with diabetes duration ≥10 years were older than those with diabetes duration <10 years (75.9 vs. 75.0 years; P = 0.041) but were similar in terms of race and sex […]. Compared with participants with diabetes duration <10 years, those with diabetes duration ≥10 years had smaller adjusted total brain volume (β −0.13 SDs; 95% CI −0.20, −0.05) and smaller temporal lobe (β −0.14 SDs; 95% CI −0.24, −0.03), parietal lobe (β −0.11 SDs; 95% CI −0.21, −0.01), and hippocampal (β −0.16 SDs; 95% CI −0.30, −0.02) volumes […]. Participants with diabetes duration ≥10 years also had a 2.44 times increased odds (95% CI 1.46, 4.05) of lacunar infarcts compared with those with diabetes duration <10 years”.

“In this community-based population, we found that ARIC-NCS participants with diabetes with HbA1c ≥7.0% have smaller total and regional brain volumes and an increased burden of WMH, but those with prediabetes (HbA1c 5.7 to <6.5%) and diabetes with HbA1c <7.0% have brain volumes and markers of subclinical cerebrovascular disease similar to those without diabetes. Furthermore, among participants with diabetes, those with more-severe disease (as measured by higher HbA1c and longer disease duration) had smaller total and regional brain volumes and an increased burden of cerebrovascular disease compared with those with lower HbA1c and shorter disease duration. However, we found no evidence that associations of diabetes with smaller brain volumes are mediated by cerebrovascular disease.

The findings of this study extend the current literature that suggests that diabetes is strongly associated with brain volume loss (11,25–27). Global brain volume loss (11,25–27) has been consistently reported, but associations of diabetes with smaller specific brain regions have been less robust (27,28). Similar to prior studies, the current results show that compared with individuals without diabetes, those with diabetes have smaller total brain volume (11,25–27) and regional brain volumes, including frontal and occipital lobes, deep gray matter, and the hippocampus (25,27). Furthermore, the current study suggests that greater severity of disease (as measured by HbA1c and diabetes duration) is associated with smaller total and regional brain volumes. […] Mechanisms whereby diabetes may contribute to brain volume loss include accelerated amyloid-β and hyperphosphorylated tau deposition as a result of hyperglycemia (29). Another possible mechanism involves pancreatic amyloid (amylin) infiltration of the brain, which then promotes amyloid-β deposition (29). […] Taken together, […] the current results suggest that diabetes is associated with both lower brain volumes and increased cerebrovascular pathology (WMH and lacunes).”

v. Interventions to increase attendance for diabetic retinopathy screening (Cochrane review).

“The primary objective of the review was to assess the effectiveness of quality improvement (QI) interventions that seek to increase attendance for DRS in people with type 1 and type 2 diabetes.

Secondary objectives were:
To use validated taxonomies of QI intervention strategies and behaviour change techniques (BCTs) to code the description of interventions in the included studies and determine whether interventions that include particular QI strategies or component BCTs are more effective in increasing screening attendance;
To explore heterogeneity in effect size within and between studies to identify potential explanatory factors for variability in effect size;
To explore differential effects in subgroups to provide information on how equity of screening attendance could be improved;
To critically appraise and summarise current evidence on the resource use, costs and cost effectiveness.”

“We included 66 RCTs conducted predominantly (62%) in the USA. Overall we judged the trials to be at low or unclear risk of bias. QI strategies were multifaceted and targeted patients, healthcare professionals or healthcare systems. Fifty-six studies (329,164 participants) compared intervention versus usual care (median duration of follow-up 12 months). Overall, DRS [diabetic retinopathy screening] attendance increased by 12% (risk difference (RD) 0.12, 95% confidence interval (CI) 0.10 to 0.14; low-certainty evidence) compared with usual care, with substantial heterogeneity in effect size. Both DRS-targeted (RD 0.17, 95% CI 0.11 to 0.22) and general QI interventions (RD 0.12, 95% CI 0.09 to 0.15) were effective, particularly where baseline DRS attendance was low. All BCT combinations were associated with significant improvements, particularly in those with poor attendance. We found higher effect estimates in subgroup analyses for the BCTs ‘goal setting (outcome)’ (RD 0.26, 95% CI 0.16 to 0.36) and ‘feedback on outcomes of behaviour’ (RD 0.22, 95% CI 0.15 to 0.29) in interventions targeting patients, and ‘restructuring the social environment’ (RD 0.19, 95% CI 0.12 to 0.26) and ‘credible source’ (RD 0.16, 95% CI 0.08 to 0.24) in interventions targeting healthcare professionals.”

“Ten studies (23,715 participants) compared a more intensive (stepped) intervention versus a less intensive intervention. In these studies DRS attendance increased by 5% (RD 0.05, 95% CI 0.02 to 0.09; moderate-certainty evidence).”

“Overall, we found that there is insufficient evidence to draw robust conclusions about the relative cost effectiveness of the interventions compared to each other or against usual care.”

“The results of this review provide evidence that QI interventions targeting patients, healthcare professionals or the healthcare system are associated with meaningful improvements in DRS attendance compared to usual care. There was no statistically significant difference between interventions specifically aimed at DRS and those which were part of a general QI strategy for improving diabetes care.”

vi. Diabetes in China: Epidemiology and Genetic Risk Factors and Their Clinical Utility in Personalized Medication.

“The incidence of type 2 diabetes (T2D) has rapidly increased over recent decades, and T2D has become a leading public health challenge in China. Compared with individuals of European descent, Chinese patients with T2D are diagnosed at a relatively young age and low BMI. A better understanding of the factors contributing to the diabetes epidemic is crucial for determining future prevention and intervention programs. In addition to environmental factors, genetic factors contribute substantially to the development of T2D. To date, more than 100 susceptibility loci for T2D have been identified. Individually, most T2D genetic variants have a small effect size (10–20% increased risk for T2D per risk allele); however, a genetic risk score that combines multiple T2D loci could be used to predict the risk of T2D and to identify individuals who are at a high risk. […] In this article, we review the epidemiological trends and recent progress in the understanding of T2D genetic etiology and further discuss personalized medicine involved in the treatment of T2D.”
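The genetic risk score mentioned in the abstract is usually constructed by summing an individual’s risk-allele counts, each weighted by the log of that variant’s odds ratio. Below is a minimal sketch of such a weighted GRS; the SNP names and effect sizes are illustrative assumptions on my part, not values taken from the review.

```python
import math

# Hypothetical per-allele odds ratios for three T2D loci -- illustrative only
SNP_OR = {
    "TCF7L2_rs7903146": 1.30,
    "KCNQ1_rs2237892": 1.25,
    "CDKAL1_rs7754840": 1.15,
}

def weighted_grs(allele_counts: dict) -> float:
    """Weighted genetic risk score: sum over loci of (risk-allele count x log OR)."""
    return sum(count * math.log(SNP_OR[snp]) for snp, count in allele_counts.items())

# An individual carrying 1, 2, and 0 risk alleles at the three loci
score = weighted_grs({"TCF7L2_rs7903146": 1, "KCNQ1_rs2237892": 2, "CDKAL1_rs7754840": 0})
print(f"GRS = {score:.3f}")
```

In practice such scores are computed over dozens of loci and then divided into quantiles to stratify individuals by predicted risk; the unweighted variant simply counts risk alleles without the log-OR weights.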

“Over the past three decades, the prevalence of diabetes in China has sharply increased. The prevalence of diabetes was reported to be less than 1% in 1980 (2), 5.5% in 2001 (3), 9.7% in 2008 (4), and 10.9% in 2013, according to the latest published nationwide survey (5) […]. The prevalence of diabetes was higher in the senior population, men, urban residents, individuals living in economically developed areas, and overweight and obese individuals. The estimated prevalence of prediabetes in 2013 was 35.7%, which was much higher than the estimate of 15.5% in the 2008 survey. Similarly, the prevalence of prediabetes was higher in the senior population, men, and overweight and obese individuals. However, prediabetes was more prevalent in rural residents than in urban residents. […] the 2013 survey also compared the prevalence of diabetes among different races. The crude prevalence of diabetes was 14.7% in the majority group, i.e., Chinese Han, which was higher than that in most minority ethnic groups, including Tibetan, Zhuang, Uyghur, and Muslim. The crude prevalence of prediabetes was also higher in the Chinese Han ethnic group. The Tibetan participants had the lowest prevalence of diabetes and prediabetes (4.3% and 31.3%).”

“[T]he prevalence of diabetes in young people is relatively high and increasing. The prevalence of diabetes in the 20- to 39-year age-group was 3.2%, according to the 2008 national survey (4), and was 5.9%, according to the 2013 national survey (5). The prevalence of prediabetes also increased from 9.0% in 2008 to 28.8% in 2013 […]. Young people suffering from diabetes have a higher risk of chronic complications, which are the major cause of mortality and morbidity in diabetes. According to a study conducted in Asia (6), patients with young-onset diabetes had higher mean concentrations of HbA1c and LDL cholesterol and a higher prevalence of retinopathy (20% vs. 18%, P = 0.011) than those with late-onset diabetes. In the Chinese population, patients with early-onset diabetes had a higher risk of nonfatal cardiovascular disease (7) than did patients with late-onset diabetes (odds ratio [OR] 1.91, 95% CI 1.81–2.02).”

“As approximately 95% of patients with diabetes in China have T2D, the rapid increase in the prevalence of diabetes in China may be attributed to the increasing rates of overweight and obesity and the reduction in physical activity, which is driven by economic development, lifestyle changes, and diet (3,11). According to a series of nationwide surveys conducted by the China Physical Fitness Surveillance Center (12), the prevalence of overweight (BMI ≥23.0 to <27.5 kg/m2) in Chinese adults aged 20–59 years increased from 37.4% in 2000 to 39.2% in 2005, 40.7% in 2010, and 41.2% in 2014, with an estimated increase of 0.27% per year. The prevalence of obesity (BMI ≥27.5 kg/m2) increased from 8.6% in 2000 to 10.3% in 2005, 12.2% in 2010, and 12.9% in 2014, with an estimated increase of 0.32% per year […]. The prevalence of central obesity increased from 13.9% in 2000 to 18.3% in 2005, 22.1% in 2010, and 24.9% in 2014, with an estimated increase of 0.78% per year. Notably, T2D develops at a considerably lower BMI in the Chinese population than that in European populations. […] The relatively high risk of diabetes at a lower BMI could be partially attributed to the tendency toward visceral adiposity in East Asian populations, including the Chinese population (13). Moreover, East Asian populations have been found to have a higher insulin sensitivity with a much lower insulin response than European descent and African populations, implying a lower compensatory β-cell function, which increases the risk of progressing to overt diabetes (14).”
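The “estimated increase per year” figures can be sanity-checked against the four reported survey values with a simple least-squares slope. This is my own back-of-the-envelope check, not the authors’ method (which may involve age-standardization or other adjustments).

```python
SURVEY_YEARS = [2000, 2005, 2010, 2014]

def annual_increase(prevalences: list) -> float:
    """Unadjusted least-squares slope of prevalence (%) against survey year."""
    n = len(SURVEY_YEARS)
    mean_y = sum(SURVEY_YEARS) / n
    mean_p = sum(prevalences) / n
    num = sum((y - mean_y) * (p - mean_p) for y, p in zip(SURVEY_YEARS, prevalences))
    den = sum((y - mean_y) ** 2 for y in SURVEY_YEARS)
    return num / den

overweight = [37.4, 39.2, 40.7, 41.2]  # % prevalence, BMI >=23.0 to <27.5
obesity = [8.6, 10.3, 12.2, 12.9]      # % prevalence, BMI >=27.5
print(f"overweight: {annual_increase(overweight):.2f} %-points/yr")
print(f"obesity:    {annual_increase(obesity):.2f} %-points/yr")
```

The unadjusted slopes come out close to the reported 0.27 and 0.32 percentage points per year, so the headline trend survives this crude recomputation.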

“Over the past two decades, linkage analyses, candidate gene approaches, and large-scale GWAS have successfully identified more than 100 genes that confer susceptibility to T2D among the world’s major ethnic populations […], most of which were discovered in European populations. However, less than 50% of these European-derived loci have been successfully confirmed in East Asian populations. […] there is a need to identify specific genes that are associated with T2D in other ethnic populations. […] Although many genetic loci have been shown to confer susceptibility to T2D, the mechanism by which these loci participate in the pathogenesis of T2D remains unknown. Most T2D loci are located near genes that are related to β-cell function […] most single nucleotide polymorphisms (SNPs) contributing to the T2D risk are located in introns, but whether these SNPs directly modify gene expression or are involved in linkage disequilibrium with unknown causal variants remains to be investigated. Furthermore, the loci discovered thus far collectively account for less than 15% of the overall estimated genetic heritability.”

“The areas under the receiver operating characteristic curves (AUCs) are usually used to assess the discriminative accuracy of an approach. The AUC values range from 0.5 to 1.0, where an AUC of 0.5 represents a lack of discrimination and an AUC of 1 represents perfect discrimination. An AUC ≥0.75 is considered clinically useful. The dominant conventional risk factors, including age, sex, BMI, waist circumference, blood pressure, family history of diabetes, physical activity level, smoking status, and alcohol consumption, can be combined to construct conventional risk factor–based models (CRM). Several studies have compared the predictive capacities of models with and without genetic information. The addition of genetic markers to a CRM could slightly improve the predictive performance. For example, one European study showed that the addition of an 11-SNP GRS to a CRM marginally improved the risk prediction (AUC was 0.74 without and 0.75 with the genetic markers, P < 0.001) in a prospective cohort of 16,000 individuals (37). A meta-analysis (38) consisting of 23 studies investigating the predictive performance of T2D risk models also reported that the AUCs only slightly increased with the addition of genetic information to the CRM (median AUC was increased from 0.78 to 0.79). […] Despite great advances in genetic studies, the clinical utility of genetic information in the prediction, early identification, and prevention of T2D remains in its preliminary stage.”
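The AUC values being compared above have a concrete probabilistic meaning: the probability that a randomly chosen case is assigned a higher risk score than a randomly chosen control (the Mann–Whitney formulation). A minimal sketch, using made-up toy scores rather than any real model output:

```python
def auc(scores_cases: list, scores_controls: list) -> float:
    """AUC as the probability a random case outranks a random control,
    counting ties as half a win (Mann-Whitney U / (n_cases * n_controls))."""
    wins = sum(
        (c > k) + 0.5 * (c == k)
        for c in scores_cases
        for k in scores_controls
    )
    return wins / (len(scores_cases) * len(scores_controls))

# Toy predicted risk scores for 4 cases and 4 controls -- illustrative only
cases = [0.9, 0.8, 0.7, 0.55]
controls = [0.6, 0.4, 0.3, 0.2]
print(f"AUC = {auc(cases, controls):.3f}")
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which is why the tiny 0.74 → 0.75 and 0.78 → 0.79 improvements from adding genetic markers are described as marginal.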

“An increasing number of studies have highlighted that early nutrition has a persistent effect on the risk of diabetes in later life (40,41). China’s Great Famine of 1959–1962 is considered to be the largest and most severe famine of the 20th century […] Li et al. (43) found that offspring of mothers exposed to the Chinese famine have a 3.9-fold increased risk of diabetes or hyperglycemia as adults. A more recent study (the Survey on Prevalence in East China for Metabolic Diseases and Risk Factors [SPECT-China]) conducted in 2014, among 6,897 adults from Shanghai, Jiangxi, and Zhejiang provinces, reached the same conclusion: famine exposure during the fetal period (OR 1.53, 95% CI 1.09–2.14) and childhood (OR 1.82, 95% CI 1.21–2.73) was associated with diabetes (44). These findings indicate that undernutrition during early life increases the risk of hyperglycemia in adulthood and that this association is markedly exacerbated by overnutrition in later life.”


February 23, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Health Economics, Immunology, Medicine, Neurology, Ophthalmology, Pharmacology, Studies | Leave a comment

Endocrinology (part 5 – calcium and bone metabolism)

Some observations from chapter 6:

“*Osteoclasts – derived from the monocytic cells; resorb bone. *Osteoblasts – derived from the fibroblast-like cells; make bone. *Osteocytes – buried osteoblasts; sense mechanical strain in bone. […] In order to ensure that bone can undertake its mechanical and metabolic functions, it is in a constant state of turnover […] Bone is laid down rapidly during skeletal growth at puberty. Following this, there is a period of stabilization of bone mass in early adult life. After the age of ~40, there is a gradual loss of bone in both sexes. This occurs at the rate of approximately 0.5% annually. However, in ♀ after the menopause, there is a period of rapid bone loss. The accelerated loss is maximal in the first 2-5 years after the cessation of ovarian function and then gradually declines until the previous gradual rate of loss is once again established. The excess bone loss associated with the menopause is of the order of 10% of skeletal mass. This menopause-associated loss, coupled with higher peak bone mass acquisition in ♂, largely explains why osteoporosis and its associated fractures are more common in ♀.”

“The clinical utility of routine measurements of bone turnover markers is not yet established. […] Skeletal radiology[:] *Useful for: *Diagnosis of fracture. *Diagnosis of specific diseases (e.g. Paget’s disease and osteomalacia). *Identification of bone dysplasia. *Not useful for assessing bone density. […] Isotope bone scans are useful for identifying localized areas of bone disease, such as fracture, metastases, or Paget’s disease. […] Isotope bone scans are particularly useful in Paget’s disease to establish the extent and sites of skeletal involvement and the underlying disease activity. […] Bone biopsy is occasionally necessary for the diagnosis of patients with complex metabolic bone diseases. […] Bone biopsy is not indicated for the routine diagnosis of osteoporosis. It should only be undertaken in highly specialist centres with appropriate expertise. […] Measurement of 24h urinary excretion of calcium provides a measure of risk of renal stone formation or nephrocalcinosis in states of chronic hypercalcaemia. […] 25OH vitamin D […] is the main storage form of vitamin D, and the measurement of ‘total vitamin D’ is the most clinically useful measure of vitamin D status. Internationally, there remains controversy around a ‘normal’ or ‘optimal’ concentration of vitamin D. Levels over 50 nmol/L are generally accepted as satisfactory, with values <25 nmol/L representing deficiency. True osteomalacia occurs with vitamin D values <15 nmol/L. Low levels of 25OHD can result from a variety of causes […] Bone mass is quoted in terms of the number of standard deviations from an expected mean. […] A reduction of one SD in bone density will approximately double the risk of fracture.”

[I should perhaps add a cautionary note here that while this variable is very useful in general, it is more useful in some contexts than in others; and in some specific disease process contexts it is quite clear that it will tend to underestimate the fracture risk. Type 1 diabetes is a clear example. For more details, see this post.]
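The "one SD roughly doubles fracture risk" rule of thumb quoted above can be expressed as a simple relative-risk calculation. A minimal sketch, assuming a constant doubling factor per SD (the book's approximation, not a fitted model; the function name and default are mine):

```python
def relative_fracture_risk(t_score, risk_ratio_per_sd=2.0):
    """Approximate relative fracture risk vs. the expected mean.

    t_score: bone density in standard deviations from the expected
             mean (negative = below the mean, as in a DXA T-score).
    risk_ratio_per_sd: ~2.0 per the quoted rule of thumb.
    """
    return risk_ratio_per_sd ** (-t_score)

# A T-score of -2.5 (the WHO osteoporosis threshold) implies roughly
# 2**2.5 ≈ 5.7 times the fracture risk of someone at the mean.
print(round(relative_fracture_risk(-2.5), 1))  # 5.7
```

As the bracketed note above warns, a BMD-only estimate like this will understate risk in contexts such as type 1 diabetes, where fracture risk is elevated beyond what bone density predicts.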

“Hypercalcaemia is found in 5% of hospital patients and in 0.5% of the general population. […] Many different disease states can lead to hypercalcaemia. […] In asymptomatic community-dwelling subjects, the vast majority of hypercalcaemia is the result of hyperparathyroidism. […] The clinical features of hypercalcaemia are well recognized […]; unfortunately, they are non-specific […] [They include:] *Polyuria. *Polydipsia. […] *Anorexia. *Vomiting. *Constipation. *Abdominal pain. […] *Confusion. *Lethargy. *Depression. […] Clinical signs of hypercalcaemia are rare. […] the presence of bone pain or fracture and renal stones […] indicate the presence of chronic hypercalcaemia. […] Hypercalcaemia is usually a late manifestation of malignant disease, and the primary lesion is usually evident by the time hypercalcaemia is expressed (50% of patients die within 30 days).”

“Primary hyperparathyroidism [is] [p]resent in up to 1 in 500 of the general population where it is predominantly a disease of post-menopausal ♀ […] The normal physiological response to hypocalcaemia is an increase in PTH secretion. This is termed 2° hyperparathyroidism and is not pathological in as much as the PTH secretion remains under feedback control. Continued stimulation of the parathyroid glands can lead to autonomous production of PTH. This, in turn, causes hypercalcaemia which is termed tertiary hyperparathyroidism. This is usually seen in the context of renal disease […] In the majority of patients [with hyperparathyroidism] without end-organ damage, disease is benign and stable. […] Investigation is, therefore, primarily aimed at determining the presence of end-organ damage from hypercalcaemia in order to determine whether operative intervention is indicated. […] It is generally accepted that all patients with symptomatic hyperparathyroidism or evidence of end-organ damage should be considered for parathyroidectomy. This would include: *Definite symptoms of hypercalcaemia. […] *Impaired renal function. *Renal stones […] *Parathyroid bone disease, especially osteitis fibrosa cystica. *Pancreatitis. […] Patients not managed with surgery require regular follow-up. […] <5% fail to become normocalcaemic [after surgery], and these should be considered for a second operation. […] Patients rendered permanently hypoparathyroid by surgery require lifelong supplements of active metabolites of vitamin D with calcium. This can lead to hypercalciuria, and the risk of stone formation may still be present in these patients. […] In hypoparathyroidism, the target serum calcium should be at the low end of the reference range. […] any attempt to raise the plasma calcium well into the normal range is likely to result in unacceptable hypercalciuria”.

“Although hypocalcaemia can result from failure of any of the mechanisms by which serum calcium concentration is maintained, it is usually the result of either failure of PTH secretion or because of the inability to release calcium from bone. […] The clinical features of hypocalcaemia are largely as a result of neuromuscular excitability. In order of  severity, these include: *Tingling – especially of fingers, toes, or lips. *Numbness – especially of fingers, toes, or lips. *Cramps. *Carpopedal spasm. *Stridor due to laryngospasm. *Seizures. […] symptoms of hypocalcaemia tend to reflect the severity and rapidity of onset of the metabolic abnormality. […] there may be clinical signs and symptoms associated with the underlying condition: *Vitamin D deficiency may be associated with generalized bone pain, fractures, or proximal myopathy […] *Hypoparathyroidism can be accompanied by mental slowing and personality disturbances […] *If hypocalcaemia is present during the development of permanent teeth, these may show areas of enamel hypoplasia. This can be a useful physical sign, indicating that the hypocalcaemia is long-standing. […] Acute symptomatic hypocalcaemia is a medical emergency and demands urgent treatment whatever the cause […] *Patients with tetany or seizures require urgent IV treatment with calcium gluconate […] Care must be taken […] as too rapid elevation of the plasma calcium can cause arrhythmias. […] *Treatment of chronic hypocalcaemia is more dependent on the cause. […] In patients with mild parathyroid dysfunction, it may be possible to achieve acceptable calcium concentrations by using calcium supplements alone. […] The majority of patients will not achieve adequate control with such treatment. In those cases, it is necessary to use vitamin D or its metabolites in pharmacological doses to maintain plasma calcium.”

“Pseudohypoparathyroidism[:] *Resistance to parathyroid hormone action. *Due to defective signalling of PTH action via cell membrane receptor. *Also affects TSH, LH, FSH, and GH signalling. […] Patients with the most common type of pseudohypoparathyroidism (type 1a) have a characteristic set of skeletal abnormalities, known as Albright’s hereditary osteodystrophy. This comprises: *Short stature. *Obesity. *Round face. *Short metacarpals. […] The principles underlying the treatment of pseudohypoparathyroidism are the same as those underlying hypoparathyroidism. *Patients with the most common form of pseudohypoparathyroidism may have resistance to the action of other hormones which rely on G protein signalling. They, therefore, need to be assessed for thyroid and gonadal dysfunction (because of defective TSH and gonadotrophin action). If these deficiencies are present, they need to be treated in the conventional manner.”

“Osteomalacia occurs when there is inadequate mineralization of mature bone. Rickets is a disorder of the growing skeleton where there is inadequate mineralization of bone as it is laid down at the epiphysis. In most instances, osteomalacia leads to build-up of excessive unmineralized osteoid within the skeleton. In rickets, there is build-up of unmineralized osteoid in the growth plate. […] These two related conditions may coexist. […] Clinical features [of osteomalacia:] *Bone pain. *Deformity. *Fracture. *Proximal myopathy. *Hypocalcaemia (in vitamin D deficiency). […] The majority of patients with osteomalacia will show no specific radiological abnormalities. *The most characteristic abnormality is the Looser’s zone or pseudofracture. If these are present, they are virtually pathognomonic of osteomalacia. […] Oncogenic osteomalacia[:] Certain tumours appear to be able to produce FGF23 which is phosphaturic. This is rare […] Clinically, such patients usually present with profound myopathy as well as bone pain and fracture. […] Complete removal of the tumour results in resolution of the biochemical and skeletal abnormalities. If this is not possible […], treatment with vitamin D metabolites and phosphate supplements […] may help the skeletal symptoms.”

“Hypophosphataemia[:] Phosphate is important for normal mineralization of bone. In the absence of sufficient phosphate, osteomalacia results. […] In addition, phosphate is important in its own right for neuromuscular function, and profound hypophosphataemia can be accompanied by encephalopathy, muscle weakness, and cardiomyopathy. It must be remembered that, as phosphate is primarily an intracellular anion, a low plasma phosphate does not necessarily represent actual phosphate depletion. […] Mainstay [of treatment] is phosphate replacement […] *Long-term administration of phosphate supplements stimulates parathyroid activity. This can lead to hypercalcaemia, a further fall in phosphate, with worsening of the bone disease […] To minimize parathyroid stimulation, it is usual to give one of the active metabolites of vitamin D in conjunction with phosphate.”

“Although the term osteoporosis refers to the reduction in the amount of bony tissue within the skeleton, this is generally associated with a loss of structural integrity of the internal architecture of the bone. The combination of both these changes means that osteoporotic bone is at high risk of fracture, even after trivial injury. […] Historically, there has been a primary reliance on bone mineral density as a threshold for treatment, whereas currently there is far greater emphasis on assessing individual patients’ risk of fracture that incorporates multiple clinical risk factors as well as bone mineral density. […] Osteoporosis may arise from a failure of the body to lay down sufficient bone during growth and maturation; an earlier than usual onset of bone loss following maturity; or an increased rate of that loss. […] Early menopause or late puberty (in ♂ or ♀) is associated with an increased risk of osteoporosis. […] Lifestyle factors affecting bone mass [include:] *weight-bearing exercise [increases bone mass] […] *Smoking. *Excessive alcohol. *Nulliparity. *Poor calcium nutrition. [These all decrease bone mass] […] The risk of osteoporotic fracture increases with age. Fracture rates in ♂ are approximately half of those seen in ♀ of the same age. A ♀ aged 50 has approximately a 1:2 chance [risk, surely… – US] of sustaining an osteoporotic fracture in the rest of her life. The corresponding figure for a ♂ is 1:5. […] One-fifth of hip fracture victims will die within 6 months of the injury, and only 50% will return to their previous level of independence.”

“Any fracture, other than those affecting fingers, toes, or face, which is caused by a fall from standing height or less is called a fragility (low-trauma) fracture, and underlying osteoporosis should be considered. Patients suffering such a fracture should be considered for investigation and/or treatment for osteoporosis. […] [Osteoporosis is] [u]sually clinically silent until an acute fracture. *Two-thirds of vertebral fractures do not come to clinical attention. […] Osteoporotic vertebral fractures only rarely lead to neurological impairment. Any evidence of spinal cord compression should prompt a search for malignancy or other underlying cause. […] Osteoporosis does not cause generalized skeletal pain. […] Biochemical markers of bone turnover may be helpful in the calculation of fracture risk and in judging the response to drug therapies, but they have no role in the diagnosis of osteoporosis. […] An underlying cause for osteoporosis is present in approximately 10-30% of women and up to 50% of men with osteoporosis. […] 2° causes of osteoporosis are more common in ♂ and need to be excluded in all ♂ with osteoporotic fracture. […] Glucocorticoid treatment is one of the major 2° causes of osteoporosis.”

February 22, 2018 Posted by | Books, Cancer/oncology, Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Pharmacology | Leave a comment

A few diabetes papers of interest

(I hadn’t expected to only cover two papers in this post, but the second paper turned out to include a lot of stuff I figured might be worth adding here. I might add another post later this week including some of the other studies I had intended to cover in this post.)

i. Burden of Mortality Attributable to Diagnosed Diabetes: A Nationwide Analysis Based on Claims Data From 65 Million People in Germany.

“Diabetes is among the 10 most common causes of death worldwide (2). Between 1990 and 2010, the number of deaths attributable to diabetes has doubled (2). People with diabetes have a reduced life expectancy of ∼5 to 6 years (3). The most common cause of death in people with diabetes is cardiovascular disease (3,4). Over the past few decades, a reduction of diabetes mortality has been observed in several countries (5–9). However, the excess risk of death is still higher than in the population without diabetes, particularly in younger age-groups (4,9,10). Unfortunately, in most countries worldwide, reliable data on diabetes mortality are lacking (1). In a few European countries, such as Denmark (5) and Sweden (4), mortality analyses are based on national diabetes registries that include all age-groups. However, Germany and many other European countries do not have such national registries. Until now, age-standardized hazard ratios for diabetes mortality between 1.4 and 2.6 have been published for Germany on the basis of regional studies and surveys with small respondent numbers (11–14). To the best of our knowledge, no nationwide estimates of the number of excess deaths due to diabetes have been published for Germany, and no information on older age-groups >79 years is currently available.

In 2012, changes in the regulation of data transparency enabled the use of nationwide routine health care data from the German statutory health insurance system, which insures ∼90% of the German population (15). These changes have allowed for new possibilities for estimating the burden of diabetes in Germany. Hence, this study estimates the number of excess deaths due to diabetes (ICD-10 codes E10–E14) and type 2 diabetes (ICD-10 code E11) in Germany, which is the number of deaths that could have been prevented if the diabetes mortality rate was as high as that of the population without diabetes.”

“Nationwide data on mortality ratios for diabetes and no diabetes are not available for Germany. […] the age- and sex-specific mortality rate ratios between people with diabetes and without diabetes were used from a Danish study wherein the Danish National Diabetes Register was linked to the individual mortality data from the Civil Registration System that includes all people residing in Denmark (5). Because the Danish National Diabetes Register is one of the most accurate diabetes registries in Europe, with a sensitivity of 86% and positive predictive value of 90% (5), we are convinced that the Danish estimates are highly valid and reliable. Denmark and Germany have a comparable standard of living and health care system. The diabetes prevalence in these countries is similar (Denmark 7.2%, Germany 7.4% [20]) and mortality of people with and without diabetes comparable, as shown in the European mortality database”
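The excess-death arithmetic described above (applying externally derived mortality rate ratios to the population with diagnosed diabetes, stratum by stratum) can be sketched in a few lines. This is illustrative only; the numbers in the example are made up, not the paper's German/Danish figures:

```python
def excess_deaths(n_diabetes, mortality_rate_no_diabetes, rate_ratio):
    """Excess deaths in one age/sex stratum.

    Assumes people with diabetes die at rate_ratio times the mortality
    rate of those without diabetes; the excess is the difference
    between the deaths that occur at that elevated rate and the deaths
    that would have occurred at the background rate.
    """
    rate_with_diabetes = mortality_rate_no_diabetes * rate_ratio
    return n_diabetes * (rate_with_diabetes - mortality_rate_no_diabetes)

# e.g. a stratum with 1,000,000 people with diabetes, background
# mortality 1% per year, and a rate ratio of 1.5 implies 5,000
# excess deaths in that stratum; the paper sums such strata.
print(round(excess_deaths(1_000_000, 0.01, 1.5)))  # 5000
```

The design choice in the paper is the same as here: the rate ratios come from an external registry (Denmark), while the population counts come from the German claims data.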

“In total, 174,627 excess deaths (137,950 from type 2 diabetes) could have been prevented in 2010 if mortality was the same in people with and without diabetes. Overall, 21% of all deaths in Germany were attributable to diabetes, and 16% were attributable to type 2 diabetes […] Most of the excess deaths occurred in the 70- to 79- and 80- to 89-year-old age-groups (∼34% each) […]. Substantial sex differences were found in diabetes-related excess deaths. From the age of ∼40 years, the number of male excess deaths due to diabetes started to grow, but the number of female excess deaths increased with a delay. Thus, the highest number of male excess deaths due to diabetes occurred at the age of ∼75 years, whereas the peak of female excess deaths was ∼10 years later. […] The diabetes mortality rates increased with age and were always higher than in the population without diabetes. The largest differences in mortality rates between people with and without diabetes were observed in the younger age-groups. […] These results are in accordance with previous studies worldwide (3,4,7,9) and regional studies in Germany (11–13).”

“According to official numbers from the Federal Statistical Office, 858,768 people died in Germany in 2010, with 23,131 deaths due to diabetes, representing 2.7% of the all-cause mortality (26). Hence, in Germany, diabetes is not ranked among the top 10 most common causes of death […]. We found that 21% of all deaths were attributable to diabetes and 16% were attributable to type 2 diabetes; hence, we suggest that the number of excess deaths attributable to diabetes is strongly underestimated if we rely on reported causes of death from death certificates, as official statistics do. Estimating diabetes-related mortality is challenging because most people die as a result of diabetes complications and comorbidities, such as cardiovascular disease and renal failure, which often are reported as the underlying cause of death (1,23). For this reason, another approach is to focus not only on the underlying cause of death but also on the multiple causes of death to assess any mention of a disease on the death certificate (27). In a study from Italy, the method of assessing multiple causes of death revealed that in 12.3% of all studied death certificates, diabetes was mentioned, whereas only 2.9% reported diabetes as the underlying cause of death (27), corresponding to a four times higher proportion of death related to diabetes. Another nationwide analysis from Canada found that diabetes was more than twice as likely to be a contributing factor to death than the underlying cause of death from the years 2004–2008 (28). A recently published study from the U.S. that was based on two representative surveys from 1997 to 2010 found that 11.5% of all deaths were attributable to diabetes, which reflects a three to four times higher proportion of diabetes-related deaths (29). Overall, these results, together with the current calculations, demonstrate that deaths due to diabetes contribute to a much higher burden than previously assumed.”

ii. Standardizing Clinically Meaningful Outcome Measures Beyond HbA1c for Type 1 Diabetes: A Consensus Report of the American Association of Clinical Endocrinologists, the American Association of Diabetes Educators, the American Diabetes Association, the Endocrine Society, JDRF International, The Leona M. and Harry B. Helmsley Charitable Trust, the Pediatric Endocrine Society, and the T1D Exchange.

“Type 1 diabetes is a life-threatening, autoimmune disease that strikes children and adults and can be fatal. People with type 1 diabetes have to test their blood glucose multiple times each day and dose insulin via injections or an infusion pump 24 h a day every day. Too much insulin can result in hypoglycemia, seizures, coma, or death. Hyperglycemia over time leads to kidney, heart, nerve, and eye damage. Even with diligent monitoring, the majority of people with type 1 diabetes do not achieve recommended target glucose levels. In the U.S., approximately one in five children and one in three adults meet hemoglobin A1c (HbA1c) targets and the average patient spends 7 h a day hyperglycemic and over 90 min hypoglycemic (1–3). […] HbA1c is a well-accepted surrogate outcome measure for evaluating the efficacy of diabetes therapies and technologies in clinical practice as well as in research (4–6). […] While HbA1c is used as a primary outcome to assess glycemic control and as a surrogate for risk of developing complications, it has limitations. As a measure of mean blood glucose over 2 or 3 months, HbA1c does not capture short-term variations in blood glucose or exposure to hypoglycemia and hyperglycemia in individuals with type 1 diabetes; HbA1c also does not capture the impact of blood glucose variations on individuals’ quality of life. Recent advances in type 1 diabetes technologies have made it feasible to assess the efficacy of therapies and technologies using a set of outcomes beyond HbA1c and to expand definitions of outcomes such as hypoglycemia. While definitions for hypoglycemia in clinical care exist, they have not been standardized […]. The lack of standard definitions impedes and can confuse their use in clinical practice, impedes development processes for new therapies, makes comparison of studies in the literature challenging, and may lead to regulatory and reimbursement decisions that fail to meet the needs of people with diabetes.
To address this vital issue, the type 1 diabetes–stakeholder community launched the Type 1 Diabetes Outcomes Program to develop consensus definitions for a set of priority outcomes for type 1 diabetes. […] The outcomes prioritized under the program include hypoglycemia, hyperglycemia, time in range, diabetic ketoacidosis (DKA), and patient-reported outcomes (PROs).”

“Hypoglycemia is a significant — and potentially fatal — complication of type 1 diabetes management and has been found to be a barrier to achieving glycemic goals (9). Repeated exposure to severe hypoglycemic events has been associated with an increased risk of cardiovascular events and all-cause mortality in people with type 1 or type 2 diabetes (10,11). Hypoglycemia can also be fatal, and severe hypoglycemic events have been associated with increased mortality (12–14). In addition to the physical aspects of hypoglycemia, it can also have negative consequences on emotional status and quality of life.

While there is some variability in how and when individuals manifest symptoms of hypoglycemia, beginning at blood glucose levels <70 mg/dL (3.9 mmol/L) (which is at the low end of the typical post-absorptive plasma glucose range), the body begins to increase its secretion of counterregulatory hormones including glucagon, epinephrine, cortisol, and growth hormone. The release of these hormones can cause moderate autonomic effects, including but not limited to shaking, palpitations, sweating, and hunger (15). Individuals without diabetes do not typically experience dangerously low blood glucose levels because of counterregulatory hormonal regulation of glycemia (16). However, in individuals with type 1 diabetes, there is often a deficiency of the counterregulatory response […]. Moreover, as people with diabetes experience an increased number of episodes of hypoglycemia, the risk of hypoglycemia unawareness, impaired glucose counterregulation (for example, in hypoglycemia-associated autonomic failure [17]), and level 2 and level 3 hypoglycemia […] all increase (18). Therefore, it is important to recognize and treat all hypoglycemic events in people with type 1 diabetes, particularly in populations (children, the elderly) that may not have the ability to recognize and self-treat hypoglycemia. […] More notable clinical symptoms begin at blood glucose levels <54 mg/dL (3.0 mmol/L) (19,20). As the body’s primary utilizer of glucose, the brain is particularly sensitive to decreases in blood glucose concentrations. Both experimental and clinical evidence has shown that, at these levels, neurogenic and neuroglycopenic symptoms including impairments in reaction times, information processing, psychomotor function, and executive function begin to emerge. These neurological symptoms correlate to altered brain activity in multiple brain areas including the prefrontal cortex and medial temporal lobe (21–24).
At these levels, individuals may experience confusion, dizziness, blurred or double vision, tremors, and tingling sensations (25). Hypoglycemia at this glycemic level may also increase proinflammatory and prothrombotic markers (26). Left untreated, these symptoms can become severe to the point that an individual will require assistance from others to move or function. Prolonged untreated hypoglycemia that continues to drop below 50 mg/dL (2.8 mmol/L) increases the risk of seizures, coma, and death (27,28). Hypoglycemia that affects cognition and stamina may also increase the risk of accidents and falls, which is a particular concern for older adults with diabetes (29,30).

The glycemic thresholds at which these symptoms occur, as well as the severity with which they manifest themselves, may vary in individuals with type 1 diabetes depending on the number of hypoglycemic episodes they have experienced (31–33). Counterregulatory physiological responses may evolve in patients with type 1 diabetes who endure repeated hypoglycemia over time (34,35).”

“The Steering Committee defined three levels of hypoglycemia […] Level 1 hypoglycemia is defined as a measurable glucose concentration <70 mg/dL (3.9 mmol/L) but ≥54 mg/dL (3.0 mmol/L) that can alert a person to take action. A blood glucose concentration of 70 mg/dL (3.9 mmol/L) has been recognized as a marker of physiological hypoglycemia in humans, as it approximates the glycemic threshold for neuroendocrine responses to falling glucose levels in individuals without diabetes. As such, blood glucose in individuals without diabetes is generally 70–100 mg/dL (3.9–5.6 mmol/L) upon waking and 70–140 mg/dL (3.9–7.8 mmol/L) after meals, and any excursions beyond those levels are typically countered with physiological controls (16,37). However, individuals with diabetes who have impaired or altered counterregulatory hormonal and neurological responses do not have the same internal regulation as individuals without diabetes to avoid dropping below 70 mg/dL (3.9 mmol/L) and becoming hypoglycemic. Recurrent episodes of hypoglycemia lead to increased hypoglycemia unawareness, which can become dangerous as individuals cease to experience symptoms of hypoglycemia, allowing their blood glucose levels to continue falling. Therefore, glucose levels <70 mg/dL (3.9 mmol/L) are clinically important, independent of the severity of acute symptoms.

Level 2 hypoglycemia is defined as a measurable glucose concentration <54 mg/dL (3.0 mmol/L) that needs immediate action. At ∼54 mg/dL (3.0 mmol/L), neurogenic and neuroglycopenic hypoglycemic symptoms begin to occur, ultimately leading to brain dysfunction at levels <50 mg/dL (2.8 mmol/L) (19,20). […] Level 3 hypoglycemia is defined as a severe event characterized by altered mental and/or physical status requiring assistance. Severe hypoglycemia captures events during which the symptoms associated with hypoglycemia impact a patient to such a degree that the patient requires assistance from others (27,28). […] Hypoglycemia that sets in relatively rapidly, such as in the case of a significant insulin overdose, may induce level 2 or level 3 hypoglycemia with little warning (38).”
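The two glucose-based levels above are pure threshold tests, so they are easy to state as code. A minimal sketch (level 3 is deliberately omitted: it is defined by the need for assistance, not by a glucose value, so it cannot be derived from a reading alone):

```python
def hypoglycemia_level(glucose_mg_dl):
    """Classify a glucose reading against the consensus thresholds:
    level 2 = <54 mg/dL (3.0 mmol/L), needs immediate action;
    level 1 = <70 but >=54 mg/dL (3.9 mmol/L), alert level;
    0 = not hypoglycemic by glucose value.
    """
    if glucose_mg_dl < 54:
        return 2
    if glucose_mg_dl < 70:
        return 1
    return 0

print(hypoglycemia_level(65))  # 1
print(hypoglycemia_level(50))  # 2
```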

“The data regarding the effects of chronic hyperglycemia on long-term outcomes is conclusive, indicating that chronic hyperglycemia is a major contributor to morbidity and mortality in type 1 diabetes (41,43–45). […] Although the correlation between long-term poor glucose control and type 1 diabetes complications is well established, the impact of short-term hyperglycemia is not as well understood. However, hyperglycemia has been shown to have physiological effects and in an acute-care setting is linked to morbidity and mortality in people with and without diabetes. Short-term hyperglycemia, regardless of diabetes diagnosis, has been shown to reduce survival rates among patients admitted to the hospital with stroke or myocardial infarction (47,48). In addition to increasing mortality, short-term hyperglycemia is correlated with stroke severity and poststroke disability (49,50).

The effects of short-term hyperglycemia have also been observed in nonacute settings. Evidence indicates that hyperglycemia alters retinal cell firing through sensitization in patients with type 1 diabetes (51). This finding is consistent with similar findings showing increased oxygen consumption and blood flow in the retina during hyperglycemia. Because retinal cells absorb glucose through an insulin-independent process, they respond more strongly to increases in glucose in the blood than other cells in patients with type 1 diabetes. The effects of acute hyperglycemia on retinal response may underlie part of the development of retinopathy known to be a long-term complication of type 1 diabetes.”

“The Steering Committee defines hyperglycemia for individuals with type 1 diabetes as the following:

  • Level 1—elevated glucose: glucose >180 mg/dL (10 mmol/L) and glucose ≤250 mg/dL (13.9 mmol/L)

  • Level 2—very elevated glucose: glucose >250 mg/dL (13.9 mmol/L) […]

Elevated glucose is defined as a glucose concentration >180 mg/dL (10.0 mmol/L) but ≤250 mg/dL (13.9 mmol/L). In clinical practice, measures of hyperglycemia differ based on time of day (e.g., pre- vs. postmeal). This program, however, focused on defining outcomes for use in product development that are universally applicable. Glucose profiles and postprandial blood glucose data for individuals without diabetes suggest that 140 mg/dL (7.8 mmol/L) is the appropriate threshold for defining hyperglycemia. However, data demonstrate that the majority of individuals without diabetes exceed this threshold every day. Moreover, people with diabetes spend >60% of their day above this threshold, which suggests that 140 mg/dL (7.8 mmol/L) is too low of a threshold for measuring hyperglycemia in individuals with diabetes. Current clinical guidelines for people with diabetes indicate that peak prandial glucose should not exceed 180 mg/dL (10.0 mmol/L). As such, the Steering Committee identified 180 mg/dL (10.0 mmol/L) as the initial threshold defining elevated glucose. […]

Very elevated glucose is defined as a glucose concentration >250 mg/dL (13.9 mmol/L). Evidence examining the impact of hyperglycemia does not examine the incremental effects of increasing blood glucose. However, blood glucose values exceeding 250 mg/dL (13.9 mmol/L) increase the risk for DKA (58), and HbA1c readings at that level have been associated with a high likelihood of complications.”
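The hyperglycemia definitions mirror the hypoglycemia ones: two threshold bands on a glucose reading. A minimal sketch of the same:

```python
def hyperglycemia_level(glucose_mg_dl):
    """Classify a reading against the consensus definitions:
    level 2 = very elevated, >250 mg/dL (13.9 mmol/L);
    level 1 = elevated, >180 mg/dL (10.0 mmol/L) but <=250 mg/dL;
    0 = not hyperglycemic by these definitions.
    """
    if glucose_mg_dl > 250:
        return 2
    if glucose_mg_dl > 180:
        return 1
    return 0

print(hyperglycemia_level(200))  # 1
print(hyperglycemia_level(300))  # 2
```

Note the strict inequalities: a reading of exactly 180 mg/dL is not "elevated" under the committee's wording, and exactly 250 mg/dL is level 1, not level 2.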

“An individual whose blood glucose levels rarely extend beyond the thresholds defined for hypo- and hyperglycemia is less likely to be subject to the short-term or long-term effects experienced by those with frequent excursions beyond one or both thresholds. It is also evident that if the intent of a given intervention is to safely manage blood glucose but the intervention does not reliably maintain blood glucose within safe levels, then the intervention should not be considered effective.

The time in range outcome is distinguished from traditional HbA1c testing in several ways (4,59). Time in range captures fluctuations in glucose levels continuously, whereas HbA1c testing is done at static points in time, usually months apart (60). Furthermore, time in range is more specific and sensitive than traditional HbA1c testing; for example, a treatment that addresses acute instances of hypo- or hyperglycemia may be detected in a time in range assessment but not necessarily in an HbA1c assessment. As a percentage, time in range is also more likely to be comparable across patients than HbA1c values, which are more likely to have patient-specific variations in significance (61). Finally, time in range may be more likely than HbA1c levels to correlate with PROs, such as quality of life, because the outcome is more representative of the whole patient experience (62). Table 3 illustrates how the concept of time in range differs from current HbA1c testing. […] [V]ariation in what is considered “normal” glucose fluctuations across populations, as well as what is realistically achievable for people with type 1 diabetes, must be taken into account so as not to make the target range definition too restrictive.”

“The Steering Committee defines time in range for individuals with type 1 diabetes as the following:

  • Percentage of readings in the range of 70–180 mg/dL (3.9–10.0 mmol/L) per unit of time

The Steering Committee considered it important to keep the time in range definition wide in order to accommodate variations across the population with type 1 diabetes — including different age-groups — but limited enough to preclude the possibility of negative outcomes. The upper and lower bounds of the time in range definition are consistent with the definitions for hypo- and hyperglycemia defined above. For individuals without type 1 diabetes, 70–140 mg/dL (3.9–7.8 mmol/L) represents a normal glycemic range (66). However, spending most of the day in this range is not generally achievable for people with type 1 diabetes […] To date, there is limited research correlating time in range with positive short-term and long-term type 1 diabetes outcomes, as opposed to the extensive research demonstrating the negative consequences of excursions into hyper- or hypoglycemia. More substantial evidence demonstrating a correlation or a direct causative relationship between time in range for patients with type 1 diabetes and positive health outcomes is needed.”

“DKA is often associated with hyperglycemia. In most cases, in an individual with diabetes, the cause of hyperglycemia is also the cause of DKA, although the two conditions are distinct. DKA develops when a lack of glucose in cells prompts the body to begin breaking down fatty acid reserves. This increases the levels of ketones in the body (ketosis) and causes a drop in blood pH (acidosis). At its most severe, DKA can cause cerebral edema, acute respiratory distress, thromboembolism, coma, and death (69,70). […] Although the current definition for DKA includes a list of multiple criteria that must be met, not all information currently included in the accepted definition is consistently gathered or required to diagnose DKA. The Steering Committee defines DKA in individuals with type 1 diabetes in a clinical setting as the following:

  • Elevated serum or urine ketones (greater than the upper limit of the normal range), and

  • Serum bicarbonate <15 mmol/L or blood pH <7.3

Given the seriousness of DKA, it is unnecessary to stratify DKA into different levels or categories, as the presence of DKA—regardless of the differences observed in the separate biochemical tests—should always be considered serious. In individuals with known diabetes, plasma glucose values are not necessary to diagnose DKA. Further, new therapeutic agents, specifically sodium–glucose cotransporter 2 inhibitors, have been linked to euglycemic DKA, or DKA with blood glucose values <250 mg/dL (13.9 mmol/L).”
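The two-item definition above maps directly onto a boolean check. A sketch — the function and its calling convention are invented for illustration; the thresholds are those given in the text, and note that glucose is deliberately absent from the criteria:

```python
def meets_dka_definition(ketones_elevated, bicarbonate_mmol_l=None, blood_ph=None):
    """Steering Committee definition: elevated serum/urine ketones AND
    (serum bicarbonate < 15 mmol/L OR blood pH < 7.3).

    Plasma glucose is not part of the definition, so euglycemic DKA
    (glucose < 250 mg/dL) still qualifies.
    """
    acid_base = (
        (bicarbonate_mmol_l is not None and bicarbonate_mmol_l < 15)
        or (blood_ph is not None and blood_ph < 7.3)
    )
    return bool(ketones_elevated and acid_base)

print(meets_dka_definition(True, bicarbonate_mmol_l=12))   # True
print(meets_dka_definition(True, blood_ph=7.35))           # False
print(meets_dka_definition(False, bicarbonate_mmol_l=10))  # False
```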

“In guidance released in 2009 (72), the U.S. Food and Drug Administration (FDA) defined PROs as “any report of the status of a patient’s health condition that comes directly from the patient, without interpretation of the patient’s response by a clinician or anyone else.” In the same document, the FDA clearly acknowledged the importance of PROs, advising that they be used to gather information that is “best known by the patient or best measured from the patient perspective.”

Measuring and using PROs is increasingly seen as essential to evaluating care from a patient-centered perspective […] Given that type 1 diabetes is a chronic condition primarily treated on an outpatient basis, much of what people with type 1 diabetes experience is not captured through standard clinical measurement. Measures that capture PROs can fill these important information gaps. […] The use of validated PROs in type 1 diabetes clinical research is not currently widespread, and challenges to effectively measuring some PROs, such as quality of life, continue to confront researchers and developers.”


February 20, 2018 Posted by | Cardiology, Diabetes, Medicine, Neurology, Ophthalmology, Studies |

Systems Biology (III)

Some observations from chapter 4 below:

“The need to maintain a steady state ensuring homeostasis is an essential concern in nature, and the negative feedback loop is the fundamental way to ensure that this goal is met. The regulatory system determines the interdependences between individual cells and the organism, subordinating the former to the latter. In trying to maintain homeostasis, the organism may temporarily upset the steady state conditions of its component cells, forcing them to perform work for the benefit of the organism. […] On a cellular level signals are usually transmitted via changes in concentrations of reaction substrates and products. This simple mechanism is made possible due to the limited volume of each cell. Such signaling plays a key role in maintaining homeostasis and ensuring cellular activity. On the level of the organism signal transmission is performed by hormones and the nervous system. […] Most intracellular signal pathways work by altering the concentrations of selected substances inside the cell. Signals are registered by forming reversible complexes consisting of a ligand (reaction product) and an allosteric receptor complex. When coupled to the ligand, the receptor inhibits the activity of its corresponding effector, which in turn shuts down the production of the controlled substance ensuring the steady state of the system. Signals coming from outside the cell are usually treated as commands (covalent modifications), forcing the cell to adjust its internal processes […] Such commands can arrive in the form of hormones, produced by the organism to coordinate specialized cell functions in support of general homeostasis (in the organism). These signals act upon cell receptors and are usually amplified before they reach their final destination (the effector).”

“Each concentration-mediated signal must first be registered by a detector. […] Intracellular detectors are typically based on allosteric proteins. Allosteric proteins exhibit a special property: they have two stable structural conformations and can shift from one form to the other as a result of changes in ligand concentrations. […] The concentration of a product (or substrate) which triggers structural realignment in the allosteric protein (such as a regulatory enzyme) depends on the genetically-determined affinity of the active site to its ligand. Low affinity results in high target concentration of the controlled substance while high affinity translates into lower concentration […]. In other words, high concentration of the product is necessary to trigger a low-affinity receptor (and vice versa). Most intracellular regulatory mechanisms rely on noncovalent interactions. Covalent bonding is usually associated with extracellular signals, generated by the organism and capable of overriding the cell’s own regulatory mechanisms by modifying the sensitivity of receptors […]. Noncovalent interactions may be compared to requests while covalent signals are treated as commands. Signals which do not originate in the receptor’s own feedback loop but modify its affinity are known as steering signals […] Hormones which act upon cells are, by their nature, steering signals […] Noncovalent interactions — dependent on substance concentrations — impose spatial restrictions on regulatory mechanisms. Any increase in cell volume requires synthesis of additional products in order to maintain stable concentrations. The volume of a spherical cell is given as V = (4/3)πr³, where r indicates cell radius. Clearly, even a slight increase in r translates into a significant increase in cell volume, diluting any products dispersed in the cytoplasm. This implies that cells cannot expand without incurring great energy costs.
It should also be noted that cell expansion reduces the efficiency of intracellular regulatory mechanisms because signals and substrates need to be transported over longer distances. Thus, cells are universally small, regardless of whether they make up a mouse or an elephant.”
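The scaling argument can be made concrete with the volume formula: since V = (4/3)πr³, volume grows with the cube of the radius, so a fixed amount of dispersed product is diluted cubically as the cell expands. A quick illustration (the radius and scaling factors below are arbitrary):

```python
import math

def sphere_volume(r):
    """Volume of a sphere of radius r: (4/3) * pi * r^3."""
    return (4.0 / 3.0) * math.pi * r ** 3

r = 10.0  # arbitrary cell radius, in micrometres
for factor in (1.0, 1.1, 1.2, 1.5):
    v_ratio = sphere_volume(r * factor) / sphere_volume(r)
    # A fixed number of product molecules is diluted by the same ratio.
    print(f"radius x{factor:.1f} -> volume x{v_ratio:.2f}, "
          f"concentration x{1 / v_ratio:.2f}")
```

A 10% increase in radius already dilutes cytoplasmic products by about a quarter, which is the energetic argument for why cells stay small.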

“An effector is an element of a regulatory loop which counteracts changes in the regulated quantity […] Synthesis and degradation of biological compounds often involves numerous enzymes acting in sequence. The product of one enzyme is a substrate for another enzyme. With the exception of the initial enzyme, each step of this cascade is controlled by the availability of the supplied substrate […] The effector consists of a chain of enzymes, each of which depends on the activity of the initial regulatory enzyme […] as well as on the activity of its immediate predecessor which supplies it with substrates. The function of all enzymes in the effector chain is indirectly dependent on the initial enzyme […]. This coupling between the receptor and the first link in the effector chain is a universal phenomenon. It can therefore be said that the initial enzyme in the effector chain is, in fact, a regulatory enzyme. […] Most cell functions depend on enzymatic activity. […] It seems that a set of enzymes associated with a specific process which involves a negative feedback loop is the most typical form of an intracellular regulatory effector. Such effectors can be controlled through activation or inhibition of their associated enzymes.”

“The organism is a self-contained unit represented by automatic regulatory loops which ensure homeostasis. […] Effector functions are conducted by cells which are usually grouped and organized into tissues and organs. Signal transmission occurs by way of body fluids, hormones or nerve connections. Cells can be treated as automatic and potentially autonomous elements of regulatory loops, however their specific action is dependent on the commands issued by the organism. This coercive property of organic signals is an integral requirement of coordination, allowing the organism to maintain internal homeostasis. […] Activities of the organism are themselves regulated by their own negative feedback loops. Such regulation differs however from the mechanisms observed in individual cells due to its place in the overall hierarchy and differences in signal properties, including in particular:
• Significantly longer travel distances (compared to intracellular signals);
• The need to maintain hierarchical superiority of the organism;
• The relative autonomy of effector cells. […]
The relatively long distance travelled by the organism’s signals and their dilution (compared to intracellular ones) call for amplification. As a consequence, any errors or random distortions in the original signal may be drastically exacerbated. A solution to this problem comes in the form of encoding, which provides the signal with sufficient specificity while enabling it to be selectively amplified. […] a loudspeaker can […] assist in acoustic communication, but due to the lack of signal encoding it cannot compete with radios in terms of communication distance. The same reasoning applies to organism-originated signals, which is why information regarding blood glucose levels is not conveyed directly by glucose but instead by adrenalin, glucagon or insulin. Information encoding is handled by receptors and hormone-producing cells. Target cells are capable of decoding such signals, thus completing the regulatory loop […] Hormonal signals may be effectively amplified because the hormone itself does not directly participate in the reaction it controls — rather, it serves as an information carrier. […] strong amplification invariably requires encoding in order to render the signal sufficiently specific and unambiguous. […] Unlike organisms, cells usually do not require amplification in their internal regulatory loops — even the somewhat rare instances of intracellular amplification only increase signal levels by a small amount. Without the aid of an amplifier, messengers coming from the organism level would need to be highly concentrated at their source, which would result in decreased efficiency […] Most signals originating at the organism level travel with body fluids; however if a signal has to reach its destination very rapidly (for instance in muscle control) it is sent via the nervous system”.

“Two types of amplifiers are observed in biological systems:
1. cascade amplifier,
2. positive feedback loop. […]
A cascade amplifier is usually a collection of enzymes which perform their action by activation in strict sequence. This mechanism resembles multistage (sequential) synthesis or degradation processes, however instead of exchanging reaction products, amplifier enzymes communicate by sharing activators or by directly activating one another. Cascade amplifiers are usually contained within cells. They often consist of kinases. […] Amplification effects occurring at each stage of the cascade contribute to its final result. […] While the kinase amplification factor is estimated to be on the order of 10³, the phosphorylase cascade results in 10¹⁰-fold amplification. It is a stunning value, though it should also be noted that the hormones involved in this cascade produce particularly powerful effects. […] A positive feedback loop is somewhat analogous to a negative feedback loop, however in this case the input and output signals work in the same direction — the receptor upregulates the process instead of inhibiting it. Such upregulation persists until the available resources are exhausted.
Positive feedback loops can only work in the presence of a control mechanism which prevents them from spiraling out of control. They cannot be considered self-contained and only play a supportive role in regulation. […] In biological systems positive feedback loops are sometimes encountered in extracellular regulatory processes where there is a need to activate slowly-migrating components and greatly amplify their action in a short amount of time. Examples include blood coagulation and complement factor activation […] Positive feedback loops are often coupled to negative loop-based control mechanisms. Such interplay of loops may impart the signal with desirable properties, for instance by transforming a flat signal into a sharp spike required to overcome the activation threshold for the next stage in a signalling cascade. An example is the ejection of calcium ions from the endoplasmic reticulum in the phospholipase C cascade, itself subject to a negative feedback loop.”
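The multiplicative nature of cascade amplification is easy to sketch: the overall gain of a cascade is the product of the per-stage gains, which is how a few stages each amplifying on the order of 10³ can approach the ~10¹⁰ total quoted for the phosphorylase cascade. The stage gains below are illustrative round numbers, not measured values:

```python
from functools import reduce
from operator import mul

def cascade_gain(stage_gains):
    """Overall amplification of a cascade = product of per-stage gains."""
    return reduce(mul, stage_gains, 1)

# Hypothetical three-stage kinase cascade: each activated enzyme
# activates ~1,000 downstream molecules.
stages = [1_000, 1_000, 1_000]
print(f"{cascade_gain(stages):.0e}")  # 1e+09
```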

“Strong signal amplification carries an important drawback: it tends to “overshoot” its target activity level, causing wild fluctuations in the process it controls. […] Nature has evolved several means of signal attenuation. The most typical mechanism superimposes two regulatory loops which affect the same parameter but act in opposite directions. An example is the stabilization of blood glucose levels by two contradictory hormones: glucagon and insulin. Similar strategies are exploited in body temperature control and many other biological processes. […] The coercive properties of signals coming from the organism carry risks associated with the possibility of overloading cells. The regulatory loop of an autonomous cell must therefore include an “off switch”, controlled by the cell. An autonomous cell may protect itself against excessive involvement in processes triggered by external signals (which usually incur significant energy expenses). […] The action of such mechanisms is usually timer-based, meaning that they inactivate signals following a set amount of time. […] The ability to interrupt signals protects cells from exhaustion. Uncontrolled hormone-induced activity may have detrimental effects upon the organism as a whole. This is observed e.g. in the case of the Vibrio cholerae toxin which causes prolonged activation of intestinal epithelial cells by locking protein G in its active state (resulting in severe diarrhea which can dehydrate the organism).”
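The glucagon/insulin pairing can be caricatured as two feedback loops acting in opposite directions on a single variable. A toy simulation — all names and rate constants below are invented, with no claim to physiological realism — shows the level settling at the set point from either direction:

```python
def simulate_glucose(g0, set_point=90.0, k_insulin=0.2, k_glucagon=0.2, steps=50):
    """Toy two-loop controller: each opposing 'hormone' corrects a fixed
    fraction of the deviation from the set point per step."""
    g = g0
    for _ in range(steps):
        if g > set_point:
            g -= k_insulin * (g - set_point)   # insulin-like loop lowers glucose
        elif g < set_point:
            g += k_glucagon * (set_point - g)  # glucagon-like loop raises glucose
    return g

print(round(simulate_glucose(180.0), 1))  # converges to the 90.0 set point
print(round(simulate_glucose(50.0), 1))   # converges from below as well
```

The design point is the one made in the text: a single strongly amplified loop would overshoot, whereas two opposed loops pull the variable toward the set point from both sides.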

“Biological systems in which information transfer is affected by high entropy of the information source and ambiguity of the signal itself must include discriminatory mechanisms. These mechanisms usually work by eliminating weak signals (which are less specific and therefore introduce ambiguities). They create additional obstacles (thresholds) which the signals must overcome. A good example is the mechanism which eliminates the ability of weak, random antigens to activate lymphatic cells. It works by inhibiting blastic transformation of lymphocytes until a so-called receptor cap has accumulated on the surface of the cell […]. Only under such conditions can the activation signal ultimately reach the cell nucleus […] and initiate gene transcription. […] weak, reversible nonspecific interactions do not permit sufficient aggregation to take place. This phenomenon can be described as a form of discrimination against weak signals. […] Discrimination may also be linked to effector activity. […] Cell division is counterbalanced by programmed cell death. The most typical example of this process is apoptosis […] Each cell is prepared to undergo controlled death if required by the organism, however apoptosis is subject to tight control. Cells protect themselves against accidental triggering of the process via IAP proteins. Only strong proapoptotic signals may overcome this threshold and initiate cellular suicide”.

“Simply knowing the sequences, structures or even functions of individual proteins does not provide sufficient insight into the biological machinery of living organisms. The complexity of individual cells and entire organisms calls for functional classification of proteins. This task can be accomplished with a proteome — a theoretical construct where individual elements (proteins) are grouped in a way which acknowledges their mutual interactions and interdependencies, characterizing the information pathways in a complex organism.
Most ongoing proteome construction projects focus on individual proteins as the basic building blocks […] [We would instead argue in favour of a model in which] [t]he basic unit of the proteome is one negative feedback loop (rather than a single protein) […]
Due to the relatively large number of proteins (between 25 and 40 thousand in the human organism), presenting them all on a single graph, with vertex lengths corresponding to the relative durations of interactions, would be unfeasible. This is why proteomes are often subdivided into functional subgroups such as the metabolome (proteins involved in metabolic processes), interactome (complex-forming proteins), kinome (proteins which belong to the kinase family) etc.”


February 18, 2018 Posted by | Biology, Books, Chemistry, Genetics, Medicine |

Prevention of Late-Life Depression (I)

“Late-life depression is a common and highly disabling condition and is also associated with higher health care utilization and overall costs. The presence of depression may complicate the course and treatment of comorbid major medical conditions that are also highly prevalent among older adults — including diabetes, hypertension, and heart disease. Furthermore, a considerable body of evidence has demonstrated that, for older persons, residual symptoms and functional impairment due to depression are common — even when appropriate depression therapies are being used. Finally, the worldwide phenomenon of a rapidly expanding older adult population means that unprecedented numbers of seniors — and the providers who care for them — will be facing the challenge of late-life depression. For these reasons, effective prevention of late-life depression will be a critical strategy to lower overall burden and cost from this disorder. […] This textbook will illustrate the imperative for preventing late-life depression, introduce a broad range of approaches and key elements involved in achieving effective prevention, and provide detailed examples of applications of late-life depression prevention strategies”.

I gave the book two stars on goodreads. There are 11 chapters in the book, written by 22 different contributors/authors, so of course there’s a lot of variation in the quality of the material included; the two star rating was an overall assessment of the quality of the material, and the last two chapters – but in particular chapter 10 – did a really good job convincing me that the book did not deserve a 3rd star (if you decide to read the book, I advise you to skip chapter 10). In general I think many of the authors are way too focused on statistical significance and much too hesitant to report actual effect sizes, which are much more interesting. Gender is mentioned repeatedly throughout the coverage as an important variable, to the extent that people who do not read the book carefully might think this is one of the most important variables at play; but when you look at actual effect sizes, you get reported ORs of ~1.4 for this variable, compared to e.g. ORs of ~8–9 for the bereavement variable (see below). You can quibble about population attributable fraction and so on here, but if the effect size is that small it’s unlikely to be all that useful in terms of directing prevention efforts/resource allocation (especially considering that women make up the majority of the total population in these older age groups anyway, as they have higher life expectancy than their male counterparts).

Anyway, below I’ve added some quotes and observations from the first few chapters of the book.

“Meta-analyses of more than 30 randomized trials conducted in high-income countries show that the incidence of new depressive and anxiety disorders can be reduced by 25–50 % over 1–2 years, compared to usual care, through the use of learning-based psychotherapies (such as interpersonal psychotherapy, cognitive behavioral therapy, and problem solving therapy) […] The case for depression prevention is compelling and represents the key rationale for this volume: (1) Major depression is both prevalent and disabling, typically running a relapsing or chronic course. […] (2) Major depression is often comorbid with other chronic conditions like diabetes, amplifying the disability associated with these conditions and worsening family caregiver burden. (3) Depression is associated with worse physical health outcomes, partly mediated through poor treatment adherence, and it is associated with excess mortality after myocardial infarction, stroke, and cancer. It is also the major risk factor for suicide across the life span and particularly in old age. (4) Available treatments are only partially effective in reducing symptom burden, sustaining remission, and averting years lived with disability.”

“[M]any people suffering from depression do not receive any care and approximately a third of those receiving care do not respond to current treatments. The risk of recurrence is high, also in older persons: half of those who have experienced a major depression will experience one or even more recurrences [4]. […] Depression increases the risk of death: among people suffering from depression the risk of dying is 1.65 times higher than among people without a depression [7], with a dose-response relation between severity and duration of depression and the resulting excess mortality [8]. In adults, the average length of a depressive episode is 8 months but among 20 % of people the depression lasts longer than 2 years [9]. […] It has been estimated that in Australia […] 60 % of people with an affective disorder receive treatment, and using guidelines and standards only 34 % receive effective treatment [14]. This translates into preventing 15 % of Years Lived with Disability [15], a measure of disease burden [14] and stresses the need for prevention [16]. Primary health care providers frequently do not recognize depression, in particular among the elderly. Older people may present their depressive symptoms differently from younger adults, with more emphasis on physical complaints [17, 18]. Adequate diagnosis of late-life depression can also be hampered by comorbid conditions such as Parkinson’s and dementia that may have similar symptoms, or by the fact that elderly people as well as care workers may assume that “feeling down” is part of becoming older [17, 18]. […] Many people suffering from depression do not seek professional help or are not identified as depressed [21]. Almost 14 % of elderly people living in community-type living suffer from a severe depression requiring clinical attention [22] and more than 50 % of those have a chronic course [4, 23]. Smit et al. reported an incidence of 6.1 % of chronic or recurrent depression among a sample of 2,200 elderly people (ages 55–85) [21].”

“Prevention differs from intervention and treatment as it is aimed at general population groups who vary in risk level for mental health problems such as late-life depression. The Institute of Medicine (IOM) has introduced a prevention framework, which provides a useful model for comprehending the different objectives of the interventions [29]. The overall goal of prevention programs is reducing risk factors and enhancing protective factors.
The IOM framework distinguishes three types of prevention interventions: (1) universal preventive interventions, (2) selective preventive interventions, and (3) indicated preventive interventions. Universal preventive interventions are targeted at the general audience, regardless of their risk status or the presence of symptoms. Selective preventive interventions serve those sub-populations who have a significantly higher than average risk of a disorder, either imminently or over a lifetime. Indicated preventive interventions target identified individuals with minimal but detectable signs or symptoms suggesting a disorder. This type of prevention consists of early recognition and early intervention of the diseases to prevent deterioration [30]. For each of the three types of interventions, the goal is to reduce the number of new cases. The goal of treatment, on the other hand, is to reduce prevalence or the total number of cases. By reducing incidence you also reduce prevalence [5]. […] prevention research differs from treatment research in various ways. One of the most important differences is the fact that participants in treatment studies already meet the criteria for the illness being studied, such as depression. The intervention aims to achieve improvement or remission of the specific condition more quickly than if no intervention had taken place. In prevention research, the participants do not meet the specific criteria for the illness being studied, and the overall goal of the intervention is to ensure that clinical illness develops at a lower rate than in a comparison group [5].”

“A couple of risk factors [for depression] occur more frequently among the elderly than among young adults. The loss of a loved one or the loss of a social role (e.g., employment), decrease of social support and network, and the increasing chance of isolation occur more frequently among the elderly. Many elderly also suffer from physical diseases: 64 % of elderly aged 65–74 have a chronic disease [36] […]. It is important to note that depression often co-occurs with other disorders such as physical illness and other mental health problems (comorbidity). Losing a spouse can have significant mental health effects. Almost half of all widows and widowers during the first year after the loss meet the criteria for depression according to the DSM-IV [37]. Depression after loss of a loved one is normal in times of mourning. However, when depressive symptoms persist during a longer period of time it is possible that a depression is developing. Zisook and Shuchter found that a year after the loss of a spouse 16 % of widows and widowers met the criteria of a depression compared to 4 % of those who did not lose their spouse [38]. […] People with a chronic physical disease are also at a higher risk of developing a depression. An estimated 12–36 % of those with a chronic physical illness also suffer from clinical depression [40]. […] around 25 % of cancer patients suffer from depression [40]. […] Depression is relatively common among elderly residing in hospitals and retirement- and nursing homes. An estimated 6–11 % of residents have a depressive illness and around 30 % have depressive symptoms [41]. […] Loneliness is common among the elderly. Among those of 60 years or older, 43 % reported being lonely in a study conducted by Perissinotto et al. […] Loneliness is often associated with physical and mental complaints; apart from depression it also increases the chance of developing dementia and excess mortality [43].”

“From the public health perspective it is important to know what the potential health benefits would be if the harmful effect of certain risk factors could be removed. What health benefits would arise from this, at which efforts and costs? To measure this the population attributable fraction (PAF) can be used. The PAF is expressed as a percentage and indicates the decrease in incidence (the number of new cases) when the harmful effects of the targeted risk factors are fully taken away. For public health it would be more effective to design an intervention targeted at a risk factor with a high PAF than a low PAF. […] An intervention needs to be efficacious in order to be implemented; this means that it has to show a statistically significant difference with placebo or other treatment. Secondly, it needs to be effective; it needs to prove its benefits also in real life (“everyday care”) circumstances. Thirdly, it needs to be efficient. The measure to address this is the Number Needed to Treat (NNT). The NNT expresses how many people need to be treated to prevent the onset of one new case with the disorder; the lower the number, the more efficient the intervention [45]. To summarize, an indicated preventive intervention would ideally be targeted at a relatively small group of people with a high, absolute chance of developing the disease, and a risk profile that is responsible for a high PAF. Furthermore, there needs to be an intervention that is both effective and efficient. […] a more detailed and specific description of the target group results in a higher absolute risk, a lower NNT, and also a lower PAF. This is helpful in determining the costs and benefits of interventions aiming at more specific or broader subgroups in the population. […] Unfortunately very large samples are required to demonstrate reductions in universal or selective interventions [46]. […] If the incidence rate is higher in the target population, which is usually the case in selective and even more so in indicated prevention, the number of participants needed to prove an effect is much smaller [5]. This shows that, even though universal interventions may be effective, their effect is harder to prove than that of indicated prevention. […] Indicated and selective prevention appear to be the most successful in preventing depression to date; however, more research needs to be conducted in larger samples to determine which prevention method is really most effective.”
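The two quantities discussed here have standard formulas that make the trade-off explicit: Levin's population attributable fraction, PAF = p(RR − 1) / (1 + p(RR − 1)), where p is the prevalence of exposure to the risk factor and RR its relative risk, and NNT = 1/ARR, the reciprocal of the absolute risk reduction. A sketch with invented numbers:

```python
def paf(exposure_prevalence, relative_risk):
    """Population attributable fraction (Levin's formula)."""
    x = exposure_prevalence * (relative_risk - 1.0)
    return x / (1.0 + x)

def nnt(risk_control, risk_treated):
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = risk_control - risk_treated
    if arr <= 0:
        raise ValueError("intervention shows no risk reduction")
    return 1.0 / arr

# Invented illustration: a risk factor carried by 20% of the population
# with RR = 2.0 accounts for ~17% of new cases ...
print(round(paf(0.20, 2.0), 3))  # 0.167
# ... and an intervention cutting 1-year incidence from 15% to 10%
# must treat 20 people to prevent one case.
print(round(nnt(0.15, 0.10)))    # 20
```

This also shows the chapter's point about targeting: narrowing the target group raises the absolute risk (lowering the NNT) while shrinking the exposure prevalence p (lowering the PAF).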

“Groffen et al. [6] recently conducted an investigation among a sample of 4,809 participants from the Reykjavik Study (aged 66–93 years). Similar to the findings presented by Vink and colleagues [3], education level was related to depression risk: participants with lower education levels were more likely to report depressed mood in late-life than those with a college education (odds ratio [OR] = 1.87, 95 % confidence interval [CI] = 1.35–2.58). […] Results from a meta-analysis by Lorant and colleagues [8] showed that lower SES individuals had greater odds of developing depression than those in the highest SES group (OR = 1.24, p = 0.004); however, the studies involved in this review did not focus on older populations. […] Cole and Dendukuri [10] performed a meta-analysis of studies involving middle-aged and older adult community residents, and determined that female gender was a risk factor for depression in this population (pooled OR = 1.4, 95 % CI = 1.2–1.8), but not old age. Blazer and colleagues [11] found a significant positive association between older age and depressive symptoms in a sample consisting of community-dwelling older adults; however, when potential confounders such as physical disability, cognitive impairment, and gender were included in the analysis, the relationship between chronological age and depressive symptoms was reversed (p < 0.01). A study by Schoevers and colleagues [14] had similar results […] these findings suggest that the higher incidence of depression observed among the oldest-old may be explained by other relevant factors. By contrast, the association of female gender with increased risk of late-life depression has been observed to be a highly consistent finding.”

In an examination of marital bereavement, Turvey et al. [16] analyzed data among 5,449 participants aged70 years […] recently bereaved participants had nearly nine times the odds of developing syndromal depression as married participants (OR = 8.8, 95 % CI = 5.1–14.9, p<0.0001), and they also had significantly higher risk of depressive symptoms 2 years after the spousal loss. […] Caregiving burden is well-recognized as a predisposing factor for depression among older adults [18]. Many older persons are coping with physically and emotionally challenging caregiving roles (e.g., caring for a spouse/partner with a serious illness or with cognitive or physical decline). Additionally, many caregivers experience elements of grief, as they mourn the loss of relationship with or the decline of valued attributes of their care recipients. […] Concepts of social isolation have also been examined with regard to late-life depression risk. For example, among 892 participants aged 65 years […], Gureje et al. [13] found that women with a poor social network and rural residential status were more likely to develop major depressive disorder […] Harlow and colleagues [21] assessed the association between social network and depressive symptoms in a study involving both married and recently widowed women between the ages of 65 and 75 years; they found that number of friends at baseline had an inverse association with CES-D (Centers for Epidemiologic Studies Depression Scale) score after 1 month (p< 0.05) and 12 months (p= 0.06) of follow-up. In a study that explicitly addressed the concept of loneliness, Jaremka et al. [22] conducted a study relating this factor to late-life depression; importantly, loneliness has been validated as a distinct construct, distinguishable among older adults from depression. 
Among 229 participants (mean age = 70 years) in a cohort of older adults caring for a spouse with dementia, loneliness (as measured by the NYU scale) significantly predicted incident depression (p < 0.001). Finally, social support has been identified as important to late-life depression risk. For example, Cui and colleagues [23] found that low perceived social support significantly predicted worsening depression status over a 2-year period among 392 primary care patients aged 65 years and above.”

“Saunders and colleagues [26] reported […] findings with alcohol drinking behavior as the predictor. Among 701 community-dwelling adults aged 65 years and above, the authors found a significant association between prior heavy alcohol consumption and late-life depression among men: compared to those who were not heavy drinkers, men with a history of heavy drinking had nearly fourfold higher odds of being diagnosed with depression (OR = 3.7, 95 % CI = 1.3–10.4, p < 0.05). […] Almeida et al. found that obese men were more likely than non-obese (body mass index [BMI] < 30) men to develop depression (HR = 1.31, 95 % CI = 1.05–1.64). Consistent with these results, presence of the metabolic syndrome was also found to increase risk of incident depression (HR = 2.37, 95 % CI = 1.60–3.51). Finally, leisure-time activities are also important to study with regard to late-life depression risk, as these too are readily modifiable behaviors. For example, Magnil et al. [30] examined such activities among a sample of 302 primary care patients aged 60 years. The authors observed that those who lacked leisure activities had an increased risk of developing depressive symptoms over the 2-year study period (OR = 12, 95 % CI = 1.1–136, p = 0.041). […] an important future direction in addressing social and behavioral risk factors in late-life depression is to make more progress in trials that aim to alter those risk factors that are actually modifiable.”
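
A quick aside from me: to make the odds-ratio figures quoted above a bit more concrete, here is a minimal Python sketch of how an odds ratio and its (Woolf-type, log-scale) 95 % confidence interval are computed from a 2×2 table. The counts below are made up purely for illustration and are not taken from any of the studies cited:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-scale) 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of summed reciprocal cell counts
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, NOT data from the studies discussed above:
print(odds_ratio_ci(40, 60, 20, 110))
```

If the interval excludes 1, the association is conventionally called significant at the 5 % level, which is how statements like “OR = 3.7, 95 % CI = 1.3–10.4” should be read.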


February 17, 2018 Posted by | Books, Epidemiology, Health Economics, Medicine, Psychiatry, Psychology, Statistics | Leave a comment

Peripheral Neuropathy (II)

Chapter 3 included a great new (…new to me, that is…) chemical formula which I can’t not share here: (R)-(+)-[2,3-dihydro-5-methyl-3-(4-morpholinylmethyl)pyrrolo[1,2,3-de]-1,4-benzoxazin-6-yl]-1-naphthalenylmethanone mesylate. It’s a cannabinoid receptor agonist, the properties of which are briefly discussed in the book’s chapter 3.

Anyway, some more observations from the book below:

“Injuries affecting either the peripheral or the central nervous system (PNS, CNS) lead to neuropathic pain characterized by spontaneous pain and distortion or exaggeration of pain sensation. Peripheral nerve pathologies are generally considered easier to treat than those affecting the CNS; however, peripheral neuropathies still remain a challenge to therapeutic treatment. […] Although first thought to be a disease of purely neuronal nature, several pre-clinical studies indicate that the mechanisms underlying the development and maintenance of neuropathic pain involve substantial contributions from the nonneuronal cells of both the PNS and CNS [22]. After peripheral nerve injury, microglia in the spinal dorsal horn that are in the normal condition (usually termed “resting” microglia) proliferate and change their phenotype to an “activated” state through a series of cellular and molecular changes. Microglia shift their phenotype to the hypertrophic “activated” form following altered expression of several molecules including cell surface receptors, intracellular signalling molecules and diffusible factors. The activation process consists of distinct cellular functions aimed at repairing damaged neural cells and eliminating debris from the damaged area [23]. Damaged cells release chemo-attractant molecules that both increase the motility (i.e. chemokinesis) and stimulate the migration (i.e. chemotaxis) of microglia, the combination of which recruits the microglia much closer to the damaged cells […] Once microglia become activated, they can exert either proinflammatory or anti-inflammatory/neuroprotective functions depending on the combination of the stimulation of several receptors and the expression of specific genes [31]. Thus, the activation of microglia following a peripheral injury can be considered an adaptation to tissue stress and malfunction [32] that contributes to the development and subsequent maintenance of chronic pain [33, 34].
[…] The signals responsible for neuron-microglia and/or astrocyte communication are being extensively investigated since they may represent new targets for chronic pain management.”

“In the past two decades a notable increase in the incidence of [upper extremity compression neuropathies] has occurred. […] it is mandatory to achieve a prompt diagnosis because they can produce important motor and sensory deficits that need to be treated before the development of complications, since, despite the capacity for regeneration bestowed on the peripheral nervous system, functions lost as a result of denervation are never fully restored. […] There are many different situations that may be a direct cause of nerve compression. Anatomically, nerves can be compressed when traversing fibro-osseous tunnels, passing between muscle layers, through traction as they cross joints or buckling during certain movements of the wrist and elbow. Other causes include trauma, direct pressure and space-occupying lesions at any level in the upper extremity. There are other situations that are not a direct cause of nerve compression, but may increase the risk and may predispose the nerve to compression, especially when the soft tissues are swollen, as in synovitis, pregnancy, hypothyroidism, diabetes or alcoholism [1]. […] When nerve fibers undergo compression, the response depends on the force applied at the site and the duration. Acute, brief compression results in a focal conduction block as a result of local ischemia, which is reversible if the compression is transient. On the other hand, if the focal compression is prolonged, ischemic changes appear, followed by endoneurial edema and secondary perineurial thickening. These histological alterations will aggravate the changes in the microneural circulation and will increase the sensitivity of the nerve sheath to ischemia. If the compression continues, we will find focal demyelination, which typically results in a greater involvement of motor than sensory nerve fibers.
[…] As the duration of compression increases beyond several hours, more diffuse demyelination will appear […] This process begins at the portion of the nerve distal to the compression or injury, a process termed wallerian degeneration. These neural changes may not appear in a uniform fashion across the whole nerve sheath, depending on the distribution of the compressive forces, causing mixed demyelinating and axonal injury resulting from a combination of mechanical distortion of the nerve, ischemic injury, and impaired axonal flow [2].”

“Electrophysiologic testing is part of the evaluation [of compression neuropathies], but it never substitutes for a complete history and a thorough physical examination. These tests can detect physiologic abnormalities in the course of motor and sensory axons. There are two main electrophysiologic tests: needle electromyography and nerve conduction studies […] Electromyography detects voluntarily or spontaneously generated electrical activity. This activity is recorded via needle insertion, at rest and during muscular activity, to assess duration, amplitude, configuration and recruitment after injury. […] Nerve conduction studies assess both sensory and motor nerves. This study consists of applying a voltage stimulator to the skin over different points of the nerve in order to record the muscular action potential, analyzing the amplitude, duration, area, latency and conduction velocity. The amplitude indicates the number of available nerve fibers.”

“There are three well-described entrapment syndromes involving the median nerve or its branches, namely pronator teres syndrome, anterior interosseous syndrome and carpal tunnel syndrome, according to the level of entrapment. Each of these syndromes presents with different clinical signs, symptoms and electrophysiologic results, and requires a different technique for its release. […] [In pronator teres syndrome] [t]he onset is insidious and is suggested when the early sensory disturbances are greater on the thumb and index finger, mainly tingling, numbness and dysaesthesia in the median nerve distribution. Patients will also complain of increased pain in the proximal forearm and greater hand numbness with sustained power gripping or rotation […] Surgical decompression is the definitive treatment. […] [Anterior interosseous syndrome] presents principally as weakness of the index finger and thumb, and the patient may complain of diffuse pain in the proximal forearm, which may be exacerbated during exercise and diminished with rest. The vast majority of patients begin with pain in the upper arm, elbow and forearm, often preceding the motor symptoms. […] During the physical exam, the patient will be unable to bend the tip of the thumb and the tip of the index finger. The typical symptom is the inability to form an “O” with the thumb and index finger. […] If the onset was spontaneous and there is no evident lesion on MRI, supportive care and corticosteroid injections with observation for 4 to 6 weeks is usually accepted management. The degree of recovery is unpredictable.”

“[Carpal tunnel syndrome] is the most frequently encountered compression neuropathy in the upper limb. It is a mechanical compression of the median nerve within the fixed space of the rigid carpal tunnel. The incidence in the United States has been estimated at 1 to 3 cases per 1,000 subjects per year, with a prevalence of 50 cases per 1,000 subjects [10]. It is more common in women than in men (2:1), perhaps because the carpal tunnel itself may be smaller in women than in men. The dominant hand is usually affected first and produces the most severe pain. It usually occurs in adults […] Abnormalities on electrophysiologic testing, in association with specific symptoms and signs, are considered the criterion standard for carpal tunnel syndrome diagnosis. Electrophysiologic testing can also provide an accurate assessment of how severe the damage to the nerve is, thereby directing management and providing objective criteria for the determination of prognosis. Carpal tunnel syndrome is usually divided into mild, moderate and severe. In general, patients with mild carpal tunnel syndrome have sensory abnormalities alone on electrophysiologic testing, and patients with sensory plus motor abnormalities have moderate carpal tunnel syndrome. However, any evidence of axonal loss is classified as severe carpal tunnel syndrome. […] No imaging studies are considered routine in the diagnosis of carpal tunnel syndrome. […] nonoperative treatment is based on splinting of the wrist in a neutral position for three weeks and steroid injections. This therapy has variable results, with a success rate of up to 76% during one year, but with a recurrence rate as high as 94%. Non-operative treatment is indicated in patients with intermittent symptoms, in initial stages and during pregnancy [17]. The only definitive treatment for carpal tunnel syndrome is surgical expansion of the carpal tunnel by transection of the transverse carpal ligament.”
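
The mild/moderate/severe grading rule quoted above is simple enough to express directly as code. Here is a small Python sketch of that rule; this is my own simplification of what the book describes, purely for illustration, and obviously not a clinical tool:

```python
def cts_severity(sensory_abnormal, motor_abnormal, axonal_loss):
    """Grade carpal tunnel syndrome from electrophysiologic findings,
    following the rules quoted above: any axonal loss is severe,
    sensory plus motor abnormalities are moderate, and sensory
    abnormalities alone are mild."""
    if axonal_loss:
        return "severe"
    if sensory_abnormal and motor_abnormal:
        return "moderate"
    if sensory_abnormal:
        return "mild"
    return "no electrophysiologic evidence of CTS"
```

Note that the axonal-loss criterion overrides the others, which is why it is checked first.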

“Postural control can be defined as the control of the body’s position in space for the purposes of balance and orientation. Balance is the ability to maintain or return the body’s centre of gravity within the limits of stability that are determined by the base of support. Spatial orientation defines our natural ability to maintain our body orientation in relation to the surrounding environment, in static and dynamic conditions. The representation of the body’s static and dynamic geometry may be largely based on muscle proprioceptive inputs that continuously inform the central nervous system about the position of each part of the body in relation to the others. Posture is built up by the sum of several basic mechanisms. […] Postural balance is dependent upon integration of signals from the somatosensory, visual and vestibular systems to generate motor responses, with cognitive demands that vary according to the task, the age of the individual and their ability to balance. Descending postural commands are multivariate in nature, and the motion at each joint is affected uniquely by input from multiple sensors.
The proprioceptive system provides information on joint angles, changes in joint angles, joint position and muscle length and tension, while the tactile system is associated mainly with sensations of touch, pressure and vibration. Visual influence on postural control results from a complex synergy that receives multimodal inputs. Vestibular inputs tonically activate the anti-gravity leg muscles and, during dynamic tasks, vestibular information contributes to head stabilization to enable successful gaze control, providing a stable reference frame from which to generate postural responses. In order to assess instability or walking difficulty, it is essential to identify the affected movements and the circumstances in which they occur (i.e. uneven surfaces, environmental light, activity) as well as any other associated clinical manifestation that could be related to balance, postural control, motor control, muscular force, movement limitations or sensory deficiency. The clinical evaluation should include neurological examination; special care should be taken to identify visual and vestibular disorders, and to assess static and dynamic postural control and gait.”

“Polyneuropathy modifies the amount and quality of the sensory information that is necessary for motor control, with increased instability during both upright stance and gait. Patients with peripheral neuropathy may have decreased stability while standing and when subjected to dynamic balance conditions. […] Balance and gait difficulties are the most frequently cited cause of falling […] Patients with polyneuropathy who have ankle weakness are more likely to experience multiple and injurious falls than are those without specific muscle weakness. […] During upright stance, compared to healthy subjects, recordings of the centre of pressure in patients with diabetic neuropathy have shown larger sway [95-96, 102], as well as increased oscillation […] Compared to healthy subjects, diabetic patients may have poorer balance during standing in diminished light than in full light and no light conditions [105] […] compared to patients with diabetes but no peripheral neuropathy, patients with diabetic peripheral neuropathy are more likely to report an injury during walking or standing, which may be more frequent when walking on irregular surfaces [110]. Epidemiological surveys have established that a reduction of leg proprioception is a risk factor for falls in the elderly [111-112]. Symptoms and signs of peripheral neuropathy are frequently found during physical examination of older subjects. These clinical manifestations may be related to diabetes mellitus, alcoholism, nutritional deficiencies and autoimmune diseases, among other causes. In this group of patients, loss of plantar sensation may be an important contributor to the dynamic balance deficits and increased risk of falls [34, 109]. […] Apart from sensorimotor compromise, fear of falling may relate to restriction and avoidance of activities, which results in loss of strength, especially in the lower extremities, and may also be predictive of future falls [117-119].”

“In patients with various forms of peripheral neuropathy, the use of a cane, ankle orthoses or touching a wall [has been shown to improve] spatial and temporal measures of gait regularity while walking under challenging conditions. Additional hand contact with external objects may reduce postural instability caused by a deficiency of one or more senses. […] Contact of the index finger with a stationary surface can greatly attenuate postural instability during upright stance, even when the level of force applied is far below that necessary to provide mechanical support [42]. […] haptic information about postural sway derived from contact with other parts of the body can also increase stability […] Studies evaluating preventive and treatment strategies through excercise [sic – US] that could improve balance in patients with polyneuropathy are scarce. However, the evidence supports that physical activity interventions that increase activity probably do not increase the risk of falling in patients with diabetic peripheral neuropathy, and in this group of patients specific training may improve gait speed, balance, muscle strength and joint mobility.”

“Postherpetic neuralgia (PHN) is a form of refractory chronic neuralgia that […] currently lacks any effective prophylaxis. […] PHN has a variety of symptoms and significantly affects patient quality of life [3-12]. Various studies have statistically analyzed predictive factors for PHN [13-23], but neither an obvious pathogenesis nor an effective treatment has been established. We designed and conducted a study on the premise that statistical identification of significant predictors for PHN would contribute to the establishment of an evidence-based medicine approach to the optimal treatment of PHN. […] Previous studies have shown that older age, female sex, presence of a prodrome, greater rash severity, and greater acute pain severity are predictors of increased PHN [14-18, 25]. Some other potential predictors (ophthalmic localization, presence of anxiety and depression, presence of allodynia, and serological/virological factors) have also been studied [14, 18]. […] The participants were 73 patients with herpes zoster who had been treated at the pain clinic of our hospital between January 2008 and June 2010. […] Multivariate ordered logistic regression analysis was performed to identify predictive factors for PHN. […] advanced age and deep pain at first visit were identified as predictive factors for PHN. DM [diabetes mellitus – US] and pain reduced by bathing should also be considered as potential predictors of PHN [24].”
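
For readers unfamiliar with the method mentioned: a multivariate ordered logistic regression models the probability of an ordered outcome (say, no/mild/moderate/severe PHN) through cumulative logits, one cutpoint per boundary between adjacent severity levels. Here is a minimal Python sketch of how category probabilities fall out of such a model once it has been fitted; the coefficient names (age in decades, deep pain yes/no), their values and the cutpoints below are entirely hypothetical illustrations of mine, not the study’s estimates:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def ordered_logit_probs(xb, cutpoints):
    """Category probabilities for an ordered logit model.
    xb: linear predictor (sum of coefficient * covariate values),
    cutpoints: increasing thresholds separating the ordered outcome levels."""
    cum = [logistic(xb - c) for c in cutpoints]  # P(outcome above cutpoint k)
    probs, prev = [], 1.0
    for p in cum:
        probs.append(prev - p)  # probability mass between adjacent cutpoints
        prev = p
    probs.append(prev)  # probability of the highest category
    return probs

# Entirely hypothetical coefficients (age in decades, deep pain at first visit 0/1):
beta_age, beta_deep_pain = 0.45, 1.2
xb = beta_age * 7.5 + beta_deep_pain * 1  # a 75-year-old with deep pain
print(ordered_logit_probs(xb, cutpoints=[3.0, 4.5, 6.0]))
```

A larger linear predictor shifts probability mass toward the more severe categories, which is the sense in which “advanced age and deep pain at first visit” act as predictive factors in such a model.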


February 14, 2018 Posted by | Books, Diabetes, Infectious disease, Medicine, Neurology | Leave a comment

Peripheral Neuropathy (I)

“The objective of this book is to update health care professionals on recent advances in the pathogenesis, diagnosis and treatment of peripheral neuropathy. This work was written by a group of clinicians and scientists with extensive expertise in the field.”

The book is not the first book about this topic I’ve read, so a lot of the stuff included was of course review – however it’s a quite decent text, and I decided to blog it in at least some detail anyway. It’s somewhat technical and it’s probably not a very good introduction to this topic if you know next to nothing about neurology – in that case I’m certain Said’s book (see the ‘not’-link above) is a better option.

I have added some observations from the first couple of chapters below. As InTech publications like these explicitly encourage people to share the ideas and observations included in these books, I shall probably cover the book in more detail than I otherwise would have.

“Within the developing world, infectious diseases [2-4] and trauma [5] are the most common sources of neuropathic pain syndromes. The developed world, in contrast, suffers more frequently from diabetic polyneuropathy (DPN) [6, 7], postherpetic neuralgia (PHN) from herpes zoster infections [8], and chemotherapy-induced peripheral neuropathy (CIPN) [9, 10]. There is relatively little epidemiological data regarding the prevalence of neuropathic pain within the general population, but a few estimates suggest it is around 7-8% [11, 12]. Despite the widespread occurrence of neuropathic pain, treatment options are limited and often ineffective […] Neuropathic pain can present as ongoing or spontaneous discomfort that occurs in the absence of any observable stimulus, or as a painful hypersensitivity to temperature and touch. […] people with chronic pain have increased incidence of anxiety and depression and reduced scores in quantitative measures of health-related quality of life [15]. Despite significant progress in chronic and neuropathic pain research, which has led to the discovery of several efficacious treatments in rodent models, pain management in humans remains ineffective and insufficient [16]. The lack of translational efficiency may be due to inadequate animal models that do not faithfully recapitulate human disease or to biological differences between rodents and humans […] In an attempt to increase the efficacy of medical treatment for neuropathic pain, clinicians and researchers have been moving away from an etiology-based classification towards one that is mechanism-based. It is current practice to diagnose a person who presents with neuropathic pain according to the underlying etiology and lesion topography [17]. However, this does not translate to effective patient care, as these classification criteria do not suggest efficacious treatment.
A more apt diagnosis might include a description of symptoms and the underlying pathophysiology associated with those symptoms.”

“Neuropathic pain has been defined […] as “pain arising as the direct consequence of a lesion or disease affecting the somatosensory system” [18]. This is distinct from nociceptive pain – which signals tissue damage through an intact nervous system – in underlying pathophysiology, severity, and associated psychological comorbidities [13]. Individuals who suffer from neuropathic pain syndromes report pain of higher intensity and duration than individuals with non-neuropathic chronic pain and have significantly increased incidence of depression, anxiety, and sleep disorders [13, 19]. […] individuals with seemingly identical diseases who both develop neuropathic pain may experience distinct abnormal sensory phenotypes. This may include a loss of sensory perception in some modalities and increased activity in others. Often a reduction in the perception of vibration and light touch is coupled with positive sensory symptoms such as paresthesia, dysesthesia, and pain [20]. Pain may manifest as either spontaneous, with a burning or shock-like quality, or as a hypersensitivity to mechanical or thermal stimuli [21]. This hypersensitivity takes two forms: allodynia, pain that is evoked from a normally non-painful stimulus, and hyperalgesia, an exaggerated pain response from a moderately painful stimulus. […] Noxious stimuli are perceived by small diameter peripheral neurons whose free nerve endings are distributed throughout the body. These neurons are distinct from, although anatomically proximal to, the low threshold mechanoreceptors responsible for the perception of vibration and light touch.”

“In addition to hypersensitivity, individuals with neuropathic pain frequently experience ongoing spontaneous pain as a major source of discomfort and distress. […] In healthy individuals, a quiescent neuron will only generate an action potential when presented with a stimulus of sufficient magnitude to cause membrane depolarization. Following nerve injury, however, significant changes in ion channel expression, distribution, and kinetics lead to disruption of the homeostatic electric potential of the membrane, resulting in oscillations and burst firing. This manifests as spontaneous pain that has a shooting or burning quality […] There is reasonable evidence to suggest that individual ion channels contribute to specific neuropathic pain symptoms […] [this observation] provides an intriguing therapeutic possibility: unambiguous pharmacologic ion channel blockers to relieve individual sensory symptoms with minimal unintended effects, allowing pain relief without global numbness. […] Central sensitization leads to painful hypersensitivity […] Functional and structural changes of dorsal horn circuitry lead to pain hypersensitivity that is maintained independent of peripheral sensitization [38]. This central sensitization provides a mechanistic explanation for the sensory abnormalities that occur in both acute and chronic pain states, such as the expansion of hypersensitivity beyond the innervation territory of a lesion site, repeated stimulation of a constant magnitude leading to an increasing pain response, and pain outlasting a peripheral stimulus [39-41]. In healthy individuals, acute pain triggers central sensitization, but homeostatic sensitivity returns following clearance of the initial insult. In some individuals who develop neuropathic pain, genotype and environmental factors contribute to maintenance of central sensitization, leading to spontaneous pain, hyperalgesia, and allodynia.
[…] Similarly, facilitation also results in a lowered activation threshold in second order neurons”.

“Chronic pain conditions are associated with vast functional and structural changes of the brain, when compared to healthy controls, but it is currently unclear which comes first: does chronic pain cause distortions of brain circuitry and anatomy or do cerebral abnormalities trigger and/or maintain the perception of chronic pain? […] Brain abnormalities in chronic pain states include modification of brain activity patterns, localized decreases in gray matter volume, and circuitry rerouting [53]. […] Chronic pain conditions are associated with localized reduction in gray matter volume, and the topography of gray matter volume reduction is dictated, at least in part, by the particular pathology. […] These changes appear to represent a form of plasticity as they are reversible when pain is effectively managed [63, 67, 68].”

“By definition, neuropathic pain indicates direct pathology of the nervous system while nociceptive pain is an indication of real or potential tissue damage. Due to the distinction in pathophysiology, conventional treatments prescribed for nociceptive pain are not very effective in treating neuropathic pain and vice versa [78]. Therefore the first step towards meaningful pain relief is an accurate diagnosis. […] Treating neuropathic pain requires a multifaceted approach that aims to eliminate the underlying etiology, when possible, and manage the associated discomforts and emotional distress. Although in some cases it is possible to directly treat the cause of neuropathic pain, for example surgery to alleviate a constricted nerve, it is more likely that the primary cause is untreatable, as is the case with singular traumatic events such as stroke and spinal cord injury and diseases like diabetes. When this is the case, symptom management and pain reduction become the primary focus. Unfortunately, in most cases complete elimination of pain is not a feasible endpoint; a pain reduction of 30% is considered to be efficacious [21]. Additionally, many pharmacological treatments require careful titration and tapering to prevent adverse effects and toxicity. This process may take several weeks to months, and ultimately the drug may be ineffective, necessitating another trial with a different medication. It is therefore necessary that both doctor and patient begin treatment with realistic expectations and goals.”

“First-line medications for the treatment of neuropathic pain are those that have proven efficacy in randomized clinical trials (RCTs) and are consistent with pooled clinical observations [81]. These include antidepressants, calcium channel ligands, and topical lidocaine [15]. Tricyclic antidepressants (TCAs) have demonstrated efficacy in treating neuropathic pain, with positive results in RCTs for central post-stroke pain, PHN, painful diabetic and non-diabetic polyneuropathy, and post-mastectomy pain syndrome [82]. However, they do not seem to be effective in treating painful HIV-neuropathy or CIPN [82]. Duloxetine and venlafaxine, two selective serotonin norepinephrine reuptake inhibitors (SSNRIs), have been found to be effective in DPN and in both DPN and painful polyneuropathies, respectively [81]. […] Gabapentin and pregabalin have also demonstrated efficacy in several neuropathic pain conditions including DPN and PHN […] Topical lidocaine (5% patch or gel) has significantly reduced allodynia associated with PHN and other neuropathic pain syndromes in several RCTs [81, 82]. With no reported systemic adverse effects and mild skin irritation as the only concern, lidocaine is an appropriate choice for treating localized peripheral neuropathic pain. In the event that first-line medications, alone or in combination, are not effective at achieving adequate pain relief, second-line medications may be considered. These include opioid analgesics and tramadol, pharmaceuticals which have proven efficacy in RCTs but are associated with significant adverse effects that warrant cautious prescription [15].
Although opioid analgesics are effective pain relievers in several types of neuropathic pain [81, 82, 84], they are associated with misuse or abuse, hypogonadism, constipation, nausea, and immunological changes […] Careful consideration should be given when prescribing opiates to patients who have a personal or family history of drug or alcohol abuse […] Deep brain stimulation, a neurosurgical technique by which an implanted electrode delivers controlled electrical impulses to targeted brain regions, has demonstrated some efficacy in treating chronic pain but is not routinely employed due to a high risk-to-benefit ratio [91]. […] A major challenge in treating neuropathic pain is the heterogeneity of disease pathogenesis within an individual etiological classification. Patients with seemingly identical diseases may experience completely different neuropathic pain phenotypes […] One of the biggest barriers to successful management of neuropathic pain has been the lack of understanding in the underlying pathophysiology that produces a pain phenotype. To that end, significant progress has been made in basic science research.”

“In diabetes mellitus, nerves and their supporting cells are subjected to prolonged hyperglycemia and metabolic disturbances, and this culminates in reversible/irreversible nervous system dysfunction and damage, namely diabetic peripheral neuropathy (DPN). Due to the varying compositions and extents of neurological involvement, it is difficult to obtain accurate and thorough prevalence estimates of DPN, rendering this microvascular complication vastly underdiagnosed and undertreated [1-4]. According to the American Diabetes Association, DPN occurs in 60-70% of diabetic individuals [5] and represents the leading cause of peripheral neuropathies among all cases [6, 7].”

A quick remark: This number seems really high to me. I won’t rule out that it’s accurate if you go with highly sensitive measures of neuropathy, but the number of patients who will experience significant clinical sequelae as a result of DPN is in my opinion likely to be significantly lower than that. On a peripherally related note, it should also be kept in mind that although diabetes-related neurological complications may display some clustering in patient groups – which will necessarily decrease the magnitude of the problem – no single test will ever completely rule out neurological complications in a diabetic; a patient with a negative Semmes-Weinstein monofilament test may still have autonomic neuropathy. So the full disease burden of diabetes-related neurological complications cannot be assessed using only a single instrument, and it is likely to be higher than individual estimates encountered in the literature (unless a full neurological workup was done, which is unlikely to be the case). They do go into more detail about subgroups, clinical significance, etc. below, but I thought this observation was important to add early on in this part of the coverage.

“Because diverse anatomic distributions and fiber types may be differentially affected in patients with diabetes, the disease manifestations, courses and pathologies of clinical and subclinical DPN are rather heterogeneous and encompass a broad spectrum […] Current consensus divides diabetes-associated somatic neuropathic syndromes into the focal/multifocal and diffuse/generalized neuropathies [6, 14]. The first category comprises a group of asymmetrical, acute-in-onset and self-limited single lesion(s) of nerve injury or impairment largely resulting from the increased vulnerability of diabetic nerves to mechanical insults (Carpal Tunnel Syndrome) […]. Such mononeuropathies occur idiopathically and only become a clinical problem in association with aging in 5-10% of those affected. Therefore, focal neuropathies are not extensively covered in this chapter [16]. The rest of the patients frequently develop diffuse neuropathies characterized by symmetrical distribution, insidious onset and chronic progression. In particular, a distal symmetrical sensorimotor polyneuropathy accounts for 90% of all DPN diagnoses in type 1 and type 2 diabetics and affects all types of peripheral sensory and motor fibers in a temporally non-uniform manner [6, 17].
Symptoms begin with prickling, tingling, numbness, paresthesia, dysesthesia and various qualities of pain associated with small sensory fibers at the very distal end (toes) of lower extremities [1, 18]. Presence of the above symptoms together with abnormal nociceptive response of epidermal C and A-δ fibers to pain/temperature (as revealed by clinical examination) constitutes the diagnosis of small fiber sensory neuropathy, which produces both painful and insensate phenotypes [19]. Painful diabetic neuropathy is a prominent, distressing and chronic experience in at least 10-30% of DPN populations [20, 21]. Its occurrence does not necessarily correlate with impairment in electrophysiological or quantitative sensory testing (QST). […] Large myelinated sensory fibers that innervate the dermis, such as Aβ, also become involved later on, leading to impaired proprioception, vibration and tactile detection, and mechanical hypoalgesia [19]. Following this “stocking-glove”, length-dependent and dying-back progression, neurodegeneration gradually proceeds to proximal muscle sensory and motor nerves. Its presence manifests in neurological testing as reduced nerve impulse conduction, diminished ankle tendon reflex, unsteadiness and muscle weakness [1, 24].
The absence of both protective sensory response and motor coordination predisposes the neuropathic foot to impaired wound healing and gangrenous ulceration, often followed by limb amputation in severe and/or advanced cases […]. Although symptomatic motor deficits only appear in later stages of DPN [25], motor denervation and distal atrophy can increase the rate of fractures by causing repetitive minor trauma or falls [24, 28]. Other unusual but highly disabling late sequelae of DPN include limb ischemia and joint deformity [6]; the latter also being termed Charcot’s neuroarthropathy or Charcot’s joints [1]. In addition to significant morbidities, several separate cohort studies provided evidence that DPN [29], diabetic foot ulcers [30] and increased toe vibration perception threshold (VPT) [31] are all independent risk factors for mortality.”

“Unfortunately, current therapy for DPN is far from effective and at best only delays the onset and/or progression of the disease via tight glucose control […] Even with near normoglycemic control, a substantial proportion of patients still suffer the debilitating neurotoxic consequences of diabetes [34]. On the other hand, some with poor glucose control are spared from clinically evident signs and symptoms of neuropathy for a long time after diagnosis [37-39]. Thus, other etiological factors independent of hyperglycemia are likely to be involved in the development of DPN. Data from a number of prospective, observational studies suggested that older age, longer diabetes duration, genetic polymorphism, presence of cardiovascular disease markers, malnutrition, presence of other microvascular complications, alcohol and tobacco consumption, and higher constitutional indexes (e.g. weight and height) interact with diabetes and make for strong predictors of neurological decline [13, 32, 40-42]. Targeting some of these modifiable risk factors in addition to glycemia may improve the management of DPN. […] enormous efforts have been devoted to understanding and intervening with the molecular and biochemical processes linking the metabolic disturbances to sensorimotor deficits by studying diabetic animal models. In return, nearly 2,200 articles were published in PubMed Central and at least 100 clinical trials were reported evaluating the efficacy of a number of pharmacological agents; the majority of them are designed to inhibit specific pathogenic mechanisms identified by these experimental approaches. Candidate agents have included aldose reductase inhibitors, AGE inhibitors, γ-linolenic acid, α-lipoic acid, vasodilators, nerve growth factor, protein kinase Cβ inhibitors, and vascular endothelial growth factor.
Notwithstanding a wealth of knowledge and promising results in animals, none has translated into definitive clinical success […] Based on the records published by the National Institute of Neurological Disorders and Stroke (NINDS), a main source of DPN research funding, about 16,488 projects were funded at a cost of over $8 billion for the fiscal years of 2008 through 2012. Of these projects, an estimated 72,200 animals were used annually to understand basic physiology and disease pathology as well as to evaluate potential drugs [255]. As discussed above, however, the usefulness of these pharmaceutical agents developed through such a pipeline in preventing or reducing neuronal damage has been equivocal and usually halted at human trials due to toxicity, lack of efficacy or both […]. Clearly, the pharmacological translation from our decades of experimental modeling to clinical practice with regard to DPN has thus far not even [been] close to satisfactory.”

“Whereas a majority of the drugs investigated during preclinical testing achieved the experimentally desired endpoints without revealing significant toxicity, more than half of those that entered clinical evaluation for treating DPN were withdrawn as a consequence of moderate to severe adverse events even at a much lower dose. Generally, using other species as surrogates for the human population inherently encumbers the accurate prediction of toxic reactions for several reasons […] First of all, it is easy to dismiss drug-induced non-specific effects in animals – especially for laboratory rodents, which do not share the same size, anatomy and physical activity as humans. […] Second, some physiological and behavioral phenotypes observable in humans are impossible for animals to express. In this respect, photosensitive skin rash and pain serve as two good examples of non-translatable side effects. Rodent skin differs from that of humans in that it has a thinner and hairier epidermis and distinct DNA repair abilities [260]. Therefore, most rodent strains used in diabetes modeling provide poor estimates for the probability of cutaneous hypersensitivity reactions to pharmacological treatments […] Another predicament is to assess pain in rodents. The reason for this is simple: these animals cannot tell us when, where or even whether they are experiencing pain […]. Since there is not any specific type of behavior with which painful reactions can be unequivocally associated, this often leads to underestimation of painful side effects during preclinical drug screening […] The third problem is that animals and humans have different pharmacokinetic and toxicological responses.”

“Genetically or chemically induced diabetic rats or mice have been a major tool for preclinical pharmacological evaluation of potential DPN treatments. Yet, they do not faithfully reproduce many neuropathological manifestations in human diabetics. The difficulty begins with the fact that it is not possible to obtain in rodents a qualitative and quantitative expression of the clinical symptoms that are frequently presented in neuropathic diabetic patients, including spontaneous pain of different characteristics (e.g. prickling, tingling, burning, squeezing), paresthesia and numbness. As symptomatic changes constitute an important parameter of therapeutic outcome, this may well underlie the failure of some aforementioned drugs in clinical trials despite their good performance in experimental tests […] Development of nerve dysfunction in diabetic rodents also does not follow the common natural history of human DPN. […] Besides the lack of anatomical resemblance, the changes in disease severity are often missing in these models. […] importantly, foot ulcers, which occur as a late complication in 15% of all individuals with diabetes [14], do not spontaneously develop in hyperglycemic rodents. Superimposed injury by experimental procedure in the foot pads of diabetic rats or mice may lend some insight into the impaired wound healing in diabetes [278] but is not reflective of the chronic, accumulating pathological changes in the diabetic feet of human counterparts. Another salient feature of human DPN that has not been described in animals is the predominant sensory and autonomic nerve damage versus minimal involvement of motor fibers [279]. This should elicit particular caution, as the selective susceptibility is critical to our true understanding of the etiopathogenesis underlying distal sensorimotor polyneuropathy in diabetes.
In addition to the lack of specificity, most animal models studied only cover a narrow spectrum of clinical DPN and have not successfully duplicated syndromes including proximal motor neuropathy and focal lesions [279].
Morphologically, fiber atrophy and axonal loss exist in STZ-rats and other diabetic rodents but are much milder compared to the marked degeneration and loss of myelinated and unmyelinated nerves readily observed in human specimens [280]. Of significant note, rodents are notoriously resistant to developing some of the histological hallmarks seen in diabetic patients, such as segmental and paranodal demyelination […] the simultaneous presence of degenerating and regenerating fibers that is characteristic of early DPN has not been clearly demonstrated in these animals [44]. Since such dynamic nerve degeneration/regeneration signifies an active state of nerve repair and is most likely to be amenable to therapeutic intervention, absence of this property makes rodent models a poor tool in both deciphering disease pathogenesis and designing treatment approaches […] With particular respect to neuroanatomy, a peripheral axon in humans can reach as long as one meter [296] whereas the maximal length of the axons innervating the hind limb is five centimeters in mice and twelve centimeters in rats. This short length makes it impossible to study in rodents the prominent length dependency and dying-back feature of peripheral nerve dysfunction that characterizes human DPN. […] For decades the cytoarchitecture of human islets was assumed to be just like those in rodents with a clear anatomical subdivision of β-cells and other cell types. By using confocal microscopy and multi-fluorescent labeling, it was finally uncovered that human islets have not only a substantially lower percentage of β-cell population, but also a mixed — rather than compartmentalized — organization of the different cell types [297]. This cellular arrangement was demonstrated to directly alter the functional performance of human islets as opposed to rodent islets. 
Although it is not known whether such profound disparities in cell composition and association also exist in the PNS, it might as well be anticipated considering the many sophisticated sensory and motor activities that are unique to humans. Considerable species differences also manifest at the molecular level. […] At least 80% of human genes have a counterpart in the mouse and rat genome. However, temporal and spatial expression of these genes can vary remarkably between humans and rodents, in terms of both extent and isoform specificity.”

“Ultimately, a fundamental problem associated with resorting to rodents in DPN research is to study a human disorder that takes decades to develop and progress in organisms with a maximum lifespan of 2-3 years. […] It is […] fair to say that a full clinical spectrum of the maturity-onset DPN likely requires a length of time exceeding the longevity of rodents to present, and diabetic rodent models at best only help illustrate the very early aspects of the entire disease syndrome. Since none of the early pathogenetic pathways revealed in diabetic rodents will contribute to DPN in a quantitatively and temporally uniform fashion throughout the prolonged natural history of this disease, it is not surprising that a handful of inhibitors developed against these processes have not benefited patients with relatively long-standing neuropathy. As a matter of fact, any agents targeting single biochemical insults would be too little, too late to treat a chronic neurological disorder with established nerve damage and pathogenetic heterogeneity […] It is important to point out that the present review does not argue against the ability of animal models to shed light on basic molecular, cellular and physiological processes that are shared among species. Undoubtedly, animal models of diabetes have provided abundant insights into the disease biology of DPN. Nevertheless, the lack of any meaningful advance in identifying a promising pharmacological target necessitates a reexamination of the validity of current DPN models, as well as a plausible alternative methodology for scientific approaches and disease intervention. […] we conclude that the fundamental species differences have led to misinterpretation of rodent data and overall failure of pharmacological investment. As more is being learned, it is becoming clear that DPN is a chronic, heterogeneous disease unlikely to benefit from targeting specific and early pathogenetic components revealed by animal studies.”


February 13, 2018 Posted by | Books, Diabetes, Genetics, Medicine, Neurology, Pharmacology

Systems Biology (II)

Some observations from the book’s chapter 3 below:

“Without regulation biological processes would become progressively more and more chaotic. In living cells the primary source of information is genetic material. Studying the role of information in biology involves signaling (i.e. spatial and temporal transfer of information) and storage (preservation of information). Regarding the role of the genome we can distinguish three specific aspects of biological processes: steady-state genetics, which ensure cell-level and body homeostasis; genetics of development, which controls cell differentiation and genesis of the organism; and evolutionary genetics, which drives speciation. […] The ever-growing demand for information, coupled with limited storage capacities, has resulted in a number of strategies for minimizing the quantity of the encoded information that must be preserved by living cells. In addition to combinatorial approaches based on noncontiguous gene structure, self-organization plays an important role in cellular machinery. Nonspecific interactions with the environment give rise to coherent structures despite the lack of any overt information store. These mechanisms, honed by evolution and ubiquitous in living organisms, reduce the need to directly encode large quantities of data by adopting a systemic approach to information management.”

“Information is commonly understood as a transferable description of an event or object. Information transfer can be either spatial (communication, messaging or signaling) or temporal (implying storage). […] The larger the set of choices, the lower the likelihood [of] making the correct choice by accident and — correspondingly — the more information is needed to choose correctly. We can therefore state that an increase in the cardinality of a set (the number of its elements) corresponds to an increase in selection indeterminacy. This indeterminacy can be understood as a measure of “a priori ignorance”. […] Entropy determines the uncertainty inherent in a given system and therefore represents the relative difficulty of making the correct choice. For a set of possible events it reaches its maximum value if the relative probabilities of each event are equal. Any information input reduces entropy — we can therefore say that changes in entropy are a quantitative measure of information. […] Physical entropy is highest in a state of equilibrium, i.e. lack of spontaneity (ΔG = 0), which effectively terminates the given reaction. Regulatory processes which counteract the tendency of physical systems to reach equilibrium must therefore oppose increases in entropy. It can be said that a steady inflow of information is a prerequisite of continued function in any organism. As selections are typically made at the entry point of a regulatory process, the concept of entropy may also be applied to information sources. This approach is useful in explaining the structure of regulatory systems which must be “designed” in a specific way, reducing uncertainty and enabling accurate, error-free decisions.

The fire ant exudes a pheromone which enables it to mark sources of food and trace its own path back to the colony. In this way, the ant conveys pathing information to other ants. The intensity of the chemical signal is proportional to the abundance of the source. Other ants can sense the pheromone from a distance of several (up to a dozen) centimeters and thus locate the source themselves. […] As can be expected, an increase in the entropy of the information source (i.e. the measure of ignorance) results in further development of regulatory systems — in this case, receptors capable of receiving signals and processing them to enable accurate decisions. Over time, the evolution of regulatory mechanisms increases their performance and precision. The purpose of various structures involved in such mechanisms can be explained on the grounds of information theory. The primary goal is to select the correct input signal, preserve its content and avoid or eliminate any errors.”
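As an aside, the link between entropy and uncertainty described in the quote above is easy to verify numerically. Below is a minimal Python sketch of Shannon entropy (my own illustration, not from the book); note how the uniform distribution maximizes uncertainty, exactly as stated:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits -- the measure of
    'a priori ignorance' discussed in the quote above."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four equally likely events: entropy (uncertainty) is maximal.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
# Skewed probabilities: less uncertainty, so less information is
# needed to make the correct choice (~1.36 bits here).
print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))
# A certain outcome carries no uncertainty at all (0 bits).
print(shannon_entropy([1.0]))
```

With four equally likely outcomes you need exactly 2 bits to identify the correct one; any skew in the probabilities lowers that requirement, which is the sense in which an information input “reduces entropy”.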

“Genetic information stored in nucleotide sequences can be expressed and transmitted in two ways:
a. via replication (in cell division);
b. via transcription and translation (also called gene expression […]
Both processes act as effectors and can be triggered by certain biological signals transferred on request.
Gene expression can be defined as a sequence of events which lead to the synthesis of proteins or their products required for a particular function. In cell division, the goal of this process is to generate a copy of the entire genetic code (S phase), whereas in gene expression only selected fragments of DNA (those involved in the requested function) are transcribed and translated. […] Transcription calls for exposing a section of the cell’s genetic code and although its product (RNA) is short-lived, it can be recreated on demand, just like a carbon copy of a printed text. On the other hand, replication affects the entire genetic material contained in the cell and must conform to stringent precision requirements, particularly as the size of the genome increases.”

“The magnitude of effort involved in replication of genetic code can be visualized by comparing the DNA chain to a zipper […]. Assuming that the zipper consists of three pairs of interlocking teeth per centimeter (300 per meter) and that the human genome is made up of 3 billion […] base pairs, the total length of our uncoiled DNA in “zipper form” would be equal to […] 10,000 km […] If we were to unfasten the zipper at a rate of 1 m per second, the entire unzipping process would take approximately 3 months […]. This comparison should impress upon the reader the length of the DNA chain and the precision with which individual nucleotides must be picked to ensure that the resulting code is an exact copy of the source. It should also be noted that for each base pair the polymerase enzyme needs to select an appropriate matching nucleotide from among four types of nucleotides present in the solution, and attach it to the chain (clearly, no such problem occurs in zippers). The reliability of an average enzyme is on the order of 10^-3–10^-4, meaning that one error occurs for every 1,000–10,000 interactions between the enzyme and its substrate. Given this figure, replication of 3×10^9 base pairs would introduce approximately 3 million errors (mutations) per genome, resulting in a highly inaccurate copy. Since the observed reliability of replication is far higher, we may assume that some corrective mechanisms are involved. In reality, the remarkable precision of genetic replication is ensured by DNA repair processes, and in particular by the corrective properties of polymerase itself.

Many mutations are caused by the inherent chemical instability of nucleic acids: for example, cytosine may spontaneously convert to uracil. In the human genome such an event occurs approximately 100 times per day; however, uracil is not normally encountered in DNA and its presence alerts defensive mechanisms which correct the error. Another type of mutation is spontaneous depurination, which also triggers its own, dedicated error correction procedure. Cells employ a large number of corrective mechanisms […] DNA repair mechanisms may be treated as an “immune system” which protects the genome from loss or corruption of genetic information. The unavoidable mutations which sometimes occur despite the presence of error-correction mechanisms can be masked due to doubled presentation (alleles) of genetic information. Thus, most mutations are recessive and not expressed in the phenotype. As the length of the DNA chain increases, mutations become more probable. It should be noted that the number of nucleotides in DNA is greater than the relative number of amino acids participating in polypeptide chains. This is due to the fact that each amino acid is encoded by exactly three nucleotides — a general principle which applies to all living organisms. […] Fidelity is, of course, fundamentally important in DNA replication as any harmful mutations introduced in its course are automatically passed on to all successive generations of cells. In contrast, transcription and translation processes can be more error-prone as their end products are relatively short-lived. Of note is the fact that faulty transcripts appear in relatively low quantities and usually do not affect cell functions, since regulatory processes ensure continued synthesis of the required substances until a suitable level of activity is reached.
Nevertheless, it seems that reliable transcription of genetic material is sufficiently significant for cells to have developed appropriate proofreading mechanisms, similar to those which assist replication. […] the entire information pathway — starting with DNA and ending with active proteins — is protected against errors. We can conclude that fallibility is an inherent property of genetic information channels, and that in order to perform their intended function, these channels require error correction mechanisms.”
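A quick back-of-the-envelope check of the zipper comparison and the error figures in the quote above (my own arithmetic, using the book's stated assumptions of 300 “teeth” per meter and 3 billion base pairs):

```python
# Back-of-the-envelope check of the "zipper" comparison
# (my own arithmetic, using the book's assumptions).
BASE_PAIRS = 3_000_000_000        # human genome: ~3 billion base pairs
TEETH_PER_METER = 300             # 3 interlocking pairs per centimeter

# Total length of the genome "in zipper form":
zipper_length_km = BASE_PAIRS / TEETH_PER_METER / 1000   # 10,000 km

# Unzipping at 1 m/s:
seconds = BASE_PAIRS / TEETH_PER_METER
months = seconds / (60 * 60 * 24 * 30)                   # roughly 3-4 months

# Error load of an uncorrected polymerase with reliability 10^-3 to 10^-4:
errors_high = BASE_PAIRS // 1_000     # 3,000,000 errors per genome copy
errors_low = BASE_PAIRS // 10_000     # 300,000 errors per genome copy
```

With a 30-day month the unzipping time actually comes out closer to four months than three, but the order of magnitude matches the book's figure, and the 300,000–3,000,000 uncorrected errors per copy illustrate why the proofreading and repair mechanisms described above are indispensable.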

“The discrete nature of genetic material is an important property which distinguishes prokaryotes from eukaryotes. […] The ability to select individual nucleotide fragments and construct sequences from predetermined “building blocks” results in high adaptability to environmental stimuli and is a fundamental aspect of evolution. The discontinuous nature of genes is evidenced by the presence of fragments which do not convey structural information (introns), as opposed to structure-encoding fragments (exons). The initial transcript (pre-mRNA) contains introns as well as exons. In order to provide a template for protein synthesis, it must undergo further processing (also known as splicing): introns must be cleaved and exon fragments attached to one another. […] Recognition of intron-exon boundaries is usually very precise, while the reattachment of adjacent exons is subject to some variability. Under certain conditions, alternative splicing may occur, where the ordering of the final product does not reflect the order in which exon sequences appear in the source chain. This greatly increases the number of potential mRNA combinations and thus the variety of resulting proteins. […] While access to energy sources is not a major problem, sources of information are usually far more difficult to manage — hence the universal tendency to limit the scope of direct (genetic) information storage. Reducing the length of genetic code enables efficient packing and enhances the efficiency of operations while at the same time decreasing the likelihood of errors. […] The number of genes identified in the human genome is lower than the number of distinct proteins by a factor of 4; a difference which can be attributed to alternative splicing. […] This mechanism increases the variety of protein structures without affecting core information storage, i.e. DNA sequences. […] Primitive organisms often possess nearly as many genes as humans, despite the essential differences between both groups.
Interspecies diversity is primarily due to the properties of regulatory sequences.”
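The combinatorial gain from alternative splicing can be illustrated with a toy model (my own example; real splicing is of course far more constrained than this): if a gene contains k independently skippable “cassette” exons, a single stored DNA sequence can in principle template up to 2^k distinct mRNAs.

```python
from itertools import combinations

def cassette_splice_variants(exons, optional):
    """Enumerate mature transcripts when each exon in `optional` may be
    independently included or skipped (a toy model of cassette-exon
    alternative splicing; exon order is preserved)."""
    variants = []
    for k in range(len(optional) + 1):
        for skipped in combinations(optional, k):
            variants.append([e for e in exons if e not in skipped])
    return variants

# A hypothetical 5-exon gene whose three middle exons are cassette exons:
v = cassette_splice_variants([1, 2, 3, 4, 5], optional=[2, 3, 4])
print(len(v))  # 2**3 = 8 distinct mRNAs from a single stored gene
```

Eight potential products from one gene with only three optional exons gives a feel for how a genome with ~4× fewer genes than distinct proteins is arithmetically unproblematic.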

“The discontinuous nature of genes is evolutionarily advantageous but comes at the expense of having to maintain a nucleus where such splicing processes can be safely conducted, in addition to efficient transport channels allowing transcripts to penetrate the nuclear membrane. While it is believed that at early stages of evolution RNA was the primary repository of genetic information, its present function can best be described as an information carrier. Since unguided proteins cannot ensure sufficient specificity of interaction with nucleic acids, protein-RNA complexes are often used in cases where specific fragments of genetic information need to be read. […] The use of RNA in protein complexes is common across all domains of the living world as it bridges the gap between discrete and continuous storage of genetic information.”

“Epigenetic differentiation mechanisms are particularly important in embryonic development. […] Unlike the function of mature organisms, embryonic programming refers to structures which do not yet exist but which need to be created through cell proliferation and differentiation. […] Differentiation of cells results in phenotypic changes. This phenomenon is the primary difference between development genetics and steady-state genetics. Functional differences are not, however, associated with genomic changes: instead they are mediated by the transcriptome where certain genes are preferentially selected for transcription while others are suppressed. […] In a mature, specialized cell only a small portion of the transcribable genome is actually expressed. The remainder of the cell’s genetic material is said to be silenced. Gene silencing is a permanent condition. Under normal circumstances mature cells never alter their function, although such changes may be forced in a laboratory setting […] Cells which make up the embryo at a very early stage of development are pluripotent, meaning that their purpose can be freely determined and that all of their genetic information can potentially be expressed (under certain conditions). […] At each stage of the development process the scope of pluripotency is reduced until, ultimately, the cell becomes monopotent. Monopotency implies that the final function of the cell has already been determined, although the cell itself may still be immature. […] functional dissimilarities between specialized cells are not associated with genetic mutations but rather with selective silencing of genes. […] Most genes which determine biological functions have a biallelic representation (i.e. a representation consisting of two alleles).
The remainder (approximately 10 % of genes) is inherited from one specific parent, as a result of partial or complete silencing of their sister alleles (called paternal or maternal imprinting) which occurs during gametogenesis. The suppression of a single copy of the X chromosome is a special case of this phenomenon.”

“Evolutionary genetics is subject to two somewhat contradictory criteria. On the one hand, there is clear pressure on accurate and consistent preservation of biological functions and structures while on the other hand it is also important to permit gradual but persistent changes. […] the observable progression of adaptive traits which emerge as a result of evolution suggests a mechanism which promotes constructive changes over destructive ones. Mutational diversity cannot be considered truly random if it is limited to certain structures or functions. […] Approximately 50 % of the human genome consists of mobile segments, capable of migrating to various positions in the genome. These segments are called transposons and retrotransposons […] The mobility of genome fragments not only promotes mutations (by increasing the variability of DNA) but also affects the stability and packing of chromatin strands wherever such mobile sections are reintegrated with the genome. Under normal circumstances the activity of mobile sections is tempered by epigenetic mechanisms […]; however in certain situations gene mobility may be upregulated. In particular, it seems that in “prehistoric” (remote evolutionary) times such events occurred at a much faster pace, accelerating the rate of genetic changes and promoting rapid evolution. Cells can actively promote mutations by way of the so-called AID process (activation-induced cytidine deamination). It is an enzymatic mechanism which converts cytosine into uracil, thereby triggering repair mechanisms and increasing the likelihood of mutations […] The existence of AID proves that cells themselves may trigger evolutionary changes and that the role of mutations in the emergence of new biological structures is not strictly passive.”

“Regulatory mechanisms which receive signals characterized by high degrees of uncertainty must be able to make informed choices to reduce the overall entropy of the system they control. This property is usually associated with the development of information channels. Special structures ought to be exposed within information channels connecting systems of different character, for example those linking transcription to translation or enabling transduction of signals through the cellular membrane. Examples of structures which convey highly entropic information are receptor systems associated with blood coagulation and immune responses. The regulatory mechanism which triggers an immune response relies on relatively simple effectors (complement factor enzymes, phagocytes and killer cells) coupled to a highly evolved receptor system, represented by specific antibodies and an organized set of cells. Compared to such advanced receptors the structures which register the concentration of a given product (e.g. glucose in blood) are rather primitive. Advanced receptors enable the immune system to recognize and verify information characterized by high degrees of uncertainty. […] In sequential processes it is usually the initial stage which poses the most problems and requires the most information to complete successfully. It should come as no surprise that the most advanced control loops are those associated with initial stages of biological pathways.”


February 10, 2018 Posted by | Biology, Books, Chemistry, Evolutionary biology, Genetics, Immunology, Medicine

Endocrinology (part 4 – reproductive endocrinology)

Some observations from chapter 4 of the book below.

“*♂. The whole process of spermatogenesis takes approximately 74 days, followed by another 12-21 days for sperm transport through the epididymis. This means that events which may affect spermatogenesis may not be apparent for up to three months, and successful induction of spermatogenesis by treatment may take up to 2 years. *♀. From primordial follicle to primary follicle, it takes about 180 days (a continuous process). It is then another 60 days to form a preantral follicle, which then proceeds to ovulation three menstrual cycles later. Only the last 2-3 weeks of this process are under gonadotrophin drive, during which time the follicle grows from 2 to 20mm.”
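A quick arithmetical cross-check of the timelines quoted above; a minimal sketch, in which the 28-day cycle length is my own assumption while the other figures come straight from the quote:

```python
# Male timeline: spermatogenesis plus epididymal transit (days, from the quote)
sperm_days = (74 + 12, 74 + 21)            # (86, 95) days, i.e. roughly 3 months

# Female timeline: primordial->primary follicle, then preantral follicle,
# then roughly three menstrual cycles to ovulation
CYCLE_DAYS = 28                            # assumed cycle length, not in the quote
follicle_days = 180 + 60 + 3 * CYCLE_DAYS  # 324 days, i.e. close to a year

print(sperm_days, follicle_days)
```

The ~3-month male total is exactly why the quote warns that an insult to spermatogenesis may not become apparent for up to three months.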

“Hirsutism (not a diagnosis in itself) is the presence of excess hair growth in ♀ as a result of androgen production and skin sensitivity to androgens. […] In ♀, testosterone is secreted primarily by the ovaries and adrenal glands, although a significant amount is produced by the peripheral conversion of androstenedione and DHEA. Ovarian androgen production is regulated by luteinizing hormone, whereas adrenal production is ACTH-dependent. The predominant androgens produced by the ovaries are testosterone and androstenedione, and the adrenal glands are the main source of DHEA. Circulating testosterone is mainly bound to sex hormone-binding globulin (SHBG), and it is the free testosterone which is biologically active. […] Slowly progressive hirsutism following puberty suggests a benign cause, whereas rapidly progressive hirsutism of recent onset requires further immediate investigation to rule out an androgen-secreting neoplasm. [My italics, US] […] Serum testosterone should be measured in all ♀ presenting with hirsutism. If this is <5nmol/L, then the risk of a sinister cause for her hirsutism is low.”

“Polycystic ovary syndrome (PCOS) *A heterogeneous clinical syndrome characterized by hyperandrogenism, mainly of ovarian origin, menstrual irregularity, and hyperinsulinaemia, in which other causes of androgen excess have been excluded […] *A distinction is made between polycystic ovary morphology on ultrasound (PCO which also occurs in congenital adrenal hyperplasia, acromegaly, Cushing’s syndrome, and testosterone-secreting tumours) and PCOS – the syndrome. […] PCOS is the most common endocrinopathy in ♀ of reproductive age; >95% of ♀ presenting to outpatients with hirsutism have PCOS. *The estimated prevalence of PCOS ranges from 5 to 10% on clinical criteria. Polycystic ovaries on US alone are present in 20-25% of ♀ of reproductive age. […] family history of type 2 diabetes mellitus is […] more common in ♀ with PCOS. […] Approximately 70% of ♀ with PCOS are insulin-resistant, depending on the definition. […] Type 2 diabetes mellitus is 2-4 x more common in ♀ with PCOS. […] Hyperinsulinaemia is exacerbated by obesity but can also be present in lean ♀ with PCOS. […] Insulin […] inhibits SHBG synthesis by the liver, with a consequent rise in free androgen levels. […] Symptoms often begin around puberty, after weight gain, or after stopping the oral contraceptive pill […] Oligo-/amenorrhoea [is present in] 70% […] Hirsutism [is present in] 66% […] Obesity [is present in] 50% […] *Infertility (30%). PCOS accounts for 75% of cases of anovulatory infertility. The risk of spontaneous miscarriage is also thought to be higher than the general population, mainly because of obesity. […] The aims of investigations [of PCOS] are mainly to exclude serious underlying disorders and to screen for complications, as the diagnosis is primarily clinical […] Studies have uniformly shown that weight reduction in obese ♀ with PCOS will improve insulin sensitivity and significantly reduce hyperandrogenaemia. Obese ♀ are less likely to respond to antiandrogens and infertility treatment.”

“Androgen-secreting tumours [are] [r]are tumours of the ovary or adrenal gland which may be benign or malignant, which cause virilization in ♀ through androgen production. […] Virilization […] [i]ndicates severe hyperandrogenism, is associated with clitoromegaly, and is present in 98% of ♀ with androgen-producing tumours. Not usually a feature of PCOS. […] Androgen-secreting ovarian tumours[:] *75% develop before the age of 40 years. *Account for 0.4% of all ovarian tumours; 20% are malignant. *Tumours are 5-25cm in size. The larger they are, the more likely they are to be malignant. They are rarely bilateral. […] Androgen-secreting adrenal tumours[:] *50% develop before the age of 50 years. *Larger tumours […] are more likely to be malignant. *Usually with concomitant cortisol secretion as a variant of Cushing’s syndrome. […] Symptoms and signs of Cushing’s syndrome are present in many ♀ with adrenal tumours. […] Onset of symptoms. Usually recent onset of rapidly progressive symptoms. […] Malignant ovarian and adrenal androgen-secreting tumours are usually resistant to chemotherapy and radiotherapy. […] *Adrenal tumours. 20% 5-year survival. Most have metastatic disease at the time of surgery. *Ovarian tumours. 30% disease-free survival and 40% overall survival at 5 years. […] Benign tumours. *Prognosis excellent. *Hirsutism improves post-operatively, but clitoromegaly, male pattern balding, and deep voice may persist.”

“*Oligomenorrhoea is defined as the reduction in the frequency of menses to <9 periods a year. *1° amenorrhoea is the failure of menarche by the age of 16 years. Prevalence ~0.3% *2° amenorrhoea refers to the cessation of menses for >6 months in ♀ who had previously menstruated. Prevalence ~3%. […] Although the list of causes is long […], the majority of cases of secondary amenorrhoea can be accounted for by four conditions: *Polycystic ovary syndrome. *Hypothalamic amenorrhoea. *Hyperprolactinaemia. *Ovarian failure. […] PCOS is the only common endocrine cause of amenorrhoea with normal oestrogenization – all other causes are oestrogen-deficient. Women with PCOS, therefore, are at risk of endometrial hyperplasia, and all others are at risk of osteoporosis. […] Anosmia may indicate Kallmann’s syndrome. […] In routine practice, a common differential diagnosis is between a mild version of PCOS and hypothalamic amenorrhoea. The distinction between these conditions may require repeated testing, as a single snapshot may not discriminate. The reason to be precise is that PCOS is oestrogen-replete and will, therefore, respond to clomiphene citrate (an antioestrogen) for fertility. HA will be oestrogen-deficient and will need HRT and ovulation induction with pulsatile GnRH or hMG [human Menopausal Gonadotropins – US]. […] 75% of ♀ who develop 2° amenorrhoea report hot flushes, night sweats, mood changes, fatigue, or dyspareunia; symptoms may precede the onset of menstrual disturbances.”

“POI [Premature Ovarian Insufficiency] is a disorder characterized by amenorrhoea, oestrogen deficiency, and elevated gonadotrophins, developing in ♀ <40 years, as a result of loss of ovarian follicular function. […] *Incidence – 0.1% of ♀ <30 years and 1% of those <40 years. *Accounts for 10% of all cases of 2° amenorrhoea. […] POI is the result of accelerated depletion of ovarian germ cells. […] POI is usually permanent and progressive, although a remitting course is also experienced and cannot be fully predicted, so all women must know that pregnancy is possible, even though fertility treatments are not effective (often a difficult paradox to describe). Spontaneous pregnancy has been reported in 5%. […] 80% of [women with Turner’s syndrome] have POI. […] All ♀ presenting with hypergonadotrophic amenorrhoea below age 40 should be karyotyped.”

“The menopause is the permanent cessation of menstruation as a result of ovarian failure and is a retrospective diagnosis made after 12 months of amenorrhoea. The average age at the time of the menopause is ~50 years, although smokers reach the menopause ~2 years earlier. […] Cycles gradually become increasingly anovulatory and variable in length (often shorter) from about 4 years prior to the menopause. Oligomenorrhoea often precedes permanent amenorrhoea. In 10% of ♀, menses cease abruptly, with no preceding transitional period. […] During the perimenopausal period, there is an accelerated loss of bone mineral density (BMD), rendering post-menopausal ♀ more susceptible to osteoporotic fractures. […] Post-menopausal ♀ are 2-3 x more likely to develop IHD [ischaemic heart disease] than premenopausal ♀, even after age adjustments. The menopause is associated with an increase in risk factors for atherosclerosis, including a less favourable lipid profile, reduced insulin sensitivity, and an ↑ thrombotic tendency. […] ♀ are 2-3 x more likely to develop Alzheimer’s disease than ♂. It is suggested that oestrogen deficiency may play a role in the development of dementia. […] The aim of treatment of perimenopausal ♀ is to alleviate menopausal symptoms and optimize quality of life. The majority of women with mild symptoms require no HRT. […] There is an ↑ risk of breast cancer in HRT users which is related to the duration of use. The risk increases by 35%, following 5 years of use (over the age of 50), and falls to never-used risk 5 years after discontinuing HRT. For ♀ aged 50 not using HRT, about 45 in every 1,000 will have cancer diagnosed over the following 20 years. This number increases to 47/1,000 ♀ using HRT for 5 years, 51/1,000 using HRT for 10 years, and 57/1,000 after 15 years of use. The risk is highest in ♀ on combined HRT compared with oestradiol alone.
[…] Oral HRT increases the risk [of venous thromboembolism] approximately 3-fold, resulting in an extra two cases/10,000 women-years. This risk is markedly ↑ in ♀ who already have risk factors for DVT, including previous DVT, cardiovascular disease, and within 90 days of hospitalization. […] Data from >30 observational studies suggest that HRT may reduce the risk of developing CVD [cardiovascular disease] by up to 50%. However, randomized placebo-controlled trials […] have failed to show that HRT protects against IHD. Currently, HRT should not be prescribed to prevent cardiovascular disease.”
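The absolute breast-cancer figures quoted above can be restated as excess diagnoses per 1,000 ♀ attributable to HRT use; a minimal sketch using only the numbers given in the quote:

```python
# Cumulative breast-cancer diagnoses per 1,000 women over the 20 years
# following age 50, by duration of HRT use (all figures from the quote)
diagnoses_per_1000 = {0: 45, 5: 47, 10: 51, 15: 57}

baseline = diagnoses_per_1000[0]
excess = {yrs: n - baseline for yrs, n in diagnoses_per_1000.items() if yrs}
print(excess)   # excess cases per 1,000 attributable to HRT: {5: 2, 10: 6, 15: 12}
```

Framed this way, the relative-risk increase of 35% after 5 years of use corresponds to only about 2 extra diagnoses per 1,000 ♀ over 20 years.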

“Any chronic illness may affect testicular function, in particular chronic renal failure, liver cirrhosis, and haemochromatosis. […] 25% of ♂ who develop mumps after puberty have associated orchitis, and 25-50% of these will develop 1° testicular failure. […] Alcohol excess will also cause 1° testicular failure. […] Cytotoxic drugs, particularly alkylating agents, are gonadotoxic. Infertility occurs in 50% of patients following chemotherapy, and a significant number of ♂ require androgen replacement therapy because of low testosterone levels. […] Testosterone has direct anabolic effects on skeletal muscle and has been shown to increase muscle mass and strength when given to hypogonadal men. Lean body mass is also increased, with a reduction in fat mass. […] Hypogonadism is a risk factor for osteoporosis. Testosterone inhibits bone resorption, thereby reducing bone turnover. Its administration to hypogonadal ♂ has been shown to improve bone mineral density and reduce the risk of developing osteoporosis. […] *Androgens stimulate prostatic growth, and testosterone replacement therapy may therefore induce symptoms of bladder outflow obstruction in ♂ with prostatic hypertrophy. *It is unlikely that testosterone increases the risk of developing prostate cancer, but it may promote the growth of an existing cancer. […] Testosterone replacement therapy may cause a fall in both LDL and HDL cholesterol levels, the significance of which remains unclear. The effect of androgen replacement therapy on the risk of developing coronary artery disease is unknown.”

“Erectile dysfunction [is] [t]he consistent inability to achieve or maintain an erect penis sufficient for satisfactory sexual intercourse. Affects approximately 10% of ♂ and >50% of ♂ >70 years. […] Erectile dysfunction may […] occur as a result of several mechanisms: *Neurological damage. *Arterial insufficiency. *Venous incompetence. *Androgen deficiency. *Penile abnormalities. […] *Abrupt onset of erectile dysfunction which is intermittent is often psychogenic in origin. *Progressive and persistent dysfunction indicates an organic cause. […] Absence of morning erections suggests an organic cause of erectile dysfunction.”

“*Infertility, defined as failure of pregnancy after 1 year of unprotected regular (2 x week) sexual intercourse, affects ~10% of all couples. *Couples who fail to conceive after 1 year of regular unprotected sexual intercourse should be investigated. […] Causes[:] *♀ factors (e.g. PCOS, tubal damage) 35%. *♂ factors (idiopathic gonadal failure in 60%) 25%. *Combined factors 25%. *Unexplained infertility 15%. […] [♀] Fertility declines rapidly after the age of 36 years. […] Each episode of acute PID causes infertility in 10-15% of cases. *C. trachomatis is responsible for half the cases of PID in developed countries. […] Unexplained infertility [is] [i]nfertility despite normal sexual intercourse occurring at least twice weekly, normal semen analysis, documentation of ovulation in several cycles, and normal patent tubes (by laparoscopy). […] 30-50% will become pregnant within 3 years of expectant management. If not pregnant by then, the chances that spontaneous pregnancy will occur are greatly reduced, and ART should be considered. In ♀ >34 years of age, expectant management is not an option, and up to six cycles of IUI or IVF should be considered.”
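As a quick sanity check, the quoted aetiological categories partition all infertile couples exactly; a minimal sketch with the figures from the quote:

```python
# Causes of infertility among affected couples (% of cases, from the quote)
causes = {"female factors": 35, "male factors": 25,
          "combined factors": 25, "unexplained": 15}

# The four quoted categories are exhaustive: they account for every case
assert sum(causes.values()) == 100
```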


February 9, 2018 Posted by | Books, Cancer/oncology, Cardiology, Diabetes, Genetics, Medicine, Pharmacology | Leave a comment

Endocrinology (part 3 – adrenal glands)

Some observations from chapter 3 below.

“The normal adrenal gland weighs 4-5g. The cortex represents 90% of the normal gland and surrounds the medulla. […] Glucocorticoid (cortisol […]) production occurs from the zona fasciculata, and adrenal androgens arise from the zona reticularis. Both of these are under the control of ACTH [see also my previous post about the book – US], which regulates both steroid synthesis and also adrenocortical growth. […] Mineralocorticoid (aldosterone […]) synthesis occurs in the zona glomerulosa, predominantly under the control of the renin-angiotensin system […], although ACTH also contributes to its regulation. […] The adrenal gland […] also produces sex steroids in the form of dehydroepiandrosterone (DHEA) and androstenedione. The synthetic pathway is under the control of ACTH. Urinary steroid profiling provides quantitative information on the biosynthetic and catabolic pathways. […] CT is the most widely used modality for imaging the adrenal glands. […] MRI can also reliably detect adrenal masses >5-10mm in diameter and, in some circumstances, provides additional information to CT […] PET can be useful in locating tumours and metastases. […] Adrenal vein sampling (AVS) […] can be useful to lateralize an adenoma or to differentiate an adenoma from bilateral hyperplasia. […] AVS is of particular value in lateralizing small aldosterone-producing adenomas that cannot easily be visualized on CT or MRI. […] The procedure should only be undertaken in patients in whom surgery is feasible and desired […] [and] should be carried out in specialist centres only; centres with <20 procedures per year have been shown to have poor success rates”.

“The majority of cases of mineralocorticoid excess are due to excess aldosterone production, […] typically associated with hypertension and hypokalaemia. *Primary hyperaldosteronism is a disorder of autonomous aldosterone hypersecretion with suppressed renin levels. *Secondary hyperaldosteronism occurs when aldosterone hypersecretion occurs 2° [secondary, US] to elevated circulating renin levels. This is typical of heart failure, cirrhosis, or nephrotic syndrome but can also be due to renal artery stenosis and, occasionally, a very rare renin-producing tumour (reninoma). […] Primary hyperaldosteronism is present in around 10% of hypertensive patients. It is the most prevalent form of secondary hypertension. […] Aldosterone causes renal sodium retention and potassium loss. This results in expansion of body sodium content, leading to suppression of renal renin synthesis. The direct action of aldosterone on the distal nephron causes sodium retention and loss of hydrogen and potassium ions, resulting in a hypokalaemic alkalosis, although serum potassium […] may be normal in up to 50% of cases. Aldosterone has pathophysiological effects on a range of other tissues, causing cardiac fibrosis, vascular endothelial dysfunction, and nephrosclerosis. […] hypertension […] is often resistant to conventional therapy. […] Hypokalaemia is usually asymptomatic. […] Occasionally, the clinical syndrome of hyperaldosteronism is not associated with excess aldosterone. […] These conditions are rare.”

“Bilateral adrenal hyperplasia [makes up] 60% [of cases of primary hyperaldosteronism]. […] Conn’s syndrome (aldosterone-producing adrenal adenoma) [makes up] 35%. […] The pathophysiology of bilateral adrenal hyperplasia is not understood, and it is possible that it represents an extreme end of the spectrum of low renin essential hypertension. […] Aldosterone-producing carcinoma[s] [are] [r]are and usually associated with excessive secretion of other corticosteroids (cortisol, androgen, oestrogen). […] Indications [for screening include:] *Patients resistant to conventional antihypertensive medication (i.e. not controlled on three agents). *Hypertension associated with hypokalaemia […] *Hypertension developing before age of 40 years. […] Confirmation of autonomous aldosterone production is made by demonstrating failure to suppress aldosterone in the face of sodium/volume loading. […] A number of tests have been described that are said to differentiate between the various subtypes of 1° [primary, US] aldosteronism […]. However, none of these are sufficiently specific to influence management decisions”.

“Laparoscopic adrenalectomy is the treatment of choice for aldosterone-secreting adenomas […] and laparoscopic adrenalectomy […] has become the procedure of choice for removal of most adrenal tumours. *Hypertension is cured in about 70%. *If it persists […], it is more amenable to medical treatment. *Overall, 50% become normotensive in 1 month and 70% within 1 year. […] Medical therapy remains an option for patients with bilateral disease and those with a solitary adrenal adenoma who are unlikely to be cured by surgery, who are unfit for operation, or who express a preference for medical management. *The mineralocorticoid receptor antagonist spironolactone […] has been used successfully for many years to treat hypertension and hypokalaemia associated with bilateral adrenal hyperplasia […] Side effects are common – particularly gynaecomastia and impotence in ♂, menstrual irregularities in ♀, and GI effects. […] Eplerenone […] is a mineralocorticoid receptor antagonist without antiandrogen effects and hence greater selectivity and fewer side effects than spironolactone. *Alternative drugs include the potassium-sparing diuretics amiloride and triamterene.”

“Cushing’s syndrome results from chronic excess cortisol [see also my second post in this series] […] The causes may be classified as ACTH-dependent and ACTH-independent. […] ACTH-independent Cushing’s syndrome […] is due to adrenal tumours (benign and malignant), and is responsible for 10-15% of cases of Cushing’s syndrome. […] Benign adrenocortical adenomas (ACA) are usually encapsulated and <4cm in diameter. They are usually associated with pure glucocorticoid excess. *Adrenocortical carcinomas (ACC) are usually >6cm in diameter, […] and are not infrequently associated with local invasion and metastases at the time of diagnosis. Adrenal carcinomas are characteristically associated with the excess secretion of several hormones; most frequently found is the combination of cortisol and androgen (precursors) […] ACTH-dependent Cushing’s results in bilateral adrenal hyperplasia, thus one has to firmly differentiate between ACTH-dependent and independent causes of Cushing’s before assuming bilateral adrenal hyperplasia as the primary cause of disease. […] It is important to note that, in patients with adrenal carcinoma, there may also be features related to excessive androgen production in ♀ and also a relatively more rapid time course of development of the syndrome. […] Patients with ACTH-independent Cushing’s syndrome do not suppress cortisol […] on high-dose dexamethasone testing and fail to show a rise in cortisol and ACTH following administration of CRH. […] ACTH-independent causes are adrenal in origin, and the mainstay of further investigation is adrenal imaging by CT”.

“Adrenal adenomas, which are successfully treated with surgery, have a good prognosis, and recurrence is unlikely. […] Bilateral adrenalectomy [in the context of bilateral adrenal hyperplasia] is curative. Lifelong glucocorticoid and mineralocorticoid treatment is [however] required. […] The prognosis for adrenal carcinoma is very poor despite surgery. Reports suggest a 5-year survival of 22% and median survival time of 14 months […] Treatment of adrenocortical carcinoma (ACC) should be carried out in a specialist centre, with expert surgeons, oncologists, and endocrinologists with extensive experience in treating ACC. This improves survival.”

“Adrenal insufficiency [AI, US] is defined by the lack of cortisol, i.e. glucocorticoid deficiency. It may be due to destruction of the adrenal cortex (1°: Addison’s disease and congenital adrenal hyperplasia (CAH)) […] or due to disordered pituitary and hypothalamic function (2°). […] *Permanent adrenal insufficiency is found in 5 in 10,000 population. *The most frequent cause is hypothalamic-pituitary damage, which is the cause of AI in 60% of affected patients. *The remaining 40% of cases are due to primary failure of the adrenal to synthesize cortisol, with almost equal prevalence of Addison’s disease (mostly of autoimmune origin, prevalence 0.9-1.4 in 10,000) and congenital adrenal hyperplasia (0.7-1.0 in 10,000). *2° adrenal insufficiency due to suppression of pituitary-hypothalamic function by exogenously administered, supraphysiological glucocorticoid doses for treatment of, for example, COPD or rheumatoid arthritis, is much more common (50-200 in 10,000 population). However, adrenal function in these patients can recover”.
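The prevalence figures in this quote hang together arithmetically, which is worth checking explicitly; a minimal sketch (all figures per 10,000 population, taken from the quote):

```python
# Prevalence figures per 10,000 population, as quoted above
permanent_ai = 5.0                     # all permanent adrenal insufficiency
secondary_share = 0.60                 # hypothalamic-pituitary causes
primary_share = round(permanent_ai * (1 - secondary_share), 1)  # 2.0 per 10,000

# Primary AI splits roughly equally between Addison's disease and CAH:
addisons = (0.9, 1.4)                  # quoted prevalence range
cah = (0.7, 1.0)                       # quoted prevalence range
low, high = addisons[0] + cah[0], addisons[1] + cah[1]   # ~1.6-2.4 per 10,000

# The 40% share of 5/10,000 sits inside the Addison's + CAH range
assert low <= primary_share <= high
```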

“[In primary AI] [a]drenal gland destruction or dysfunction occurs due to a disease process which usually involves all three zones of the adrenal cortex, resulting in inadequate glucocorticoid, mineralocorticoid, and adrenal androgen precursor secretion. The manifestations of insufficiency do not usually appear until at least 90% of the gland has been destroyed and are usually gradual in onset […] Acute adrenal insufficiency may occur in the context of acute septicaemia […] Mineralocorticoid deficiency leads to reduced sodium retention and hyponatraemia and hypotension […] Androgen deficiency presents in ♀ with reduced axillary and pubic hair and reduced libido. (Testicular production of androgens is more important in ♂). [In secondary AI] [i]nadequate ACTH results in deficient cortisol production (and ↓ androgens in ♀). […] Mineralocorticoid secretion remains normal […] The onset is usually gradual, with partial ACTH deficiency resulting in reduced response to stress. […] Lack of stimulation of skin MC1R due to ACTH deficiency results in pale skin appearance. […] [In 1° adrenal insufficiency] hyponatraemia is present in 90% and hyperkalaemia in 65%. […] Undetectable serum cortisol is diagnostic […], but the basal cortisol is often in the normal range. A cortisol >550nmol/L precludes the diagnosis. At times of acute stress, an inappropriately low cortisol is very suggestive of the diagnosis.”
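The basal-cortisol thresholds quoted above amount to a simple triage rule. The sketch below is purely illustrative: the function name, the numeric cut-off I use for "undetectable", and the wording of the indeterminate branch are my own assumptions, and it is of course not clinical guidance.

```python
def interpret_basal_cortisol(cortisol_nmol_l: float) -> str:
    """Rough triage of a basal serum cortisol for suspected adrenal
    insufficiency, following the thresholds quoted above.
    Hypothetical helper for illustration only; not clinical guidance."""
    if cortisol_nmol_l > 550:
        # ">550nmol/L precludes the diagnosis" (quoted threshold)
        return "AI excluded"
    if cortisol_nmol_l < 30:
        # "Undetectable serum cortisol is diagnostic"; the numeric
        # cut-off standing in for "undetectable" here is my assumption
        return "diagnostic of AI"
    # A basal cortisol in the normal range does not exclude AI
    return "indeterminate - proceed to dynamic testing"

print(interpret_basal_cortisol(600))   # AI excluded
print(interpret_basal_cortisol(300))   # indeterminate - proceed to dynamic testing
```

The indeterminate branch reflects the quote's point that "the basal cortisol is often in the normal range" in genuine adrenal insufficiency, so a single basal value between the two thresholds settles nothing.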

“Autoimmune adrenalitis[:] Clinical features[:] *Anorexia and weight loss (>90%). *Tiredness. *Weakness – generalized, no particular muscle groups. […] Dizziness and postural hypotension. *GI symptoms – nausea and vomiting, abdominal pain, diarrhoea. *Arthralgia and myalgia. […] *Mediated by humoral and cell-mediated immune mechanisms. Autoimmune insufficiency associated with polyglandular autoimmune syndrome is more common in ♀ (70%). *Adrenal cortex antibodies are present in the majority of patients at diagnosis, and […] they are still found in approximately 70% of patients 10 years later. Up to 20% of patients/year with [positive] antibodies develop adrenal insufficiency. […] *Antiadrenal antibodies are found in <2% of patients with other autoimmune endocrine disease (Hashimoto’s thyroiditis, diabetes mellitus, autoimmune hypothyroidism, hypoparathyroidism, pernicious anaemia). […] antibodies to other endocrine glands are commonly found in patients with autoimmune adrenal insufficiency […] However, the presence of antibodies does not predict subsequent manifestation of organ-specific autoimmunity. […] Patients with type 1 diabetes mellitus and autoimmune thyroid disease only rarely develop autoimmune adrenal insufficiency. Approximately 60% of patients with Addison’s disease have other autoimmune or endocrine disorders. […] The adrenals are small and atrophic in chronic autoimmune adrenalitis.”

“Autoimmune polyglandular syndrome (APS) type 1[:] *Also known as autoimmune polyendocrinopathy, candidiasis, and ectodermal dystrophy (APECED). […] [C]hildhood onset. *Chronic mucocutaneous candidiasis. *Hypoparathyroidism (90%), 1° adrenal insufficiency (60%). *1° gonadal failure (41%) – usually after Addison’s diagnosis. *1° hypothyroidism. *Rarely hypopituitarism, diabetes insipidus, type 1 diabetes mellitus. […] APS type 2[:] *Adult onset. *Adrenal insufficiency (100%). *1° autoimmune thyroid disease (70%) […] *Type 1 diabetes mellitus (5-20%) – often before Addison’s diagnosis. *1° gonadal failure in affected women (5-20%). […] Schmidt’s syndrome: *Addison’s disease, and *Autoimmune hypothyroidism. *Carpenter syndrome: *Addison’s disease, and *Autoimmune hypothyroidism, and/or *Type 1 diabetes mellitus.”

“An adrenal incidentaloma is an adrenal mass that is discovered incidentally upon imaging […] carried out for reasons other than a suspected adrenal pathology. […] *Autopsy studies suggest a prevalence of adrenal masses of 1-6% in the general population. *Imaging studies suggest that adrenal masses are present in 2-3% of the general population. Prevalence increases with ageing, and 8-10% of 70-year-olds harbour an adrenal mass. […] It is important to determine whether the incidentally discovered adrenal mass is: *Malignant. *Functioning and associated with excess hormonal secretion.”


January 17, 2018 Posted by | Books, Cancer/oncology, Diabetes, Epidemiology, Immunology, Medicine, Nephrology, Pharmacology | Leave a comment

Endocrinology (part 2 – pituitary)

Below I have added some observations from the second chapter of the book, which covers the pituitary gland.

“The pituitary gland is centrally located at the base of the brain in the sella turcica within the sphenoid bone. It is attached to the hypothalamus by the pituitary stalk and a fine vascular network. […] The pituitary measures around 13mm transversely, 9mm anteroposteriorly, and 6mm vertically and weighs approximately 100mg. It increases during pregnancy to almost twice its normal size, and it decreases in the elderly. *Magnetic resonance imaging (MRI) currently provides the optimal imaging of the pituitary gland. *Computed tomography (CT) scans may still be useful in demonstrating calcification in tumours […] and hyperostosis in association with meningiomas or evidence of bone destruction. […] T1-weighted images demonstrate cerebrospinal fluid (CSF) as dark grey and brain as much whiter. This imaging is useful for demonstrating anatomy clearly. […] On T1-weighted images, pituitary adenomas are of lower signal intensity than the remainder of the normal gland. […] The presence of microadenomas may be difficult to demonstrate.”

“Hypopituitarism refers to either partial or complete deficiency of anterior and/or posterior pituitary hormones and may be due to [primary] pituitary disease or to hypothalamic pathology which interferes with the hypothalamic control of the pituitary. Causes: *Pituitary tumours. *Parapituitary tumours […] *Radiotherapy […] *Pituitary infarction (apoplexy), Sheehan’s syndrome. *Infiltration of the pituitary gland […] *infection […] *Trauma […] *Subarachnoid haemorrhage. *Isolated hypothalamic-releasing hormone deficiency, e.g. Kallmann’s syndrome […] *Genetic causes [Let’s stop here: Point is, lots of things can cause pituitary problems…] […] The clinical features depend on the type and degree of hormonal deficits, and the rate of its development, in addition to whether there is intercurrent illness. In the majority of cases, the development of hypopituitarism follows a characteristic order, with secretion of GH [growth hormone, US] and gonadotrophins being affected first, followed by TSH [Thyroid-Stimulating Hormone, US] and ACTH [Adrenocorticotropic Hormone, US] secretion at a later stage. PRL [prolactin, US] deficiency is rare, except in Sheehan’s syndrome associated with failure of lactation. ADH [antidiuretic hormone, US] deficiency is virtually unheard of with pituitary adenomas but may be seen rarely with infiltrative disorders and trauma. The majority of the clinical features are similar to those occurring when there is target gland insufficiency. […] NB Houssay phenomenon. Amelioration of diabetes mellitus in patients with hypopituitarism due to reduction in counter-regulatory hormones. […] The aims of investigation of hypopituitarism are to biochemically assess the extent of pituitary hormone deficiency and also to elucidate the cause. […] Treatment involves adequate and appropriate hormone replacement […] and management of the underlying cause.”

“Apoplexy refers to infarction of the pituitary gland due to either haemorrhage or ischaemia. It occurs most commonly in patients with pituitary adenomas, usually macroadenomas […] It is a medical emergency, and rapid hydrocortisone replacement can be lifesaving. It may present with […] sudden onset headache, vomiting, meningism, visual disturbance, and cranial nerve palsy.”

“Anterior pituitary hormone replacement therapy is usually performed by replacing the target hormone rather than the pituitary or hypothalamic hormone that is actually deficient. The exceptions to this are GH replacement […] and when fertility is desired […] [In the context of thyroid hormone replacement:] In contrast to replacement in [primary] hypothyroidism, the measurement of TSH cannot be used to assess adequacy of replacement in TSH deficiency due to hypothalamo-pituitary disease. Therefore, monitoring of treatment in order to avoid under- and over-replacement should be via both clinical assessment and by measuring free thyroid hormone concentrations […] [In the context of sex hormone replacement:] Oestrogen/testosterone administration is the usual method of replacement, but gonadotrophin therapy is required if fertility is desired […] Patients with ACTH deficiency usually need glucocorticoid replacement only and do not require mineralocorticoids, in contrast to patients with Addison’s disease. […] Monitoring of replacement [is] important to avoid over-replacement, which is associated with ↑ BP, elevated glucose and insulin, and reduced bone mineral density (BMD). Under-replacement leads to non-specific symptoms, as seen in Addison’s disease […] Conventional replacement […] may overtreat patients with partial ACTH deficiency.”

“There is now a considerable amount of evidence that there are significant and specific consequences of GH deficiency (GHD) in adults and that many of these features improve with GH replacement therapy. […] It is important to differentiate between adult and childhood onset GHD. […] the commonest cause in childhood is an isolated variable deficiency of GH-releasing hormone (GHRH) which may resolve in adult life […] It is, therefore, important to retest patients with childhood onset GHD when linear growth is completed (50% recovery of this group). Adult onset GHD usually occurs [secondarily] to a structural pituitary or parapituitary condition or due to the effects of surgical treatment or radiotherapy. Prevalence[:] *Adult onset GHD 1/10,000 *Adult GHD due to adult and childhood onset GHD 3/10,000. Benefits of GH replacement[:] *Improved QoL and psychological well-being. *Improved exercise capacity. *↑ lean body mass and reduced fat mass. *Prolonged GH replacement therapy (>12-24 months) has been shown to increase BMD, which would be expected to reduce fracture rate. *There are, as yet, no outcome studies in terms of cardiovascular mortality. However, GH replacement does lead to a reduction (~15%) in cholesterol. GH replacement also leads to improved ventricular function and ↑ left ventricular mass. […] All patients with GHD should be considered for GH replacement therapy. […] adverse effects experienced with GH replacement usually resolve with dose reduction […] GH treatment may be associated with impairment of insulin sensitivity, and therefore markers of glycaemia should be monitored. […] Contraindications to GH replacement[:] *Active malignancy. *Benign intracranial hypertension. *Pre-proliferative/proliferative retinopathy in diabetes mellitus.”

“*Pituitary adenomas are the most common pituitary disease in adults and constitute 10-15% of primary brain tumours. […] *The incidence of clinically apparent pituitary disease is 1 in 10,000. *Pituitary carcinoma is very rare (<0.1% of all tumours) and is most commonly ACTH- or prolactin-secreting. […] *Microadenoma <1cm. *Macroadenoma >1cm. [In terms of the functional status of tumours, the breakdown is as follows:] *Prolactinoma 35-40%. *Non-functioning 30-35%. *Growth hormone (acromegaly) 10-15%. *ACTH adenoma (Cushing’s disease) 5-10%. *TSH adenoma <5%. […] Pituitary disease is associated with an increased mortality, predominantly due to vascular disease. This may be due to oversecretion of GH or ACTH, hormone deficiencies or excessive replacement (e.g. of hydrocortisone).”

“*Prolactinomas are the commonest functioning pituitary tumour. […] Malignant prolactinomas are very rare […] [Clinical features of hyperprolactinaemia:] *Galactorrhoea (up to 90%♀, <10% ♂). *Disturbed gonadal function [menstrual disturbance, infertility, reduced libido, ED in ♂] […] Hyperprolactinaemia is associated with a long-term risk of ↓ BMD. […] Hypothyroidism and chronic renal failure are causes of hyperprolactinaemia. […] Antipsychotic agents are the most likely psychotropic agents to cause hyperprolactinaemia. […] Macroadenomas are space-occupying tumours, often associated with bony erosion and/or cavernous sinus invasion. […] *Invasion of the cavernous sinus may lead to cranial nerve palsies. *Occasionally, very invasive tumours may erode bone and present with a CSF leak or [secondary] meningitis. […] Although microprolactinomas may expand in size without treatment, the vast majority do not. […] Macroprolactinomas, however, will continue to expand and lead to pressure effects. Definitive treatment of the tumour is, therefore, necessary.”

“Dopamine agonist treatment […] leads to suppression of PRL in most patients [with prolactinoma], with [secondary] effects of normalization of gonadal function and termination of galactorrhoea. Tumour shrinkage occurs at a variable rate (from 24h to 6-12 months) and extent and must be carefully monitored. Continued shrinkage may occur for years. Slow chiasmal decompression will correct visual field defects in the majority of patients, and immediate surgical decompression is not necessary. […] Cabergoline is more effective in normalization of PRL in microprolactinoma […], with fewer side effects than bromocriptine. […] Tumour enlargement following initial shrinkage on treatment is usually due to non-compliance. […] Since the introduction of dopamine agonist treatment, transsphenoidal surgery is indicated only for patients who are resistant to, or intolerant of, dopamine agonist treatment. The cure rate for macroprolactinomas treated with surgery is poor (30%), and, therefore, drug treatment is first-line in tumours of all sizes. […] Standard pituitary irradiation leads to slow reduction (over years) of PRL in the majority of patients. […] Radiotherapy is not indicated in the management of patients with microprolactinomas. It is useful in the treatment of macroprolactinomas once the tumour has been shrunk away from the chiasm, and only if the tumour is resistant.”

“Acromegaly is the clinical condition resulting from prolonged excessive GH and hence IGF-1 secretion in adults. GH secretion is characterized by blunting of pulsatile secretion and failure of GH to become undetectable during the 24h day, unlike normal controls. […] *Prevalence 40-86 cases/million population. Annual incidence of new cases in the UK is 4/million population. *Onset is insidious, and there is, therefore, often a considerable delay between onset of clinical features and diagnosis. Most cases are diagnosed at 40-60 years. […] Pituitary gigantism [is] [t]he clinical syndrome resulting from excess GH secretion in children prior to fusion of the epiphyses. […] ↑ growth velocity without premature pubertal manifestations should arouse suspicion of pituitary gigantism. […] Causes of acromegaly[:] *Pituitary adenoma (>99% of cases). Macroadenomas 60-80%, microadenomas 20-40%. […] The clinical features arise from the effects of excess GH/IGF-1, excess PRL in some (as there is co-secretion of PRL in a minority (30%) of tumours […] and the tumour mass. [Signs and symptoms:] *↑ sweating (>80% of patients). *Headaches […] *Tiredness and lethargy. *Joint pains. *Change in ring or shoe size. *Facial appearance. Coarse features […] enlarged nose […] prognathism […] interdental separation. […] Enlargement of hands and feet […] [Complications:] *Hypertension (40%). *Insulin resistance and impaired glucose tolerance (40%)/diabetes mellitus (20%). *Obstructive sleep apnea – due to soft tissue swelling […] Ischaemic heart disease and cerebrovascular disease.”

“Management of acromegaly[:] The management strategy depends on the individual patient and also on the tumour size. Lowering of GH is essential in all situations […] Transsphenoidal surgery […] is usually the first line for treatment in most centres. *Reported cure rates vary: 40-91% for microadenomas and 10-48% for macroadenomas, depending on surgical expertise. […] Using the definition of post-operative cure as mean GH <2.5 micrograms/L, the reported recurrence rate is low (6% at 5 years). Radiotherapy […] is usually reserved for patients following unsuccessful transsphenoidal surgery; only occasionally is it used as [primary] therapy. […] normalization of mean GH may take several years and, during this time, adjunctive medical treatment (usually with somatostatin analogues) is required. […] Radiotherapy can induce GH deficiency which may need GH therapy. […] Somatostatin analogues lead to suppression of GH secretion in 20-60% of patients with acromegaly. […] some patients are partial responders, and although somatostatin analogues will lead to lowering of mean GH, they do not suppress to normal despite dose escalation. These drugs may be used as [primary] therapy where the tumour does not cause mass effects or in patients who have received surgery and/or radiotherapy who have elevated mean GH. […] Dopamine agonists […] lead to lowering of GH levels but, very rarely, lead to normalization of GH or IGF-1 (<30%). They may be helpful, particularly if there is coexistent secretion of PRL, and, in these cases, there may be significant tumour shrinkage. […] GH receptor antagonists [are] [i]ndicated for somatostatin non-responders.”

“Cushing’s syndrome is an illness resulting from excess cortisol secretion, which has a high mortality if left untreated. There are several causes of hypercortisolaemia which must be differentiated, and the commonest cause is iatrogenic (oral, inhaled, or topical steroids). […] ACTH-dependent Cushing’s must be differentiated from ACTH-independent disease (usually due to an adrenal adenoma, or, rarely, carcinoma […]). Once a diagnosis of ACTH-dependent disease has been established, it is important to differentiate between pituitary-dependent (Cushing’s disease) and ectopic secretion. […] [Cushing’s disease is rare;] annual incidence approximately 2/million. The vast majority of Cushing’s syndrome is due to a pituitary ACTH-secreting corticotroph microadenoma. […] The features of Cushing’s syndrome are progressive and may be present for several years prior to diagnosis. […] *Facial appearance – round plethoric complexion, acne and hirsutism, thinning of scalp hair. *Weight gain – truncal obesity, buffalo hump […] *Skin – thin and fragile […] easy bruising […] *Proximal muscle weakness. *Mood disturbance – labile, depression, insomnia, psychosis. *Menstrual disturbance. *Low libido and impotence. […] Associated features [include:] *Hypertension (>50%) due to mineralocorticoid effects of cortisol […] *Impaired glucose tolerance/diabetes mellitus (30%). *Osteopenia and osteoporosis […] *Vascular disease […] *Susceptibility to infections. […] Cushing’s is associated with a hypercoagulable state, with increased cardiovascular thrombotic risks. […] Hypercortisolism suppresses the thyroidal, gonadal, and GH axes, leading to lowered levels of TSH and thyroid hormones as well as reduced gonadotrophins, gonadal steroids, and GH.”

“Treatment of Cushing’s disease[:] Transsphenoidal surgery [is] the first-line option in most cases. […] Pituitary radiotherapy [is] usually administered as second-line treatment, following unsuccessful transsphenoidal surgery. […] Medical treatment [is] indicated during the preoperative preparation of patients, while awaiting radiotherapy to become effective, or if surgery or radiotherapy are contraindicated. *Inhibitors of steroidogenesis: metyrapone is usually used first-line, but ketoconazole should be used as first-line in children […] A disadvantage of these steroidogenesis inhibitors is the need to increase the dose to maintain control, as ACTH secretion will increase as cortisol concentrations decrease. […] Successful treatment (surgery or radiotherapy) of Cushing’s disease leads to cortisol deficiency and, therefore, glucocorticoid replacement therapy is essential. […] *Untreated [Cushing’s] disease leads to an approximately 30-50% mortality at 5 years, owing to vascular disease and susceptibility to infections. *Treated Cushing’s syndrome has a good prognosis […] *Although the physical features and severe psychological disorders associated with Cushing’s improve or resolve within weeks or months of successful treatment, more subtle mood disturbance may persist for longer. Adults may also have impaired cognitive function. […] it is likely that there is an increased cardiovascular risk. *Osteoporosis will usually resolve in children but may not improve significantly in older patients. […] *Hypertension has been shown to resolve in 80% and diabetes mellitus in up to 70%. *Recent data suggest that mortality even with successful treatment of Cushing’s is increased significantly.”

“The term incidentaloma refers to an incidentally detected lesion that is unassociated with hormonal hyper- or hyposecretion and has a benign natural history. The increasingly frequent detection of these lesions with technological improvements and more widespread use of sophisticated imaging has led to a management challenge – which, if any, lesions need investigation and/or treatment, and what is the optimal follow-up strategy (if required at all)? […] *Imaging studies using MRI demonstrate pituitary microadenomas in approximately 10% of normal volunteers. […] Clinically significant pituitary tumours are present in about 1 in 1,000 patients. […] Incidentally detected microadenomas are very unlikely (<10%) to increase in size whereas larger incidentally detected meso- and macroadenomas are more likely (40-50%) to enlarge. Thus, conservative management in selected patients may be appropriate for microadenomas which are incidentally detected […]. Macroadenomas should be treated, if possible.”

“Non-functioning pituitary tumours […] are unassociated with clinical syndromes of anterior pituitary hormone excess. […] Non-functioning pituitary tumours (NFA) are the commonest pituitary macroadenoma. They represent around 28% of all pituitary tumours. […] 50% enlarge, if left untreated, at 5 years. […] Tumour behaviour is variable, with some tumours behaving in a very indolent, slow-growing manner and others invading the sphenoid and cavernous sinus. […] At diagnosis, approximately 50% of patients are gonadotrophin-deficient. […] The initial definitive management in virtually every case is surgical. This removes mass effects and may lead to some recovery of pituitary function in around 10%. […] The use of post-operative radiotherapy remains controversial. […] The regrowth rate at 10 years without radiotherapy approaches 45% […] administration of post-operative radiotherapy reduces this regrowth rate to <10%. […] however, there are sequelae to radiotherapy – with a significant long-term risk of hypopituitarism and a possible risk of visual deterioration and malignancy in the field of radiation. […] Unlike the case for GH- and PRL-secreting tumours, medical therapy for NFAs is usually unhelpful […] Gonadotrophinomas […] are tumours that arise from the gonadotroph cells of the pituitary gland and produce FSH, LH, or the α subunit. […] they are usually silent and unassociated with excess detectable secretion of LH and FSH […] [they] present in the same manner as other non-functioning pituitary tumours, with mass effects and hypopituitarism […] These tumours are managed as non-functioning tumours.”

“The posterior lobe of the pituitary gland arises from the forebrain and comprises up to 25% of the normal adult pituitary gland. It produces arginine vasopressin and oxytocin. […] Oxytocin has no known role in ♂ […] In ♀, oxytocin contracts the pregnant uterus and also causes breast duct smooth muscle contraction, leading to breast milk ejection during breastfeeding. […] However, oxytocin deficiency has no known adverse effect on parturition or breastfeeding. […] Arginine vasopressin is the major determinant of renal water excretion and, therefore, fluid balance. Its main action is to reduce free water clearance. […] Many substances modulate vasopressin secretion, including the catecholamines and opioids. *The main site of action of vasopressin is in the collecting duct and the thick ascending loop of Henle […] Diabetes Insipidus (DI) […] is defined as the passage of large volumes (>3L/24h) of dilute urine (osmolality <300mOsm/kg). [It may be] [d]ue to deficiency of circulating arginine vasopressin [or] [d]ue to renal resistance to vasopressin.” […lots of other causes as well – trauma, tumours, inflammation, infection, vascular, drugs, genetic conditions…]

“Hyponatraemia […] Incidence[:] *1-6% of hospital admissions Na<130mmol/L. *15-22% of hospital admissions Na<135mmol/L. […] True clinically apparent hyponatraemia is associated with either excess water or salt deficiency. […] Features *Depend on the underlying cause and also on the rate of development of hyponatraemia. May develop once sodium reaches 115mmol/L or earlier if the fall is rapid. A level of 100mmol/L or less is life-threatening. *Features of excess water are mainly neurological because of brain injury […] They include confusion and headache, progressing to seizures and coma. […] SIADH [Syndrome of Inappropriate ADH, US] is a common cause of hyponatraemia. […] The elderly are more prone to SIADH, as they are unable to suppress ADH as efficiently […] ↑ risk of hyponatraemia with SSRIs. […] rapid overcorrection of hyponatraemia may cause central pontine myelinolysis (demyelination).”

“The hypothalamus releases hormones that act as releasing hormones at the anterior pituitary gland. […] The commonest syndrome to be associated with the hypothalamus is abnormal GnRH secretion, leading to reduced gonadotrophin secretion and hypogonadism. Common causes are stress, weight loss, and excessive exercise.”


January 14, 2018 Posted by | Books, Cancer/oncology, Cardiology, Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Ophthalmology, Pharmacology | Leave a comment

A few diabetes papers of interest

i. Type 2 Diabetes in the Real World: The Elusive Nature of Glycemic Control.

“Despite U.S. Food and Drug Administration (FDA) approval of over 40 new treatment options for type 2 diabetes since 2005, the latest data from the National Health and Nutrition Examination Survey show that the proportion of patients achieving glycated hemoglobin (HbA1c) <7.0% (<53 mmol/mol) remains around 50%, with a negligible decline between the periods 2003–2006 and 2011–2014. The Healthcare Effectiveness Data and Information Set reports even more alarming rates, with only about 40% and 30% of patients achieving HbA1c <7.0% (<53 mmol/mol) in the commercially insured (HMO) and Medicaid populations, respectively, again with virtually no change over the past decade. A recent retrospective cohort study using a large U.S. claims database explored why clinical outcomes are not keeping pace with the availability of new treatment options. The study found that HbA1c reductions fell far short of those reported in randomized clinical trials (RCTs), with poor medication adherence emerging as the key driver behind the disconnect. In this Perspective, we examine the implications of these findings in conjunction with other data to highlight the discrepancy between RCT findings and the real world, all pointing toward the underrealized promise of FDA-approved therapies and the critical importance of medication adherence. While poor medication adherence is not a new issue, it has yet to be effectively addressed in clinical practice — often, we suspect, because it goes unrecognized. To support the busy health care professional, innovative approaches are sorely needed.”
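As an aside, the paired % and mmol/mol HbA1c figures quoted above interconvert via the standard NGSP/IFCC "master equation"; a minimal sketch of the conversion (my own illustration, not code from the paper):

```python
# Standard NGSP (%) <-> IFCC (mmol/mol) HbA1c conversion ("master equation").
def hba1c_percent_to_mmol_mol(pct: float) -> float:
    """Convert an NGSP HbA1c value (%) to IFCC units (mmol/mol)."""
    return (pct - 2.15) * 10.929

def hba1c_mmol_mol_to_percent(mmol: float) -> float:
    """Inverse conversion: IFCC (mmol/mol) back to NGSP (%)."""
    return mmol / 10.929 + 2.15

# The paper's 7.0% threshold corresponds to ~53 mmol/mol, as quoted.
print(round(hba1c_percent_to_mmol_mol(7.0)))  # → 53
```

Running the conversion on the paper's 7.0% threshold reproduces the 53 mmol/mol figure given in the quote.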

“To better understand the differences between usual care and clinical trial HbA1c results, multivariate regression analysis assessed the relative contributions of key biobehavioral factors, including baseline patient characteristics, drug therapy, and medication adherence (21). Significantly, the key driver was poor medication adherence, accounting for 75% of the gap […]. Adherence was defined […] as the filling of one’s diabetes prescription often enough to cover ≥80% of the time one was recommended to be taking the medication (34). By this metric, proportion of days covered (PDC) ≥80%, only 29% of patients were adherent to GLP-1 RA treatment and 37% to DPP-4 inhibitor treatment. […] These data are consistent with previous real-world studies, which have demonstrated that poor medication adherence to both oral and injectable antidiabetes agents is very common (35–37). For example, a retrospective analysis [of] adults initiating oral agents in the DPP-4 inhibitor (n = 61,399), sulfonylurea (n = 134,961), and thiazolidinedione (n = 42,012) classes found that adherence rates, as measured by PDC ≥80% at the 1-year mark after the initial prescription, were below 50% for all three classes, at 47.3%, 41.2%, and 36.7%, respectively (36). Rates dropped even lower at the 2-year follow-up (36)”

“Our current ability to assess adherence and persistence is based primarily on review of pharmacy records, which may underestimate the extent of the problem. For example, using the definition of adherence of the Centers for Medicare & Medicaid Services — PDC ≥80% — a patient could miss up to 20% of days covered and still be considered adherent. In retrospective studies of persistence, the permissible gap after the last expected refill date often extends up to 90 days (39,40). Thus, a patient may have a gap of up to 90 days and still be considered persistent.

Additionally, one must also consider the issue of primary nonadherence; adherence and persistence studies typically only include patients who have completed a first refill. A recent study of e-prescription data among 75,589 insured patients found that nearly one-third of new e-prescriptions for diabetes medications were never filled (41). Finally, none of these measures take into account if the patient is actually ingesting or injecting the medication after acquiring his or her refills.”
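The PDC metric discussed in the quotes above is simple to compute from pharmacy claims data; here is a minimal sketch (my own illustration — the fill dates and day-counting convention are hypothetical, not taken from the study):

```python
from datetime import date, timedelta

def proportion_of_days_covered(fills, period_start, period_end):
    """Fraction of days in [period_start, period_end] covered by dispensed
    medication. `fills` is a list of (fill_date, days_supply) tuples;
    overlapping supplies are not double-counted."""
    covered = set()
    for fill_date, days_supply in fills:
        for offset in range(days_supply):
            day = fill_date + timedelta(days=offset)
            if period_start <= day <= period_end:
                covered.add(day)
    total_days = (period_end - period_start).days + 1
    return len(covered) / total_days

# Hypothetical example: four 30-day fills over a 120-day window,
# with short gaps between refills.
fills = [(date(2016, 1, 1), 30), (date(2016, 2, 10), 30),
         (date(2016, 3, 15), 30), (date(2016, 4, 20), 30)]
pdc = proportion_of_days_covered(fills, date(2016, 1, 1), date(2016, 4, 29))
print(round(pdc, 3))      # → 0.833
print(pdc >= 0.80)        # → True: "adherent" despite ~20 uncovered days
```

This also illustrates the point made in the quote: a patient can miss a substantial number of days (here 20 out of 120) and still clear the PDC ≥80% bar.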

“Acknowledging and addressing the problem of poor medication adherence is pivotal because of the well-documented dire consequences: a greater likelihood of long-term complications, more frequent hospitalizations, higher health care costs, and elevated mortality rates (42–45). In patients younger than 65, hospitalization risk in one study (n = 137,277) was found to be 30% at the lowest level of adherence to antidiabetes medications (1–19%) versus 13% at the highest adherence quintile (80–100%) […]. In patients over 65, a separate study (n = 123,235) found that all-cause hospitalization risk was 37.4% in adherent cohorts (PDC ≥80%) versus 56.2% in poorly adherent cohorts (PDC <20%) (45). […] Furthermore, for every 1,000 patients who increased adherence to their antidiabetes medications by just 1%, the total medical cost savings was estimated to be $65,464 over 3 years (45). […] “for reasons that are still unclear, the N.A. [North American] patient groups tend to have lower compliance and adherence compared to global rates during large cardiovascular studies” (46,47).”

“There are many potential contributors to poor medication adherence, including depressive affect, negative treatment perceptions, lack of patient-physician trust, complexity of the medication regimen, tolerability, and cost (48). […] A recent review of interventions addressing problematic medication adherence in type 2 diabetes found that few strategies have been shown consistently to have a marked positive impact, particularly with respect to HbA1c lowering, and no single intervention was identified that could be applied successfully to all patients with type 2 diabetes (53). Additional evidence indicates that improvements resulting from the few effective interventions, such as pharmacy-based counseling or nurse-managed home telemonitoring, often wane once the programs end (54,55). We suspect that the efficacy of behavioral interventions to address medication adherence will continue to be limited until there are more focused efforts to address three common and often unappreciated patient obstacles. First, taking diabetes medications is a burdensome and often difficult activity for many of our patients. Rather than just encouraging patients to do a better job of tolerating this burden, more work is needed to make the process easier and more convenient. […] Second, poor medication adherence often represents underlying attitudinal problems that may not be a strictly behavioral issue. Specifically, negative beliefs about prescribed medications are pervasive among patients, and behavioral interventions cannot be effective unless these beliefs are addressed directly (35). […] Third, the issue of access to medications remains a primary concern. A study by Kurlander et al. (51) found that patients selectively forgo medications because of cost; however, noncost factors, such as beliefs, satisfaction with medication-related information, and depression, are also influential.”

ii. Diabetes Research and Care Through the Ages. An overview article which might be of particular interest to people who aren't very familiar with the history of diabetes research and treatment (a topic which is also very nicely covered in Tattersall’s book). Though primarily a historical review, it also includes many observations about e.g. current (and future?) practice. Some random quotes:

“Arnoldo Cantani established a new strict level of treatment (9). He isolated his patients “under lock and key, and allowed them absolutely no food but lean meat and various fats. In the less severe cases, eggs, liver, and shell-fish were permitted. For drink the patients received water, plain or carbonated, and dilute alcohol for those accustomed to liquors, the total fluid intake being limited to one and one-half to two and one-half liters per day” (6).

Bernhard Naunyn encouraged a strict carbohydrate-free diet (6,10). He locked patients in their rooms for 5 months when necessary for “sugar-freedom” (6).” […let’s just say that treatment options have changed slightly over time – US]

“The characteristics of insulin preparations include the purity of the preparation, the concentration of insulin, the species of origin, and the time course of action (onset, peak, duration) (25). From the 1930s to the early 1950s, one of the major efforts made was to develop an insulin with extended action […]. Most preparations contained 40 (U-40) or 80 (U-80) units of insulin per mL, with U-10 and U-20 eliminated in the early 1940s. U-100 was introduced in 1973 and was meant to be a standard concentration, although U-500 had been available since the early 1950s for special circumstances. Preparations were either of mixed beef and pork origin, pure beef, or pure pork. There were progressive improvements in the purity of preparations as chemical techniques improved. Prior to 1972, conventional preparations contained 8% noninsulin proteins. […] In the early 1980s, “human” insulins were introduced (26). These were made either by recombinant DNA technology in bacteria (Escherichia coli) or yeast (Saccharomyces cerevisiae) or by enzymatic conversion of pork insulin to human insulin, since pork differed by only one amino acid from human insulin. The powerful nature of recombinant DNA technology also led to the development of insulin analogs designed for specific effects. These include rapid-acting insulin analogs and basal insulin analogs.”

“Until 1996, the only oral medications available were biguanides and sulfonylureas. Since that time, there has been an explosion of new classes of oral and parenteral preparations. […] The management of type 2 diabetes (T2D) has undergone rapid change with the introduction of several new classes of glucose-lowering therapies. […] the treatment guidelines are generally clear in the context of using metformin as the first oral medication for T2D and present a menu approach with respect to the second and third glucose-lowering medication (3032). In order to facilitate this decision, the guidelines list the characteristics of each medication including side effects and cost, and the health care provider is expected to make a choice that would be most suited for patient comorbidities and health care circumstances. This can be confusing and contributes to the clinical inertia characteristic of the usual management of T2D (33).”

“Perhaps the most frustrating barrier to optimizing diabetes management is the frequent occurrence of clinical inertia (whenever the health care provider does not initiate or intensify therapy appropriately and in a timely fashion when therapeutic goals are not reached). More broadly, the failure to advance therapy in an appropriate manner can be traced to physician behaviors, patient factors, or elements of the health care system. […] Despite clear evidence from multiple studies, health care providers fail to fully appreciate that T2D is a progressive disease. T2D is associated with ongoing β-cell failure and, as a consequence, we can safely predict that for the majority of patients, glycemic control will deteriorate with time despite metformin therapy (35). Continued observation and reinforcement of the current therapeutic regimen is not likely to be effective. As an example of real-life clinical inertia for patients with T2D on monotherapy metformin and an HbA1c of 7 to <8%, it took on the average 19 months before additional glucose-lowering therapy was introduced (36). The fear of hypoglycemia and weight gain are appropriate concerns for both patient and physician, but with newer therapies these undesirable effects are significantly diminished. In addition, health care providers must appreciate that achieving early and sustained glycemic control has been demonstrated to have long-term benefits […]. Clinicians have been schooled in the notion of a stepwise approach to therapy and are reluctant to initiate combination therapy early in the course of T2D, even if the combination intervention is formulated as a fixed-dose combination. […] monotherapy metformin failure rates with a starting HbA1c >7% are ∼20% per year (35). […] To summarize the current status of T2D at this time, it should be clearly emphasized that, first and foremost, T2D is characterized by a progressive deterioration of glycemic control. 
A stepwise medication introduction approach results in clinical inertia and frequently fails to meet long-term treatment goals. Early/initial combination therapies that are not associated with hypoglycemia and/or weight gain have been shown to be safe and effective. The added value of reducing CV outcomes with some of these newer medications should elevate them to a more prominent place in the treatment paradigm.”

iii. Use of Adjuvant Pharmacotherapy in Type 1 Diabetes: International Comparison of 49,996 Individuals in the Prospective Diabetes Follow-up and T1D Exchange Registries.

“The majority of those with type 1 diabetes (T1D) have suboptimal glycemic control (14); therefore, use of adjunctive pharmacotherapy to improve control has been of clinical interest. While noninsulin medications approved for type 2 diabetes have been reported in T1D research and clinical practice (5), little is known about their frequency of use. The T1D Exchange (T1DX) registry in the U.S. and the Prospective Diabetes Follow-up (DPV) registry in Germany and Austria are two large consortia of diabetes centers; thus, they provide a rich data set to address this question.

For the analysis, 49,996 pediatric and adult patients with diabetes duration ≥1 year and a registry update from 1 April 2015 to 1 July 2016 were included (19,298 individuals from 73 T1DX sites and 30,698 individuals from 354 DPV sites). Adjuvant medication use (metformin, glucagon-like peptide 1 [GLP-1] receptor agonists, dipeptidyl peptidase 4 [DPP-4] inhibitors, sodium–glucose cotransporter 2 [SGLT2] inhibitors, and other noninsulin diabetes medications including pramlintide) was extracted from participant medical records. […] Adjunctive agents, whose proposed benefits may include the ability to improve glycemic control, reduce insulin doses, promote weight loss, and suppress dysregulated postprandial glucagon secretion, have had little penetrance as part of the daily medical regimen of those in the registries studied. […] The use of any adjuvant medication was 5.4% in T1DX and 1.6% in DPV (P < 0.001). Metformin was the most commonly reported medication in both registries, with 3.5% in the T1DX and 1.3% in the DPV (P < 0.001). […] Use of adjuvant medication was associated with older age, higher BMI, and longer diabetes duration in both registries […] it is important to note that registry data did not capture the intent of adjuvant medications, which may have been to treat polycystic ovarian syndrome in women […here’s a relevant link, US].”

iv. Prevalence of and Risk Factors for Diabetic Peripheral Neuropathy in Youth With Type 1 and Type 2 Diabetes: SEARCH for Diabetes in Youth Study. I recently covered a closely related paper here (paper # 2) but the two papers cover different data sets so I decided it would be worth including this one in this post anyway. Some quotes:

“We previously reported results from a small pilot study comparing the prevalence of DPN in a subset of youth enrolled in the SEARCH for Diabetes in Youth (SEARCH) study and found that 8.5% of 329 youth with T1D (mean ± SD age 15.7 ± 4.3 years and diabetes duration 6.2 ± 0.9 years) and 25.7% of 70 youth with T2D (age 21.6 ± 4.1 years and diabetes duration 7.6 ± 1.8 years) had evidence of DPN (9). […this is the paper I previously covered here, US] Recently, we also reported the prevalence of microvascular and macrovascular complications in youth with T1D and T2D in the entire SEARCH cohort (10).

In the current study, we examined the cross-sectional and longitudinal risk factors for DPN. The aims were 1) to estimate prevalence of DPN in youth with T1D and T2D, overall and by age and diabetes duration, and 2) to identify risk factors (cross-sectional and longitudinal) associated with the presence of DPN in a multiethnic cohort of youth with diabetes enrolled in the SEARCH study.”

“The SEARCH Cohort Study enrolled 2,777 individuals. For this analysis, we excluded participants aged <10 years (n = 134), those with no antibody measures for etiological definition of diabetes (n = 440), and those with incomplete neuropathy assessment […] (n = 213), which reduced the analysis sample size to 1,992 […] There were 1,734 youth with T1D and 258 youth with T2D who participated in the SEARCH study and had complete data for the variables of interest. […] Seven percent of the participants with T1D and 22% of those with T2D had evidence of DPN.”

“Among youth with T1D, those with DPN were older (21 vs. 18 years, P < 0.0001), had a longer duration of diabetes (8.7 vs. 7.8 years, P < 0.0001), and had higher DBP (71 vs. 69 mmHg, P = 0.02), BMI (26 vs. 24 kg/m2, P < 0.001), and LDL-c levels (101 vs. 96 mg/dL, P = 0.01); higher triglycerides (85 vs. 74 mg/dL, P = 0.005); and lower HDL-c levels (51 vs. 55 mg/dL, P = 0.01) compared to those without DPN. The prevalence of DPN was 5% among nonsmokers vs. 10% among the current and former smokers (P = 0.001). […] Among youth with T2D, those with DPN were older (23 vs. 22 years, P = 0.01), had longer duration of diabetes (8.6 vs. 7.6 years; P = 0.002), and had lower HDL-c (40 vs. 43 mg/dL, P = 0.04) compared with those without DPN. The prevalence of DPN was higher among males than among females: 30% of males had DPN compared with 18% of females (P = 0.02). The prevalence of DPN was twofold higher in current smokers (33%) compared with nonsmokers (15%) and former smokers (17%) (P = 0.01). […] [T]he prevalence of DPN was further assessed by 5-year increment of diabetes duration in individuals with T1D or T2D […]. There was an approximately twofold increase in the prevalence of DPN with an increase in duration of diabetes from 5–10 years to >10 years for both the T1D group (5–13%) (P < 0.0001) and the T2D group (19–36%) (P = 0.02). […] in an unadjusted logistic regression model, youth with T2D were four times more likely to develop DPN compared with those with T1D, and though this association was attenuated, it remained significant independent of age, sex, height, and glycemic control (OR 2.99 [1.91; 4.67], P < 0.001)”.
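As a rough sanity check (mine, not the paper's): the unadjusted "four times more likely" figure can be approximated directly from the reported DPN prevalences of 7% (T1D) and 22% (T2D):

```python
# Crude odds ratio implied by the reported DPN prevalences
# (7% in T1D vs. 22% in T2D) — my own back-of-the-envelope check.
def odds(p: float) -> float:
    """Convert a prevalence (probability) to odds."""
    return p / (1 - p)

unadjusted_or = odds(0.22) / odds(0.07)
print(round(unadjusted_or, 1))  # → 3.7, roughly the "four times" figure
```

The crude OR of ~3.7 is consistent with the paper's unadjusted "four times more likely" statement; the reported adjusted OR of 2.99 reflects the attenuation after controlling for age, sex, height, and glycemic control.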

“The prevalence estimates for DPN found in our study for youth with T2D are similar to those in the Australian cohort (8) but lower for youth with T1D than those reported in the Danish (7) and Australian (8) cohorts. The nationwide Danish Study Group for Diabetes in Childhood reported a prevalence of 62% among 339 adolescents and youth with T1D (age 12–27 years, duration 9–25 years, and HbA1c 9.7 ± 1.7%) using the vibration perception threshold to assess DPN (7). The higher prevalence in this cohort compared with ours (62 vs. 7%) could be due to the longer duration of diabetes (9–25 vs. 5–13 years) and reliance on a single measure of neuropathy (vibration perception threshold) as opposed to our use of the MNSI, which includes vibration as well as other indicators of neuropathy. In the Australian study, Eppens et al. (8) reported abnormalities in peripheral nerve function in 27% of the 1,433 adolescents with T1D (median age 15.7 years, median diabetes duration 6.8 years, and mean HbA1c 8.5%) and 21% of the 68 adolescents with T2D (median age 15.3 years, median diabetes duration 1.3 years, and mean HbA1c 7.3%) based on thermal and vibration perception threshold. These data are thus reminiscent of the persistent inconsistencies in the definition of DPN, which are reflected in the wide range of prevalence estimates being reported.”

“The alarming rise in rates of DPN for every 5-year increase in duration, coupled with poor glycemic control and dyslipidemia, in this cohort reinforces the need for clinicians rendering care to youth with diabetes to be vigilant in screening for DPN and identifying any risk factors that could potentially be modified to alter the course of the disease (28–30). The modifiable risk factors that could be targeted in this young population include better glycemic control, treatment of dyslipidemia, and smoking cessation (29,30) […]. The sharp increase in rates of DPN over time is a reminder that DPN is one of the complications of diabetes that must be a part of the routine annual screening for youth with diabetes.”

v. Diabetes and Hypertension: A Position Statement by the American Diabetes Association.

“Hypertension is common among patients with diabetes, with the prevalence depending on type and duration of diabetes, age, sex, race/ethnicity, BMI, history of glycemic control, and the presence of kidney disease, among other factors (1–3). Furthermore, hypertension is a strong risk factor for atherosclerotic cardiovascular disease (ASCVD), heart failure, and microvascular complications. ASCVD — defined as acute coronary syndrome, myocardial infarction (MI), angina, coronary or other arterial revascularization, stroke, transient ischemic attack, or peripheral arterial disease presumed to be of atherosclerotic origin — is the leading cause of morbidity and mortality for individuals with diabetes and is the largest contributor to the direct and indirect costs of diabetes. Numerous studies have shown that antihypertensive therapy reduces ASCVD events, heart failure, and microvascular complications in people with diabetes (4–8). Large benefits are seen when multiple risk factors are addressed simultaneously (9). There is evidence that ASCVD morbidity and mortality have decreased for people with diabetes since 1990 (10,11) likely due in large part to improvements in blood pressure control (12–14). This Position Statement is intended to update the assessment and treatment of hypertension among people with diabetes, including advances in care since the American Diabetes Association (ADA) last published a Position Statement on this topic in 2003 (3).”

“Hypertension is defined as a sustained blood pressure ≥140/90 mmHg. This definition is based on unambiguous data that levels above this threshold are strongly associated with ASCVD, death, disability, and microvascular complications (1,2,24–27) and that antihypertensive treatment in populations with baseline blood pressure above this range reduces the risk of ASCVD events (4–6,28,29). The “sustained” aspect of the hypertension definition is important, as blood pressure has considerable normal variation. The criteria for diagnosing hypertension should be differentiated from blood pressure treatment targets.

Hypertension diagnosis and management can be complicated by two common conditions: masked hypertension and white-coat hypertension. Masked hypertension is defined as a normal blood pressure in the clinic or office (<140/90 mmHg) but an elevated home blood pressure of ≥135/85 mmHg (30); the lower home blood pressure threshold is based on outcome studies (31) demonstrating that lower home blood pressures correspond to higher office-based measurements. White-coat hypertension is elevated office blood pressure (≥140/90 mmHg) and normal (untreated) home blood pressure (<135/85 mmHg) (32). Identifying these conditions with home blood pressure monitoring can help prevent overtreatment of people with white-coat hypertension who are not at elevated risk of ASCVD and, in the case of masked hypertension, allow proper use of medications to reduce side effects during periods of normal pressure (33,34).”
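The two conditions above are defined purely by which of the two threshold pairs (office 140/90 mmHg, home 135/85 mmHg) a patient crosses, so the classification logic can be written out in a few lines. A minimal sketch, assuming a simple (systolic, diastolic) tuple interface of my own devising; this is an illustration of the definitions, not a clinical tool:

```python
OFFICE_SYS, OFFICE_DIA = 140, 90  # office threshold (mmHg)
HOME_SYS, HOME_DIA = 135, 85      # home threshold (mmHg)

def elevated(sys, dia, sys_cut, dia_cut):
    """Blood pressure is elevated if either component meets its cutoff."""
    return sys >= sys_cut or dia >= dia_cut

def classify_bp(office, home):
    """office/home are (systolic, diastolic) pairs in mmHg."""
    office_high = elevated(*office, OFFICE_SYS, OFFICE_DIA)
    home_high = elevated(*home, HOME_SYS, HOME_DIA)
    if office_high and home_high:
        return "sustained hypertension"
    if office_high:
        return "white-coat hypertension"
    if home_high:
        return "masked hypertension"
    return "normotensive"

print(classify_bp((150, 95), (128, 80)))  # white-coat hypertension
print(classify_bp((132, 84), (138, 86)))  # masked hypertension
```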

“Diabetic autonomic neuropathy or volume depletion can cause orthostatic hypotension (35), which may be further exacerbated by antihypertensive medications. The definition of orthostatic hypotension is a decrease in systolic blood pressure of 20 mmHg or a decrease in diastolic blood pressure of 10 mmHg within 3 min of standing when compared with blood pressure from the sitting or supine position (36). Orthostatic hypotension is common in people with type 2 diabetes and hypertension and is associated with an increased risk of mortality and heart failure (37).

It is important to assess for symptoms of orthostatic hypotension to individualize blood pressure goals, select the most appropriate antihypertensive agents, and minimize adverse effects of antihypertensive therapy.”

“Taken together, […] meta-analyses consistently show that treating patients with baseline blood pressure ≥140 mmHg to targets <140 mmHg is beneficial, while more intensive targets may offer additional though probably less robust benefits. […] Overall, compared with people without diabetes, the relative benefits of antihypertensive treatment are similar, and absolute benefits may be greater (5,8,40). […] Multiple-drug therapy is often required to achieve blood pressure targets, particularly in the setting of diabetic kidney disease. However, the use of both ACE inhibitors and ARBs in combination is not recommended given the lack of added ASCVD benefit and increased rate of adverse events — namely, hyperkalemia, syncope, and acute kidney injury (71–73). Titration of and/or addition of further blood pressure medications should be made in a timely fashion to overcome clinical inertia in achieving blood pressure targets. […] there is an absence of high-quality data available to guide blood pressure targets in type 1 diabetes. […] Of note, diastolic blood pressure, as opposed to systolic blood pressure, is a key variable predicting cardiovascular outcomes in people under age 50 years without diabetes and may be prioritized in younger adults (46,47). Though convincing data are lacking, younger adults with type 1 diabetes might more easily achieve intensive blood pressure levels and may derive substantial long-term benefit from tight blood pressure control.”

“Lifestyle management is an important component of hypertension treatment because it lowers blood pressure, enhances the effectiveness of some antihypertensive medications, promotes other aspects of metabolic and vascular health, and generally leads to few adverse effects. […] Lifestyle therapy consists of reducing excess body weight through caloric restriction, restricting sodium intake (<2,300 mg/day), increasing consumption of fruits and vegetables […] and low-fat dairy products […], avoiding excessive alcohol consumption […] (53), smoking cessation, reducing sedentary time (54), and increasing physical activity levels (55). These lifestyle strategies may also positively affect glycemic and lipid control and should be encouraged in those with even mildly elevated blood pressure.”

“Initial treatment for hypertension should include drug classes demonstrated to reduce cardiovascular events in patients with diabetes: ACE inhibitors (65,66), angiotensin receptor blockers (ARBs) (65,66), thiazide-like diuretics (67), or dihydropyridine CCBs (68). For patients with albuminuria (urine albumin-to-creatinine ratio [UACR] ≥30 mg/g creatinine), initial treatment should include an ACE inhibitor or ARB in order to reduce the risk of progressive kidney disease […]. In the absence of albuminuria, risk of progressive kidney disease is low, and ACE inhibitors and ARBs have not been found to afford superior cardioprotection when compared with other antihypertensive agents (69). β-Blockers may be used for the treatment of coronary disease or heart failure but have not been shown to reduce mortality as blood pressure–lowering agents in the absence of these conditions (5,70).”

vi. High Illicit Drug Abuse and Suicide in Organ Donors With Type 1 Diabetes.

“Organ donors with type 1 diabetes represent a unique population for research. Through a combination of immunological, metabolic, and physiological analyses, researchers utilizing such tissues seek to understand the etiopathogenic events that result in this disorder. The Network for Pancreatic Organ Donors with Diabetes (nPOD) program collects, processes, and distributes pancreata and disease-relevant tissues to investigators throughout the world for this purpose (1). Information is also available, through medical records of organ donors, related to causes of death and psychological factors, including drug use and suicide, that impact life with type 1 diabetes.

We reviewed the terminal hospitalization records for the first 100 organ donors with type 1 diabetes in the nPOD database, noting cause, circumstance, and mechanism of death; laboratory results; and history of illicit drug use. Donors were 45% female and 79% Caucasian. Mean age at time of death was 28 years (range 4–61) with mean disease duration of 16 years (range 0.25–52).”

“Documented suicide was found in 8% of the donors, with an average age at death of 21 years and average diabetes duration of 9 years. […] Similarly, a type 1 diabetes registry from the U.K. found that 6% of subjects’ deaths were attributed to suicide (2). […] Additionally, we observed a high rate of illicit substance abuse: 32% of donors reported or tested positive for illegal substances (excluding marijuana), and multidrug use was common. Cocaine was the most frequently abused substance. Alcohol use was reported in 35% of subjects, with marijuana use in 27%. By comparison, 16% of deaths in the U.K. study were deemed related to drug misuse (2).”

“We fully recognize the implicit biases of an organ donor–based population, which may not be […’may not be’ – well, I guess that’s one way to put it! – US] directly comparable to the general population. Nevertheless, the high rate of suicide and drug use should continue to spur our energy and resources toward caring for the emotional and psychological needs of those living with type 1 diabetes. The burden of type 1 diabetes extends far beyond checking blood glucose and administering insulin.”


January 10, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Pharmacology, Psychiatry, Studies

Depression (II)

I have added some more quotes from the last half of the book as well as some more links to relevant topics below.

“The early drugs used in psychiatry were sedatives, as calming a patient was probably the only treatment that was feasible and available. Also, it made it easier to manage large numbers of individuals with small numbers of staff at the asylum. Morphine, hyoscine, chloral, and later bromide were all used in this way. […] Insulin coma therapy came into vogue in the 1930s following the work of Manfred Sakel […] Sakel initially proposed this treatment as a cure for schizophrenia, but its use gradually spread to mood disorders to the extent that asylums in Britain opened so-called insulin units. […] Recovery from the coma required administration of glucose, but complications were common and death rates ranged from 1–10 per cent. Insulin coma therapy was initially viewed as having tremendous benefits, but later re-examinations have highlighted that the results could also be explained by a placebo effect associated with the dramatic nature of the process or, tragically, because deprivation of glucose supplies to the brain may have reduced the person’s reactivity because it had induced permanent damage.”

“[S]ome respected scientists and many scientific journals remain ambivalent about the empirical evidence for the benefits of psychological therapies. Part of the reticence appears to result from the lack of very large-scale clinical trials of therapies (compared to international, multi-centre studies of medication). However, a problem for therapy research is that there is no large-scale funding from big business for therapy trials […] It is hard to implement optimum levels of quality control in research studies of therapies. A tablet can have the same ingredients and be prescribed in almost exactly the same way in different treatment centres and different countries. If a patient does not respond to this treatment, the first thing we can do is check if they receive the right medication in the correct dose for a sufficient period of time. This is much more difficult to achieve with psychotherapy and fuels concerns about how therapy is delivered and potential biases related to researcher allegiance (i.e. clinical centres that invent a therapy show better outcomes than those that did not) and generalizability (our ability to replicate the therapy model exactly in a different place with different therapists). […] Overall, the ease of prescribing a tablet, the more traditional evidence-base for the benefits of medication, and the lack of availability of trained therapists in some regions means that therapy still plays second fiddle to medications in the majority of treatment guidelines for depression. […] The mainstay of treatments offered to individuals with depression has changed little in the last thirty to forty years. Antidepressants are the first-line intervention recommended in most clinical guidelines”.

“[W]hilst some cases of mild–moderate depression can benefit from antidepressants (e.g. chronic mild depression of several years’ duration can often respond to medication), it is repeatedly shown that the only group who consistently benefit from antidepressants are those with severe depression. The problem is that in the real world, most antidepressants are actually prescribed for less severe cases, that is, the group least likely to benefit; which is part of the reason why the argument about whether antidepressants work is not going to go away any time soon.”

“The economic argument for therapy can only be sustained if it is shown that the long-term outcome of depression (fewer relapses and better quality of life) is improved by receiving therapy instead of medication or by receiving both therapy and medication. Despite claims about how therapies such as CBT, behavioural activation, IPT, or family therapy may work, the reality is that many of the elements included in these therapies are the same as elements described in all the other effective therapies (sometimes referred to as empirically supported therapies). The shared elements include forming a positive working alliance with the depressed person, sharing the model and the plan for therapy with the patient from day one, and helping the patient engage in active problem-solving, etc. Given the degree of overlap, it is hard to make a real case for using one empirically supported therapy instead of another. Also, there are few predictors (besides symptom severity and personal preference) that consistently show who will respond to one of these therapies rather than to medication. […] One of the reasons for some scepticism about the value of therapies for treating depression is that it has proved difficult to demonstrate exactly what mediates the benefits of these interventions. […] despite the enthusiasm for mindfulness, there were fewer than twenty high-quality research trials on its use in adults with depression by the end of 2015 and most of these studies had fewer than 100 participants. […] exercise improves the symptoms of depression compared to no treatment at all, but the currently available studies on this topic are less than ideal (with many problems in the design of the study or sample of participants included in the clinical trial). 
[…] Exercise is likely to be a better option for those individuals whose mood improves from participating in the experience, rather than someone who is so depressed that they feel further undermined by the process or feel guilty about ‘not trying hard enough’ when they attend the programme.”

“Research […] indicates that treatment is important and a study from the USA in 2005 showed that those who took the prescribed antidepressant medications had a 20 per cent lower rate of absenteeism than those who did not receive treatment for their depression. Absence from work is only one half of the depression–employment equation. In recent times, a new concept ‘presenteeism’ has been introduced to try to describe the problem of individuals who are attending their place of work but have reduced efficiency (usually because their functioning is impaired by illness). As might be imagined, presenteeism is a common issue in depression and a study in the USA in 2007 estimated that a depressed person will lose 5–8 hours of productive work every week because the symptoms they experience directly or indirectly impair their ability to complete work-related tasks. For example, depression was associated with reduced productivity (due to lack of concentration, slowed physical and mental functioning, loss of confidence), and impaired social functioning”.

“Health economists do not usually restrict their estimates of the cost of a disorder simply to the funds needed for treatment (i.e. the direct health and social care costs). A comprehensive economic assessment also takes into account the indirect costs. In depression these will include costs associated with employment issues (e.g. absenteeism and presenteeism; sickness benefits), costs incurred by the patient’s family or significant others (e.g. associated with time away from work to care for someone), and costs arising from premature death such as depression-related suicides (so-called mortality costs). […] Studies from around the world consistently demonstrate that the direct health care costs of depression are dwarfed by the indirect costs. […] Interestingly, absenteeism is usually estimated to be about one-quarter of the costs of presenteeism.”
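The cost decomposition described above can be made concrete with a toy calculation. All figures below are entirely hypothetical illustrations of my own; the only relationship borrowed from the text is that absenteeism costs are roughly one-quarter of presenteeism costs:

```python
# Hypothetical annual per-person costs (arbitrary currency units)
presenteeism = 4000.0           # reduced on-the-job productivity
absenteeism = presenteeism / 4  # the roughly 1:4 ratio quoted in the text
direct_care = 1200.0            # direct health and social care costs (hypothetical)
mortality_and_carer = 800.0     # other indirect costs: carer time, mortality (hypothetical)

indirect = presenteeism + absenteeism + mortality_and_carer
print(round(indirect / direct_care, 1))  # 4.8 -- indirect costs dwarf direct costs
```

The point of the exercise is structural rather than numerical: even with generous direct-care spending, the indirect components dominate the total once presenteeism is counted.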

Jakob Klaesi. António Egas Moniz. Walter Jackson Freeman II.
Electroconvulsive therapy.
Vagal nerve stimulation.
Chlorpromazine. Imipramine. Tricyclic antidepressant. MAOIs. SSRIs. John Cade. Mogens Schou. Lithium carbonate.
Psychoanalysis. CBT.
Thomas Szasz.
Initial Severity and Antidepressant Benefits: A Meta-Analysis of Data Submitted to the Food and Drug Administration (Kirsch et al.).
Chronobiology. Chronobiotics. Melatonin.
Eric Kandel. BDNF.
The global burden of disease (Murray & Lopez) (the author discusses some of the data included in that publication).


January 8, 2018 Posted by | Books, Health Economics, Medicine, Pharmacology, Psychiatry, Psychology

Endocrinology (part I – thyroid)

Handbooks like these are difficult to blog, but I decided to try anyway. The first 100 pages or so of the book deals with the thyroid gland. Some observations of interest below.

“Biosynthesis of thyroid hormones requires iodine as substrate. […] The thyroid is the only source of T4. The thyroid secretes 20% of circulating T3; the remainder is generated in extraglandular tissues by the conversion of T4 to T3 […] In the blood, T4 and T3 are almost entirely bound to plasma proteins. […] Only the free or unbound hormone is available to tissues. The metabolic state correlates more closely with the free than the total hormone concentration in the plasma. The relatively weak binding of T3 accounts for its more rapid onset and offset of action. […] The levels of thyroid hormone in the blood are tightly controlled by feedback mechanisms involved in the hypothalamo-pituitary-thyroid (HPT) axis”.

“Annual check of thyroid function [is recommended] in the annual review of diabetic patients.”

“The term thyrotoxicosis denotes the clinical, physiological, and biochemical findings that result when the tissues are exposed to excess thyroid hormone. It can arise in a variety of ways […] It is essential to establish a specific diagnosis […] The term hyperthyroidism should be used to denote only those conditions in which hyperfunction of the thyroid leads to thyrotoxicosis. […] [Thyrotoxicosis is] 10 x more common in ♀ than in ♂ in the UK. Prevalence is approximately 2% of the ♀ population. […] Subclinical hyperthyroidism is defined as low serum thyrotropin (TSH) concentration in patients with normal levels of T4 and T3. Subtle symptoms and signs of thyrotoxicosis may be present. […] There is epidemiological evidence that subclinical hyperthyroidism is a risk factor for the development of atrial fibrillation or osteoporosis.1 Meta-analyses suggest a 41% increase in all-cause mortality.2 […] Thyroid crisis [storm] represents a rare, but life-threatening, exacerbation of the manifestations of thyrotoxicosis. […] the condition is associated with a significant mortality (30-50%, depending on series) […]. Thyroid crisis develops in hyperthyroid patients who: *Have an acute infection. *Undergo thyroidal or non-thyroidal surgery or (rarely) radioiodine treatment.”

“[Symptoms and signs of hyperthyroidism (all forms):] *Hyperactivity, irritability, altered mood, insomnia. *Heat intolerance, sweating. […] *Fatigue, weakness. *Dyspnoea. *Weight loss with ↑ appetite (weight gain in 10% of patients). *Pruritus. […] *Thirst and polyuria. *Oligomenorrhoea or amenorrhoea, loss of libido, erectile dysfunction (50% of men may have sexual dysfunction). *Warm, moist skin. […] *Hair loss. *Muscle weakness and wasting. […] Manifestations of Graves’s disease (in addition to [those factors already mentioned include:]) *Diffuse goitre. *Ophthalmopathy […] A feeling of grittiness and discomfort in the eye. *Retrobulbar pressure or pain, eyelid lag or retraction. […] *Exophthalmos (proptosis) […] Optic neuropathy.”

“Two alternative regimens are practiced for Graves’s disease: dose titration and block and replace. […] The [primary] aim [of the dose titration regime] is to achieve a euthyroid state with relatively high drug doses and then to maintain euthyroidism with a low stable dose. […] This regimen has a lower rate of side effects than the block and replace regimen. The treatment is continued for 18 months, as this appears to represent the length of therapy which is generally optimal in producing the remission rate of up to 40% at 5 years after discontinuing therapy. *Relapses are most likely to occur within the first year […] Men have a higher recurrence rate than women. *Patients with multinodular goitres and thyrotoxicosis always relapse on cessation of antithyroid medication, and definitive treatment with radioiodine or surgery is usually advised. […] Block and replace regimen *After achieving a euthyroid state on carbimazole alone, carbimazole at a dose of 40mg daily, together with T4 at a dose of 100 micrograms, can be prescribed. This is usually continued for 6 months. *The main advantages are fewer hospital visits for checks of thyroid function and shorter duration of treatment.”

“Radioiodine treatment[:] Indications: *Definitive treatment of multinodular goitre or adenoma. *Relapsed Graves’s disease. […] *Radioactive iodine-131 is administered orally as a capsule or a drink. *There is no universal agreement regarding the optimal dose. […] The recommendation is to administer enough radioiodine to achieve euthyroidism, with the acceptance of a moderate rate of hypothyroidism, e.g. 15-20% at 2 years. […] In general, 50-70% of patients have restored normal thyroid function within 6-8 weeks of receiving radioiodine. […] The prevalence of hypothyroidism is about 50% at 10 years and continues to increase thereafter.”

“Thyrotoxicosis occurs in about 0.2% of pregnancies. […] *Diagnosis of thyrotoxicosis during pregnancy may be difficult or delayed. *Physiological changes of pregnancy are similar to those of hyperthyroidism. […] 5-7% of ♀ develop biochemical evidence of thyroid dysfunction after delivery. An ↑ incidence is seen in patients with type I diabetes mellitus (25%) […] One-third of affected ♀ with post-partum thyroiditis develop symptoms of hypothyroidism […] There is a suggestion of an ↑ risk of post-partum depression in those with hypothyroidism. […] *The use of iodides and radioiodine is contraindicated in pregnancy. *Surgery is rarely performed in pregnancy. It is reserved for patients not responding to ATDs [antithyroid drugs, US]. […] Hyperthyroid ♀ who want to conceive should attain euthyroidism before conception since uncontrolled hyperthyroidism is associated with an ↑ risk of congenital abnormalities (stillbirth and cranial synostosis are the most serious complications).”

“Nodular thyroid disease denotes the presence of single or multiple palpable or non-palpable nodules within the thyroid gland. […] *Clinically apparent thyroid nodules are evident in ~5% of the UK population. […] Thyroid nodules always raise the concern of cancer, but <5% are cancerous. […] clinically detectable thyroid cancer is rare. It accounts for <1% of all cancer and <0.5% of cancer deaths. […] Thyroid cancers are commonest in adults aged 40-50 and rare in children [incidence of 0.2-5 per million per year] and adolescents. […] History should concentrate on: *An enlarging thyroid mass. *A previous history of radiation […] family history of thyroid cancer. *The development of hoarseness or dysphagia. *Nodules are more likely to be malignant in patients <20 or >60 years. *Thyroid nodules are more common in ♀ but more likely to be malignant in ♂. […] Physical findings suggestive of malignancy include a firm or hard, non-tender nodule, a recent history of enlargement, fixation to adjacent tissue, and the presence of regional lymphadenopathy. […] Thyroid nodules may be described as adenomas if the follicular cell differentiation is enclosed within a capsule; adenomatous when the lesions are circumscribed but not encapsulated. *The most common benign thyroid tumours are the nodules of multinodular goitres (colloid nodules) and follicular adenomas. […] Autonomously functioning thyroid adenomas (or nodules) are benign tumours that produce thyroid hormone. Clinically, they present as a single nodule that is hyperfunctioning […], sometimes causing hyperthyroidism.”

“Inflammation of the thyroid gland often leads to a transient thyrotoxicosis followed by hypothyroidism. Overt hypothyroidism caused by autoimmunity has two main forms: Hashimoto’s (goitrous) thyroiditis and atrophic thyroiditis. […] Hashimoto’s thyroiditis [is] [c]haracterized by a painless, variable-sized goitre with rubbery consistency and an irregular surface. […] Occasionally, patients present with thyrotoxicosis in association with a thyroid gland that is unusually firm […] Atrophic thyroiditis [p]robably indicates end-stage thyroid disease. These patients do not have goitre and are antibody [positive]. […] The long-term prognosis of patients with chronic thyroiditis is good because hypothyroidism can easily be corrected with T4 and the goitre is usually not of sufficient size to cause local symptoms. […] there is an association between this condition and thyroid lymphoma (rare, but ↑ risk by a factor of 70).”

“Hypothyroidism results from a variety of abnormalities that cause insufficient secretion of thyroid hormones […] The commonest cause is autoimmune thyroid disease. Myxoedema is severe hypothyroidism [which leads to] thickening of the facial features and a doughy induration of the skin. [The clinical picture of hypothyroidism:] *Insidious, non-specific onset. *Fatigue, lethargy, constipation, cold intolerance, muscle stiffness, cramps, carpal tunnel syndrome […] *Slowing of intellectual and motor activities. *↓ appetite and weight gain. *Dry skin; hair loss. […] [The term] [s]ubclinical hypothyroidism […] is used to denote raised TSH levels in the presence of normal concentrations of free thyroid hormones. *Treatment is indicated if the biochemistry is sustained in patients with a past history of radioiodine treatment for thyrotoxicosis or [positive] thyroid antibodies as, in these situations, progression to overt hypothyroidism is almost inevitable […] There is controversy over the advantages of T4 treatment in patients with [negative] thyroid antibodies and no previous radioiodine treatment. *If treatment is not given, follow-up with annual thyroid function tests is important. *There is no generally accepted consensus of when patients should receive treatment. […] *Thyroid hormone replacement with synthetic levothyroxine remains the treatment of choice in primary hypothyroidism. […] levothyroxine has a narrow therapeutic index […] Elevated TSH despite thyroxine replacement is common, most usually due to lack of compliance.”
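The biochemical patterns running through these excerpts (subclinical hyperthyroidism = low TSH with normal free hormones; subclinical hypothyroidism = raised TSH with normal free hormones; overt disease when free T4 is abnormal as well) can be collected into one small decision rule. A simplified Python sketch assuming primary thyroid disease; the reference ranges are illustrative assumptions of mine, not the handbook's:

```python
TSH_LOW, TSH_HIGH = 0.4, 4.0    # mU/L, assumed reference range
FT4_LOW, FT4_HIGH = 10.0, 25.0  # pmol/L, assumed reference range

def classify_thyroid(tsh, free_t4):
    """Classify primary thyroid status from TSH and free T4."""
    if tsh > TSH_HIGH:  # raised TSH: hypothyroid side
        return "overt hypothyroidism" if free_t4 < FT4_LOW else "subclinical hypothyroidism"
    if tsh < TSH_LOW:   # suppressed TSH: hyperthyroid side
        return "overt hyperthyroidism" if free_t4 > FT4_HIGH else "subclinical hyperthyroidism"
    return "euthyroid"

print(classify_thyroid(8.5, 14.0))  # subclinical hypothyroidism: raised TSH, normal free T4
print(classify_thyroid(0.1, 30.0))  # overt hyperthyroidism
```

As the text notes, a biochemical label is only the start: whether a subclinical picture warrants treatment depends on antibody status, prior radioiodine treatment, and whether the abnormality is sustained on repeat testing.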



January 8, 2018 Posted by | Books, Cancer/oncology, Diabetes, Medicine, Ophthalmology, Pharmacology

Depression (I)

Below I have added some quotes and links related to the first half of this book.


“One of the problems encountered in any discussion of depression is that the word is used to mean different things by different people. For many members of the public, the term depression is used to describe normal sadness. In clinical practice, the term depression can be used to describe negative mood states, which are symptoms that can occur in a range of illnesses (e.g. individuals with psychosis may also report depressed mood). However, the term depression can also be used to refer to a diagnosis. When employed in this way it is meant to indicate that a cluster of symptoms have all occurred together, with the most common changes being in mood, thoughts, feelings, and behaviours. Theoretically, all these symptoms need to be present to make a diagnosis of depressive disorder.”

“The absence of any laboratory tests in psychiatry means that the diagnosis of depression relies on clinical judgement and the recognition of patterns of symptoms. There are two main problems with this. First, the diagnosis represents an attempt to impose a ‘present/absent’ or ‘yes/no’ classification on a problem that, in reality, is dimensional and varies in duration and severity. Also, many symptoms are likely to show some degree of overlap with pre-existing personality traits. Taken together, this means there is an ongoing concern about the point at which depression or depressive symptoms should be regarded as a mental disorder, that is, where to situate the dividing line on a continuum from health to normal sadness to illness. Second, for many years, there was a lack of consistent agreement on what combination of symptoms and impaired functioning would benefit from clinical intervention. This lack of consensus on the threshold for treatment, or for deciding which treatment to use, is a major source of problems to this day. […] A careful inspection of the criteria for identifying a depressive disorder demonstrates that diagnosis is mainly reliant on the cross-sectional assessment of the way the person presents at that moment in time. It is also emphasized that the current presentation should represent a change from the person’s usual state, as this step helps to begin the process of differentiating illness episodes from long-standing personality traits. Clarifying the longitudinal history of any lifetime problems can help also to establish, for example, whether the person has previously experienced mania (in which case their diagnosis will be revised to bipolar disorder), or whether they have a history of chronic depression, with persistent symptoms that may be less severe but are nevertheless very debilitating (this is usually called dysthymia). 
In addition, it is important to assess whether the person has another mental or physical disorder as well as these may frequently co-occur with depression. […] In the absence of diagnostic tests, the current classifications still rely on expert consensus regarding symptom profiles.”

“In summary, for a classification system to have utility it needs to be reliable and valid. If a diagnosis is reliable doctors will all make the same diagnosis when they interview patients who present with the same set of symptoms. If a diagnosis has predictive validity it means that it is possible to forecast the future course of the illness in individuals with the same diagnosis and to anticipate their likely response to different treatments. For many decades, the lack of reliability so undermined the credibility of psychiatric diagnoses that most of the revisions of the classification systems between the 1950s and 2010 focused on improving diagnostic reliability. However, insufficient attention has been given to validity and until this is improved, the criteria used for diagnosing depressive disorders will continue to be regarded as somewhat arbitrary […]. Weaknesses in the systems for the diagnosis and classification of depression are frequently raised in discussions about the existence of depression as a separate entity and concerns about the rationale for treatment. It is notable that general medicine uses a similar approach to making decisions regarding the health–illness dimension. For example, levels of blood pressure exist on a continuum. However, when an individual’s blood pressure measurement reaches a predefined level, it is reported that the person now meets the criteria specified for the diagnosis of hypertension (high blood pressure). Depending on the degree of variation from the norm or average values for their age and gender, the person will be offered different interventions. […] This approach is widely accepted as a rational approach to managing this common physical health problem, yet a similar ‘stepped care’ approach to depression is often derided.”

“There are few differences in the nature of the symptoms experienced by men and women who are depressed, but there may be gender differences in how their distress is expressed or how they react to the symptoms. For example, men may be more likely to become withdrawn rather than to seek support from or confide in other people, they may become more outwardly hostile and have a greater tendency to use alcohol to try to cope with their symptoms. It is also clear that it may be more difficult for men to accept that they have a mental health problem and they are more likely to deny it, delay seeking help, or even to refuse help. […] becoming unemployed, retirement, and loss of a partner and change of social roles can all be risk factors for depression in men. In addition, chronic physical health problems or increasing disability may also act as a precipitant. The relationship between physical illness and depression is complex. When people are depressed they may subjectively report that their general health is worse than that of other people; likewise, people who are ill or in pain may react by becoming depressed. Certain medical problems such as an under-functioning thyroid gland (hypothyroidism) may produce symptoms that are virtually indistinguishable from depression. Overall, the rate of depression in individuals with a chronic physical disease is almost three times higher than those without such problems.”

“A long-standing problem in gathering data about suicide is that many religions and cultures regard it as a sin or an illegal act. This has had several consequences. For example, coroners and other public officials often strive to avoid identifying suspicious deaths as a suicide, meaning that the actual rates of suicide may be under-reported.”

“In Beck’s [depression] model, it is proposed that an individual’s interpretations of events or experiences are encapsulated in automatic thoughts, which arise immediately following the event or even at the same time. […] Beck suggested that these automatic thoughts occur at a conscious level and can be accessible to the individual, although they may not be actively aware of them because they are not concentrating on them. The appraisals that occur in specific situations largely determine the person’s emotional and behavioural responses […] [I]n depression, the content of a person’s thinking is dominated by negative views of themselves, their world, and their future (the so-called negative cognitive triad). Beck’s theory suggests that the themes included in the automatic thoughts are generated via the activation of underlying cognitive structures, called dysfunctional beliefs (or cognitive schemata). All individuals develop a set of rules or ‘silent assumptions’ derived from early learning experiences. Whilst automatic thoughts are momentary, event-specific cognitions, the underlying beliefs operate across a variety of situations and are more permanent. Most of the underlying beliefs held by the average individual are quite adaptive and guide our attempts to act and react in a considered way. Individuals at risk of depression are hypothesized to hold beliefs that are maladaptive and can have an unhelpful influence on them. […] faulty information processing contributes to further deterioration in a person’s mood, which sets up a vicious cycle with more negative mood increasing the risk of negative interpretations of day-to-day life experiences and these negative cognitions worsening the depressed mood. Beck suggested that the underlying beliefs that render an individual vulnerable to depression may be broadly categorized into beliefs about being helpless or unlovable. 
[…] Beliefs about ‘the self’ seem especially important in the maintenance of depression, particularly when connected with low or variable self-esteem.”

“[U]nidimensional models, such as the monoamine hypothesis or the social origins of depression model, are important building blocks for understanding depression. However, in reality there is no one cause and no single pathway to depression and […] multiple factors increase vulnerability to depression. Whether or not someone at risk of depression actually develops the disorder is partly dictated by whether they are exposed to certain types of life events, the perceived level of threat or distress associated with those events (which in turn is influenced by cognitive and emotional reactions and temperament), their ability to cope with these experiences (their resilience or adaptability under stress), and the functioning of their biological stress-sensitivity systems (including the thresholds for switching on their body’s stress responses).”

Some links:

Humorism. Marsilio Ficino. Thomas Willis. William Cullen. Philippe Pinel. Benjamin Rush. Emil Kraepelin. Karl Leonhard. Sigmund Freud.
Relation between depression and sociodemographic factors.
Bipolar disorder.
Postnatal depression. Postpartum psychosis.
Epidemiology of suicide. Durkheim’s typology of suicide.
Suicide methods.
Neuroendocrine hypothesis of depression. HPA (Hypothalamic–Pituitary–Adrenal) axis.
Cognitive behavioral therapy.
Coping responses.
Brown & Harris (1978).


January 5, 2018 Posted by | Books, Medicine, Psychiatry, Psychology | Leave a comment

Random stuff

I have almost stopped writing posts like these, which has resulted in the accumulation of a very large backlog of links and studies I figured I might like to blog at some point. This post is mainly an attempt to deal with that backlog – I won’t cover the material in much detail.

i. Do Bullies Have More Sex? The answer seems to be a qualified yes. A few quotes:

“Sexual behavior during adolescence is fairly widespread in Western cultures (Zimmer-Gembeck and Helfland 2008) with nearly two thirds of youth having had sexual intercourse by the age of 19 (Finer and Philbin 2013). […] Bullying behavior may aid in intrasexual competition and intersexual selection as a strategy when competing for mates. In line with this contention, bullying has been linked to having a higher number of dating and sexual partners (Dane et al. 2017; Volk et al. 2015). This may be one reason why adolescence coincides with a peak in antisocial or aggressive behaviors, such as bullying (Volk et al. 2006). However, not all adolescents benefit from bullying. Instead, bullying may only benefit adolescents with certain personality traits who are willing and able to leverage bullying as a strategy for engaging in sexual behavior with opposite-sex peers. Therefore, we used two independent cross-sectional samples of older and younger adolescents to determine which personality traits, if any, are associated with leveraging bullying into opportunities for sexual behavior.”

“…bullying by males signals the ability to provide good genes, material resources, and protect offspring (Buss and Shackelford 1997; Volk et al. 2012) because bullying others is a way of displaying attractive qualities such as strength and dominance (Gallup et al. 2007; Reijntjes et al. 2013). As a result, this makes bullies attractive sexual partners to opposite-sex peers while simultaneously suppressing the sexual success of same-sex rivals (Gallup et al. 2011; Koh and Wong 2015; Zimmer-Gembeck et al. 2001). Females may denigrate other females, targeting their appearance and sexual promiscuity (Leenaars et al. 2008; Vaillancourt 2013), which are two qualities relating to male mate preferences. Consequently, derogating these qualities lowers a rival’s appeal as a mate and also intimidates or coerces rivals into withdrawing from intrasexual competition (Campbell 2013; Dane et al. 2017; Fisher and Cox 2009; Vaillancourt 2013). Thus, males may use direct forms of bullying (e.g., physical, verbal) to facilitate intersexual selection (i.e., appear attractive to females), while females may use relational bullying to facilitate intrasexual competition, by making rivals appear less attractive to males.”

The study relies on the use of self-report data, which I find very problematic – so I won’t go into the results here. I’m not quite clear on how those studies mentioned in the discussion ‘have found self-report data [to be] valid under conditions of confidentiality’ – and I remain skeptical. You’ll usually want data from independent observers (e.g. teacher or peer observations) when analyzing these kinds of things. Note in the context of the self-report data problem that if there’s a strong stigma associated with being bullied (there often is, or bullying wouldn’t work as well), asking people if they have been bullied is not much better than asking people if they’re bullying others.

ii. Some topical advice that some people might soon regret not having followed, from the wonderful Things I Learn From My Patients thread:

“If you are a teenage boy experimenting with fireworks, do not empty the gunpowder from a dozen fireworks and try to mix it in your mother’s blender. But if you do decide to do that, don’t hold the lid down with your other hand and stand right over it. This will result in the traumatic amputation of several fingers, burned and skinned forearms, glass shrapnel in your face, and a couple of badly scratched corneas as a start. You will spend months in rehab and never be able to use your left hand again.”

iii. I haven’t talked about the AlphaZero-Stockfish match, but I was of course aware of it and did read a bit about that stuff. Here’s a reddit thread where one of the Stockfish programmers answers questions about the match. A few quotes:

“Which of the two is stronger under ideal conditions is, to me, neither particularly interesting (they are so different that it’s kind of like comparing the maximum speeds of a fish and a bird) nor particularly important (since there is only one of them that you and I can download and run anyway). What is super interesting is that we have two such radically different ways to create a computer chess playing entity with superhuman abilities. […] I don’t think there is anything to learn from AlphaZero that is applicable to Stockfish. They are just too different, you can’t transfer ideas from one to the other.”

“Based on the 100 games played, AlphaZero seems to be about 100 Elo points stronger under the conditions they used. The current development version of Stockfish is something like 40 Elo points stronger than the version used in Google’s experiment. There is a version of Stockfish translated to hand-written x86-64 assembly language that’s about 15 Elo points stronger still. This adds up to roughly half the Elo difference between AlphaZero and Stockfish shown in Google’s experiment.”
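For readers unfamiliar with the Elo scale, a rating difference maps to an expected score per game via the standard logistic formula E = 1/(1 + 10^(−d/400)). A minimal sketch (mine, not the programmer’s) of what the differences quoted above would mean in those terms:

```python
# Standard Elo expected-score formula: E = 1 / (1 + 10**(-d/400)),
# where d is the rating difference in Elo points.
def expected_score(d):
    return 1.0 / (1.0 + 10.0 ** (-d / 400.0))

# AlphaZero ~100 Elo stronger under the match conditions:
print(round(expected_score(100), 3))      # ~0.64 points per game

# Dev Stockfish (+40) plus the assembly port (+15) would close ~55 Elo,
# i.e. roughly half of the 100-point gap quoted above:
print(round(expected_score(100 - 55), 3))
```

Note that this is only the conversion convention behind Elo differences; it says nothing about whether the 100-point estimate from a 100-game sample is precise.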

“It seems that Stockfish was playing with only 1 GB for transposition tables (the area of memory used to store data about the positions previously encountered in the search), which is way too little when running with 64 threads.” [I seem to recall a comp sci guy observing elsewhere that this was less than what was available to his smartphone version of Stockfish, but I didn’t bookmark that comment].

“The time control was a very artificial fixed 1 minute/move. That’s not how chess is traditionally played. Quite a lot of effort has gone into Stockfish’s time management. It’s pretty good at deciding when to move quickly, and when to spend a lot of time on a critical decision. In a fixed time per move game, it will often happen that the engine discovers that there is a problem with the move it wants to play just before the time is out. In a regular time control, it would then spend extra time analysing all alternative moves and trying to find a better one. When you force it to move after exactly one minute, it will play the move it already knows is bad. There is no doubt that this will cause it to lose many games it would otherwise have drawn.”

iv. Thrombolytics for Acute Ischemic Stroke – no benefit found.

“Thrombolysis has been rigorously studied in >60,000 patients for acute thrombotic myocardial infarction, and is proven to reduce mortality. It is theorized that thrombolysis may similarly benefit ischemic stroke patients, though a much smaller number (8120) has been studied in relevant, large scale, high quality trials thus far. […] There are 12 such trials [1–12]. Despite the temptation to pool these data the studies are clinically heterogeneous. […] Data from multiple trials must be clinically and statistically homogenous to be validly pooled [14]. Large thrombolytic studies demonstrate wide variations in anatomic stroke regions, small- versus large-vessel occlusion, clinical severity, age, vital sign parameters, stroke scale scores, and times of administration. […] Examining each study individually is therefore, in our opinion, both more valid and more instructive. […] Two of twelve studies suggest a benefit […] In comparison, twice as many studies showed harm and these were stopped early. This early stoppage means that the number of subjects in studies demonstrating harm would have included over 2400 subjects based on originally intended enrollments. Pooled analyses are therefore missing these phantom data, which would have further eroded any aggregate benefits. In their absence, any pooled analysis is biased toward benefit. Despite this, there remain five times as many trials showing harm or no benefit (n=10) as those concluding benefit (n=2), and 6675 subjects in trials demonstrating no benefit compared to 1445 subjects in trials concluding benefit.”

“Thrombolytics for ischemic stroke may be harmful or beneficial. The answer remains elusive. We struggled therefore, debating between a ‘yellow’ or ‘red’ light for our recommendation. However, over 60,000 subjects in trials of thrombolytics for coronary thrombosis suggest a consistent beneficial effect across groups and subgroups, with no studies suggesting harm. This consistency was found despite a very small mortality benefit (2.5%), and a very narrow therapeutic window (1% major bleeding). In comparison, the variation in trial results of thrombolytics for stroke and the daunting but consistent adverse effect rate caused by ICH suggested to us that thrombolytics are dangerous unless further study exonerates their use.”

“There is a Cochrane review that pooled estimates of effect [17]. We do not endorse this choice because of clinical heterogeneity. However, we present the NNT’s from the pooled analysis for the reader’s benefit. The Cochrane review suggested a 6% reduction in disability […] with thrombolytics. This would mean that 17 were treated for every 1 avoiding an unfavorable outcome. The review also noted a 1% increase in mortality (1 in 100 patients die because of thrombolytics) and a 5% increase in nonfatal intracranial hemorrhage (1 in 20), for a total of 6% harmed (1 in 17 suffers death or brain hemorrhage).”
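The number-needed-to-treat figures quoted from the Cochrane review are simply the reciprocal of the absolute risk difference (NNT = 1/ARR). A quick sketch of the arithmetic behind those numbers:

```python
def nnt(absolute_risk_difference):
    """Number needed to treat (or harm) = 1 / absolute risk difference,
    rounded to the nearest whole patient."""
    return round(1.0 / absolute_risk_difference)

print(nnt(0.06))  # 6% less disability -> treat 17 for 1 to benefit
print(nnt(0.01))  # 1% excess mortality -> 1 in 100 harmed (an NNH)
print(nnt(0.05))  # 5% excess nonfatal intracranial hemorrhage -> 1 in 20
```

The same reciprocal applied to the combined 6% harm rate gives the “1 in 17 suffers death or brain hemorrhage” figure in the quote.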

v. Suicide attempts in Asperger Syndrome. An interesting finding: “Over 35% of individuals with AS reported that they had attempted suicide in the past.”

Related: Suicidal ideation and suicide plans or attempts in adults with Asperger’s syndrome attending a specialist diagnostic clinic: a clinical cohort study.

“374 adults (256 men and 118 women) were diagnosed with Asperger’s syndrome in the study period. 243 (66%) of 367 respondents self-reported suicidal ideation, 127 (35%) of 365 respondents self-reported plans or attempts at suicide, and 116 (31%) of 368 respondents self-reported depression. Adults with Asperger’s syndrome were significantly more likely to report lifetime experience of suicidal ideation than were individuals from a general UK population sample (odds ratio 9·6 [95% CI 7·6–11·9], p<0·0001), people with one, two, or more medical illnesses (p<0·0001), or people with psychotic illness (p=0·019). […] Lifetime experience of depression (p=0·787), suicidal ideation (p=0·164), and suicide plans or attempts (p=0·06) did not differ significantly between men and women […] Individuals who reported suicide plans or attempts had significantly higher Autism Spectrum Quotient scores than those who did not […] Empathy Quotient scores and ages did not differ between individuals who did or did not report suicide plans or attempts (table 4). Patients with self-reported depression or suicidal ideation did not have significantly higher Autism Spectrum Quotient scores, Empathy Quotient scores, or age than did those without depression or suicidal ideation”.

The fact that people with Asperger’s are more likely to be depressed and to contemplate suicide is consistent with previous observations that they’re also more likely to die from suicide – for example, a paper I blogged a while back found that in that particular (large Swedish population-based cohort) study, people with ASD were more than 7 times as likely to die from suicide as were the comparable controls.
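For readers unfamiliar with the odds-ratio figure quoted above (9·6 [95% CI 7·6–11·9]), the statistic comes from a 2×2 table. A minimal sketch of the computation – the case counts (243 of 367 reporting ideation) are from the quoted abstract, but the comparison-group counts below are made up for illustration, so the confidence interval will not match the paper’s:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table, with a 95% Wald confidence interval.
    a/b = group 1 with/without the outcome, c/d = group 2 likewise."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Asperger's group: 243 of 367 reported suicidal ideation (from the study).
# General-population group: hypothetical counts, for illustration only.
or_, lo, hi = odds_ratio(243, 367 - 243, 170, 1000 - 170)
print(round(or_, 1))  # ~9.6 with these illustrative counts
```

The point of the sketch is just to show what a 9·6 odds ratio means mechanically: the odds of lifetime ideation in the Asperger’s sample were nearly ten times the odds in the comparison sample.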

Also related: Suicidal tendencies hard to spot in some people with autism.

This link has some great graphs and tables of suicide data from the US.

Also autism-related: Increased perception of loudness in autism. This is one of the ‘important ones’ for me personally – I am much more sound-sensitive than are most people.

vi. Early versus Delayed Invasive Intervention in Acute Coronary Syndromes.

“Earlier trials have shown that a routine invasive strategy improves outcomes in patients with acute coronary syndromes without ST-segment elevation. However, the optimal timing of such intervention remains uncertain. […] We randomly assigned 3031 patients with acute coronary syndromes to undergo either routine early intervention (coronary angiography ≤24 hours after randomization) or delayed intervention (coronary angiography ≥36 hours after randomization). The primary outcome was a composite of death, myocardial infarction, or stroke at 6 months. A prespecified secondary outcome was death, myocardial infarction, or refractory ischemia at 6 months. […] Early intervention did not differ greatly from delayed intervention in preventing the primary outcome, but it did reduce the rate of the composite secondary outcome of death, myocardial infarction, or refractory ischemia and was superior to delayed intervention in high-risk patients.”

vii. Some wikipedia links:

Behrens–Fisher problem.
Sailing ship tactics (I figured I had to read up on this if I were to get anything out of the Aubrey-Maturin books).
Anatomical terms of muscle.
Phatic expression (“a phatic expression […] is communication which serves a social function such as small talk and social pleasantries that don’t seek or offer any information of value.”)
Three-domain system.
Beringian wolf (featured).
Subdural hygroma.
Cayley graph.
Schur polynomial.
Solar neutrino problem.
Hadamard product (matrices).
True polar wander.
Newton’s cradle.

viii. Determinant versus permanent (mathematics – technical).

ix. Some years ago I wrote a few English-language posts about various statistical/demographic properties of immigrants living in Denmark, based on numbers from a publication by Statistics Denmark that was only available in Danish; the posts mainly translated the observations included in that publication. I briefly considered doing the same when the 2017 data arrived, but decided against it – writing those posts took a lot of time back then, and it didn’t seem worth the effort. Danish readers might however be interested in having a look at the data, if they haven’t already – here’s a link to the publication Indvandrere i Danmark 2017.

x. A banter blitz session with grandmaster Peter Svidler, who recently became the first Russian ever to win the Russian Chess Championship 8 times. He’s currently in shared second place in the World Rapid Championship after 10 rounds, and he’s now in the top 10 on the live rating list in both classical and rapid – it seems he’s had a very decent year.

xi. I recently discovered Dr. Whitecoat’s blog. The patient encounters are often interesting.


December 28, 2017 Posted by | Astronomy, autism, Biology, Cardiology, Chess, Computer science, History, Mathematics, Medicine, Neurology, Physics, Psychiatry, Psychology, Random stuff, Statistics, Studies, Wikipedia, Zoology | Leave a comment

Analgesia and Procedural Sedation

I didn’t actually like this lecture all that much, in part because I disagree to some extent with the ideas expressed, but these days I try to remember to blog the lectures I watch even when I don’t think they’re great. It’s a short lecture, but why not at least add a comment about urine drug screening and monitoring, or about patient selection/segmentation, when you’re talking about patients whom you’re considering discharging with an opioid prescription? Recommending acupuncture in a pain management context? Etc.

Anyway, below a few links to stuff related to the coverage:

Pain Management in the Emergency Department.
WHO analgesic ladder.
Nonsteroidal anti-inflammatory drug.
Fentanyl (“This medication should not be used to treat pain other than chronic cancer pain, especially short-term pain such as migraines or other headaches, pain from an injury, or pain after a medical or dental procedure.” …to put it mildly, that’s not the impression you get from watching this lecture…)
Parenteral opioids in emergency medicine – A systematic review of efficacy and safety.
Procedural Sedation (medscape).


December 22, 2017 Posted by | Lectures, Medicine, Pharmacology | Leave a comment

The Periodic Table

“After evolving for nearly 150 years through the work of numerous individuals, the periodic table remains at the heart of the study of chemistry. This is mainly because it is of immense practical benefit for making predictions about all manner of chemical and physical properties of the elements and possibilities for bond formation. Instead of having to learn the properties of the 100 or more elements, the modern chemist, or the student of chemistry, can make effective predictions from knowing the properties of typical members of each of the eight main groups and those of the transition metals and rare earth elements.”

I wasn’t very impressed with this book, but it wasn’t terrible. It didn’t include much that I didn’t already know, and in my opinion it focused excessively on historical aspects. Some of those things were interesting – for example the problems confronting chemists who were trying to work out how best to categorize the chemical elements in the late 19th century, before the discovery of the neutron (the number of protons in the nucleus is not the same thing as the atomic weight of an atom – which was highly relevant because: “when it came to deciding upon the most important criterion for classifying the elements, Mendeleev insisted that atomic weight ordering would tolerate no exceptions”) – but I’d have liked to learn a lot more about e.g. some of the chemical properties of the subgroups, instead of just revisiting stuff I’d learned earlier from other publications in the series. That said, I assume people who are new to chemistry – or who have forgotten a lot and would like to rectify this – might feel differently about the book and the way it covers the material. Either way, I don’t think this is one of the better publications in the physics/chemistry categories of this OUP series.

Some quotes and links below.

“Lavoisier held that an element should be defined as a material substance that has yet to be broken down into any more fundamental components. In 1789, Lavoisier published a list of 33 simple substances, or elements, according to this empirical criterion. […] the discovery of electricity enabled chemists to isolate many of the more reactive elements, which, unlike copper and iron, could not be obtained by heating their ores with charcoal (carbon). There have been a number of major episodes in the history of chemistry when half a dozen or so elements were discovered within a period of a few years. […] Following the discovery of radioactivity and nuclear fission, yet more elements were discovered. […] Today, we recognize about 90 naturally occurring elements. Moreover, an additional 25 or so elements have been artificially synthesized.”

“Chemical analogies between elements in the same group are […] of great interest in the field of medicine. For example, the element beryllium sits at the top of group 2 of the periodic table and above magnesium. Because of the similarity between these two elements, beryllium can replace the element magnesium that is essential to human beings. This behaviour accounts for one of the many ways in which beryllium is toxic to humans. Similarly, the element cadmium lies directly below zinc in the periodic table, with the result that cadmium can replace zinc in many vital enzymes. Similarities can also occur between elements lying in adjacent positions in rows of the periodic table. For example, platinum lies next to gold. It has long been known that an inorganic compound of platinum called cis-platin can cure various forms of cancer. As a result, many drugs have been developed in which gold atoms are made to take the place of platinum, and this has produced some successful new drugs. […] [R]ubidium […] lies directly below potassium in group 1 of the table. […] atoms of rubidium can mimic those of potassium, and so like potassium can easily be absorbed into the human body. This behaviour is exploited in monitoring techniques, since rubidium is attracted to cancers, especially those occurring in the brain.”

“Each horizontal row represents a single period of the table. On crossing a period, one passes from metals such as potassium and calcium on the left, through transition metals such as iron, cobalt, and nickel, then through some semi-metallic elements like germanium, and on to some non-metals such as arsenic, selenium, and bromine, on the right side of the table. In general, there is a smooth gradation in chemical and physical properties as a period is crossed, but exceptions to this general rule abound […] Metals themselves can vary from soft dull solids […] to hard shiny substances […]. Non-metals, on the other hand, tend to be solids or gases, such as carbon and oxygen respectively. In terms of their appearance, it is sometimes difficult to distinguish between solid metals and solid non-metals. […] The periodic trend from metals to non-metals is repeated with each period, so that when the rows are stacked, they form columns, or groups, of similar elements. Elements within a single group tend to share many important physical and chemical properties, although there are many exceptions.”

“There have been quite literally over 1,000 periodic tables published in print […] One of the ways of classifying the periodic tables that have been published is to consider three basic formats. First of all, there are the originally produced short-form tables published by the pioneers of the periodic table like Newlands, Lothar Meyer, and Mendeleev […] These tables essentially crammed all the then known elements into eight vertical columns or groups. […] As more information was gathered on the properties of the elements, and as more elements were discovered, a new kind of arrangement called the medium-long-form table […] began to gain prominence. Today, this form is almost completely ubiquitous. One odd feature is that the main body of the table does not contain all the elements. […] The ‘missing’ elements are grouped together in what looks like a separate footnote that lies below the main table. This act of separating off the rare earth elements, as they have traditionally been called, is performed purely for convenience. If it were not carried out, the periodic table would appear much wider, 32 elements wide to be precise, instead of 18 elements wide. The 32-wide element format does not lend itself readily to being reproduced on the inside cover of chemistry textbooks or on large wall-charts […] if the elements are shown in this expanded form, as they sometimes are, one has the long-form periodic table, which may be said to be more correct than the familiar medium-long form, in the sense that the sequence of elements is unbroken […] there are many forms of the periodic table, some designed for different uses. Whereas a chemist might favour a form that highlights the reactivity of the elements, an electrical engineer might wish to focus on similarities and patterns in electrical conductivities.”

“The periodic law states that after certain regular but varying intervals, the chemical elements show an approximate repetition in their properties. […] This periodic repetition of properties is the essential fact that underlies all aspects of the periodic system. […] The varying length of the periods of elements and the approximate nature of the repetition has caused some chemists to abandon the term ‘law’ in connection with chemical periodicity. Chemical periodicity may not seem as law-like as most laws of physics. […] A modern periodic table is much more than a collection of groups of elements showing similar chemical properties. In addition to what may be called ‘vertical relationships’, which embody triads of elements, a modern periodic table connects together groups of elements into an orderly sequence. A periodic table consists of a horizontal dimension, containing dissimilar elements, as well as a vertical dimension with similar elements.”

“[I]n modern terms, metals form positive ions by the loss of electrons, while non-metals gain electrons to form negative ions. Such oppositely charged ions combine together to form neutrally charged salts like sodium chloride or calcium bromide. There are further complementary aspects of metals and non-metals. Metal oxides or hydroxides dissolve in water to form bases, while non-metal oxides or hydroxides dissolve in water to form acids. An acid and a base react together in a ‘neutralization’ reaction to form a salt and water. Bases and acids, just like metals and non-metals from which they are formed, are also opposite but complementary.”

“[T]he law of constant proportion, [is] the fact that when two elements combine together, they do so in a constant ratio of their weights. […] The fact that macroscopic samples consist of a fixed ratio by weight of two elements reflects the fact that two particular atoms are combining many times over and, since they have particular masses, the product will also reflect that mass ratio. […] the law of multiple proportions [refers to the fact that] [w]hen one element A combines with another one, B, to form more than one compound, there is a simple ratio between the combining masses of B in the two compounds. For example, carbon and oxygen combine together to form carbon monoxide and carbon dioxide. The weight of combined oxygen in the dioxide is twice as much as the weight of combined oxygen in the monoxide.”

“One of his greatest triumphs, and perhaps the one that he is best remembered for, is Mendeleev’s correct prediction of the existence of several new elements. In addition, he corrected the atomic weights of some elements as well as relocating other elements to new positions within the periodic table. […] But not all of Mendeleev’s predictions were so dramatically successful, a feature that seems to be omitted from most popular accounts of the history of the periodic table. […] he was unsuccessful in as many as nine out of his eighteen published predictions […] some of the elements involved the rare earths which resemble each other very closely and which posed a major challenge to the periodic table for many years to come. […] The discovery of the inert gases at the end of the 19th century [also] represented an interesting challenge to the periodic system […] in spite of Mendeleev’s dramatic predictions of many other elements, he completely failed to predict this entire group of elements (He, Ne, Ar, Kr, Xe, Rn). Moreover, nobody else predicted these elements or even suspected their existence. The first of them to be isolated was argon, in 1894 […] Mendeleev […] could not accept the notion that elements could be converted into different ones. In fact, after the Curies began to report experiments that suggested the breaking up of atoms, Mendeleev travelled to Paris to see the evidence for himself, close to the end of his life. It is not clear whether he accepted this radical new notion even after his visit to the Curie laboratory.”

“While chemists had been using atomic weights to order the elements there had been a great deal of uncertainty about just how many elements remained to be discovered. This was due to the irregular gaps that occurred between the values of the atomic weights of successive elements in the periodic table. This complication disappeared when the switch was made to using atomic number. Now the gaps between successive elements became perfectly regular, namely one unit of atomic number. […] The discovery of isotopes […] came about partly as a matter of necessity. The new developments in atomic physics led to the discovery of a number of new elements such as Ra, Po, Rn, and Ac which easily assumed their rightful places in the periodic table. But in addition, 30 or so more apparent new elements were discovered over a short period of time. These new species were given provisional names like thorium emanation, radium emanation, actinium X, uranium X, thorium X, and so on, to indicate the elements which seemed to be producing them. […] To Soddy, the chemical inseparability [of such elements] meant only one thing, namely that these were two forms, or more, of the same chemical element. In 1913, he coined the term ‘isotopes’ to signify two or more atoms of the same element which were chemically completely inseparable, but which had different atomic weights.”

“The popular view reinforced in most textbooks is that chemistry is nothing but physics ‘deep down’ and that all chemical phenomena, and especially the periodic system, can be developed on the basis of quantum mechanics. […] This is important because chemistry books, especially textbooks aimed at teaching, tend to give the impression that our current explanation of the periodic system is essentially complete. This is just not the case […] the energies of the quantum states for any many-electron atom can be approximately calculated from first principles although there is extremely good agreement with observed energy values. Nevertheless, some global aspects of the periodic table have still not been derived from first principles to this day. […] We know where the periods close because we know that the noble gases occur at elements 2, 10, 18, 36, 54, etc. Similarly, we have a knowledge of the order of orbital filling from observations but not from theory. The conclusion, seldom acknowledged in textbook accounts of the explanation of the periodic table, is that quantum physics only partly explains the periodic table. Nobody has yet deduced the order of orbital filling from the principles of quantum mechanics. […] The situation that exists today is that chemistry, and in particular the periodic table, is regarded as being fully explained by quantum mechanics. Even though this is not quite the case, the explanatory role that the theory continues to play is quite undeniable. But what seems to be forgotten […] is that the periodic table led to the development of many aspects of modern quantum mechanics, and so it is rather short-sighted to insist that only the latter explains the former.”

“[N]uclei with an odd number of protons are invariably more unstable than those with an even number of protons. This difference in stability occurs because protons, like electrons, have a spin of one half and enter into energy orbitals, two by two, with opposite spins. It follows that even numbers of protons frequently produce total spins of zero and hence more stable nuclei than those with unpaired proton spins as occurs in nuclei with odd numbers of protons […] The larger the nuclear charge, the faster the motion of inner shell electrons. As a consequence of gaining relativistic speeds, such inner electrons are drawn closer to the nucleus, and this in turn has the effect of causing greater screening on the outermost electrons which determine the chemical properties of any particular element. It has been predicted that some atoms should behave chemically in a manner that is unexpected from their presumed positions in the periodic table. Relativistic effects thus pose the latest challenge to test the universality of the periodic table. […] The conclusion [however] seems to be that chemical periodicity is a remarkably robust phenomenon.”

Some links:

Periodic table.
History of the periodic table.
Jöns Jacob Berzelius.
Valence (chemistry).
Equivalent weight. Atomic weight. Atomic number.
Rare-earth element. Transuranium element. Glenn T. Seaborg. Island of stability.
Old quantum theory. Quantum mechanics. Electron configuration.
Benjamin Richter. John Dalton. Joseph Louis Gay-Lussac. Amedeo Avogadro. Leopold Gmelin. Alexandre-Émile Béguyer de Chancourtois. John Newlands. Gustavus Detlef Hinrichs. Julius Lothar Meyer. Dmitri Mendeleev. Henry Moseley. Antonius van den Broek.
Diatomic molecule.
Prout’s hypothesis.
Döbereiner’s triads.
Karlsruhe Congress.
Noble gas.
Einstein’s theory of Brownian motion. Jean Baptiste Perrin.
Quantum number. Molecular orbitals. Madelung energy ordering rule.
Gilbert N. Lewis. (“G. N. Lewis is possibly the most significant chemist of the 20th century not to have been awarded a Nobel Prize.”) Irving Langmuir. Niels Bohr. Erwin Schrödinger.
Ionization energy.
Synthetic element.
Alternative periodic tables.
Group 3 element.


December 18, 2017 Posted by | Books, Chemistry, Medicine, Physics | Leave a comment

Occupational Epidemiology (III)

This will be my last post about the book.

Some observations from the final chapters:

“Often there is confusion about the difference between systematic reviews and meta-analyses. A meta-analysis is a quantitative synthesis of two or more studies […] A systematic review is a synthesis of evidence on the effects of an intervention or an exposure which may also include a meta-analysis, but this is not a prerequisite. It may be that the results of the studies which have been included in a systematic review are reported in such a way that it is impossible to synthesize them quantitatively. They can then be reported in a narrative manner.10 However, a meta-analysis always requires a systematic review of the literature. […] There is a long history of debate about the value of meta-analysis for occupational cohort studies or other occupational aetiological studies. In 1994, Shapiro argued that ‘meta-analysis of published non-experimental data should be abandoned’. He reasoned that ‘relative risks of low magnitude (say, less than 2) are virtually beyond the resolving power of the epidemiological microscope because we can seldom demonstrably eliminate all sources of bias’.13 Because the pooling of studies in a meta-analysis increases statistical power, the pooled estimate may easily become significant and thus incorrectly taken as an indication of causality, even though the biases in the included studies may not have been taken into account. Others have argued that the method of meta-analysis is important but should be applied appropriately, taking into account the biases in individual studies.14 […] We believe that the synthesis of aetiological studies should be based on the same general principles as for intervention studies, and the existing methods adapted to the particular challenges of cohort and case-control studies. […] Since 2004, there is a special entity, the Cochrane Occupational Safety and Health Review Group, that is responsible for the preparing and updating of reviews of occupational safety and health interventions […].
There were over 100 systematic reviews on these topics in the Cochrane Library in 2012.”

“The believability of a systematic review’s results depends largely on the quality of the included studies. Therefore, assessing and reporting on the quality of the included studies is important. For intervention studies, randomized trials are regarded as of higher quality than observational studies, and the conduct of the study (e.g. in terms of response rate or completeness of follow-up) also influences quality. A conclusion derived from a few high-quality studies will be more reliable than when the conclusion is based on even a large number of low-quality studies. Some form of quality assessment is nowadays commonplace in intervention reviews but is still often missing in reviews of aetiological studies. […] It is tempting to use quality scores, such as the Jadad scale for RCTs34 and the Downs and Black scale for non-RCT intervention studies35 but these, in their original format, are insensitive to variation in the importance of risk areas for a given research question. The score system may give the same value to two studies (say, 10 out of 12) when one, for example, lacked blinding and the other did not randomize, thus implying that their quality is equal. This would not be a problem if randomization and blinding were equally important for all questions in all reviews, but this is not the case. For RCTs an important development in this regard has been the Cochrane risk of bias tool.36 This is a checklist of six important domains that have been shown to be important areas of bias in RCTs: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, and selective reporting.”

“[R]isks of bias tools developed for intervention studies cannot be used for reviews of aetiological studies without relevant modification. This is because, unlike interventions, exposures are usually more complicated to assess when we want to attribute the outcome to them alone. These scales do not cover all items that may need assessment in an aetiological study, such as confounding and information bias relating to exposures. […] Surprisingly little methodological work has been done to develop validated tools for aetiological epidemiology and most tools in use are not validated,38 […] Two separate checklists, for observational studies of incidence and prevalence and for risk factor assessment, have been developed and validated recently.40 […] Publication and other reporting bias is probably a much bigger issue for aetiological studies than for intervention studies. This is because, for clinical trials, the introduction of protocol registration, coupled with the regulatory system for new medications, has helped in assessing and preventing publication and reporting bias. No such checks exist for observational studies.”

“Most ill health that arises from occupational exposures can also arise from nonoccupational exposures, and the same type of exposure can occur in occupational and non-occupational settings. With the exception of malignant mesothelioma (which is essentially only caused by exposure to asbestos), there is no way to determine which exposure caused a particular disorder, nor where the causative exposure occurred. This means that usually it is not possible to determine the burden just by counting the number of cases. Instead, approaches to estimating this burden have been developed. There are also several ways to define burden and how best to measure it.”

“The population attributable fraction (PAF) is the proportion of cases that would not have occurred in the absence of an occupational exposure. It can be estimated by combining two measures — a risk estimate (usually relative risk (RR) or odds ratio) of the disorder of interest that is associated with exposure to the substance of concern; and an estimate of the proportion of the population exposed to the substance at work (p(E)). This approach has been used in several studies, particularly for estimating cancer burden […] There are several possible equations that can be used to calculate the PAF, depending on the available data […] PAFs cannot in general be combined by summing directly because: (1) summing PAFs for overlapping exposures (i.e. agents to which the same ‘ever exposed’ workers may have been exposed) may give an overall PAF exceeding 100%, and (2) summing disjoint (not concurrently occurring) exposures also introduces upward bias. Strategies to avoid this include partitioning exposed numbers between overlapping exposures […] or estimating only for the ‘dominant’ carcinogen with the highest risk. Where multiple exposures remain, one approach is to assume that the exposures are independent and their joint effects are multiplicative. The PAFs can then be combined to give an overall PAF for that cancer using a product sum. […] Potential sources of bias for PAFs include inappropriate choice of risk estimates, imprecision in the risk estimates and estimates of proportions exposed, inaccurate risk exposure period and latency assumptions, and a lack of separate risk estimates in some cases for women and/or cancer incidence. In addition, a key decision is the choice of which diseases and exposures are to be included.”
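A rough Python sketch of the PAF arithmetic described above (my own illustration, not code from the book; it assumes Levin's formula for a single binary exposure, and the multiplicative-independence assumption mentioned in the quote when combining PAFs):

```python
def paf(p_exposed, rr):
    """Levin's formula: PAF = p(E)(RR - 1) / (1 + p(E)(RR - 1))."""
    excess = p_exposed * (rr - 1.0)
    return excess / (1.0 + excess)

def combined_paf(pafs):
    """Combine PAFs for exposures assumed independent with multiplicative
    joint effects: overall PAF = 1 - product(1 - PAF_i)."""
    remaining = 1.0
    for f in pafs:
        remaining *= 1.0 - f
    return 1.0 - remaining
```

Note for instance that two exposures with PAFs of 10% and 20% give a combined PAF of 28%, not 30%; the product form is what keeps combined estimates below 100%, which naive summing does not guarantee.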

“The British Cancer Burden study is perhaps the most detailed study of occupationally related cancers in that it includes all those relevant carcinogens classified at the end of 2008 […] In the British study the attributable fractions ranged from less than 0.01% to 95% overall, the most important cancer sites for occupational attribution being, for men, mesothelioma (97%), sinonasal (46%), lung (21.1%), bladder (7.1%), and non-melanoma skin cancer (7.1%) and, for women, mesothelioma (83%), sinonasal (20.1%), lung (5.3%), breast (4.6%), and nasopharynx (2.5%). Occupation also contributed 2% or more overall to cancers of the larynx, oesophagus, and stomach, and soft tissue sarcoma with, in addition for men, melanoma of the eye (due to welding), and non-Hodgkin lymphoma. […] The overall results from the occupational risk factors component of the Global Burden of Disease 2010 study illustrate several important aspects of burden studies.14 Of the estimated 850 000 occupationally related deaths worldwide, the top three causes were: (1) injuries (just over a half of all deaths); (2) particulate matter, gases, and fumes leading to COPD; and (3) carcinogens. When DALYs were used as the burden measure, injuries still accounted for the highest proportion (just over one-third), but ergonomic factors leading to low back pain resulted in almost as many DALYs, and both were almost an order of magnitude higher than the DALYs from carcinogens. The difference in relative contributions of the various risk factors between deaths and DALYs arises because of the varying ages of those affected, and the differing chronicity of the resulting conditions. Both measures are valid, but they represent a different aspect of the burden arising from the hazardous exposures […]. 
Both the British and Global Burden of Disease studies draw attention to the important issues of: (1) multiple occupational carcinogens causing specific types of cancer, for example, the British study evaluated 21 lung carcinogens; and (2) specific carcinogens causing several different cancers, for example, IARC now defines asbestos as a group 1 or 2A carcinogen for seven cancer sites. These issues require careful consideration for burden estimation and for prioritizing risk reduction strategies. […] The long latency of many cancers means that estimates of current burden are based on exposures occurring in the past, often much higher than those existing today. […] long latency [also] means that risk reduction measures taken now will take a considerable time to be reflected in reduced disease incidence.”

“Exposures and effects are linked by dynamic processes occurring across time. These processes can often be usefully decomposed into two distinct biological relationships, each with several components: 1. The exposure-dose relationship […] 2. The dose-effect relationship […] These two component relationships are sometimes represented by two different mathematical models: a toxicokinetic model […], and a disease process model […]. Depending on the information available, these models may be relatively simple or highly complex. […] Often the various steps in the disease process do not occur at the same rate, some of these processes are ‘fast’, such as cell killing, while others are ‘slow’, such as damage repair. Frequently a few slow steps in a process become limiting to the overall rate, which sets the temporal pattern for the entire exposure-response relationship. […] It is not necessary to know the full mechanism of effects to guide selection of an exposure-response model or exposure metric. Because of the strong influence of the rate-limiting steps, often it is only necessary to have observations on the approximate time course of effects. This is true whether the effects appear to be reversible or irreversible, and whether damage progresses proportionately with each unit of exposure (actually dose) or instead occurs suddenly, and seemingly without regard to the amount of exposure, such as an asthma attack.”
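The exposure-dose step can be made concrete with the simplest possible toxicokinetic model, a discrete-time one-compartment sketch (my own generic illustration, not one of the book's models; the uptake and elimination parameters are invented):

```python
def body_burden(exposures, uptake=1.0, elim_rate=0.1):
    """One-compartment toxicokinetics, one time step per exposure value:
    burden[t+1] = burden[t] + uptake * exposure[t] - elim_rate * burden[t].
    Slow elimination (small elim_rate) is the rate-limiting step that sets
    the temporal pattern linking external exposure to internal dose."""
    burden, series = 0.0, []
    for e in exposures:
        burden += uptake * e - elim_rate * burden
        series.append(burden)
    return series
```

Under constant exposure the burden approaches the steady state uptake/elim_rate, and after exposure stops it decays geometrically; a dose metric of this kind, rather than raw exposure, would then feed the dose-effect model.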

“In this chapter, we argue that formal disease process models have the potential to improve the sensitivity of epidemiology for detecting new and emerging occupational and environmental risks where there is limited mechanistic information. […] In our approach, these models are often used to create exposure or dose metrics, which are in turn used in epidemiological models to estimate exposure-disease associations. […] Our goal is a methodology to formulate strong tests of our exposure-disease hypotheses in which a hypothesis is developed in as much biological detail as it can be, expressed in a suitable dynamic (temporal) model, and tested by its fit with a rich data set, so that its flaws and misperceptions of reality are fully displayed. Rejecting such a fully developed biological hypothesis is more informative than either rejecting or failing to reject a generic or vaguely defined hypothesis. For example, the hypothesis ‘truck drivers have more risk of lung cancer than non-drivers’13 is of limited usefulness for prevention […]. Hypothesizing that a particular chemical agent in truck exhaust is associated with lung cancer — whether the hypothesis is refuted or supported by data — is more likely to lead to successful prevention activities. […] we believe that the choice of models against which to compare the data should, so far as possible, be guided by explicit hypotheses about the underlying biological processes. In other words, you can get as much as possible from epidemiology by starting from well-thought-out hypotheses that are formalized as mathematical models into which the data will be placed. The disease process models can serve this purpose.2”

“The basic idea of empirical Bayes (EB) and semiBayes (SB) adjustments for multiple associations is that the observed variation of the estimated relative risks around their geometric mean is larger than the variation of the true (but unknown) relative risks. In SB adjustments, an a priori value for the extra variation is chosen which assigns a reasonable range of variation to the true relative risks and this value is then used to adjust the observed relative risks.7 The adjustment consists in shrinking outlying relative risks towards the overall mean (of the relative risks for all the different exposures being considered). The larger the individual variance of the relative risks, the stronger the shrinkage, so that the shrinkage is stronger for less reliable estimates based on small numbers. Typical applications in which SB adjustments are a useful alternative to traditional methods of adjustment for multiple comparisons are in large occupational surveillance studies, where many relative risks are estimated with few or no a priori beliefs about which associations might be causal.7″

“The advantage of [the SB adjustment] approach over classical Bonferroni corrections is that on the average it produces more valid estimates of the odds ratio for each occupation/exposure. If we do a study which involves assessing hundreds of occupations, the problem is not only that we get many ‘false positive’ results by chance. A second problem is that even the ‘true positives’ tend to have odds ratios that are too high. For example, if we have a group of occupations with true odds ratios around 1.5, then the ones that stand out in the analysis are those with the highest odds ratios (e.g. 2.5) which will be elevated partly because of real effects and partly by chance. The Bonferroni correction addresses the first problem (too many chance findings) but not the second, that the strongest odds ratios are probably too high. In contrast, SB adjustment addresses the second problem by correcting for the anticipated regression to the mean that would have occurred if the study had been repeated, and thereby on the average produces more valid odds ratio estimates for each occupation/exposure. […] most epidemiologists write their Methods and Results sections as frequentists and their Introduction and Discussion sections as Bayesians. In their Methods and Results sections, they ‘test’ their findings as if their data are the only data that exist. In the Introduction and Discussion, they discuss their findings with regard to their consistency with previous studies, as well as other issues such as biological plausibility. This creates tensions when a small study has findings which are not statistically significant but which are consistent with prior knowledge, or when a study finds statistically significant findings which are inconsistent with prior knowledge. […] In some (but not all) instances, things can be made clearer if we include Bayesian methods formally in the Methods and Results sections of our papers”.
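The semi-Bayes shrinkage itself is simple enough to sketch in a few lines (my own toy version of the idea, not code from the book; tau2 is the pre-specified variance assigned a priori to the true log relative risks):

```python
def semi_bayes_shrink(log_rrs, variances, tau2):
    """Shrink each estimated log relative risk toward the overall mean.
    Weight w = tau2 / (tau2 + s2): imprecise estimates (large sampling
    variance s2) are pulled more strongly toward the mean."""
    mean = sum(log_rrs) / len(log_rrs)
    return [mean + tau2 / (tau2 + s2) * (b - mean)
            for b, s2 in zip(log_rrs, variances)]
```

An outlying odds ratio estimated from only a handful of cases thus gets shrunk hardest, which is precisely the correction for anticipated regression to the mean described in the quote.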

“In epidemiology, risk is most often quantified in terms of relative risk — i.e. the ratio of the probability of an adverse outcome in someone with a specified exposure to that in someone who is unexposed, or exposed at a different specified level. […] Relative risks can be estimated from a wider range of study designs than individual attributable risks. They have the advantage that they are often stable across different groups of people (e.g. of different ages, smokers, and non-smokers) which makes them easier to estimate and quantify. Moreover, high relative risks are generally unlikely to be explained by unrecognized bias or confounding. […] However, individual attributable risks are a more relevant measure by which to quantify the impact of decisions in risk management on individuals. […] Individual attributable risk is the difference in the probability of an adverse outcome between someone with a specified exposure and someone who is unexposed, or exposed at a different specified level. It is the critical measure when considering the impact of decisions in risk management on individuals. […] Population attributable risk is the difference in the frequency of an adverse outcome between a population with a given distribution of exposures to a hazardous agent, and that in a population with no exposure, or some other specified distribution of exposures. It depends on the prevalence of exposure at different levels within the population, and on the individual attributable risk for each level of exposure. It is a measure of the impact of the agent at a population level, and is relevant to decisions in risk management for populations. […] Population attributable risks are highest when a high proportion of a population is exposed at levels which carry high individual attributable risks. On the other hand, an exposure which carries a high individual attributable risk may produce only a small population attributable risk if the prevalence of such exposure is low.”
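The three measures distinguished above are easy to mix up, so a minimal numerical sketch may help (my own illustration with invented numbers; ‘risk’ here is the probability of the adverse outcome):

```python
def relative_risk(risk_exposed, risk_unexposed):
    """Ratio of outcome probabilities."""
    return risk_exposed / risk_unexposed

def individual_attributable_risk(risk_exposed, risk_unexposed):
    """Risk difference: extra outcome probability for an exposed individual."""
    return risk_exposed - risk_unexposed

def population_attributable_risk(prevalence, risk_exposed, risk_unexposed):
    """Extra outcome frequency in a population in which a fraction
    `prevalence` is exposed (single exposure level, for simplicity)."""
    return prevalence * (risk_exposed - risk_unexposed)
```

With risks of 0.10 versus 0.01 the relative risk is 10 and the individual attributable risk is 0.09, yet at 1% exposure prevalence the population attributable risk is only 0.0009; this is the high-individual-risk, low-population-impact case noted at the end of the quote.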

“Hazard characterization entails quantification of risks in relation to routes, levels, and durations of exposure. […] The findings from individual studies are often used to determine a no observed adverse effect level (NOAEL), lowest observed effect level (LOEL), or benchmark dose lower 95% confidence limit (BMDL) for relevant effects […] [NOAEL] is the highest dose or exposure concentration at which there is no discernible adverse effect. […] [LOEL] is the lowest dose or exposure concentration at which a discernible effect is observed. If comparison with unexposed controls indicates adverse effects at all of the dose levels in an experiment, a NOAEL cannot be derived, but the lowest dose constitutes a LOEL, which might be used as a comparator for estimated exposures or to derive a toxicological reference value […] A BMDL is defined in relation to a specified adverse outcome that is observed in a study. Usually, this is the outcome which occurs at the lowest levels of exposure and which is considered critical to the assessment of risk. Statistical modelling is applied to the experimental data to estimate the dose or exposure concentration which produces a specified small level of effect […]. The BMDL is the lower 95% confidence limit for this estimate. As such, it depends both on the toxicity of the test chemical […], and also on the sample sizes used in the study (other things being equal, larger sample sizes will produce more precise estimates, and therefore higher BMDLs). In addition to accounting for sample size, BMDLs have the merit that they exploit all of the data points in a study, and do not depend so critically on the spacing of doses that is adopted in the experimental design (by definition a NOAEL or LOEL can only be at one of the limited number of dose levels used in the experiment). On the other hand, BMDLs can only be calculated where an adverse effect is observed. 
Even if there are no clear adverse effects at any dose level, a NOAEL can be derived (it will be the highest dose administered).”
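The NOAEL/LOEL rules in the passage above lend themselves to a tiny sketch (my own illustration, assuming a monotonic dose-response and dose groups already judged against controls; a BMDL, by contrast, requires fitting a dose-response model and is not reproduced here):

```python
def noael_and_loel(doses, adverse):
    """doses: ascending dose levels; adverse[i]: True if dose i showed a
    discernible adverse effect versus unexposed controls.
    Returns (NOAEL, LOEL); either can be None, per the rules in the text."""
    loel = next((d for d, a in zip(doses, adverse) if a), None)
    if loel is None:
        return doses[-1], None  # no effect at any dose: NOAEL = highest dose
    clean = [d for d, a in zip(doses, adverse) if not a and d < loel]
    return (max(clean) if clean else None), loel
```

So for doses (1, 3, 10, 30) with effects at 10 and 30, the NOAEL is 3 and the LOEL is 10; with effects at every dose there is no NOAEL, only a LOEL, exactly as described in the quote.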


December 8, 2017 Posted by | Books, Cancer/oncology, Epidemiology, Medicine, Statistics | Leave a comment

Occupational Epidemiology (II)

Some more observations from the book below.

“RD [Retinal detachment] is the separation of the neurosensory retina from the underlying retinal pigment epithelium.1 RD is often preceded by posterior vitreous detachment — the separation of the posterior vitreous from the retina as a result of vitreous degeneration and shrinkage2 — which gives rise to the sudden appearance of floaters and flashes. Late symptoms of RD may include visual field defects (shadows, curtains) or even blindness. The success rate of RD surgery has been reported to be over 90%;3 however, a loss of visual acuity is frequently reported by patients, particularly if the macula is involved.4 Since the natural history of RD can be influenced by early diagnosis, patients experiencing symptoms of posterior vitreous detachment are advised to undergo an ophthalmic examination.5 […] Studies of the incidence of RD give estimates ranging from 6.3 to 17.9 cases per 100 000 person-years.6 […] Age is a well-known risk factor for RD. In most studies the peak incidence was recorded among subjects in their seventh decade of life. A secondary peak at a younger age (20–30 years) has been identified […] attributed to RD among highly myopic patients.6 Indeed, depending on the severity, myopia is associated with a four- to ten-fold increase in risk of RD.7 [Diabetics with retinopathy are also at increased risk of RD, US] […] While secondary prevention of RD is current practice, no effective primary prevention strategy is available at present. The idea is widespread among practitioners that RD is not preventable, probably the consequence of our historically poor understanding of the aetiology of RD. For instance, on the website of the Mayo Clinic — one of the top-ranked hospitals for ophthalmology in the US — it is possible to read that ‘There’s no way to prevent retinal detachment’.9”

“Intraocular pressure […] is influenced by physical activity. Dynamic exercise causes an acute reduction in intraocular pressure, whereas physical fitness is associated with a lower baseline value.29 Conversely, a sudden rise in intraocular pressure has been reported during the Valsalva manoeuvre.30–32 […] Occupational physical activity may […] cause both short- and long-term variations in intraocular pressure. On the one hand, physically demanding jobs may contribute to decreased baseline levels by increasing physical fitness but, on the other hand, lifting tasks may cause an important acute increase in pressure. Moreover, the eye of a manual worker who performs repeated lifting tasks involving the Valsalva manoeuvre may undergo several dramatic changes in intraocular pressure within a single working shift. […] A case-control study was carried out to test the hypothesis that repeated lifting tasks involving the Valsalva manoeuvre could be a risk factor for RD. […] heavy lifting was a strong risk factor for RD (OR 4.4, 95% CI 1.6–13). Intriguingly, body mass index (BMI) also showed a clear association with RD (top quartile: OR 6.8, 95% CI 1.6–29). […] Based on their findings, the authors concluded that heavy occupational lifting (involving the Valsalva manoeuvre) may be a relevant risk factor for RD in myopics.”

“The proportion of the world’s population over 60 is forecast to double from 11.6% in 2012 to 21.8% in 2050.1 […] the International Labour Organization notes that, worldwide, just 40% of the working age population has legal pension coverage, and only 26% of the working population is effectively covered by old-age pension schemes. […] in less developed regions, labour force participation in those over 65 is much higher than in more developed regions.8 […] Longer working lives increase cumulative exposures, as well as increasing the time since exposure — important when there is a long latency period between exposure and resultant disease. Further, some exposures may have a greater effect when they occur to older workers, e.g. carcinogens that are promoters rather than initiators. […] Older workers tend to have more chronic health conditions. […] Older workers have fewer injuries, but take longer to recover. […] For some ‘knowledge workers’, like physicians, even a relatively minor cognitive decline […] might compromise their competence. […]  Most past studies have treated age as merely a confounding variable and rarely, if ever, have considered it an effect modifier. […]  Jex and colleagues24 argue that conceptually we should treat age as the variable of interest so that other variables are viewed as moderating the impact of age. […] The single best improvement to epidemiological research on ageing workers is to conduct longitudinal studies, including follow-up of workers into retirement. Cross-sectional designs almost certainly incur the healthy survivor effect, since unhealthy workers may retire early.25 […] Analyses should distinguish ageing per se, genetic factors, work exposures, and lifestyle in order to understand their relative and combined effects on health.”

“Musculoskeletal disorders have long been recognized as an important source of morbidity and disability in many occupational populations.1,2 Most musculoskeletal disorders, for most people, are characterized by recurrent episodes of pain that vary in severity and in their consequences for work. Most episodes subside uneventfully within days or weeks, often without any intervention, though about half of people continue to experience some pain and functional limitations after 12 months.3,4 In working populations, musculoskeletal disorders may lead to a spell of sickness absence. Sickness absence is increasingly used as a health parameter of interest when studying the consequences of functional limitations due to disease in occupational groups. Since duration of sickness absence contributes substantially to the indirect costs of illness, interventions increasingly address return to work (RTW).5 […] The Clinical Standards Advisory Group in the United Kingdom reported RTW within 2 weeks for 75% of all low back pain (LBP) absence episodes and suggested that approximately 50% of all work days lost due to back pain in the working population are from the 85% of people who are off work for less than 7 days.6”

“Any RTW curve over time can be described with a mathematical Weibull function.15 This Weibull function is characterized by a scale parameter λ and a shape parameter k. The scale parameter λ is a function of different covariates that include the intervention effect, preferably expressed as hazard ratio (HR) between the intervention group and the reference group in a Cox’s proportional hazards regression model. The shape parameter k reflects the relative increase or decrease in survival time, thus expressing how much the RTW rate will decrease with prolonged sick leave. […] a HR as measure of effect can be introduced as a covariate in the scale parameter λ in the Weibull model and the difference in areas under the curve between the intervention model and the basic model will give the improvement in sickness absence days due to the intervention. By introducing different times of starting the intervention among those workers still on sick leave, the impact of timing of enrolment can be evaluated. Subsequently, the estimated changes in total sickness absence days can be expressed in a benefit/cost ratio (BC ratio), where benefits are the costs saved due to a reduction in sickness absence and costs are the expenditures relating to the intervention.15”
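The mechanics of that calculation are easy to sketch. Everything in the snippet below (the λ, k, HR, and cost figures) is invented for illustration; it is not the model from the paper, just a minimal version of the same idea: a Weibull curve for the fraction still on sick leave, an HR scaling the cumulative hazard, and the difference in areas under the two curves converted into a benefit/cost ratio.

```python
import numpy as np

def rtw_survival(t, lam, k, hr=1.0):
    """Fraction of workers still on sick leave at day t.
    Weibull survival S(t) = exp(-(t/lam)**k); under proportional
    hazards an HR scales the cumulative hazard, i.e. S(t)**hr."""
    return np.exp(-(t / lam) ** k) ** hr

# Illustrative parameters (not from the paper): scale 30 days,
# shape k < 1 so the RTW rate declines with prolonged sick leave,
# and an intervention with HR = 1.4 applied from day 0.
t = np.arange(0.0, 365.0, 0.1)
base = rtw_survival(t, lam=30.0, k=0.8)
interv = rtw_survival(t, lam=30.0, k=0.8, hr=1.4)

# Area under each curve ~ expected sickness absence days per worker
# (simple Riemann approximation over the first year).
dt = t[1] - t[0]
days_base = base.sum() * dt
days_interv = interv.sum() * dt
days_saved = days_base - days_interv

# Benefit/cost ratio with assumed cost figures.
cost_per_absence_day = 200.0   # assumed
intervention_cost = 1500.0     # assumed, per participating worker
bc_ratio = days_saved * cost_per_absence_day / intervention_cost
print(f"absence days saved per worker: {days_saved:.1f}; "
      f"B/C ratio: {bc_ratio:.2f}")
```

Delayed enrolment, the timing question discussed above, could be mimicked by applying the HR only from some start day onwards (splicing the two curves at that day), which shrinks the area between the curves and hence the benefit.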

“A crucial factor in understanding why interventions are effective or not is the timing of the enrolment of workers on sick leave into the intervention. The RTW pattern over time […] has important consequences for appropriate timing of the best window for effective clinical and occupational interventions. The evidence presented by Palmer and colleagues clearly suggests that [in the context of LBP] a stepped care approach is required. In the first step of rapid RTW, most workers will return to work even without specific interventions. Simple, short interventions involving effective coordination and cooperation between primary health care and the workplace will be sufficient to help the majority of workers to achieve an early RTW. In the second step, more expensive, structured interventions are reserved for those who are having difficulties returning, typically between 4 weeks and 3 months. However, to date there is little evidence on the optimal timing of such interventions for workers on sick leave due to LBP.14,15 […] the cost-benefits of a structured RTW intervention among workers on sick leave will be determined by the effectiveness of the intervention, the natural speed of RTW in the target population, the timing of the enrolment of workers into the intervention, and the costs of both the intervention and of a day of sickness absence. […] The cost-effectiveness of a RTW intervention will be determined by the effectiveness of the intervention, the costs of the intervention and of a day of sickness absence, the natural course of RTW in the target population, the timing of the enrolment of workers into the RTW intervention, and the time lag before the intervention takes effect. The latter three factors are seldom taken into consideration in systematic reviews and guidelines for management of RTW, although their impact may easily be as important as classical measures of effectiveness, such as effect size or HR.”

“In order to obtain information of the highest quality and utility, surveillance schemes have to be designed, set up, and managed with the same methodological rigour as high-calibre prospective cohort studies. Whether surveillance schemes are voluntary or not, considerable effort has to be invested to ensure a satisfactory and sufficient denominator, the best numerator quality, and the most complete ascertainment. Although the force of statute is relied upon in some surveillance schemes, even in these the initial and continuing motivation of the reporters (usually physicians) is paramount. […] There is a surveillance ‘pyramid’ within which the patient’s own perception is at the base, the GP is at a higher level, and the clinical specialist is close to the apex. The source of the surveillance reports affects the numerator because case severity and case mix differ according to the level in the pyramid.19 Although incidence rate estimates may be expected to be lower at the higher levels in the surveillance pyramid this is not necessarily always the case. […] Although surveillance undertaken by physicians who specialize in the organ system concerned or in occupational disease (or in both aspects) may be considered to be the medical ‘gold standard’ it can suffer from a more limited patient catchment because of various referral filters. Surveillance by GPs will capture numerator cases as close to the base of the pyramid as possible, but may suffer from greater diagnostic variation than surveillance by specialists. Limiting recruitment to GPs with a special interest, and some training, in occupational medicine is a compromise between the two levels.20”

“When surveillance is part of a statutory or other compulsory scheme then incident case identification is a continuous and ongoing process. However, when surveillance is voluntary, for a research objective, it may be preferable to sample over shorter, randomly selected intervals, so as to reduce the demands associated with the data collection and ‘reporting fatigue’. Evidence so far suggests that sampling over shorter time intervals results in higher incidence estimates than continuous sampling.21 […] Although reporting fatigue is an important consideration in tempering conclusions drawn from […] multilevel models, it is possible to take account of this potential bias in various ways. For example, when evaluating interventions, temporal trends in outcomes resulting from other exposures can be used to control for fatigue.23,24 The phenomenon of reporting fatigue may be characterized by an ‘excess of zeroes’ beyond what is expected of a Poisson distribution and this effect can be quantified.27 […] There are several considerations in determining incidence from surveillance data. It is possible to calculate an incidence rate based on the general population, on the population of working age, or on the total working population,19 since these denominator bases are generally readily available, but such rates are not the most useful in determining risk. Therefore, incidence rates are usually calculated in respect of specific occupations or industries.22 […] Ideally, incidence rates should be expressed in relation to quantitative estimates of exposure but most surveillance schemes would require additional data collection as special exercises to achieve this aim.” [for much more on these topics, see also M’ikanatha & Iskander’s book.]
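The ‘excess of zeroes’ point lends itself to a quick numerical demonstration. The sketch below (all parameters invented) simulates monthly case reports where a fraction of reporters have become fatigued and always return zero; a plain Poisson view of the pooled counts then underpredicts the zero fraction, and that gap is exactly the signature one would quantify:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly case reports from 500 reporters: a fraction pi
# of "fatigued" reporters always report zero; the rest report
# Poisson-distributed counts with mean mu. All figures are assumed.
pi, mu, n = 0.3, 2.0, 500
fatigued = rng.random(n) < pi
counts = np.where(fatigued, 0, rng.poisson(mu, n))

# Under a plain Poisson model fitted to the pooled counts, the
# expected zero fraction is exp(-mean); an observed excess of zeros
# beyond this flags possible reporting fatigue.
mean = counts.mean()
expected_zeros = np.exp(-mean)
observed_zeros = np.mean(counts == 0)
excess = observed_zeros - expected_zeros
print(f"observed zero fraction {observed_zeros:.2f} vs "
      f"Poisson-expected {expected_zeros:.2f} (excess {excess:.2f})")
```

In practice this is what a zero-inflated Poisson model formalizes: it estimates the extra-zero fraction jointly with the count process rather than eyeballing the gap.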

“Estimates of lung cancer risk attributable to occupational exposures vary considerably by geographical area and depend on study design, especially on the exposure assessment method, but may account for around 5–20% of cancers among men, but less (<5%) among women;2 among workers exposed to (suspected) lung carcinogens, the percentage will be higher. […] most exposure to known lung carcinogens originates from occupational settings and will affect millions of workers worldwide.  Although it has been established that these agents are carcinogenic, only limited evidence is available about the risks encountered at much lower levels in the general population. […] One of the major challenges in community-based occupational epidemiological studies has been valid assessment of the occupational exposures experienced by the population at large. Contrary to the detailed information usually available for an industrial population (e.g. in a retrospective cohort study in a large chemical company) that often allows for quantitative exposure estimation, community-based studies […] have to rely on less precise and less valid estimates. The choice of method of exposure assessment to be applied in an epidemiological study depends on the study design, but it boils down to choosing between acquiring self-reported exposure, expert-based individual exposure assessment, or linking self-reported job histories with job-exposure matrices (JEMs) developed by experts. […] JEMs have been around for more than three decades.14 Their main distinction from either self-reported or expert-based exposure assessment methods is that exposures are no longer assigned at the individual subject level but at job or task level. As a result, JEMs make no distinction in assigned exposure between individuals performing the same job, or even between individuals performing a similar job in different companies. 
[…] With the great majority of occupational exposures having a rather low prevalence (<10%) in the general population it is […] extremely important that JEMs are developed aiming at a highly specific exposure assessment so that only jobs with a high likelihood (prevalence) and intensity of exposure are considered to be exposed. Aiming at a high sensitivity would be disastrous because a high sensitivity would lead to an enormous number of individuals being assigned an exposure while actually being unexposed […] Combinations of the methods just described exist as well.”
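The arithmetic behind that warning is worth making explicit. With a rare exposure, even a modest false-positive rate produces far more misclassified unexposed people than truly exposed ones, so the positive predictive value of a sensitivity-tuned JEM collapses. The numbers below are purely illustrative (a 5% prevalence and two made-up sensitivity/specificity trade-offs), not figures from the text:

```python
# Why JEMs favour specificity: with a rare exposure, false positives
# from the large unexposed group swamp the truly exposed.
prevalence = 0.05  # assumed: 5% of the population truly exposed

def ppv(sensitivity, specificity, prev):
    """Positive predictive value: fraction of JEM-'exposed' subjects
    who are truly exposed."""
    tp = sensitivity * prev               # true positives
    fp = (1 - specificity) * (1 - prev)   # false positives
    return tp / (tp + fp)

# A JEM tuned for high sensitivity (90%) but only 80% specificity:
high_sens = ppv(0.90, 0.80, prevalence)   # ~0.19
# A JEM tuned for high specificity (98%) at the cost of sensitivity:
high_spec = ppv(0.60, 0.98, prevalence)   # ~0.61
print(f"high-sensitivity JEM PPV: {high_sens:.2f}")
print(f"high-specificity JEM PPV: {high_spec:.2f}")
```

So under these assumed figures roughly four out of five subjects flagged by the sensitivity-tuned JEM would actually be unexposed, which is the ‘disastrous’ scenario the quoted passage describes.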

“Community-based studies, by definition, address a wider range of types of exposure and a much wider range of encountered exposure levels (e.g. relatively high exposures in primary production but often lower in downstream use, or among indirectly exposed individuals). A limitation of single community-based studies is often the relatively low number of exposed individuals. Pooling across studies might therefore be beneficial. […] Pooling projects need careful planning and coordination, because the original studies were conducted for different purposes, at different time periods, using different questionnaires. This heterogeneity is sometimes perceived as a disadvantage but also implies variations that can be studied and thereby provide important insights. Every pooling project has its own dynamics but there are several general challenges that most pooling projects confront. Creating common variables for all studies can stretch from simple re-naming of variables […] or recoding of units […] to the re-categorization of national educational systems […] into years of formal education. Another challenge is to harmonize the different classification systems of, for example, diseases (e.g. International Classification of Disease (ICD)-9 versus ICD-10), occupations […], and industries […]. This requires experts in these respective fields as well as considerable time and money. Harmonization of data may mean losing some information; for example, ISCO-68 contains more detail than ISCO-88, which makes it possible to recode ISCO-68 to ISCO-88 with only a little loss of detail, but it is not possible to recode ISCO-88 to ISCO-68 without losing one or two digits in the job code. […] Making the most of the data may imply that not all studies will qualify for all analyses. For example, if a study did not collect data regarding lung cancer cell type, it can contribute to the overall analyses but not to the cell type-specific analyses. 
It is important to remember that the quality of the original data is critical; poor data do not become better by pooling.”


December 6, 2017 Posted by | Books, Cancer/oncology, Demographics, Epidemiology, Health Economics, Medicine, Ophthalmology, Statistics | Leave a comment