Big Data (II)

Below I have added a few observations from the last half of the book, as well as some coverage-related links to topics of interest.

“With big data, using correlation creates […] problems. If we consider a massive dataset, algorithms can be written that, when applied, return a large number of spurious correlations that are totally independent of the views, opinions, or hypotheses of any human being. Problems arise with false correlations — for example, divorce rate and margarine consumption […]. [W]hen the number of variables becomes large, the number of spurious correlations also increases. This is one of the main problems associated with trying to extract useful information from big data, because in doing so, as with mining big data, we are usually looking for patterns and correlations. […] one of the reasons Google Flu Trends failed in its predictions was because of these problems. […] The Google Flu Trends project hinged on the known result that there is a high correlation between the number of flu-related online searches and visits to the doctor’s surgery. If a lot of people in a particular area are searching for flu-related information online, it might then be possible to predict the spread of flu cases to adjoining areas. Since the interest is in finding trends, the data can be anonymized and hence no consent from individuals is required. Using their five-year accumulation of data, which they limited to the same time-frame as the CDC data, and so collected only during the flu season, Google counted the weekly occurrence of each of the fifty million most common search queries covering all subjects. These search query counts were then compared with the CDC flu data, and those with the highest correlation were used in the flu trends model. […] The historical data provided a baseline from which to assess current flu activity on the chosen search terms and by comparing the new real-time data against this, a classification on a scale from 1 to 5, where 5 signified the most severe, was established. 
Used in the 2011–12 and 2012–13 US flu seasons, Google’s big data algorithm famously failed to deliver. After the flu season ended, its predictions were checked against the CDC’s actual data. […] the Google Flu Trends algorithm over-predicted the number of flu cases by at least 50 per cent during the years it was used.” [For more details on why blind/mindless hypothesis testing/p-value hunting on big data sets is usually a terrible idea, see e.g. Burnham & Anderson, US]
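The quoted point about spurious correlations multiplying with the number of variables is easy to demonstrate by simulation. The sketch below (my illustration, not from the book) generates mutually independent random series and counts how many pairs nonetheless show a “strong” sample correlation:

```python
import itertools
import random

def count_spurious(n_vars, n_obs, threshold=0.5, seed=0):
    """Count variable pairs whose sample correlation exceeds the
    threshold, even though every series is generated independently."""
    rng = random.Random(seed)
    data = [[rng.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]

    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / (sxx * syy) ** 0.5

    return sum(1 for x, y in itertools.combinations(data, 2)
               if abs(corr(x, y)) > threshold)

# 10 variables give 45 candidate pairs; 100 variables give 4,950.
# With short series, pure chance produces many "strong" correlations.
print(count_spurious(10, 10), count_spurious(100, 10))
```

With only two long series the count collapses to zero: chance correlations shrink as the number of observations grows, which is precisely why short-and-wide datasets, where candidate pairs vastly outnumber observations, are the dangerous case.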

“The data Google used [in the Google Flu Trends algorithm], collected selectively from search engine queries, produced results [with] obvious bias […] for example by eliminating everyone who does not use a computer and everyone using other search engines. Another issue that may have led to poor results was that customers searching Google on ‘flu symptoms’ would probably have explored a number of flu-related websites, resulting in their being counted several times and thus inflating the numbers. In addition, search behaviour changes over time, especially during an epidemic, and this should be taken into account by updating the model regularly. Once errors in prediction start to occur, they tend to cascade, which is what happened with the Google Flu Trends predictions: one week’s errors were passed along to the next week. […] [Similarly,] the Ebola prediction figures published by WHO [during the West African Ebola virus epidemic] were over 50 per cent higher than the cases actually recorded. The problems with both the Google Flu Trends and Ebola analyses were similar in that the prediction algorithms used were based only on initial data and did not take into account changing conditions. Essentially, each of these models assumed that the number of cases would continue to grow at the same rate in the future as they had before the medical intervention began. Clearly, medical and public health measures could be expected to have positive effects and these had not been integrated into the model.”
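The extrapolation failure described above, projecting the pre-intervention growth rate indefinitely forward, can be made concrete with a toy calculation (the numbers below are hypothetical, not from either analysis):

```python
def naive_forecast(cases, horizon):
    """Extrapolate weekly case counts assuming the latest
    week-over-week growth rate continues unchanged."""
    rate = cases[-1] / cases[-2]
    projection, current = [], cases[-1]
    for _ in range(horizon):
        current *= rate
        projection.append(round(current))
    return projection

# Hypothetical pre-intervention counts doubling each week:
observed = [100, 200, 400, 800]
print(naive_forecast(observed, 3))   # projects [1600, 3200, 6400]

# If an intervention cuts weekly growth to 1.2x, actual counts are
# [960, 1152, 1382]: the over-prediction compounds week after week.
actual = [round(800 * 1.2 ** k) for k in range(1, 4)]
print(actual)
```

By week three the naive model is over-predicting by a factor of more than four, and the gap keeps widening, which is the cascading-error pattern the quote describes.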

“Every time a patient visits a doctor’s office or hospital, electronic data is routinely collected. Electronic health records constitute legal documentation of a patient’s healthcare contacts: details such as patient history, medications prescribed, and test results are recorded. Electronic health records may also include sensor data such as Magnetic Resonance Imaging (MRI) scans. The data may be anonymized and pooled for research purposes. It is estimated that in 2015, an average hospital in the USA would store over 600 Tb of data, most of which is unstructured. […] Typically, the human genome contains about 20,000 genes and mapping such a genome requires about 100 Gb of data. […] The interdisciplinary field of bioinformatics has flourished as a consequence of the need to manage and analyze the big data generated by genomics. […] Cloud-based systems give authorized users access to data anywhere in the world. To take just one example, the NHS plans to make patient records available via smartphone by 2018. These developments will inevitably generate more attacks on the data they employ, and considerable effort will need to be expended in the development of effective security methods to ensure the safety of that data. […] There is no absolute certainty on the Web. Since e-documents can be modified and updated without the author’s knowledge, they can easily be manipulated. This situation could be extremely damaging in many different situations, such as the possibility of someone tampering with electronic medical records. […] [S]ome of the problems facing big data systems [include] ensuring they actually work as intended, [that they] can be fixed when they break down, and [that they] are tamper-proof and accessible only to those with the correct authorization.”

“With transactions being made through sales and auction bids, eBay generates approximately 50 Tb of data a day, collected from every search, sale, and bid made on their website by a claimed 160 million active users in 190 countries. […] Amazon collects vast amounts of data including addresses, payment information, and details of everything an individual has ever looked at or bought from them. Amazon uses its data in order to encourage the customer to spend more money with them by trying to do as much of the customer’s market research as possible. In the case of books, for example, Amazon needs to provide not only a huge selection but to focus recommendations on the individual customer. […] Many customers use smartphones with GPS capability, allowing Amazon to collect data showing time and location. This substantial amount of data is used to construct customer profiles allowing similar individuals and their recommendations to be matched. Since 2013, Amazon has been selling customer metadata to advertisers in order to promote their Web services operation […] Netflix collects and uses huge amounts of data to improve customer service, such as offering recommendations to individual customers while endeavouring to provide reliable streaming of its movies. Recommendation is at the heart of the Netflix business model and most of its business is driven by the data-based recommendations it is able to offer customers. Netflix now tracks what you watch, what you browse, what you search for, and the day and time you do all these things. It also records whether you are using an iPad, TV, or something else. […] As well as collecting search data and star ratings, Netflix can now keep records on how often users pause or fast forward, and whether or not they finish watching each programme they start. They also monitor how, when, and where they watched the programme, and a host of other variables too numerous to mention.”

“Data science is becoming a popular study option in universities but graduates so far have been unable to meet the demands of commerce and industry, where positions in data science offer high salaries to experienced applicants. Big data for commercial enterprises is concerned with profit, and disillusionment will set in quickly if an over-burdened data analyst with insufficient experience fails to deliver the expected positive results. All too often, firms are asking for a one-size-fits-all model of data scientist who is expected to be competent in everything from statistical analysis to data storage and data security.”

“In December 2016, Yahoo! announced that a data breach involving over one billion user accounts had occurred in August 2013. Dubbed the biggest ever cyber theft of personal data, or at least the biggest ever divulged by any company, thieves apparently used forged cookies, which allowed them access to accounts without the need for passwords. This followed the disclosure of an attack on Yahoo! in 2014, when 500 million accounts were compromised. […] The list of big data security breaches increases almost daily. Data theft, data ransom, and data sabotage are major concerns in a data-centric world. There have been many scares regarding the security and ownership of personal digital data. Before the digital age we used to keep photos in albums and negatives were our backup. After that, we stored our photos electronically on a hard-drive in our computer. This could possibly fail and we were wise to have back-ups but at least the files were not publicly accessible. Many of us now store data in the Cloud. […] If you store all your photos in the Cloud, it’s highly unlikely with today’s sophisticated systems that you would lose them. On the other hand, if you want to delete something, maybe a photo or video, it becomes difficult to ensure all copies have been deleted. Essentially you have to rely on your provider to do this. Another important issue is controlling who has access to the photos and other data you have uploaded to the Cloud. […] although the Internet and Cloud-based computing are generally thought of as wireless, they are anything but; data is transmitted through fibre-optic cables laid under the oceans. Nearly all digital communication between continents is transmitted in this way. My email will be sent via transatlantic fibre-optic cables, even if I am using a Cloud computing service. 
The Cloud, an attractive buzz word, conjures up images of satellites sending data across the world, but in reality Cloud services are firmly rooted in a distributed network of data centres providing Internet access, largely through cables. Fibre-optic cables provide the fastest means of data transmission and so are generally preferable to satellites.”


Health care informatics.
Electronic health records.
European influenza surveillance network.
Public Health Emergency of International Concern.
Virtual Physiological Human project.
Watson (computer).
Natural language processing.
Anthem medical data breach.
Electronic delay storage automatic calculator (EDSAC). LEO (computer). ICL (International Computers Limited).
E-commerce. Online shopping.
Pay-per-click advertising model. Google AdWords. Click fraud. Targeted advertising.
Recommender system. Collaborative filtering.
Anticipatory shipping.
BlackPOS Malware.
Data Encryption Standard algorithm. EFF DES cracker.
Advanced Encryption Standard.
Tempora. PRISM (surveillance program). Edward Snowden. WikiLeaks. Tor (anonymity network). Silk Road (marketplace). Deep web. Internet of Things.
Songdo International Business District. Smart City.
United Nations Global Pulse.


July 19, 2018 Posted by | Books, Computer science, Cryptography, Data, Engineering, Epidemiology, Statistics |

A few diabetes papers of interest

i. Clinical Inertia in Type 2 Diabetes Management: Evidence From a Large, Real-World Data Set.

“Despite clinical practice guidelines that recommend frequent monitoring of HbA1c (every 3 months) and aggressive escalation of antihyperglycemic therapies until glycemic targets are reached (1,2), the intensification of therapy in patients with uncontrolled type 2 diabetes (T2D) is often inappropriately delayed. The failure of clinicians to intensify therapy when clinically indicated has been termed “clinical inertia.” A recently published systematic review found that the median time to treatment intensification after an HbA1c measurement above target was longer than 1 year (range 0.3 to >7.2 years) (3). We have previously reported a rather high rate of clinical inertia in patients uncontrolled on metformin monotherapy (4). Treatment was not intensified early (within 6 months of metformin monotherapy failure) in 38%, 31%, and 28% of patients when poor glycemic control was defined as an HbA1c >7% (>53 mmol/mol), >7.5% (>58 mmol/mol), and >8% (>64 mmol/mol), respectively.”

“Using the electronic health record system at Cleveland Clinic (2005–2016), we identified a cohort of 7,389 patients with T2D who had an HbA1c value ≥7% (≥53 mmol/mol) (“index HbA1c”) despite having been on a stable regimen of two oral antihyperglycemic drugs (OADs) for at least 6 months prior to the index HbA1c. This HbA1c threshold would generally be expected to trigger treatment intensification based on current guidelines. Patient records were reviewed for the 6-month period following the index HbA1c, and changes in diabetes therapy were evaluated for evidence of “intensification” […] almost two-thirds of patients had no evidence of intensification in their antihyperglycemic therapy during the 6 months following the index HbA1c ≥7% (≥53 mmol/mol), suggestive of poor glycemic control. Most alarming was the finding that even among patients in the highest index HbA1c category (≥9% [≥75 mmol/mol]), therapy was not intensified in 44% of patients, and slightly more than half (53%) of those with an HbA1c between 8 and 8.9% (64 and 74 mmol/mol) did not have their therapy intensified.”
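The paired HbA1c values quoted above (7% and 53 mmol/mol, 9% and 75 mmol/mol, and so on) follow from the standard NGSP/IFCC master equation; a quick check in Python:

```python
def hba1c_to_mmol(pct):
    """Convert HbA1c from DCCT percent units to IFCC mmol/mol via the
    NGSP/IFCC master equation: mmol/mol = (% - 2.15) * 10.929."""
    return round((pct - 2.15) * 10.929)

print([hba1c_to_mmol(p) for p in (7.0, 7.5, 8.0, 9.0)])  # [53, 58, 64, 75]
```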

“Unfortunately, these real-world findings confirm a high prevalence of clinical inertia with regard to T2D management. The unavoidable conclusion from these data […] is that physicians are not responding quickly enough to evidence of poor glycemic control in a high percentage of patients, even in those with HbA1c levels far exceeding typical treatment targets.”
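The paper’s working definition of clinical inertia, an above-target HbA1c with no therapy intensification in the following six months, amounts to a simple flagging rule. A minimal sketch (hypothetical record layout, not the authors’ actual code):

```python
from datetime import date, timedelta

def flag_inertia(index_date, hba1c_pct, therapy_change_dates,
                 threshold=7.0, window_days=183):
    """Flag clinical inertia: an above-threshold HbA1c with no therapy
    intensification recorded within the ~6-month review window."""
    if hba1c_pct < threshold:
        return False  # at target; no intensification expected
    window_end = index_date + timedelta(days=window_days)
    intensified = any(index_date < d <= window_end
                      for d in therapy_change_dates)
    return not intensified

# Hypothetical patient: HbA1c of 8.2% in January, first regimen change
# the following September -> flagged as clinical inertia.
print(flag_inertia(date(2015, 1, 10), 8.2, [date(2015, 9, 1)]))  # True
```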

ii. Gestational Diabetes Mellitus and Diet: A Systematic Review and Meta-analysis of Randomized Controlled Trials Examining the Impact of Modified Dietary Interventions on Maternal Glucose Control and Neonatal Birth Weight.

“Medical nutrition therapy is a mainstay of gestational diabetes mellitus (GDM) treatment. However, data are limited regarding the optimal diet for achieving euglycemia and improved perinatal outcomes. This study aims to investigate whether modified dietary interventions are associated with improved glycemia and/or improved birth weight outcomes in women with GDM when compared with control dietary interventions. […]

From 2,269 records screened, 18 randomized controlled trials involving 1,151 women were included. Pooled analysis demonstrated that for modified dietary interventions when compared with control subjects, there was a larger decrease in fasting and postprandial glucose (−4.07 mg/dL [95% CI −7.58, −0.57]; P = 0.02 and −7.78 mg/dL [95% CI −12.27, −3.29]; P = 0.0007, respectively) and a lower need for medication treatment (relative risk 0.65 [95% CI 0.47, 0.88]; P = 0.006). For neonatal outcomes, analysis of 16 randomized controlled trials including 841 participants showed that modified dietary interventions were associated with lower infant birth weight (−170.62 g [95% CI −333.64, −7.60]; P = 0.04) and less macrosomia (relative risk 0.49 [95% CI 0.27, 0.88]; P = 0.02). The quality of evidence for these outcomes was low to very low. Baseline differences between groups in postprandial glucose may have influenced glucose-related outcomes. […] we were unable to resolve queries regarding potential concerns for sources of bias because of lack of author response to our queries. We have addressed this by excluding these studies in the sensitivity analysis. […] after removal of the studies with the most substantial methodological concerns in the sensitivity analysis, differences in the change in fasting plasma glucose were no longer significant. Although differences in the change in postprandial glucose and birth weight persisted, they were attenuated.”

“This review highlights limitations of the current literature examining dietary interventions in GDM. Most studies are too small to demonstrate significant differences in our primary outcomes. Seven studies had fewer than 50 participants and only two had more than 100 participants (n = 125 and 150). The short duration of many dietary interventions and the late gestational age at which they were started (38) may also have limited their impact on glycemic and birth weight outcomes. Furthermore, we cannot conclude if the improvements in maternal glycemia and infant birth weight are due to reduced energy intake, improved nutrient quality, or specific changes in types of carbohydrate and/or protein. […] These data suggest that dietary interventions modified above and beyond usual dietary advice for GDM have the potential to offer better maternal glycemic control and infant birth weight outcomes. However, the quality of evidence was judged as low to very low due to the limitations in the design of included studies, the inconsistency between their results, and the imprecision in their effect estimates.”
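For readers unfamiliar with how pooled estimates like those quoted above are produced: the standard fixed-effect approach weights each study by the inverse of its variance, with the standard error backed out of the reported 95% CI. The sketch below uses made-up study values, not the review’s data:

```python
def pool_fixed_effect(studies):
    """Fixed-effect inverse-variance pooling from per-study point
    estimates and 95% confidence intervals."""
    num = den = 0.0
    for est, ci_lo, ci_hi in studies:
        se = (ci_hi - ci_lo) / (2 * 1.96)  # back out SE from the 95% CI
        weight = 1.0 / se ** 2             # inverse-variance weight
        num += weight * est
        den += weight
    pooled = num / den
    se_pooled = (1.0 / den) ** 0.5
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Two made-up studies reporting mean glucose differences in mg/dL:
pooled, ci = pool_fixed_effect([(-5.0, -9.0, -1.0), (-3.0, -8.0, 2.0)])
print(round(pooled, 2), [round(x, 2) for x in ci])
```

Real meta-analyses of heterogeneous trials like these would typically fit random-effects models as well; this is only the simplest pooling rule, shown to make the quoted numbers less opaque.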

iii. Lifetime Prevalence and Prognosis of Prediabetes Without Progression to Diabetes.

“Impaired fasting glucose, also termed prediabetes, is increasingly prevalent and is associated with adverse cardiovascular risk (1). The cardiovascular risks attributed to prediabetes may be driven primarily by the conversion from prediabetes to overt diabetes (2). Given limited data on outcomes among nonconverters in the community, the extent to which some individuals with prediabetes never go on to develop diabetes and yet still experience adverse cardiovascular risk remains unclear. We therefore investigated the frequency of cardiovascular versus noncardiovascular deaths in people who developed early- and late-onset prediabetes without ever progressing to diabetes.”

“We used data from the Framingham Heart Study collected on the Offspring Cohort participants aged 18–77 years at the time of initial fasting plasma glucose (FPG) assessment (1983–1987) who had serial FPG testing over subsequent examinations with continuous surveillance for outcomes including cause-specific mortality (3). As applied in prior epidemiological investigations (4), we used a case-control design focusing on the cause-specific outcome of cardiovascular death to minimize the competing risk issues that would be encountered in time-to-event analyses. To focus on outcomes associated with a given chronic glycemic state maintained over the entire lifetime, we restricted our analyses to only those participants for whom data were available over the life course and until death. […] We excluded individuals with unknown age of onset of glycemic impairment (i.e., age ≥50 years with prediabetes or diabetes at enrollment). […] We analyzed cause-specific mortality, allowing for relating time-varying exposures with lifetime risk for an event (4). We related glycemic phenotypes to cardiovascular versus noncardiovascular cause of death using a case-control design, where cases were defined as individuals who died of cardiovascular disease (death from stroke, heart failure, or other vascular event) or coronary heart disease (CHD) and controls were those who died of other causes.”

“The mean age of participants at enrollment was 42 ± 7 years (43% women). The mean age at death was 73 ± 10 years. […] In our study, approximately half of the individuals presented with glycemic impairment in their lifetime, of whom two-thirds developed prediabetes but never diabetes. In our study, these individuals had lower cardiovascular-related mortality compared with those who later developed diabetes, even if the prediabetes onset was early in life. However, individuals with early-onset prediabetes, despite lifelong avoidance of overt diabetes, had greater propensity for death due to cardiovascular or coronary versus noncardiovascular disease compared with those who maintained lifelong normal glucose status. […] Prediabetes is a heterogeneous entity. Whereas some forms of prediabetes are precursors to diabetes, other types of prediabetes never progress to diabetes but still confer increased propensity for death from a cardiovascular cause.”

iv. Learning From Past Failures of Oral Insulin Trials.

“Very recently one of the largest type 1 diabetes prevention trials using daily administration of oral insulin or placebo was completed. After 9 years of study enrollment and follow-up, the randomized controlled trial failed to delay the onset of clinical type 1 diabetes, which was the primary end point. The unfortunate outcome follows the previous large-scale trial, the Diabetes Prevention Trial–Type 1 (DPT-1), which again failed to delay diabetes onset with oral insulin or low-dose subcutaneous insulin injections in a randomized controlled trial with relatives at risk for type 1 diabetes. These sobering results raise the important question, “Where does the type 1 diabetes prevention field move next?” In this Perspective, we advocate for a paradigm shift in which smaller mechanistic trials are conducted to define immune mechanisms and potentially identify treatment responders. […] Mechanistic trials will allow for better trial design and patient selection based upon molecular markers prior to large randomized controlled trials, moving toward a personalized medicine approach for the prevention of type 1 diabetes.”

“Before a disease can be prevented, it must be predicted. The ability to assess risk for developing type 1 diabetes (T1D) has been well documented over the last two decades (1). Using genetic markers, human leukocyte antigen (HLA) DQ and DR typing (2), islet autoantibodies (1), and assessments of glucose tolerance (intravenous or oral glucose tolerance tests) has led to accurate prediction models for T1D development (3). Prospective birth cohort studies Diabetes Autoimmunity Study in the Young (DAISY) in Colorado (4), Type 1 Diabetes Prediction and Prevention (DIPP) study in Finland (5), and BABYDIAB studies in Germany have followed genetically at-risk children for the development of islet autoimmunity and T1D disease onset (6). These studies have been instrumental in understanding the natural history of T1D and making T1D a predictable disease with the measurement of antibodies in the peripheral blood directed against insulin and proteins within β-cells […]. Having two or more islet autoantibodies confers an ∼85% risk of developing T1D within 15 years and nearly 100% over time (7). […] T1D can be predicted by measuring islet autoantibodies, and thousands of individuals including young children are being identified through screening efforts, necessitating the need for treatments to delay and prevent disease onset.”

“Antigen-specific immunotherapies hold the promise of potentially inducing tolerance by inhibiting effector T cells and inducing regulatory T cells, which can act locally at tissue-specific sites of inflammation (12). Additionally, side effects are minimal with these therapies. As such, insulin and GAD have both been used as antigen-based approaches in T1D (13). Oral insulin has been evaluated in two large randomized double-blinded placebo-controlled trials over the last two decades. First in the Diabetes Prevention Trial–Type 1 (DPT-1) and then in the TrialNet clinical trials network […] The DPT-1 enrolled relatives at increased risk for T1D having islet autoantibodies […] After 6 years of treatment, there was no delay in T1D onset. […] The TrialNet study screened, enrolled, and followed 560 at-risk relatives over 9 years from 2007 to 2016, and results have been recently published (16). Unfortunately, this trial failed to meet the primary end point of delaying or preventing diabetes onset.”

“Many factors influence the potency and efficacy of antigen-specific therapy such as dose, frequency of dosing, route of administration, and, importantly, timing in the disease process. […] Over the last two decades, most T1D clinical trial designs have randomized participants 1:1 or 2:1, drug to placebo, in a double-blind two-arm design, especially those intervention trials in new-onset T1D (18). Primary end points have been delay in T1D onset for prevention trials or stimulated C-peptide area under the curve at 12 months with new-onset trials. These designs have served the field well and provided reliable human data for efficacy. However, there are limitations including the speed at which these trials can be completed, the number of interventions evaluated, dose optimization, and evaluation of mechanistic hypotheses. Alternative clinical trial designs, such as adaptive trial designs using Bayesian statistics, can overcome some of these issues. Adaptive designs use accumulating data from the trial to modify certain aspects of the study, such as enrollment and treatment group assignments. This “learn as we go” approach relies on biomarkers to drive decisions on planned trial modifications. […] One of the significant limitations for adaptive trial designs in the T1D field, at the present time, is the lack of validated biomarkers for short-term readouts to inform trial adaptations. However, large-scale collaborative efforts are ongoing to define biomarkers of T1D-specific immune dysfunction and β-cell stress and death (9,22).”
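The “learn as we go” adaptive allocation mentioned in the quote can be illustrated with Thompson sampling over Beta-Binomial posteriors. This is a generic sketch of the idea with hypothetical arm counts, not any specific trial’s design:

```python
import random

_rng = random.Random(42)

def adaptive_assignment(successes, failures):
    """Thompson sampling: draw one sample from each arm's
    Beta(1 + successes, 1 + failures) posterior and assign the next
    participant to the arm with the highest sampled response rate."""
    draws = [_rng.betavariate(1 + s, 1 + f)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=lambda i: draws[i])

# Two hypothetical arms: 8/20 responders on arm 0, 14/20 on arm 1.
counts = [0, 0]
for _ in range(1000):
    counts[adaptive_assignment([8, 14], [12, 6])] += 1
print(counts)  # the better-performing arm receives most assignments
```

In a real adaptive design the posteriors would be updated as outcomes accrue, which is exactly where the quote’s point about needing validated short-term biomarkers bites: without a quick readout there is nothing to update on.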

“T1D prevention has proven much more difficult than originally thought, challenging the paradigm that T1D is a single disease. T1D is indeed a heterogeneous disease in terms of age of diagnosis, islet autoantibody profiles, and the rate of loss of residual β-cell function after clinical onset. Children have a much more rapid loss of residual insulin production (measured as C-peptide area under the curve following a mixed-meal tolerance test) after diagnosis than older adolescents and adults (23,24), indicating that childhood and adult-onset T1D are not identical. Further evidence for subtypes of T1D come from studies of human pancreata of T1D organ donors in which children (0–14 years of age) within 1 year of diagnosis had many more inflamed islets compared with older adolescents and adults aged 15–39 years old (25). Additionally, a younger age of T1D onset (<7 years) has been associated with higher numbers of CD20+ B cells within islets and fewer insulin-containing islets compared with an age of onset ≥13 years associated with fewer CD20+ islet infiltrating cells and more insulin-containing islets (26,27). This suggests a much more aggressive autoimmune process in younger children and distinct endotypes (a subtype of a condition defined by a distinct pathophysiologic mechanism), which has recently been proposed for T1D (27).”

“Safe and specific therapies capable of being used in children are needed for T1D prevention. The vast majority of drug development involves small biotechnology companies, specialty pharmaceutical firms, and large pharmaceutical companies, more so than traditional academia. A large amount of preclinical and clinical research (phase 1, 2, and 3 studies) are needed to advance a drug candidate through the development pipeline to achieve U.S. Food and Drug Administration (FDA) approval for a given disease. A recent analysis of over 4,000 drugs from 835 companies in development during 2003–2011 revealed that only 10.4% of drugs that enter clinical development at phase 1 (safety studies) advance to FDA approval (32). However, the success rate increases 50% for the lead indication of a drug, i.e., a drug specifically developed for one given disease (32). Reasons for this include strong scientific rationale and early efficacy signals such as correlating pharmacokinetic (drug levels) to pharmacodynamic (drug target effects) tests for the lead indication. Lead indications also tend to have smaller, better-defined “homogenous” patient populations than nonlead indications for the same drug. This would imply that the T1D field needs more companies developing drugs specifically for T1D, not type 2 diabetes or other autoimmune diseases with later testing to broaden a drug’s indication. […] In a similar but separate analysis, selection biomarkers were found to substantially increase the success rate of drug approvals across all phases of drug development. Using a selection biomarker as part of study inclusion criteria increased drug approval threefold from 8.4% to 25.9% when used in phase 1 trials, 28% to 46% when transitioning from a phase 2 to phase 3 efficacy trial, and 55% to 76% for a phase 3 trial to likelihood of approval (33). 
These striking data support the concept that enrichment of patient enrollment at the molecular level is a more successful strategy than heterogeneous enrollment in clinical intervention trials. […] Taken together, new drugs designed specifically for children at risk for T1D and a biomarker selecting patients for a treatment response may increase the likelihood for a successful prevention trial; however, experimental confirmation in clinical trials is needed.”

v. Metabolic Karma — The Atherogenic Legacy of Diabetes: The 2017 Edwin Bierman Award Lecture.

“Cardiovascular (CV) disease remains the major cause of mortality and is associated with significant morbidity in both type 1 and type 2 diabetes (14). Despite major improvements in the management of traditional risk factors, including hypertension, dyslipidemia, and glycemic control, prevention, retardation, and reversal of atherosclerosis, as manifested clinically by myocardial infarction, stroke, and peripheral vascular disease, remain a major unmet need in the population with diabetes. For example, in the Steno-2 study and in its most recent report of the follow-up phase, at least a decade after cessation of the active treatment phase, there remained a high risk of death, primarily from CV disease despite aggressive control of the traditional risk factors, in this originally microalbuminuric population with type 2 diabetes (5,6). In a meta-analysis of major CV trials where aggressive glucose lowering was instituted […] the beneficial effect of intensive glycemic control on CV disease was modest, at best (7). […] recent trials with two sodium–glucose cotransporter 2 inhibitors, empagliflozin and canagliflozin (11,12), and two long-acting glucagon-like peptide 1 agonists, liraglutide and semaglutide (13,14), have reported CV benefits that have led in some of these trials to a decrease in CV and all-cause mortality. However, even with these recent positive CV outcomes, CV disease remains the major burden in the population with diabetes (15).”

“This unmet need of residual CV disease in the population with diabetes remains unexplained but may occur as a result of a range of nontraditional risk factors, including low-grade inflammation and enhanced thrombogenicity as a result of the diabetic milieu (16). Furthermore, a range of injurious pathways as a result of chronic hyperglycemia previously studied in vitro in endothelial cells (17) or in models of microvascular complications may also be relevant and are a focus of this review […] [One] major factor that is likely to promote atherosclerosis in the diabetes setting is increased oxidative stress. There is not only increased generation of ROS from diverse sources but also reduced antioxidant defense in diabetes (40). […] vascular ROS accumulation is closely linked to atherosclerosis and vascular inflammation provide the impetus to consider specific antioxidant strategies as a novel therapeutic approach to decrease CV disease, particularly in the setting of diabetes.”

“One of the most important findings from numerous trials performed in subjects with type 1 and type 2 diabetes has been the identification that previous episodes of hyperglycemia can have a long-standing impact on the subsequent development of CV disease. This phenomenon known as “metabolic memory” or the “legacy effect” has been reported in numerous trials […] The underlying explanation at a molecular and/or cellular level for this phenomenon remains to be determined. Our group, as well as others, has postulated that epigenetic mechanisms may participate in conferring metabolic memory (51–53). In in vitro studies initially performed in aortic endothelial cells, transient incubation of these cells in high glucose followed by subsequent return of these cells to a normoglycemic environment was associated with increased gene expression of the p65 subunit of NF-κB, NF-κB activation, and expression of NF-κB–dependent proteins, including MCP-1 and VCAM-1 (54).

In further defining a potential epigenetic mechanism that could explain the glucose-induced upregulation of genes implicated in vascular inflammation, a specific histone methylation mark was identified in the promoter region of the p65 gene (54). This histone 3 lysine 4 monomethylation (H3K4m1) occurred as a result of mobilization of the histone methyl transferase, Set7. Furthermore, knockdown of Set7 attenuated glucose-induced p65 upregulation and prevented the persistent upregulation of this gene despite these endothelial cells returning to a normoglycemic milieu (55). These findings, confirmed in animal models exposed to transient hyperglycemia (54), provide the rationale to consider Set7 as an appropriate target for end-organ protective therapies in diabetes. Although specific Set7 inhibitors are currently unavailable for clinical development, the current interest in drugs that block various enzymes, such as Set7, that influence histone methylation in diseases, such as cancer (56), could lead to agents that warrant testing in diabetes. Studies addressing other sites of histone methylation as well as other epigenetic pathways including DNA methylation and acetylation have been reported or are currently in progress (55,57,58), particularly in the context of diabetes complications. […] As in vitro and preclinical studies increase our knowledge and understanding of the pathogenesis of diabetes complications, it is likely that we will identify new molecular targets leading to better treatments to reduce the burden of macrovascular disease. Nevertheless, these new treatments will need to be considered in the context of improved management of traditional risk factors.”

vi. Perceived risk of diabetes seriously underestimates actual diabetes risk: The KORA FF4 study.

“According to the International Diabetes Federation (IDF), almost half of the people with diabetes worldwide are unaware of having the disease, and even in high-income countries, about one in three diabetes cases is not diagnosed [1,2]. In the USA, 28% of diabetes cases are undiagnosed [3]. In DEGS1, a recent population-based German survey, 22% of persons with HbA1c ≥ 6.5% were unaware of their disease [4]. Persons with undiagnosed diabetes mellitus (UDM) have a more than twofold risk of mortality compared to persons with normal glucose tolerance (NGT) [5,6]; many of them also have undiagnosed diabetes complications like retinopathy and chronic kidney disease [7,8]. […] early detection of diabetes and prediabetes is beneficial for patients, but may be delayed by patients’ being overly optimistic about their own health. Therefore, it is important to address how persons with UDM or prediabetes perceive their diabetes risk.”

“The proportion of persons who perceived their risk of having UDM at the time of the interview as “negligible”, “very low” or “low” was 87.1% (95% CI: 85.0–89.0) in NGT [normal glucose tolerance individuals], 83.9% (81.2–86.4) in prediabetes, and 74.2% (64.5–82.0) in UDM […]. The proportion of persons who perceived themselves at risk of developing diabetes in the following years ranged from 14.6% (95% CI: 12.6–16.8) in NGT to 20.6% (17.9–23.6) in prediabetes to 28.7% (20.5–38.6) in UDM […] In univariate regression models, perceiving oneself at risk of developing diabetes was associated with younger age, female sex, higher school education, obesity, self-rated poor general health, and parental diabetes […] the proportion of better educated younger persons (age ≤ 60 years) with prediabetes, who perceived themselves at risk of developing diabetes was 35%, whereas this figure was only 13% in less well educated older persons (age > 60 years).”
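The interval estimates quoted above (e.g. 87.1%, 95% CI: 85.0–89.0) are ordinary binomial confidence intervals on sample proportions. A minimal sketch of how such an interval can be computed, using the Wilson score method and illustrative counts (not the KORA FF4 study's actual per-group sample sizes):

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Illustrative: 871 of 1,000 respondents rating their risk as low
lo, hi = wilson_interval(871, 1000)
print(f"87.1% (95% CI: {lo:.1%} to {hi:.1%})")
```

With a hypothetical n of 1,000 the interval comes out close to the one quoted; the study's actual interval reflects its real group sizes.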

“The present study shows that three out of four persons with UDM [undiagnosed diabetes mellitus] believed that the probability of having undetected diabetes was low or very low. In persons with prediabetes, more than 70% believed that they were not at risk of developing diabetes in the next years. People with prediabetes were more inclined to perceive themselves at risk of diabetes if their self-rated general health was poor, their mother or father had diabetes, they were obese, they were female, their educational level was high, and if they were younger. […] People with undiagnosed diabetes or prediabetes considerably underestimate their probability of having or developing diabetes. […] perceived diabetes risk was lower in men, lower educated and older persons. […] Our results showed that people with low and intermediate education strongly underestimate their risk of diabetes and may qualify as target groups for detection of UDM and prediabetes.”

“The present results were in line with results from the Dutch Hoorn Study [18,19]. Adriaanse et al. reported that among persons with UDM, only 28.3% perceived their likeliness of having diabetes to be at least 10% [18], and among persons with high risk of diabetes (predicted from a symptom risk questionnaire), the median perceived likeliness of having diabetes was 10.8% [19]. Again, perceived risk did not fully reflect the actual risk profiles. For BMI, there was barely any association with perceived risk of diabetes in the Dutch study [19].”

July 2, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Immunology, Medicine, Molecular biology, Pharmacology, Studies | Leave a comment

Gastrointestinal complications of diabetes (II)

Below I have added a few more observations of interest from the last half of the book. I have also bolded a few key observations and added some links along the way to make the post easier to read for people unfamiliar with these topics.

“HCC [HepatoCellular Carcinoma, US] is the most common primary malignancy of the liver and globally is the fifth most common cancer [2]. […] the United States […] has seen a threefold increase between 1975 and 2007 [3]. Chronic hepatitis C virus (HCV) accounts for about half of this increase [2]. However, 15–50 % of new cases of HCC are labeled as cryptogenic or idiopathic, which suggests that other risk factors are likely playing a role [4]. NASH [Non-alcoholic steatohepatitis, US] has been proposed as the underlying cause of most cases of cryptogenic cirrhosis. […] A large proportion of cryptogenic cirrhosis […] likely represents end-stage NASH. […] In a large systematic review published in 2012, NAFLD or NASH cohorts with few or no cirrhosis cases demonstrated a minimal HCC risk with cumulative HCC mortality between 0 % and 3 % over study periods of up to two decades [8]. In contrast, consistently increased risk was observed in NASH-cirrhosis cohorts with cumulative incidence between 2.4 % over 7 years and 12.8 % over 3 years [8]. The risk of HCC was substantially lower among patients with NASH than in patients with viral hepatitis [8]. However, given the high and increasing prevalence of NAFLD, even a small increase in risk of HCC has the potential to transform into a huge case burden of HCC. […] Large population-based cohort studies from Europe have demonstrated a 1.86-fold to fourfold increase in risk of HCC among patients with diabetes [12]. Obesity, which is well established as a significant risk factor for the development of various malignancies, is associated with a 1.5-fold to fourfold increased risk for development of HCC [13]. Therefore, the excess risk of HCC in NAFLD is explained by both the increased risk for NAFLD itself with subsequent progression to NASH and the independent carcinogenic potential of diabetes and obesity [11].
[…] In contrast to patients with HCC from other causes, patients with NAFLD-related HCC tend to be older and have more metabolic comorbidities but less severe liver dysfunction […] The exact mechanisms responsible for the development of HCC in NASH remain unclear.”

“Patients with diabetes have an increased risk of gallstone disease, which includes gallstones, cholecystitis, or gallbladder cancer; the magnitude of the increased risk has varied across studies [22]. […] A recent systematic review and meta-analysis of studies evaluating the risk of gallstone disease estimated that a diagnosis of diabetes appears to increase the relative risk of gallstone disease by 56 % [22]. Intuitively, it would seem reasonable to attribute this to common risk factors for diabetes and gallstone disease (e.g., obesity, hyperlipidemia). However, adjustment for body mass index (BMI) in a number of studies included in the meta-analysis indicated diabetes had an independent effect on the risk of gallstone disease; it has been speculated that this is related to impaired gallbladder motility as part of diabetes-related visceral neuropathy [22]. […] A systematic review and meta-analysis suggests that both men and women with type 2 diabetes have an increased risk of gallbladder cancer (summary RR = 1.56, 95 % CI, 1.36–1.79), independent of smoking, BMI, and a history of gallstones [25]. […] While the relative risk of gallbladder cancer is increased in patients with type 2 diabetes, the absolute risk remains low […], varying from approximately 1.5 per 100,000 in North America to 25 per 100,000 in South America and Northern India [26]. […] There is a strong relationship between diabetes and hepatobiliary diseases […] Not surprisingly, autoimmune-based liver disease involving the biliary tree (i.e., primary biliary cirrhosis [PBC] and primary sclerosing cholangitis [PSC]) has been described in patients with type 1 diabetes. […] The prevalence of type 1 diabetes in patients with PSC is 4 %, and the RR of type 1 diabetes in patients with PSC was 7.95 in a large patient cohort (n = 678) [33, 34].
[…] Although the relationship may not be intuitive, diabetes is intimately connected with a variety of hepatobiliary conditions […] Diabetes is often associated with more frequent adverse outcomes and should be managed aggressively.”
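The gallbladder cancer passage is a nice illustration of the difference between relative and absolute risk: a relative risk of 1.56 applied to a rare baseline incidence still yields a rare outcome. A quick sketch of that arithmetic using the numbers quoted above (my own calculation, not the book's):

```python
def absolute_risk(baseline_per_100k, relative_risk):
    """Scale a baseline incidence by a relative risk estimate."""
    return baseline_per_100k * relative_risk

# Figures quoted above: summary RR ~1.56 for gallbladder cancer in type 2
# diabetes; baseline incidence ~1.5/100,000 (North America) vs ~25/100,000
# (South America / Northern India)
for region, baseline in [("North America", 1.5), ("Northern India", 25.0)]:
    risk = absolute_risk(baseline, 1.56)
    print(f"{region}: {baseline} -> {risk:.2f} per 100,000")
```

Even with the elevated relative risk, the absolute risk in North America remains on the order of 2–3 cases per 100,000.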

“Impaired glucose tolerance is seen in 60 % of patients with cirrhosis [1]. Overt diabetes is seen in 20 % of patients with cirrhosis. However, it is important to note that there are two distinct types of diabetes seen with chronic liver disease. Patients can either have preexisting diabetes and later go on to develop progressive liver disease or develop diabetes as a result of cirrhosis. The latter is an entity which is sometimes referred to as “hepatogenous” diabetes. […] A recently published registry study from the UK […] demonstrated that patients with diabetes were more likely to be hospitalized with a chronic liver disease than nondiabetic patients [5]. […] type 2 diabetes was associated with an increased incidence of hospitalizations with alcoholic liver disease (RR 1.38 in men, RR 1.57 in women), nonalcoholic fatty liver disease (RR 3.03 in men, RR 5.11 in women), autoimmune liver disease (RR 1.50 in men, RR 1.25 in women), hemochromatosis (RR 1.67 in men, RR 1.60 in women), and hepatocellular carcinoma (RR 3.36 in men, RR 3.55 in women) [5, 6]. Diabetes has also been shown to affect liver disease complications. Diabetes is associated with events of hepatic decompensation such as development of ascites [7], variceal bleeding [8], and hepatic encephalopathy [9]. […] Cirrhosis is an important but under-recognized cause of mortality among patients with diabetes. In a population-based study involving nearly 7,200 patients that investigated the causes of death in patients with type 2 diabetes, chronic liver disease and cirrhosis accounted for 4.4 % [14].”

“On average, 51 % of patients with type 1 diabetes mellitus and 35 % of patients with type 2 diabetes mellitus demonstrate pancreatic exocrine insufficiency (PEI) on fecal elastase testing where PEI is defined as fecal elastase less than 200 μg/g [17]. In a study of 1,000 patients with diabetes, including 697 with type 2 diabetes, 28.5 % of patients with type 1 and 19.9 % of patients with type 2 diabetes had severe PEI as defined by fecal elastase less than 100 μg/g [18]. […] However, there is a wide range of prevalence of PEI in these studies […] Given wide-ranging estimates, it is difficult to determine the true prevalence of PEI in patients with diabetes, especially as it translates to steatorrhea and maldigestion. […] Changes in gross and histological pancreatic morphology frequently accompany diabetes mellitus and may be a plausible link between diabetes and chronic pancreatitis. Pancreatic atrophy is often seen in autopsy studies of diabetes patients as well as with ultrasonography, computed tomography, and magnetic resonance imaging (MRI) [22–24]. Morphological changes of the pancreas in diabetes may be partially explained by the lack of trophic effect of insulin on acinar tissue. Residual exocrine function correlates well with residual beta-cell function in type 1 diabetes mellitus [25]. Yet, because not every patient with type 1 diabetes has pancreatic exocrine insufficiency, trophic action of insulin must not be the only factor. Indeed, as much of the close regulation of pancreatic exocrine function is carried out by neurohormonal mediators, diabetic neuropathy may also play a role in exocrine insufficiency in diabetics [26]. […] Though the true prevalence of PEI arising from diabetes is not definitively known, PEI leading to diabetes mellitus, termed type 3c diabetes (T3cDM) [27], appears to be less common and accounts for 5–10 % of diabetic populations [28]. 
A T3cDM diagnosis is made in the absence of type 1 diabetes autoimmune markers and in the setting of imaging and laboratory evidence of PEI [29]. Management of T3cDM has not been well studied, given large trials have excluded this subset of patients. […] Without dedicated clinical trials, treatment for type 3c diabetes is not standardized and commonly reflects methods used for type 2 diabetes.”

“Diabetes has been associated with an increased risk of cancer. In a Swedish population study, 24 cancer types were found to have an increased incidence among those with type 2 diabetes. Pancreatic cancer had the highest standardized incidence ratio of 2.98 (observed/expected cancer cases) compared to other cancer sites [31]. The three cell types found in the normal pancreas include acinar, ductal, and islet cells. Acinar cells comprise a majority of the organ volume (80 %), but greater than 85 % of malignant lesions arise from the ductal structures resulting in adenocarcinoma. […] According to the Surveillance, Epidemiology, and End Results (SEER) Program, pancreatic cancer is the twelfth most common cancer and the second most common gastrointestinal type behind colorectal cancer [32]. […] pancreatic cancer represents 3 % of all new cancer cases within the United States. Given the poor long-term survival rates, incidence and prevalence of pancreatic cancer are similar. […] a majority of those with pancreatic cancer present with metastatic disease (53 %) […]. Males are affected more than females, and the median age at time of diagnosis is 71. […] Meta-analyses have demonstrated an increased risk of pancreatic cancer in those with diabetes […] [However] diabetes may be a result of pancreatic cancer as opposed to pancreatic cancer being a result of diabetes. […] Risk of pancreatic cancer does not increase as the duration of diabetes increases. Given the lack of cost-effective, noninvasive, and sensitive screening tests for pancreatic cancer, population-wide screening for pancreatic cancer in those with diabetes is prohibitive.”
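The standardized incidence ratio mentioned above is simply observed cases divided by the number expected from general-population rates. A minimal sketch with hypothetical counts (chosen only so the ratio matches the quoted 2.98; the Swedish study's actual counts differ) and a standard log-scale approximate confidence interval:

```python
from math import sqrt, exp, log

def sir_with_ci(observed, expected, z=1.96):
    """Standardized incidence ratio (observed/expected) with an
    approximate 95% CI on the log scale, assuming a Poisson model
    in which SE(log SIR) ~ 1/sqrt(observed)."""
    sir = observed / expected
    se_log = 1 / sqrt(observed)
    lo = exp(log(sir) - z * se_log)
    hi = exp(log(sir) + z * se_log)
    return sir, lo, hi

# Hypothetical: 149 observed vs 50 expected pancreatic cancer cases
sir, lo, hi = sir_with_ci(149, 50)
print(f"SIR = {sir:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```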

June 23, 2018 Posted by | Books, Cancer/oncology, Diabetes, Epidemiology, Gastroenterology | Leave a comment

A few diabetes papers of interest

i. Reevaluating the Evidence for Blood Pressure Targets in Type 2 Diabetes.

“There is general consensus that treating adults with type 2 diabetes mellitus (T2DM) and hypertension to a target blood pressure (BP) of <140/90 mmHg helps prevent cardiovascular disease (CVD). Whether more intensive BP control should be routinely targeted remains a matter of debate. While the American Diabetes Association (ADA) BP guidelines recommend an individualized assessment to consider different treatment goals, the American College of Cardiology/American Heart Association BP guidelines recommend a BP target of <130/80 mmHg for most individuals with hypertension, including those with T2DM (1–3).

In large part, these discrepant recommendations reflect the divergent results of the Action to Control Cardiovascular Risk in Diabetes-BP trial (ACCORD-BP) among people with T2DM and the Systolic Blood Pressure Intervention Trial (SPRINT), which excluded people with diabetes (4,5). Both trials evaluated the effect of intensive compared with standard BP treatment targets (<120 vs. <140 mmHg systolic) on a composite CVD end point of nonfatal myocardial infarction or stroke or death from cardiovascular causes. SPRINT also included unstable angina and acute heart failure in its composite end point. While ACCORD-BP did not show a significant benefit from the intervention (hazard ratio [HR] 0.88; 95% CI 0.73–1.06), SPRINT found a significant 25% relative risk reduction on the primary end point favoring intensive therapy (0.75; 0.64–0.89).”
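The 25% relative risk reduction quoted for SPRINT is just one minus the hazard ratio, and "significant" here means the 95% CI excludes 1. A small sketch of that arithmetic using the trial figures quoted above:

```python
def relative_risk_reduction(hazard_ratio):
    """Relative risk reduction implied by a hazard ratio below 1."""
    return 1.0 - hazard_ratio

# Primary end point results as quoted: HR with 95% CI
trials = [("ACCORD-BP", 0.88, (0.73, 1.06)),
          ("SPRINT",    0.75, (0.64, 0.89))]
for name, hr, (ci_lo, ci_hi) in trials:
    rrr = relative_risk_reduction(hr)
    significant = ci_hi < 1.0  # CI excluding 1 => statistically significant
    print(f"{name}: HR {hr} -> RRR {rrr:.0%}, significant: {significant}")
```

This makes the divergence concrete: ACCORD-BP's interval (0.73–1.06) straddles 1, while SPRINT's (0.64–0.89) does not.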

“To some extent, CVD mechanisms and causes of death differ in T2DM patients compared with the general population. Microvascular disease (particularly kidney disease), accelerated vascular calcification, and diabetic cardiomyopathy are common in T2DM (13–15). Moreover, the rate of sudden cardiac arrest is markedly increased in T2DM and related, in part, to diabetes-specific factors other than ischemic heart disease (16). Hypoglycemia is a potential cause of CVD mortality that is specific to diabetes (17). In addition, polypharmacy is common and may increase CVD risk (18). Furthermore, nonvascular causes of death account for approximately 40% of the premature mortality burden experienced by T2DM patients (19). Whether these disease processes may render patients with T2DM less amenable to derive a mortality benefit from intensive BP control, however, is not known and should be the focus of future research.

In conclusion, the divergent results between ACCORD-BP and SPRINT are most readily explained by the apparent lack of benefit of intensive BP control on CVD and all-cause mortality in ACCORD-BP, rather than differences in the design, population characteristics, or interventions between the trials. This difference in effects on mortality may be attributable to differential mechanisms underlying CVD mortality in T2DM, to chance, or to both. These observations suggest that caution should be exercised extrapolating the results of SPRINT to patients with T2DM and support current ADA recommendations to individualize BP targets, targeting a BP of <140/90 mmHg in the majority of patients with T2DM and considering lower BP targets when it is anticipated that individual benefits outweigh risks.”

ii. Modelling incremental benefits on complications rates when targeting lower HbA1c levels in people with Type 2 diabetes and cardiovascular disease.

“Glucose‐lowering interventions in Type 2 diabetes mellitus have demonstrated reductions in microvascular complications and modest reductions in macrovascular complications. However, the degree to which targeting different HbA1c reductions might reduce risk is unclear. […] Participant‐level data for Trial Evaluating Cardiovascular Outcomes with Sitagliptin (TECOS) participants with established cardiovascular disease were used in a Type 2 diabetes‐specific simulation model to quantify the likely impact of different HbA1c decrements on complication rates. […] The use of the TECOS data limits our findings to people with Type 2 diabetes and established cardiovascular disease. […] Ten‐year micro‐ and macrovascular rates were estimated with HbA1c levels fixed at 86, 75, 64, 53 and 42 mmol/mol (10%, 9%, 8%, 7% and 6%) while holding other risk factors constant at their baseline levels. Cumulative relative risk reductions for each outcome were derived for each HbA1c decrement. […] Of 5717 participants studied, 72.0% were men and 74.2% White European, with a mean (sd) age of 66.2 (7.9) years, systolic blood pressure 134 (16.9) mmHg, LDL‐cholesterol 2.3 (0.9) mmol/l, HDL‐cholesterol 1.13 (0.3) mmol/l and median Type 2 diabetes duration 9.6 (5.1–15.6) years. Ten‐year cumulative relative risk reductions for modelled HbA1c values of 75, 64, 53 and 42 mmol/mol, relative to 86 mmol/mol, were 4.6%, 9.3%, 15.1% and 20.2% for myocardial infarction; 6.0%, 12.8%, 19.6% and 25.8% for stroke; 14.4%, 26.6%, 37.1% and 46.4% for diabetes‐related ulcer; 21.5%, 39.0%, 52.3% and 63.1% for amputation; and 13.6%, 25.4%, 36.0% and 44.7% for single‐eye blindness. […] We did not investigate outcomes for renal failure or chronic heart failure as previous research conducted to create the model did not find HbA1c to be a statistically significant independent risk factor for either condition, therefore no clinically meaningful differences would be expected from modelling different HbA1c levels (11).”

“For microvascular complications, the absolute median estimates tended to be lower than for macrovascular complications at the same HbA1c level, but cumulative relative risk reductions were greater. For amputation the 10‐year absolute median estimate for a modelled constant HbA1c of 86 mmol/mol (10%) was 3.8% (3.7, 3.9), with successively lower values for each modelled 1% HbA1c decrement. Compared with the 86 mmol/mol (10%) HbA1c level, median relative risk reductions for amputation were 21.5% (21.1, 21.9) at 75 mmol/mol (9%) increasing to 52.3% (52.0, 52.6) at 53 mmol/mol (7%). […] Relative risk reductions in micro‐ and macrovascular complications for each 1% HbA1c reduction were similar for each decrement. The exception was all‐cause mortality, where the relative risk reductions for 1% HbA1c decrements were greater at higher baseline HbA1c levels. These simulated outcomes differ from the Diabetes Control and Complications Trial outcome in people with Type 1 diabetes, where lowering HbA1c from higher baseline levels had a greater impact on microvascular risk reduction (18).”
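The amputation figures above can be turned into absolute 10-year risks by applying each cumulative relative risk reduction to the 3.8% baseline. A quick sketch (my own arithmetic on the quoted point estimates, not the paper's modelled table):

```python
def apply_rrr(baseline_risk, rrr):
    """Absolute risk remaining after a cumulative relative risk reduction."""
    return baseline_risk * (1.0 - rrr)

# Quoted figures: 10-year absolute amputation risk 3.8% at a modelled
# HbA1c of 86 mmol/mol (10%), with cumulative RRRs at lower levels
baseline = 3.8  # percent
for level, rrr in [("75 mmol/mol (9%)", 0.215),
                   ("64 mmol/mol (8%)", 0.390),
                   ("53 mmol/mol (7%)", 0.523)]:
    print(f"{level}: {apply_rrr(baseline, rrr):.1f}% 10-year amputation risk")
```

So the ~52% relative reduction at 53 mmol/mol corresponds to an absolute risk falling from roughly 3.8% to roughly 1.8% over ten years.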

iii. Laser photocoagulation for proliferative diabetic retinopathy (Cochrane review).

“Diabetic retinopathy is a complication of diabetes in which high blood sugar levels damage the blood vessels in the retina. Sometimes new blood vessels grow in the retina, and these can have harmful effects; this is known as proliferative diabetic retinopathy. Laser photocoagulation is an intervention that is commonly used to treat diabetic retinopathy, in which light energy is applied to the retina with the aim of stopping the growth and development of new blood vessels, and thereby preserving vision. […] The aim of laser photocoagulation is to slow down the growth of new blood vessels in the retina and thereby prevent the progression of visual loss (Ockrim 2010). Focal laser photocoagulation uses the heat of light to seal or destroy abnormal blood vessels in the retina. Individual vessels are treated with a small number of laser burns.

PRP [panretinal photocoagulation, US] aims to slow down the growth of new blood vessels in a wider area of the retina. Many hundreds of laser burns are placed on the peripheral parts of the retina to stop blood vessels from growing (RCOphth 2012). It is thought that the anatomic and functional changes that result from photocoagulation may improve the oxygen supply to the retina, and so reduce the stimulus for neovascularisation (Stefansson 2001). Again the exact mechanisms are unclear, but it is possible that the decreased area of retinal tissue leads to improved oxygenation and a reduction in the levels of vascular endothelial growth factor. A reduction in levels of vascular endothelial growth factor may be important in reducing the risk of harmful new vessels forming. […] Laser photocoagulation is a well-established common treatment for DR and there are many different potential strategies for delivery of laser treatment that are likely to have different effects. A systematic review of the evidence for laser photocoagulation will provide important information on benefits and harms to guide treatment choices. […] This is the first in a series of planned reviews on laser photocoagulation. Future reviews will compare different photocoagulation techniques.”

“We identified a large number of trials of laser photocoagulation of diabetic retinopathy (n = 83) but only five of these studies were eligible for inclusion in the review, i.e. they compared laser photocoagulation with currently available lasers to no (or deferred) treatment. Three studies were conducted in the USA, one study in the UK and one study in Japan. A total of 4786 people (9503 eyes) were included in these studies. The majority of participants in four of these trials were people with proliferative diabetic retinopathy; one trial recruited mainly people with non-proliferative retinopathy.”

“At 12 months there was little difference between eyes that received laser photocoagulation and those allocated to no treatment (or deferred treatment), in terms of loss of 15 or more letters of visual acuity (risk ratio (RR) 0.99, 95% confidence interval (CI) 0.89 to 1.11; 8926 eyes; 2 RCTs, low quality evidence). Longer term follow-up did not show a consistent pattern, but one study found a 20% reduction in risk of loss of 15 or more letters of visual acuity at five years with laser treatment. Treatment with laser reduced the risk of severe visual loss by over 50% at 12 months (RR 0.46, 95% CI 0.24 to 0.86; 9276 eyes; 4 RCTs, moderate quality evidence). There was a beneficial effect on progression of diabetic retinopathy with treated eyes experiencing a 50% reduction in risk of progression of diabetic retinopathy (RR 0.49, 95% CI 0.37 to 0.64; 8331 eyes; 4 RCTs, low quality evidence) and a similar reduction in risk of vitreous haemorrhage (RR 0.56, 95% CI 0.37 to 0.85; 224 eyes; 2 RCTs, low quality evidence).”
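Risk ratios like those quoted are computed from event counts in the two trial arms, with the confidence interval usually derived on the log scale. A minimal sketch with hypothetical counts (chosen only to reproduce the 0.46 point estimate for severe visual loss; these are not the trials' actual data, so the CI differs from the quoted one):

```python
from math import sqrt, exp, log

def risk_ratio(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Risk ratio and approximate 95% CI (log method) from arm-level counts."""
    p_tx = events_tx / n_tx
    p_ctrl = events_ctrl / n_ctrl
    rr = p_tx / p_ctrl
    # Standard error of log(RR) for independent binomial arms
    se_log = sqrt(1/events_tx - 1/n_tx + 1/events_ctrl - 1/n_ctrl)
    return rr, exp(log(rr) - z * se_log), exp(log(rr) + z * se_log)

# Hypothetical: 23/4638 severe visual loss events in treated eyes
# vs 50/4638 in untreated eyes
rr, lo, hi = risk_ratio(23, 4638, 50, 4638)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```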

“Overall there is not a large amount of evidence from RCTs on the effects of laser photocoagulation compared to no treatment or deferred treatment. The evidence is dominated by two large studies conducted in the US population (DRS 1978; ETDRS 1991). These two studies were generally judged to be at low or unclear risk of bias, with the exception of inevitable unmasking of patients due to differences between intervention and control. […] In current clinical guidelines, e.g. RCOphth 2012, PRP is recommended in high-risk PDR. The recommendation is that “as retinopathy approaches the proliferative stage, laser scatter treatment (PRP) should be increasingly considered to prevent progression to high risk PDR” based on other factors such as patients’ compliance or planned cataract surgery.

These recommendations need to be interpreted while considering the risk of visual loss associated with different levels of severity of DR, as well as the risk of progression. Since PRP reduces the risk of severe visual loss, but not moderate visual loss that is more related to diabetic maculopathy, most ophthalmologists judge that there is little benefit in treating non-proliferative DR at low risk of severe visual damage, as patients would incur the known adverse effects of PRP, which, although mild, include pain and peripheral visual field loss and transient DMO [diabetic macular oedema, US]. […] This review provides evidence that laser photocoagulation is beneficial in treating diabetic retinopathy. […] based on the baseline risk of progression of the disease, and risk of visual loss, the current approach of caution in treating non-proliferative DR with laser would appear to be justified.

By current standards the quality of the evidence is not high, however, the effects on risk of progression and risk of severe visual loss are reasonably large (50% relative risk reduction).”

iv. Immune Recognition of β-Cells: Neoepitopes as Key Players in the Loss of Tolerance.

I should probably warn beforehand that this one is rather technical. It relates reasonably closely to topics covered in the molecular biology book I recently covered here on the blog, and if I had not read that book quite recently I almost certainly would not have been able to read the paper – so the coverage below is more ‘for me’ than ‘for you’. Anyway, some quotes:

“Prior to the onset of type 1 diabetes, there is progressive loss of immune self-tolerance, evidenced by the accumulation of islet autoantibodies and emergence of autoreactive T cells. Continued autoimmune activity leads to the destruction of pancreatic β-cells and loss of insulin secretion. Studies of samples from patients with type 1 diabetes and of murine disease models have generated important insights about genetic and environmental factors that contribute to susceptibility and immune pathways that are important for pathogenesis. However, important unanswered questions remain regarding the events that surround the initial loss of tolerance and subsequent failure of regulatory mechanisms to arrest autoimmunity and preserve functional β-cells. In this Perspective, we discuss various processes that lead to the generation of neoepitopes in pancreatic β-cells, their recognition by autoreactive T cells and antibodies, and potential roles for such responses in the pathology of disease. Emerging evidence supports the relevance of neoepitopes generated through processes that are mechanistically linked with β-cell stress. Together, these observations support a paradigm in which neoepitope generation leads to the activation of pathogenic immune cells that initiate a feed-forward loop that can amplify the antigenic repertoire toward pancreatic β-cell proteins.”

“Enzymatic posttranslational processes that have been implicated in neoepitope generation include acetylation (10), citrullination (11), glycosylation (12), hydroxylation (13), methylation (either protein or DNA methylation) (14), phosphorylation (15), and transglutamination (16). Among these, citrullination and transglutamination are most clearly implicated as processes that generate neoantigens in human disease, but evidence suggests that others also play a role in neoepitope formation […] Citrulline, which is among the most studied PTMs in the context of autoimmunity, is a diagnostic biomarker of rheumatoid arthritis (RA). […] Anticitrulline antibodies are among the earliest immune responses that are diagnostic of RA and often correlate with disease severity (18). We have recently documented the biological consequences of citrulline modifications and autoimmunity that arise from pancreatic β-cell proteins in the development of T1D (19). In particular, citrullinated GAD65 and glucose-regulated protein (GRP78) elicit antibody and T-cell responses in human T1D and in NOD diabetes, respectively (20,21).”

“Carbonylation is an irreversible, iron-catalyzed oxidative modification of the side chains of lysine, arginine, threonine, or proline. Mitochondrial functions are particularly sensitive to carbonyl modification, which also has detrimental effects on other intracellular enzymatic pathways (30). A number of diseases have been linked with altered carbonylation of self-proteins, including Alzheimer and Parkinson diseases and cancer (27). There is some data to support that carbonyl PTM is a mechanism that directs unstable self-proteins into cellular degradation pathways. It is hypothesized that carbonyl PTM [post-translational modification] self-proteins that fail to be properly degraded in pancreatic β-cells are autoantigens that are targeted in T1D. Recently submitted studies have identified several carbonylated pancreatic β-cell neoantigens in human and murine models of T1D (27). Among these neoantigens are chaperone proteins that are required for the appropriate folding and secretion of insulin. These studies imply that although some PTM self-proteins may be direct targets of autoimmunity, others may alter, interrupt, or disturb downstream metabolic pathways in the β-cell. In particular, these studies indicated that upstream PTMs resulted in misfolding and/or metabolic disruption between proinsulin and insulin production, which provides one explanation for recent observations of increased proinsulin-to-insulin ratios in the progression of T1D (31).”

“Significant hypomethylation of DNA has been linked with several classic autoimmune diseases, such as SLE, multiple sclerosis, RA, Addison disease, Graves disease, and mixed connective tissue disease (36). Therefore, there is rationale to consider the possible influence of epigenetic changes on protein expression and immune recognition in T1D. Relevant to T1D, epigenetic modifications occur in pancreatic β-cells during progression of diabetes in NOD mice (37). […] Consequently, DNMTs [DNA methyltransferases] and protein arginine methyltransferases are likely to play a role in the regulation of β-cell differentiation and insulin gene expression, both of which are pathways that are altered in the presence of inflammatory cytokines. […] Eizirik et al. (38) reported that exposure of human islets to proinflammatory cytokines leads to modulation of transcript levels and increases in alternative splicing for a number of putative candidate genes for T1D. Their findings suggest a mechanism through which alternative splicing may lead to the generation of neoantigens and subsequent presentation of novel β-cell epitopes (39).”

“The phenomenon of neoepitope recognition by autoantibodies has been shown to be relevant in a variety of autoimmune diseases. For example, in RA, antibody responses directed against various citrullinated synovial proteins are remarkably disease-specific and routinely used as a diagnostic test in the clinic (18). Appearance of the first anticitrullinated protein antibodies occurs years prior to disease onset, and accumulation of additional autoantibody specificities correlates closely with the imminent onset of clinical arthritis (44). There is analogous evidence supporting a hierarchical emergence of autoantibody specificities and multiple waves of autoimmune damage in T1D (3,45). Substantial data from longitudinal studies indicate that insulin and GAD65 autoantibodies appear at the earliest time points during progression, followed by additional antibody specificities directed at IA-2 and ZnT8.”

“Multiple autoimmune diseases often cluster within families (or even within one person), implying shared etiology. Consequently, relevant insights can be gleaned from studies of more traditional autoantibody-mediated systemic autoimmune diseases, such as SLE and RA, where inter- and intramolecular epitope spreading are clearly paradigms for disease progression (47). In general, early autoimmunity is marked by restricted B- and T-cell epitopes, followed by an expanded repertoire coinciding with the onset of more significant tissue pathology […] Akin to T1D, other autoimmune syndromes tend to cluster to subcellular tissues or tissue components that share biological or biochemical properties. For example, SLE is marked by autoimmunity to nucleic acid–bearing macromolecules […] Unlike other systemic autoantibody-mediated diseases, such as RA and SLE, there is no clear evidence that T1D-related autoantibodies play a pathogenic role. Autoantibodies against citrulline-containing neoepitopes of proteoglycan are thought to trigger or intensify arthritis by forming immune complexes with this autoantigen in the joints of RA patients with anticitrullinated protein antibodies. In a similar manner, autoantibodies and immune complexes are hallmarks of tissue pathology in SLE. Therefore, it remains likely that autoantibodies or the B cells that produce them contribute to the pathogenesis of T1D.”

“In summation, the existing literature demonstrates that oxidation, citrullination, and deamidation can have a direct impact on T-cell recognition that contributes to loss of tolerance.”

“There is a general consensus that the pathogenesis of T1D is initiated when individuals who possess a high level of genetic risk (e.g., susceptible HLA, insulin VNTR, PTPN22 genotypes) are exposed to environmental factors (e.g., enteroviruses, diet, microbiome) that precipitate a loss of tolerance that manifests through the appearance of insulin and/or GAD autoantibodies. This early autoimmunity is followed by epitope spreading, increasing both the number of antigenic targets and the diversity of epitopes within these targets. These processes create a feed-forward loop of antigen release that induces increasing inflammation and increasing numbers of distinct T-cell specificities (64). The formation and recognition of neoepitopes represents one mechanism through which epitope spreading can occur. […] mechanisms related to neoepitope formation and recognition can be envisioned at multiple stages of T1D pathogenesis. At the level of genetic risk, susceptible individuals may exhibit a genetically driven impairment of their stress response, increasing the likelihood of neoepitope formation. At the level of environmental exposure, many of the insults that are thought to initiate T1D are known to cause neoepitope formation. During the window of β-cell destruction that encompasses early autoimmunity through dysglycemia and diagnosis of T1D, it remains unclear when neoepitope responses appear in relation to “classic” responses to insulin and GAD65. However, by the time of onset, neoepitope responses are clearly present and remain as part of the ongoing autoimmunity that is present during established T1D. […] The ultimate product of both direct and indirect generation of neoepitopes is an accumulation of robust and diverse autoimmune B- and T-cell responses, accelerating the pathological destruction of pancreatic islets.
Clearly, the emergence of sophisticated methods of tissue and single-cell proteomics will identify novel neoepitopes, including some that occur at or near the earliest stages of disease. A detailed mechanistic understanding of the pathways that lead to specific classes of neoepitopes will certainly suggest targets of therapeutic manipulation and intervention that would be hoped to impede the progression of disease.”

v. Diabetes technology: improving care, improving patient‐reported outcomes and preventing complications in young people with Type 1 diabetes.

“With the evolution of diabetes technology, those living with Type 1 diabetes are given a wider arsenal of tools with which to achieve glycaemic control and improve patient‐reported outcomes. Furthermore, the use of these technologies may help reduce the risk of acute complications, such as severe hypoglycaemia and diabetic ketoacidosis, as well as long‐term macro‐ and microvascular complications. […] Unfortunately, diabetes goals are often unmet and people with Type 1 diabetes too frequently experience acute and long‐term complications of this condition, in addition to often having less than ideal psychosocial outcomes. Increasing realization of the importance of patient‐reported outcomes is leading to diabetes care delivery becoming more patient‐centred. […] Optimal diabetes management requires both the medical and psychosocial needs of people with Type 1 diabetes and their caregivers to be addressed. […] The aim of this paper was to demonstrate how, by incorporating technology into diabetes care, we can increase patient‐centered care, reduce acute and chronic diabetes complications, and improve clinical outcomes and quality of life.”

[The paper’s Table 2 on page 422 of the PDF version is awesome; it includes a lot of different HbA1c estimates from various patient populations all across the world. The numbers included in the table are slightly less awesome, as most populations only achieve suboptimal metabolic control.]

“The risks of all forms of complications increase with higher HbA1c concentration, increasing diabetes duration, hypertension, presence of other microvascular complications, obesity, insulin resistance, hyperlipidaemia and smoking 6. Furthermore, the Diabetes Research in Children (DirecNet) study has shown that individuals with Type 1 diabetes have white matter differences in the brain and cognitive differences compared with individuals without Type 1 diabetes. These studies showed that the degree of structural differences in the brain was related to the degree of chronic hyperglycaemia, hypoglycaemia and glucose variability 7. […] In addition to long‐term complications, people with Type 1 diabetes are also at risk of acute complications. Severe hypoglycaemia, a hypoglycaemic event resulting in altered/loss of consciousness or seizures, is a serious complication of insulin therapy. If unnoticed and untreated, severe hypoglycaemia can result in death. […] The incidence of diabetic ketoacidosis, a life‐threatening consequence of diabetes, remains unacceptably high in children with established diabetes (Table 5). The annual incidence of ketoacidosis was 5% in the Prospective Diabetes Follow‐Up Registry (DPV) in Germany and Austria, 6.4% in the National Paediatric Diabetes Audit (NPDA), and 7.1% in the Type 1 Diabetes Exchange (T1DX) registry 10. Psychosocial factors including female gender, non‐white race, lower socio‐economic status, and elevated HbA1c all contribute to increased risk of diabetic ketoacidosis 11.”

“Depression is more common in young people with Type 1 diabetes than in young people without a chronic disease […] Depression can make it more difficult to engage in diabetes self‐management behaviours, and as a result, contributes to suboptimal glycaemic control and lower rates of self‐monitoring of blood glucose (SMBG) in young people with Type 1 diabetes 15. […] Unlike depression, diabetes distress is not a clinical diagnosis but rather emotional distress that comes from the burden of living with and managing diabetes 16. A recent systematic review found that roughly one‐third of young people with Type 1 diabetes (age 10–20 years) have some level of diabetes distress and that diabetes distress was consistently associated with higher HbA1c and worse self‐management 17. […] Eating and weight‐related comorbidities also exist for individuals with Type 1 diabetes. There is a higher incidence of obesity in individuals with Type 1 diabetes on intensive insulin therapy. […] Adolescent girls and young adult women with Type 1 diabetes are more likely to omit insulin for weight loss and have disordered eating habits 20.”

“In addition to screening for and treating depression and diabetes distress to improve overall diabetes management, it is equally important to assess quality of life as well as positive coping factors that may also influence self‐management and well‐being. For example, lower scores on the PROMIS® measure of global health, which assesses social relationships as well as physical and mental well‐being, have been linked to higher depression scores and less frequent blood glucose checks 13. Furthermore, coping strategies such as problem‐solving, emotional expression, and acceptance have been linked to lower HbA1c and enhanced quality of life 21.”

“Self‐monitoring of blood glucose via multiple finger sticks for capillary blood samples per day has been the ‘gold standard’ for glucose monitoring, but SMBG only provides glucose measurements as snapshots in time. Still, the majority of young people with Type 1 diabetes use SMBG as their main method to assess glycaemia. Data from the T1DX registry suggest that an increased frequency of SMBG is associated with lower HbA1c levels 23. The development of continuous glucose monitoring (CGM) provides more values, along with the rate and direction of glucose changes. […] With continued use, CGM has been shown to decrease the incidence of hypoglycaemia and HbA1c levels 26. […] Insulin can be administered via multiple daily injections or continuous subcutaneous insulin infusion (insulin pumps). Over the last 30 years, insulin pumps have become smaller with more features, making them a valuable alternative to multiple daily injections. Insulin pump use in various registries ranges from as low as 5.9% among paediatric patients in the New Zealand national register 28 to as high as 74% in the German/Austrian DPV in children aged <6 years (Table 2) 29. Recent data suggest that consistent use of insulin pumps can result in improved HbA1c values and decreased incidence of severe hypoglycaemia 30, 31. Insulin pumps have been associated with improved quality of life 32. The data on insulin pumps and diabetic ketoacidosis are less clear.”

“The majority of Type 1 diabetes management is carried out outside the clinical setting and in individuals’ daily lives. People with Type 1 diabetes must make complex treatment decisions multiple times daily; thus, diabetes self‐management skills are central to optimal diabetes management. Unfortunately, many people with Type 1 diabetes and their caregivers are not sufficiently familiar with the necessary diabetes self‐management skills. […] Parents are often the first who learn these skills. As children become older, they start receiving more independence over their diabetes care; however, the transition of responsibilities from caregiver to child is often unstructured and haphazard. It is important to ensure that both individuals with diabetes and their caregivers have adequate self‐management skills throughout the diabetes journey.”

“In the developed world (nations with the highest gross domestic product), 87% of the population has access to the internet and 68% report using a smartphone 39. Even in developing countries, 54% of people use the internet and 37% own smartphones 39. In many areas, smartphones are the primary source of internet access and are readily available. […] There are >1000 apps for diabetes on the Apple App Store and the Google Play store. Many of these apps have focused on nutrition, blood glucose logging, and insulin dosing. Given the prevalence of smartphones and the interest in having diabetes apps handy, there is the potential for using a smartphone to deliver education and decision support tools. […] The new psychosocial position statement from the ADA recommends routine psychosocial screening in clinic. These recommendations include screening for: 1) depressive symptoms annually, at diagnosis, or with changes in medical status; 2) anxiety and worry about hypoglycaemia, complications and other diabetes‐specific worries; 3) disordered eating and insulin omission for purposes of weight control; and 4) diabetes distress in children as young as 7 or 8 years old 16. Implementation of in‐clinic screening for depression in young people with Type 1 diabetes has already been shown to be feasible, acceptable and able to identify individuals in need of treatment who might otherwise have gone unnoticed for longer, with detrimental effects on physical health and quality of life 13, 40. These programmes typically use tablets […] to administer surveys to streamline the screening process and automatically score measures 13, 40. This automation allows psychologists and social workers to focus on care delivery rather than screening.
In addition to depression screening, automated tablet‐based screening for parental depression, distress and anxiety; problem‐solving skills; and resilience/positive coping factors can help the care team understand other psychosocial barriers to care. This approach allows the development of patient‐ and caregiver‐centred interventions to address these barriers, thereby improving clinical outcomes and reducing complication rates.”

“With the advent of electronic health records, registries and downloadable medical devices, people with Type 1 diabetes have troves of data that can be analysed to provide insights on an individual and population level. Big data analytics for diabetes are still in the early stages, but present great potential for improving diabetes care. IBM Watson Health has partnered with Medtronic to deliver personalized insights to individuals with diabetes based on device data 48. Numerous other systems […] allow people with Type 1 diabetes to access their data, share their data with the healthcare team, and share de‐identified data with the research community. Data analysis and insights such as this can form the basis for the delivery of personalized digital health coaching. For example, historical patterns can be analysed to predict activity and lead to pro‐active insulin adjustment to prevent hypoglycaemia. […] Improvements to diabetes care delivery can occur at both the population level and at the individual level using insights from big data analytics.”

vi. Route to improving Type 1 diabetes mellitus glycaemic outcomes: real‐world evidence taken from the National Diabetes Audit.

“While control of blood glucose levels reduces the risk of diabetes complications, it can be very difficult for people to achieve. There has been no significant improvement in average glycaemic control among people with Type 1 diabetes for at least the last 10 years in many European countries 6.

The National Diabetes Audit (NDA) in England and Wales has shown relatively little change in the levels of HbA1c being achieved in people with Type 1 diabetes over the last 10 years, with >70% of HbA1c results each year being >58 mmol/mol (7.5%) 7.

Data for general practices in England are published by the NDA. NHS Digital publishes annual prescribing data, including British National Formulary (BNF) codes 7, 8. Together, these data provide an opportunity to investigate whether there are systematic associations between HbA1c levels in people with Type 1 diabetes and practice‐level population characteristics, diabetes service levels and use of medication.”

“The Quality and Outcomes Framework (a payment system for general practice performance) provided a baseline list of all general practices in England for each year, the practice list size and number of people (both with Type 1 and Type 2 diabetes) on their diabetes register. General practice‐level data of participating practices were taken from the NDA 2013–2014, 2014–2015 and 2015–2016 (5455 practices in the last year). They include Type 1 diabetes population characteristics, routine review checks and the proportions of people achieving target glycaemic control and/or being at higher glycaemic risk.

Diabetes medication data for all people with diabetes were taken from the general practice prescribing in primary care data for 2013–2014, 2014–2015 and 2015–2016, including insulin and blood glucose monitoring (BGM) […] A total of 20 indicators were created that covered the epidemiological, service, medication, technological, costs and outcomes performance for each practice and year. The variance in these indicators over the 4‐year period and among general practices was also considered. […] The values of the indicators found to be in the 90th percentile were used to quantify the potential of highest performing general practices. […] In total 13 085 practice‐years of data were analysed, covering 437 000 patient‐years of management.”

“There was significant variation among the participating general practices (Fig. 3) in the proportion of people achieving the target for glycaemic control [percentage of people with HbA1c ≤58 mmol/mol (7.5%)] and in the proportion at high glycaemic risk [percentage of people with HbA1c >86 mmol/mol (10%)]. […] Our analysis showed that, at general practice level, the median target glycaemic control attainment was 30%, while the 10th percentile was 16%, and the 90th percentile was 45%. The corresponding median for the high glycaemic risk percentage was 16%, while the 10th percentile (corresponding to the best performing practices) was 6% and the 90th percentile (greatest proportion of Type 1 diabetes at high glycaemic risk) was 28%. Practices in the deciles for both lowest target glycaemic control and highest high glycaemic risk had 49% of the results in the 58–86 mmol/mol range. […] A very wide variation was found in the percentage of insulin for presumed pump use (deduced from prescriptions of fast‐acting vial insulin), with a median of 3.8% at general practice level. The 10th percentile was 0% and the 90th percentile was 255% of the median inferred pump usage.”
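The decile figures quoted above are simply percentile cut-offs computed across the ~5,455 practice-level proportions. A toy sketch of the computation, using entirely synthetic data whose parameters I picked only to roughly mimic the quoted 16%/30%/45% spread:

```python
import random
import statistics

random.seed(2)

# Synthetic practice-level proportions of people achieving the HbA1c target.
# The distribution is made up; parameters were chosen only so the summary
# statistics land near the quoted 16% / 30% / 45% figures.
n_practices = 5455
at_target = [min(max(random.gauss(0.30, 0.11), 0.0), 0.70)
             for _ in range(n_practices)]

deciles = statistics.quantiles(at_target, n=10)  # nine decile cut points
p10, p90 = deciles[0], deciles[-1]
median = statistics.median(at_target)
print(round(p10, 2), round(median, 2), round(p90, 2))  # roughly 0.16 0.30 0.44
```

The interesting point is that the spread between the 10th and 90th percentiles is the whole story here: two practice populations with the same mean can differ enormously in how many practices sit in the worst decile.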

“[O]ur findings suggest that if all practices optimized service and therapies to the levels achieved by the top decile then 16 100 (7%) more people with Type 1 diabetes would achieve the glycaemic control target of 58 mmol/mol (7.5%) and 11 500 (5%) fewer people would have HbA1c >86 mmol/mol (10%). Put another way, if the results for all practices were at the top decile level, 36% vs 29% of people with Type 1 diabetes would achieve the glycaemic control target of HbA1c ≤ 58 mmol/mol (7.5%), and as few as 10% could have HbA1c levels > 86 mmol/mol (10%) compared with 15% currently (Fig. 6). This has significant implications for the potential to improve the longer‐term outcomes of people with Type 1 diabetes, given the close link between glycaemia and complications in such individuals 5, 10, 11.”
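The counterfactual figures are easy to check by hand. Assuming an audited Type 1 diabetes population of roughly 230,000 people (which is what the quoted "16 100 (7%)" figure implies), the arithmetic works out:

```python
# Back-of-the-envelope check of the NDA counterfactuals (my own sketch, not
# from the paper). The audited population size is inferred from 16,100 = 7%.
population = 16_100 / 0.07            # ~230,000 people with Type 1 diabetes

# HbA1c <= 58 mmol/mol: 29% currently vs 36% at the top-decile level
extra_at_target = (0.36 - 0.29) * population
# HbA1c > 86 mmol/mol: 15% currently vs 10% at the top-decile level
fewer_high_risk = (0.15 - 0.10) * population

print(round(population))        # 230000
print(round(extra_at_target))   # 16100
print(round(fewer_high_risk))   # 11500
```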

“We found that the significant variation among the participating general practices (Fig. 2) in terms of the proportion of people with HbA1c ≤58 mmol/mol (7.5%) was only partially related to a lower proportion of people with HbA1c >86 mmol/mol (10%). There was only a weak relationship between the level of target glycaemia achieved and avoidance of very suboptimal glycaemia. The overall r2 value was 0.6. This suggests that there is a degree of independence between these outcomes, so that the factors associated with achieving target glycaemia at general practice level differ from those associated with avoiding at-risk glycaemia.”
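For context on the r² figure: r² is the squared Pearson correlation between the two practice-level proportions, i.e. the share of the variance in one outcome explained by a linear relationship with the other, so an r² of 0.6 leaves 40% of the variance unexplained. A minimal sketch (the practice data here are synthetic and purely illustrative):

```python
import math
import random

random.seed(1)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic practice-level data: the two proportions are negatively related,
# but with enough practice-to-practice noise that r^2 stays well below 1.
at_target = [random.uniform(0.15, 0.45) for _ in range(500)]
high_risk = [0.35 - 0.6 * t + random.gauss(0, 0.043) for t in at_target]

r_squared = pearson_r(at_target, high_risk) ** 2
print(round(r_squared, 2))  # in the rough vicinity of the paper's 0.6
```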

May 30, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Immunology, Medicine, Molecular biology, Ophthalmology, Studies

Alcohol and Aging (II)

I gave the book 3 stars on goodreads.

As is usual for publications of this nature, the book includes many chapters that cover similar topics and so the coverage can get a bit repetitive if you’re reading it from cover to cover the way I did; most of the various chapter authors obviously didn’t read the other contributions included in the book, and as each chapter is meant to stand on its own you end up with a lot of chapter introductions which cover very similar topics. If you can disregard such aspects it’s a decent book, which covers a wide variety of topics.

Below I have added some observations from some of the chapters of the book which I did not cover in my first post.

“It is widely accepted that consuming heavy amounts of alcohol and binge drinking are detrimental to the brain. Animal studies that have examined the anatomical changes that occur to the brain as a consequence of consuming alcohol indicate that heavy alcohol consumption and binge drinking lead to the death of existing neurons [10, 11] and prevent production of new neurons [12, 13]. […] While animal studies indicate that consuming even moderate amounts of alcohol is detrimental to the brain, the evidence from epidemiological studies is less clear. […] Epidemiological studies that have examined the relationship between late life alcohol consumption and cognition have frequently reported that older adults who consume light to moderate amounts of alcohol are less likely to develop dementia and have higher cognitive functioning compared to older adults who do not consume alcohol. […] In a meta-analysis of 15 prospective cohort studies, consuming light to moderate amounts of alcohol was associated with significantly lower relative risk (RR) for Alzheimer’s disease (RR=0.72, 95% CI=0.61–0.86), vascular dementia (RR=0.75, 95% CI=0.57–0.98), and any type of dementia (RR=0.74, 95% CI=0.61–0.91), but not cognitive decline (RR=0.28, 95% CI=0.03–2.83) [31]. These findings are consistent with a previous meta-analysis by Peters et al. [33] in which light to moderate alcohol consumption was associated with a decreased risk for dementia (RR=0.63, 95% CI=0.53–0.75) and Alzheimer’s disease (RR=0.57, 95% CI=0.44–0.74), but not vascular dementia (RR=0.82, 95% CI=0.50–1.35) or cognitive decline (RR=0.89, 95% CI=0.67–1.17). […] Mild cognitive impairment (MCI) has been used to describe the prodromal stage of Alzheimer’s disease […]. There is no strong evidence to suggest that consuming alcohol is protective against MCI [39, 40] and several studies have reported non-significant findings [41–43].”
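A note on reading these numbers: a relative risk whose 95% confidence interval lies entirely below 1 is a statistically significant protective association, and the width of the interval on the log scale tells you how imprecise the estimate is (the cognitive-decline estimate above is wildly imprecise). A small sketch (mine, not from the book), assuming the intervals were constructed in the standard way as exp(log RR ± 1.96·SE):

```python
import math

def log_rr_se(lower, upper):
    """Recover the standard error of log(RR) from a reported 95% CI,
    assuming the interval was built as exp(log RR +/- 1.96*SE)."""
    return (math.log(upper) - math.log(lower)) / (2 * 1.96)

def significant(lower, upper):
    """95% CI entirely below (or above) 1 => significant at the 5% level."""
    return upper < 1 or lower > 1

# Alzheimer's disease: RR=0.72 (0.61-0.86) -> protective and significant
print(significant(0.61, 0.86))           # True
# Cognitive decline: RR=0.28 (0.03-2.83) -> CI spans 1, not significant
print(significant(0.03, 2.83))           # False
# The huge interval corresponds to a huge standard error on the log scale:
print(round(log_rr_se(0.03, 2.83), 2))   # 1.16
print(round(log_rr_se(0.61, 0.86), 2))   # 0.09
```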

“The majority of research on the relationship between alcohol consumption and cognitive outcomes has focused on the amount of alcohol consumed during old age, but there is a growing body of research that has examined the relationship between alcohol consumption during middle age and cognitive outcomes several years or decades later. The evidence from this area of research is mixed with some studies not detecting a significant relationship [17, 58, 59], while others have reported that light to moderate alcohol consumption is associated with preserved cognition [60] and decreased risk for cognitive impairment [31, 61, 62]. […] Several epidemiological studies have reported that light to moderate alcohol consumption is associated with a decreased risk for stroke, diabetes, and heart disease [36, 84, 85]. Similar to the U-shaped relationship between alcohol consumption and dementia, heavy alcohol consumption has been associated with poor health [86, 87]. The decreased risk for several metabolic and vascular health conditions for alcohol consumers has been attributed to antioxidants [54], greater concentrations of high-density lipoprotein cholesterol in the bloodstream [88], and reduced blood clot formation [89]. Stroke, diabetes, heart disease, and related conditions have all been associated with lower cognitive functioning during old age [90, 91]. The reduced prevalence of metabolic and vascular health conditions among light to moderate alcohol consumers may contribute to the decreased risk for dementia and cognitive decline for older adults who consume alcohol. A limitation of the hypothesis that the reduced risk for dementia among light and moderate alcohol consumers is conferred through the reduced prevalence of adverse health conditions associated with dementia is the possibility that this relationship is confounded by reverse causality.
Alcohol consumption decreases with advancing age and adults may reduce their alcohol consumption in response to the onset of adverse health conditions […] the higher prevalence of dementia and lower cognitive functioning among abstainers may be due in part to their worse health rather than their alcohol consumption.”

“A limitation of large cohort studies is that subjects who choose not to participate or are unable to participate are often less healthy than those who do participate. Non-response bias becomes more pronounced with age because only subjects who have survived to old age and are healthy enough to participate are observed. Studies on alcohol consumption and cognition are sensitive to non-response bias because light and moderate drinkers who are not healthy enough to participate in the study will not be observed. Adults who survive to old age despite consuming very high amounts of alcohol represent an even more select segment of the general population because they may have genetic, behavioral, health, social, or other factors that protect them against the negative effects of heavy alcohol consumption. As a result, the analytic sample of epidemiological studies is more likely to be comprised of “healthy” drinkers, which biases results in favor of finding a positive effect of light to moderate alcohol consumption for cognition and health in general. […] The incidence of Alzheimer’s disease doubles every 5 years after 65 years of age [94] and nearly 40% of older adults aged 85 and over are diagnosed with Alzheimer’s disease [7]. The relatively old age of onset for most dementia cases means the observed protective effect of light to moderate alcohol consumption for dementia may be due to alcohol consumers being more likely to die or drop out of a study as a result of their alcohol consumption before they develop dementia. This bias may be especially strong for heavy alcohol consumers. Not properly accounting for death as a competing outcome has been observed to artificially increase the risk of dementia among older adults with diabetes [95] and the effect that death and other competing outcomes may have on the relationship between alcohol consumption and dementia risk is unclear.
[…] The majority of epidemiological studies that have studied the relationship between alcohol consumption and cognition treat abstainers as the reference category. This can be problematic because oftentimes the abstainer or non-drinking category includes older adults who stopped consuming alcohol because of poor health […] Not differentiating former alcohol consumers from lifelong abstainers has been found to explain some but not all of the benefit of alcohol consumption for preventing mortality from cardiovascular causes [96].”
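The survivorship and competing-risk points above can be made concrete with a toy simulation (entirely my own construction; the parameters are illustrative, not estimates from any study): give both groups the same true dementia risk, but let heavy drinkers die before dementia onset more often, and the naively observed incidence makes the heavy drinkers look protected:

```python
import random

random.seed(0)

def simulate(n=100_000, true_dementia_risk=0.30,
             p_death_first={"abstainer": 0.10, "heavy_drinker": 0.40}):
    """Toy model: both groups share the same true dementia risk, but heavy
    drinkers more often die before dementia onset. Counting dementia cases
    per person enrolled (i.e. ignoring death as a competing event) then
    deflates the heavy drinkers' apparent risk."""
    observed = {}
    for group, p_death in p_death_first.items():
        cases = 0
        for _ in range(n):
            if random.random() < p_death:
                continue  # died before dementia could be observed
            if random.random() < true_dementia_risk:
                cases += 1
        observed[group] = cases / n  # naive cumulative incidence
    return observed

rates = simulate()
print(rates)  # abstainers ~0.27, heavy drinkers ~0.18, despite equal true risk
```

Nothing about the model gives alcohol any effect on the brain at all; the apparent "protection" is produced entirely by who is left alive to be diagnosed.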

“It is common for people to engage in other behaviors while consuming alcohol. This complicates the relationship between alcohol consumption and cognition because many of the behaviors associated with alcohol consumption are positively and negatively associated with cognitive functioning. For example, alcohol consumers are more likely to smoke than non-drinkers [104] and smoking has been associated with an increased risk for dementia and cognitive decline [105]. […] The relationship between alcohol consumption and cognition may also differ between people with or without a history of mental illness. Depression reduces the volume of the hippocampus [106] and there is growing evidence that depression plays an important role in dementia. Depression during middle age is recognized as a risk factor for dementia [107], and high depressive symptoms during old age may be an early symptom of dementia [108]. Middle aged adults with depression or other mental illness who self-medicate with alcohol may be at especially high risk for dementia later in life because of synergistic effects that alcohol and depression have on the brain. […] While current evidence from epidemiological studies indicates that consuming light to moderate amounts of alcohol, in particular wine, does not negatively affect cognition and in many cases is associated with cognitive health, adults who do not consume alcohol should not be encouraged to increase their alcohol consumption until further research clarifies these relationships. Inconsistencies between studies on how alcohol consumption categories are defined make it difficult to determine the “optimal” amount of alcohol consumption to prevent dementia. It is likely that the optimal amount of alcohol varies according to a person’s gender, as well as genetic, physiological, behavioral, and health characteristics, making the issue extremely complex.”

“Falls are the leading cause of both fatal and nonfatal injuries among older adults, with one in three older adults falling each year, and 20–30% of people who fall suffer moderate to severe injuries such as lacerations, hip fractures, and head traumas. In fact, falls are the foremost cause of both fractures and traumatic brain injury (TBI) among older adults […] In 2013, 2.5 million nonfatal falls among older adults were treated in ED and more than 734,000 of these patients were hospitalized. […] Our analysis of the 2012 Nationwide Emergency Department Sample (NEDS) data set shows that fall-related injury was a presenting problem among 12% of all ED visits by those aged 65+, with significant differences among age groups: 9% among the 65–74 age group, 12% among the 75–84 age group, and 18% among the 85+ age group [4]. […] heavy alcohol use predicts fractures. For example, among those 55+ years old in a health survey in England, men who consumed more than 8 units of alcohol and women who consumed more than 6 units on their heaviest drinking day in the past week had significantly increased odds of fractures (OR=1.65, 95% CI=1.37–1.98 for men and OR=2.07, 95% CI=1.28–3.35 for women) [63]. […] The 2008–2009 Canadian Community Health Survey-Healthy Aging also showed that consumption of at least one alcoholic drink per week increased the odds of falling by 40% among those 65+ years [57].”

At first I was not much impressed by the effect sizes mentioned above, because there are surely a hundred relevant variables the researchers didn’t or couldn’t account for, but then I thought a bit more about it. An important observation here – not mentioned in the coverage, but it sprang to mind – is this: we know that sick or frail elderly people consume less alcohol than their healthier counterparts and are more likely not to consume alcohol at all, and we also know that frail or sick elderly people are more likely to suffer a fall or fracture than are people who are relatively healthy. Given both of these facts, you’d expect alcohol consumption to show a ‘protective effect’ simply due to confounding by (reverse) indication – unless the researchers were really careful about adjusting for such things, and no such adjustments are mentioned in the coverage, which makes sense as these are just raw numbers being reported. The point is that the proper null here is not ‘these groups should be expected to have the same fall rate/fracture rate’, but rather ‘people who drink alcohol should be expected to be doing better, all else equal’ – yet they are doing worse. So ‘the true effect size’ here may be larger than what you’d think.
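The confounding argument above is easy to check with a toy simulation: give frail people a higher probability of both abstaining and falling, give alcohol itself zero causal effect on falls, and a spurious ‘protective effect’ of drinking appears in the raw rates anyway. All the probabilities below are made up for illustration, not taken from any study.

```python
import random

random.seed(0)

N = 100_000
falls_drinkers = falls_abstainers = n_drinkers = n_abstainers = 0

for _ in range(N):
    frail = random.random() < 0.3            # assume 30% of the cohort is frail/sick
    # Frail people are assumed much more likely to abstain (reverse indication)...
    drinks = random.random() < (0.2 if frail else 0.6)
    # ...and much more likely to fall; alcohol itself has NO effect in this model.
    fell = random.random() < (0.25 if frail else 0.05)
    if drinks:
        n_drinkers += 1
        falls_drinkers += fell
    else:
        n_abstainers += 1
        falls_abstainers += fell

# Drinkers fall less often even though alcohol has zero causal effect here --
# the apparent 'protective effect' is pure confounding by frailty.
print(f"fall rate, drinkers:   {falls_drinkers / n_drinkers:.3f}")
print(f"fall rate, abstainers: {falls_abstainers / n_abstainers:.3f}")
```

With these assumed probabilities the drinkers’ raw fall rate comes out roughly half the abstainers’, despite alcohol doing nothing in the model.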

I’m reasonably sure things are a lot more complicated than the above makes it appear (because of those 100 relevant variables we were talking about…), but I find it interesting anyway. Two more things to note: 1. Have another look at the numbers above if they didn’t sink in the first time. This is more than 10% of emergency department visits for that age group. Falls are a really big deal. 2. Fractures in the elderly are also a potentially really big deal. Here’s a sample quote: “One-fifth of hip fracture victims will die within 6 months of the injury, and only 50% will return to their previous level of independence.” (link). In some contexts, a fall is worse news than a cancer diagnosis, and they are very common events in the elderly. This also means that even relatively small effect sizes here can translate into quite large public health effects, because baseline incidence is so high.
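The last sentence – small relative effects translating into large public health effects when baseline incidence is high – is simple arithmetic. A back-of-the-envelope sketch, with every input hypothetical and an odds ratio loosely treated as a relative risk:

```python
# Back-of-the-envelope illustration (all numbers hypothetical) of why a
# modest relative risk matters when baseline incidence is high.
older_adults = 46_000_000          # assumed size of a 65+ population
baseline_fall_rate = 1 / 3         # "one in three older adults fall each year"
exposed_fraction = 0.10            # assumed share exposed to some risk factor
relative_risk = 1.4                # e.g. the ~40% increased odds quoted above

exposed = older_adults * exposed_fraction
excess_falls = exposed * baseline_fall_rate * (relative_risk - 1)
print(f"Excess falls per year attributable to the exposure: {excess_falls:,.0f}")
```

Even with only 10% of the population exposed, the sketch gives on the order of six hundred thousand excess falls per year – which is the whole point about high baseline incidence.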

“The older adult population is a disproportionate consumer of prescription and over-the-counter medications. In a nationally representative sample of community-dwelling adults aged 57–84 years from the National Social Life, Health, and Aging Project (NSHAP) in 2005–2006, 81% used at least one prescription medication on a regular basis and 29% used at least five prescription medications. Forty-two percent used at least one nonprescription medication and concurrent use with a prescription medication was common, with 46% of prescription medication users also using OTC medications [2]. Prescription drug use by older adults in the U.S. is also growing. The percentage of older adults taking at least one prescription drug in the last 30 days increased from 73.6% in 1988–1994 to 89.7% in 2007–2010 and the percentage taking five or more prescription drugs in the last 30 days increased from 13.8% in 1988–1994 to 39.7% in 2007–2010 [3].”

“The aging process can affect the response to a medication by altering its pharmacokinetics and pharmacodynamics [9, 10]. Reduced gastrointestinal motility and gastric acidity can alter the rate or extent of drug absorption. Changes in body composition, including decreased total body water and increased body fat can alter drug distribution. For alcohol, changes in body composition result in higher blood alcohol levels in older adults compared to younger adults after the same dose or quantity of alcohol consumed. Decreased size of the liver, hepatic blood flow, and function of Phase I (oxidation, reduction, and hydrolysis) metabolic pathways result in reduced drug metabolism and increased drug exposure for drugs that undergo Phase I metabolism. Phase II hepatic metabolic pathways are generally preserved with aging. Decreased size of the kidney, renal blood flow, and glomerular filtration result in slower elimination of medications and metabolites by the kidney and increased drug exposure for medications that undergo renal elimination. Age-related impairment of homeostatic mechanisms and changes in receptor number and function can result in changes in pharmacodynamics as well. Older adults are generally more sensitive to the effects of medications and alcohol which act on the central nervous system for example. The consequences of these physiologic changes with aging are that older adults often experience increased drug exposure for the same dose (higher drug concentrations over time) and increased sensitivity to medications (greater response at a given drug concentration) than their younger counterparts.”

“Aging-related changes in physiology are not the only sources of variability in pharmacokinetics and pharmacodynamics that must be considered for an individual person. Older adults experience more chronic diseases that may decrease drug metabolism and renal elimination than younger cohorts. Frailty may result in further decline in drug metabolism, including Phase II metabolic pathways in the liver […] Drug interactions must also be considered […] A drug interaction is defined as a clinically meaningful change in the effect of one drug when coadministered with another drug [12]. Many drugs, including alcohol, have the potential for a drug interaction when administered concurrently, but whether a clinically meaningful change in effect occurs for a specific person depends on patient-specific factors including age. Drug interactions are generally classified as pharmacokinetic interactions, where one drug alters the absorption, distribution, metabolism, or elimination of another drug resulting in increased or decreased drug exposure, or pharmacodynamic interactions, where one drug alters the response to another medication through additive or antagonistic pharmacologic effects [13]. An adverse drug event occurs when a pharmacokinetic or pharmacodynamic interaction or combination of both results in changes in drug exposure or response that lead to negative clinical outcomes. The adverse drug event could be a therapeutic failure if drug exposure is decreased or the pharmacologic response is antagonistic. The adverse drug event could be drug toxicity if the drug exposure is increased or the pharmacologic response is additive or synergistic. The threshold for experiencing an adverse event is often lower in older adults due to physiologic changes with aging and medical comorbidities, increasing their risk of experiencing an adverse drug event when medications are taken concurrently.”

“A large number of potential medication–alcohol interactions have been reported in the literature. Mechanisms of these interactions range from pharmacokinetic interactions affecting either alcohol or medication exposure to pharmacodynamic interactions resulting in exaggerated response. […] Epidemiologic evidence suggests that concurrent use of alcohol and medications among older adults is common. […] In a nationally representative U.S. sample of community-dwelling older adults in the National Social Life, Health and Aging Project (NSHAP) 2005–2006, 41% of participants reported consuming alcohol at least once per week and 20% were at risk for an alcohol–medication interaction because they were using both alcohol and alcohol-interacting medications on a regular basis [17]. […] Among participants in the Pennsylvania Assistance Contract for the Elderly program (aged 65–106 years) taking at least one prescription medication, 77% were taking an alcohol-interacting medication and 19% of the alcohol-interacting medication users reported concurrent use of alcohol [18]. […] Although these studies do not document adverse outcomes associated with alcohol–medication interactions, they do document that the potential exists for many older adults. […] High prevalence of concurrent use of alcohol and alcohol-interacting medications has also been reported in Australian men (43% of sedative or anxiolytic users were daily drinkers) [19], in older adults in Finland (42% of at-risk alcohol users were also taking alcohol-interacting medications) [20], and in older Irish adults (72% of participants were exposed to alcohol-interacting medications and 60% of these reported concurrent alcohol use) [21]. Drinking and medication use patterns in older adults may differ across countries, but alcohol–medication interactions appear to be a worldwide concern.
[…] Polypharmacy in general, and psychotropic burden specifically, has been associated with an increased risk of experiencing a geriatric syndrome such as falls or delirium, in older adults [26, 27]. Based on its pharmacology, alcohol can be considered as a psychotropic drug, and alcohol use should be assessed as part of the medication regimen evaluation to support efforts to prevent or manage geriatric syndromes. […] Combining alcohol and CNS active medications can be particularly problematic […] Older adults suffering from sleep problems or pain may be at particular risk for alcohol–medication interaction-related adverse events.”

“In general, alcohol use in younger couples has been found to be highly concordant, that is, individuals in a relationship tend to engage in similar drinking behaviors [67,68]. Less is known, however, about alcohol use concordance between older couples. Graham and Braun [69] examined similarities in drinking behavior between spouses in a study of 826 community-dwelling older adults in Ontario, Canada. Results showed high concordance of drinking between spouses — whether they drank at all, how much they drank, and how frequently. […] Social learning theory suggests that alcohol use trajectories are strongly influenced by attitudes and behaviors of an individual’s social networks, particularly family and friends. When individuals engage in social activities with family and friends who approve of and engage in drinking, alcohol use and misuse are reinforced [58, 59]. Evidence shows that among older adults, participation in social activities is correlated with higher levels of alcohol consumption [34, 60]. […] Brennan and Moos [29] […] found that older adults who reported less empathy and support from friends drank more alcohol, were more depressed, and were less self-confident. More stressors involving friends were associated with more drinking problems. Similar to the findings on marital conflict […], conflict in close friendships can prompt alcohol-use problems; conversely, these relationships can suffer as a result of alcohol-related problems. […] As opposed to social network theory […], social selection theory proposes that alcohol consumption changes an individual’s social context [33]. Studies among younger adults have shown that heavier drinkers choose partners and friends who approve of heavier drinking [70] and that excessive drinking can alienate social networks. The Moos study supports the idea that social selection also has a strong influence on drinking behavior among older adults.”

Traditionally, treatment studies in addiction have excluded patients over the age of 65. This bias has left a tremendous gap in knowledge regarding treatment outcomes and an understanding of the neurobiology of addiction in older adults.

“Alcohol use causes well-established changes in sleep patterns, such as decreased sleep latency, decreased stage IV sleep, and precipitation or aggravation of sleep apnea [101]. There are also age-associated changes in sleep patterns including increased REM episodes, a decrease in REM length, a decrease in stage III and IV sleep, and increased awakenings. Age-associated changes in sleep can all be worsened by alcohol use and depression. Moeller and colleagues [102] demonstrated in younger subjects that alcohol and depression had additive effects upon sleep disturbances when they occurred together [102]. Wagman and colleagues [101] also have demonstrated that abstinent alcoholics did not sleep well because of insomnia, frequent awakenings, and REM fragmentation [101]; however, when these subjects ingested alcohol, sleep periodicity normalized and REM sleep was temporarily suppressed, suggesting that alcohol use could be used to self-medicate for sleep disturbances. A common anecdote from patients is that alcohol is used to help with sleep problems. […] The use of alcohol to self-medicate is considered maladaptive [34] and is associated with a host of negative outcomes. […] The use of alcohol to aid with sleep has been found to disrupt sleep architecture and cause sleep-related problems and daytime sleepiness [35, 36, 46]. Though alcohol is commonly used to aid with sleep initiation, it can worsen sleep-related breathing disorders and cause snoring and obstructive sleep apnea [36].”

“Epidemiologic studies have clearly demonstrated that comorbidity between alcohol use and other psychiatric symptoms is common in younger age groups. Less is known about comorbidity between alcohol use and psychiatric illness in late life [88]. […] Blow et al. [90] reviewed the diagnosis of 3,986 VA patients between ages 60 and 69 presenting for alcohol treatment [90]. The most common comorbid psychiatric disorder was an affective disorder found in 21% of the patients. […] Blazer et al. [91] studied 997 community dwelling elderly of whom only 4.5% had a history of alcohol use problems [91]; […] of these subjects, almost half had a comorbid diagnosis of depression or dysthymia. Comorbid depressive symptoms are not only common in late life but are also an important factor in the course and prognosis of psychiatric disorders. Depressed alcoholics have been shown to have a more complicated clinical course of depression with an increased risk of suicide and more social dysfunction than non-depressed alcoholics [92–96]. […] Alcohol use prior to late life has also been shown to influence treatment of late life depression. Cook and colleagues [94] found that a prior history of alcohol use problems predicted a more severe and chronic course for depression [94]. […] The effect of past heavy alcohol use is [also] highlighted in the findings from the Liverpool Longitudinal Study demonstrating a fivefold increase in psychiatric illness among elderly men who had a lifetime history of 5 or more years of heavy drinking [24]. The association between heavy alcohol consumption in earlier years and psychiatric morbidity in later life was not explained by current drinking habits. […] While Wernicke-Korsakoff’s syndrome is well described and often caused by alcohol use disorders, alcohol-related dementia may be difficult to differentiate from Alzheimer’s disease.
Clinical diagnostic criteria for alcohol-related dementia (ARD) have been proposed and now validated in at least one trial, suggesting a method for distinguishing ARD, including Wernicke-Korsakoff’s syndrome, from other types of dementia [97, 98]. […] Finlayson et al. [100] found that 49 of 216 (23%) elderly patients presenting for alcohol treatment had dementia associated with alcohol use disorders [100].”


May 24, 2018 Posted by | Books, Demographics, Epidemiology, Medicine, Neurology, Pharmacology, Psychiatry, Statistics

Alcohol and Aging

I’m currently reading this book. Below I have added some observations from the first five chapters. The book has 17 chapters in total, covering a wide variety of topics. I like the coverage so far. All the highlighted observations below were highlighted by me; they were not written in bold in the book.

“Alcohol consumption and alcohol-related deaths or problems have recently increased among older age groups in many developed countries […]. This increase in consumption, in combination with the ageing of populations worldwide, means that the absolute number of older people with alcohol problems is on the increase and a real danger exists that a “silent epidemic” may be evolving [2]. Although there is growing recognition of this public health problem, clinicians consistently under-detect alcohol problems and under-deliver behaviour change interventions to older people [8, 9] […] While older adults historically demonstrate much lower rates of alcohol use compared with younger adults [4, 5] and present to substance abuse treatment programs less frequently than their younger counterparts [6], substantial evidence suggests that at-risk alcohol use and alcohol use disorder (AUD) among older adults has been under-identified for decades [7, 8]. […] Individuals who have had alcohol-related problems over several decades and have survived into old age tend to be referred to as early onset drinkers. It is estimated that two-thirds of older drinkers fall into this category [2]. […] Late-onset drinking accounts for the remaining one-third of older people who use alcohol excessively [2]. Late-onset drinkers usually begin drinking in their 50s or 60s and tend to be of a higher socio-economic status than early onset drinkers with higher levels of education and income [2]. Stressful life events, such as bereavement or retirement, may trigger late-onset drinking […]. One study demonstrated that 70% of late-onset drinkers had experienced stressful life events, compared with 25% of early onset drinkers [17].
Those whose alcohol problems are of late onset tend to have fewer health problems and are more receptive to treatment than those with early onset problems […] Our data highlighted that losing a parent or partner was often pinpointed as an event that had prompted an escalation in alcohol use […] A recent systematic review which examined the relationship between late-life spousal bereavement and changes in routine health behaviour over 32 different studies [however] found only moderate evidence for increased alcohol consumption [41].”

“Understanding alcohol use among older adults requires a life course perspective [2] […]. Broadly speaking, to understand alcohol consumption patterns and associated risks among older adults, one must consider both biopsychosocial processes that emerge earlier in life and aging-specific processes, such as multimorbidity and retirement. […] In the population overall, older adulthood is a life stage in which overall alcohol consumption decreases, binge drinking becomes less common, and individuals give up drinking. […] data collected internationally supports the assertion that older adulthood is a period of declining drinking. […] Two forces specific to later life may be at work in decreasing levels of alcohol consumption in late life. First, the “sick-quitter” hypothesis [12, 13] suggests that changes in health during the aging process limit alcohol consumption. With declines in health, older adults decrease the quantity and frequency of their drinking leading to lower average consumption in the overall older adult population [11, 14]. Similarly, differential mortality of heavy drinkers may lead to decreases in alcohol use among cohorts of older adults; these changes in average drinking may be a function of early mortality of heavy drinkers [15]. Although alcohol use generally declines throughout the course of older adulthood, the population of older adults exhibits a great deal of variability in drinking patterns. […] longitudinal research studies have found that older men tend to consume alcohol at higher levels than women, and their consumption levels decline more slowly than women’s [6]. […] National survey data [from the UK] estimate that approximately 40–45% of older adults (65+) drank alcohol in the past year […] Numerous studies suggest that lifetime nondrinkers are more likely to be female, display greater religiosity (e.g., attend religious services), and have lower levels of education than their moderate drinking peers [20, 21]. 
[…] Older adult nondrinkers are a heterogeneous population, and as such, lifetime nondrinkers and former drinkers should be studied separately. This is especially important when considering the issue of health and drinking because the context for abstinence may be different in these two groups [23, 24].”

“[V]ersion 5 of the DSM manual abandoned separate alcohol abuse and alcohol dependence diagnoses, and combined them into a single diagnosis: alcohol use disorder (AUD). […] The NSDUH survey estimated a past-year prevalence rate of alcohol abuse or dependence of 6.1% among those aged 50–54 and 2.2% among those ages 65 and older. […] AUD is the most severe manifestation of alcohol-related pathology among older adults, but most alcohol-related harm is not a function of disordered drinking [55]. […] older adults commonly take medications that interact with alcohol. A recent study of community-dwelling older adults (aged 57+) found that 41% consumed alcohol regularly and among regular alcohol consumers, 51% used at least one alcohol interacting medication [57]. An analysis of the Irish Longitudinal Study on Ageing identified a high prevalence of alcohol use (60%) among individuals taking alcohol interacting medications [58]. Falls are also a common health concern for older adults, and there is evidence of increased risk of falls among older adults who drink more than 14 drinks per week [59] […] a study by Holahan and colleagues [44] explored longitudinal outcomes for individuals who were moderate drinkers (below the weekly at-risk threshold) but who engaged in heavy episodic drinking (exceeded day threshold). Individuals were first surveyed between the ages of 55 and 65 and followed for 20 years. Episodic heavy drinkers were twice as likely to have died in the 20-year follow-up period compared with those who were not episodic heavy drinkers […to clarify, none of the episodic heavy drinkers in that study would qualify for a diagnosis of AUD, US] […] Alcohol use in the aging population has been defined through various thresholds of risk. Each approach brings certain advantages and problems.
Using alcohol related disorders as a benchmark misses many older adults who may experience alcohol-related consequences to their health and well-being even though they do not meet criteria for disordered drinking. More conservative measures of alcohol risk may identify at-risk drinking in those for whom alcohol use may never compromise their health. […] among light to moderate drinkers, the level of risk is uncertain.”

“Among adults 65 years old and older in 2000–2001, just under half (49.6%) reported lifetime use [of tobacco] and 14% reported use in the last 12 months [30]. […] Data collected by the Centers for Disease Control in 2008 revealed that only 9% of individuals aged 65 and older reported being current smokers [42]. […] data from the 2001–2002 NESARC reveal a strong relationship between AUDs and tobacco use […] in 2012, 19.3% of adults 65 and older reported having ever used illicit drugs in their lifetime, whereas 47.6% of adults between the ages 60 and 64 reported lifetime drug use. […] In the 2005–2006 NSDUH […] 3.9% of adults aged 50–64, the bulk of the Baby Boomers at that time, reported past year marijuana use, compared to only 0.7% of those 65 years old and older [53]. Among those aged 50 and older reporting marijuana use, 49% reported using marijuana more than 30 days in the past year, with a mean of 81 days. […] The increasingly widespread, legal availability and acceptance of cannabis, for both medicinal and recreational use, may pose unique risks in an aging population. Across age groups, cannabis is known to impair short-term memory, increase one’s heart and respiratory rate, and elevate blood pressure [56]. […] For older adults, these risks may be particularly pronounced, especially for those whose cognitive or cardiovascular systems may already be compromised. […] Most researchers generally consider existing estimations of mental health and substance use disorders to be underestimations among older adults. […] Assumptions that older adults do not drink or use illicit substances should not be made.”

“Although several studies in the United States and elsewhere have shown that moderate alcohol consumption is associated with reduced risk for heart disease [16–20] and that heavy intake is associated with increased risk of CVD incidence [6, 21] and all-cause mortality in various populations […], data specific to effects of alcohol in elderly populations remain scant. The few studies available, e.g., the Cardiovascular Health Study, suggest that moderate alcohol use is beneficial and may be associated with reduced Medicare costs among individuals with CVD [25]. The benefits and risks of alcohol consumption are dose dependent with a consistent cut-point for cardiovascular benefits being 1 drink per day for women and about 2 drinks per day for men [21]. These cut-points have also been observed for associations between alcohol consumption and all-cause mortality [21, 26]. Although there are many similarities in the effects of alcohol on CVD across many populations, the magnitude and significance of the association between amount of alcohol consumed and CVD risk remain inconsistent, especially within countries, regions, age, sex, race, and other population strata […] As shown in a recent review [33], a drinking pattern characterized by moderate drinking without episodes of heavy drinking may be more beneficial for CVD protection when compared to patterns that include heavy drinking episodes. […] In addition to the amount of alcohol consumed per se, the pattern of alcohol consumption, commonly defined as the number of drinking days per week is also associated with CVD outcomes independent of the amount of alcohol consumed [18, 24, 34–37]. In general, a drinking pattern characterized by alcohol consumption on 4 or more days of the week is inversely associated with MI, stroke, and CVD risk factors”.

“The relation between moderate alcohol consumption and intermediate CVD markers was summarized in two recent reviews [6, 42]. Overall, moderate alcohol consumption is associated with improved concentrations of CVD risk markers, particularly HDL-C concentrations [18, 31, 43, 44]. Whether HDL-C resulting from moderate alcohol intake is functional and beneficial for cardioprotection remains unknown […] While moderate alcohol consumption shows no appreciable benefit on LDL-C, it is associated with significant improvement in insulin sensitivity […] Alcohol intake may also influence CVD markers through its effects on absorption and metabolism of nutrients in the body. This is critical especially in the elderly who may have deficiencies or insufficiencies of nutrients such as folate, vitamin B12, vitamin D, magnesium, and iron. Indeed, moderate alcohol consumption has been shown to improve status of nutrients associated with cardiovascular effects. For example, it improves iron absorption in humans [52, 53] and is associated with higher vitamin D levels in men [54]. […] heavy alcohol consumption [on the other hand] leads to deficiencies of magnesium [55], zinc, folate [56], and other nutrients and damages the intestinal lining and the liver impairing nutrient absorption and metabolism [57]. These effects of alcohol are likely to be worse in the elderly. […] chronic heavy drinking lowers magnesium [55], a nutrient needed for proper metabolism of vitamin D [58], implying that supplementation with vitamin D in heavy drinkers may not be as effective as intended. These effects of alcohol could also extend to prescription medications that are in common use among the elderly. […] Taken together, moderate alcohol seems to protect against cardiovascular disease across the whole life span but the data on older age groups are scanty. 
Theoretical considerations as well as emerging data on intermediate outcomes such as lipids, suggest that moderate alcohol could beneficially interact with medications such as statins to improve cardiovascular health but heavy alcohol could worsen CVD risk, especially in the elderly.”

“Alcohol is one of the main risk factors for cancer, with alcohol use attributed to up to 44% of some cancers [2, 3] and between 3.2 and 3.7% of all cancer deaths [4, 5]. Since 1988, alcohol has been classified as a carcinogen [6]. Types of cancers linked to alcohol use include cancers of the liver, pancreas, esophagus, breast, pharynx, and larynx with most convincing evidence for alcohol-related cancers of the upper aerodigestive tract, stomach, colorectum, liver, and the lungs [2, 7]. All of these cancers have a much higher incidence and mortality rate in older adults […] For alcohol-associated cancers, 66–95% of new cases appear in those 55 years of age or older [8, 9]. For alcohol-associated cancers, other than breast cancer, 75–95% of new cases occur in those 55 years of age or older [8, 10, 11]. […] Four countries with a decline in alcohol use (France, the UK, Sweden, and US) have […] demonstrated a stabilization or decline in the incidence and mortality rates for types of cancers closely associated with alcohol use [12]. […] The increased risk for cancer related to alcohol use is based on a combination of both quantity/frequency and duration of use, with those consuming alcohol for 20 or more years at increased risk [14]. […] consumption of alcohol at lower levels may also increase the risk for alcohol-related cancers. Nelson et al. reported that daily consumption of 1.5 drinks or greater accounted for 26–35% of alcohol-attributable deaths [5]. Thus, the evidence is growing that daily drinking, even at lower levels, increases the risk for developing cancer in later life with the conclusion that there may be no safe threshold level for alcohol consumption below which there is no risk for cancer [6, 16, 17].”

“The risk for developing alcohol-related cancer is increased among those who have a history of concurrent tobacco use and at-risk alcohol use […] Among individuals who have a history of smoking two or more packs of cigarettes and consuming more than four alcoholic drinks per day, the risk of head and neck cancer is increased greater than 35-fold [22]. […] At least 75% of head and neck cancer is associated with alcohol and tobacco use [9]. […] There are gender differences in alcohol attributable cancer deaths with over half (56–66%) of all alcohol-attributable cancer deaths in females resulting from breast cancer [5]. […] For women, even low-risk alcohol use (5–14.9 g/day or one standard drink of alcohol or less) increases the risk of cancer, mainly breast cancer [18]. […] Alcohol use during cancer treatment can complicate the treatment regimen and lead to poor long-term outcomes. […] Alcohol use is correlated with poor survival outcomes in oncology patients. […] Another issue for patients during cancer treatment is quality of life. Patients with alcohol consumption at higher levels […] or who screened positive for a possible AUD during cancer treatment experienced worse quality of life outcomes, including problems with pain, sleep, dyspnea, total distress, anxiety, coping, shortness of breath, diarrhea, poor emotional functioning, fatigue, and poor appetite [58, 59]. Current alcohol use has also been associated with higher pain scores and long-term use of opioids [48, 49].”

May 14, 2018 Posted by | Books, Cancer/oncology, Cardiology, Epidemiology, Medicine

Endocrinology (part 6 – neuroendocrine disorders and Paget’s disease)

I’m always uncertain as to how much content to cover when covering books like this one, and I usually cover handbooks in (relatively) less detail than other books because of the amount of work it takes to cover all topics of interest. However, I didn’t feel after writing my last post in the series that I had really finished with this book, in terms of blogging it; in fact I distinctly remember feeling a bit annoyed towards the end of writing my fifth post that I couldn’t justify covering the detailed account of Paget’s disease included in the last part of the chapter, even though all of that material was new to me and quite interesting. But these posts take some effort, and sometimes I cut them short just to at least blog something, rather than have an unpublished draft lying around.

In this post I’ll first include some belated coverage of Paget’s disease, which is from the book’s chapter 6, and then I’ll cover some of the stuff included in chapter 8 of the book, about neuroendocrine disorders. Chapter 8 deals exclusively with various types of (usually quite rare) tumours. I decided not to cover chapter 7, which is devoted to paediatric endocrinology.

“Paget’s disease is the result of greatly ↑ local bone turnover, which occurs particularly in the elderly […] The 1° abnormality in Paget’s disease is gross overactivity of the osteoclasts, resulting in greatly ↑ bone resorption. This secondarily results in ↑ osteoblastic activity. The new bone is laid down in a highly disorganized manner […] Paget’s disease can affect any bone in the skeleton […] In most patients, it affects several sites, but, in about 20% of cases, a single bone is affected (monostotic disease). Typically, the disease will start in one end of a long bone and spread along the bone at a rate of about 1cm per year. […] Paget’s disease alters the mechanical properties of the bone. Thus, pagetic bones are more likely to bend under normal physiological loads and are thus liable to fracture. […] Pagetic bones are also larger than their normal counterparts. This can lead to ↑ arthritis at adjacent joints and to pressure on nerves, leading to neurological compression syndromes and, when it occurs in the skull base, sensorineural deafness.”

“Paget’s disease is present in about 2% of the UK population over the age of 55. Its prevalence increases with age, and it is more common in ♂ than ♀. Only about 10% of affected patients will have symptomatic disease. […] Most notable feature is pain. […] The diagnosis of Paget’s disease is primarily radiological. […] An isotope bone scan is frequently helpful in assessing the extent of skeletal involvement […] Deafness is present in up to half of cases of skull base Paget’s. • Other neurological complications are rare. […] Osteogenic sarcoma [is a] very rare complication of Paget’s disease. […] Any increase of pain in a patient with Paget’s disease should arouse suspicion of sarcomatous degeneration. A more common cause, however, is resumption of activity of disease. […] Treatment with agents that decrease bone turnover reduces disease activity […] Although such treatment has been shown to help pain, there is little evidence that it benefits other consequences of Paget’s disease. In particular, the deafness of Paget’s disease does not regress after treatment […] Bisphosphonates have become the mainstay of treatment. […] Goals of treatment [are to:] • Minimize symptoms. • Prevent long-term complications. • Normalize bone turnover. • Alkaline phosphatase in normal range. • No actual evidence that treatment achieves this.”

The rest of this post will be devoted to covering topics from chapter 8:

“Neuroendocrine cells are found in many sites throughout the body. They are particularly prominent in the GI tract and pancreas and […] have the ability to synthesize, store, and release peptide hormones. […] the majority of neuroendocrine tumours occur within the gastroenteropancreatic axis. […] >50% are traditionally termed carcinoid tumours […] with the remainder largely comprising pancreatic islet cell tumours. • Carcinoid and islet cell tumours are generally slow-growing. […] There is a move towards standardizing the terminology of these tumours […] The term NEN [neuroendocrine neoplasia] includes low- and intermediate-grade neoplasia (previously referred to as carcinoid or atypical carcinoid), which are now referred to as neuroendocrine tumours (NETs), and high-grade neoplasia (neuroendocrine carcinoma, NEC). There is a confusing array of classifications of NENs, based on anatomical origin, histology, and secretory activity. • Many of these classifications are well established and widely used.”

“It is important to understand the differences between ‘differentiation’, which is the extent to which the neoplastic cells resemble their non-tumourous counterparts, and ‘grade’, which is the inherent aggressiveness of the tumour. […] Neuroendocrine carcinomas are the most aggressive NENs and can be either small or large cell type. […] NENs are diagnosed based on histological features of biopsy specimens. The presenting features of the tumours vary like any other tumour, based on their anatomical location, such as abdominal pain, intestinal obstruction. Many are incidentally discovered during endoscopy or imaging for unrelated conditions. In a database study, 49% of NENs were localized, 24% had regional metastases, and 27% had distant metastases. […] These tumours rarely manifest themselves due to their secretory effect. [This is quite different from some of the other tumours they covered elsewhere in the book – US] […] Only a third of patients with neuroendocrine tumours develop symptoms due to hormone secretion.”

“Surgery is the treatment of choice for NENs grades 1 and 2, except in the presence of widespread distant metastases and extensive local invasion. […] Somatostatin analogues (SSA) have relatively minor side effects and provide long-term symptom control. • Octreotide and lanreotide […] reduce the level of biochemical tumour markers in the majority of patients and control symptoms in around 70% of cases. […] A combination of interferon with octreotide has been shown to produce biochemical and symptomatic improvement in patients who have previously had no significant benefit from either drug alone. […] Cytotoxic chemotherapy may be considered in patients with progressive, advanced, or uncontrolled symptomatic disease.”

“Despite the changes in nomenclature of NENs […] the ‘carcinoid crisis’ [apparently also termed ‘malignant carcinoid syndrome’, US] is still an important descriptive term. It is a potentially life-threatening condition that should be prevented, where possible, and treated as an emergency. • Clinical features include hypotension, tachycardia, arrhythmias, flushing, diarrhoea, bronchospasm, and altered sensorium. […] carcinoid crisis can be triggered by manipulation of the tumours, such as during biopsy, surgery, or palpation. • These result in the release of biologically active compounds from the tumours. […] Carcinoid heart disease […] result[s] in valvular stenosis or regurgitation and eventually heart failure. This condition is seen in 40-50% of patients with carcinoid syndrome and 3-4% of patients with neuroendocrine tumours”.

“An insulinoma is a functioning neuroendocrine tumour of the pancreas that causes hypoglycemia through inappropriate secretion of insulin. • Unlike other neuroendocrine tumours of the pancreas, more than 90% of insulinomas are benign. […] annual incidence of insulinomas is of the order of 1-2 per million population. […] The treatment of choice in all but poor surgical candidates is operative removal. […] In experienced surgical hands, the mortality is less than 1%. […] Following the removal of a solitary insulinoma [>80% of cases], life expectancy is restored to normal. Malignant insulinomas, with metastases usually to the liver, have a natural history of years, rather than months, and may be controlled with medical therapy or specific antitumour therapy […] • Average 5-year survival estimated to be approximately 35% for malignant insulinomas. […] Gastrinomas are the most common functional malignant pancreatic endocrine tumours. […] The incidence of gastrinomas is 0.5-2/million population/year. […] Gastrin […] is the principal gut hormone stimulating gastric acid secretion. • The Zollinger-Ellison (ZE) syndrome is characterized by gastric acid oversecretion and manifests itself as severe peptic ulcer disease (PUD), gastro-oesophageal reflux, and diarrhoea. […] 10-year survival [in patients with gastrinomas] without liver metastases is 95%. […] Where there are diffuse metastases, […] a 10-year survival of approximately 15% [is observed].”

One of the things I was thinking about before deciding whether or not to blog this chapter was whether the (fortunately!) rare conditions encountered in it really ‘deserved’ to be covered. Unlike what is the case for, say, breast cancer or colon cancer, most people won’t know someone who’ll die from malignant insulinoma. However, although these conditions are very rare, I can’t stop myself from thinking that they’re also quite interesting, and I don’t care much about whether I know someone with a disease I’ve read about. And if you think these conditions are rare, well, for glucagonomas “The annual incidence is estimated at 1 per 20 million population”. These very rare conditions really serve as a reminder of how great our bodies are at dealing with all kinds of problems we’ve never even thought about. We don’t think about them precisely because a problem so rarely arises – but just now and then, well…

Let’s talk a little bit more about those glucagonomas:

“Glucagonomas are neuroendocrine tumours that usually arise from the α cells of the pancreas and produce the glucagonoma syndrome through the secretion of glucagon and other peptides derived from the preproglucagon gene. • The large majority of glucagonomas are malignant, but they are also very indolent tumours, and the diagnosis may be overlooked for many years. • Up to 90% of patients will have lymph node or liver metastases at the time of presentation. • They are classically associated with the rash of necrolytic migratory erythema. […] The characteristic rash […] occurs in >70% of cases […] glucose intolerance is a frequent association (>90%). • Sustained gluconeogenesis also causes amino acid deficiencies and results in protein catabolism which can be associated with unrelenting weight loss in >60% of patients. • Glucagon has a direct suppressive effect on the bone marrow, resulting in a normochromic normocytic anaemia in almost all patients. […] Surgery is the only curative option, but the potential for a complete cure may be as low as 5%.”

“In 1958, Verner and Morrison first described a syndrome consisting of refractory watery diarrhoea and hypokalaemia, associated with a neuroendocrine tumour of the pancreas. • The syndrome of watery diarrhoea, hypokalaemia and acidosis (WDHA) is due to secretion of vasoactive intestinal polypeptide (VIP). • Tumours that secrete VIP are known as VIPomas. VIPomas account for <10% of islet cell tumours and mainly occur as solitary tumours. >60% are malignant […] The most prominent symptom in most patients is profuse watery diarrhoea […] Surgery to remove the tumour is the treatment of first choice […] and may be curative in around 40% of patients. […] Somatostatin analogues produce effective symptomatic relief from the diarrhoea in most patients. Long-term use does not result in tumour regression. […] Chemotherapy […] has resulted in response rates of >30%.”

So by now we know that somatostatin analogues can provide symptom relief in a variety of contexts when you’re dealing with these conditions. But wait, what happens if you get a functional tumour of the cells that produce somatostatin? Will this mean that you just feel great all the time, or that you at least don’t have any symptoms of disease? Well, not exactly…

“Somatostatinomas are very rare neuroendocrine tumours, occurring both in the pancreas and in the duodenum. • >60% are large tumours located in the head or body of the pancreas. • The clinical syndrome may be diagnosed late in the course of disease when metastatic spread to local lymph nodes and the liver has already occurred. […] • Glucose intolerance or frank diabetes mellitus may have been observed for many years prior to the diagnosis and retrospectively often represents the first clinical sign. It is probably due to the inhibitory effect of somatostatin on insulin secretion. • A high incidence of gallstones has been described similar to that seen as a side effect with long-term somatostatin analogue therapy. • Diarrhoea, steatorrhoea, and weight loss appear to be consistent clinical features […this despite the fact that you use the hormone produced by these tumours to manage diarrhea in other endocrine tumours – it’s stuff like this which makes these rare disorders far from boring to read about! US] and may be associated with inhibition of the exocrine pancreas by somatostatin.”

May 1, 2018 Posted by | Books, Cancer/oncology, Cardiology, Diabetes, Epidemiology, Medicine, Neurology, Pharmacology |

A few diabetes papers of interest

i. Economic Costs of Diabetes in the U.S. in 2017.

“This study updates previous estimates of the economic burden of diagnosed diabetes and quantifies the increased health resource use and lost productivity associated with diabetes in 2017. […] The total estimated cost of diagnosed diabetes in 2017 is $327 billion, including $237 billion in direct medical costs and $90 billion in reduced productivity. For the cost categories analyzed, care for people with diagnosed diabetes accounts for 1 in 4 health care dollars in the U.S., and more than half of that expenditure is directly attributable to diabetes. People with diagnosed diabetes incur average medical expenditures of ∼$16,750 per year, of which ∼$9,600 is attributed to diabetes. People with diagnosed diabetes, on average, have medical expenditures ∼2.3 times higher than what expenditures would be in the absence of diabetes. Indirect costs include increased absenteeism ($3.3 billion) and reduced productivity while at work ($26.9 billion) for the employed population, reduced productivity for those not in the labor force ($2.3 billion), inability to work because of disease-related disability ($37.5 billion), and lost productivity due to 277,000 premature deaths attributed to diabetes ($19.9 billion). […] After adjusting for inflation, economic costs of diabetes increased by 26% from 2012 to 2017 due to the increased prevalence of diabetes and the increased cost per person with diabetes. The growth in diabetes prevalence and medical costs is primarily among the population aged 65 years and older, contributing to a growing economic cost to the Medicare program.”

The paper includes a lot of details about how they went about estimating these things, but I decided against including these details here – read the full paper if you’re interested. I did however want to add some additional details, so here goes:

“Absenteeism is defined as the number of work days missed due to poor health among employed individuals, and prior research finds that people with diabetes have higher rates of absenteeism than the population without diabetes. Estimates from the literature range from no statistically significant diabetes effect on absenteeism to studies reporting 1–6 extra missed work days (and odds ratios of more absences ranging from 1.5 to 3.3) (12–14). Analyzing 2014–2016 NHIS data and using a negative binomial regression to control for overdispersion in self-reported missed work days, we estimate that people with diabetes have statistically higher missed work days—ranging from 1.0 to 4.2 additional days missed per year by demographic group, or 1.7 days on average — after controlling for age-group, sex, race/ethnicity, diagnosed hypertension status (yes/no), and body weight status (normal, overweight, obese, unknown). […] Presenteeism is defined as reduced productivity while at work among employed individuals and is generally measured through worker responses to surveys. Multiple recent studies report that individuals with diabetes display higher rates of presenteeism than their peers without diabetes (12,15–17). […] We model productivity loss associated with diabetes-attributed presenteeism using the estimate (6.6%) from the 2012 study—which is toward the lower end of the 1.8–38% range reported in the literature. […] Reduced performance at work […] accounted for 30% of the indirect cost of diabetes.”

It is of note that even with a somewhat conservative estimate of presenteeism, this cost component is an order of magnitude larger than the absenteeism variable. It is worth keeping in mind that this ratio is likely to be different elsewhere; due to the way the American health care system is structured/financed – health insurance is to a significant degree linked to employment – you’d expect the estimated ratio to be different from what you might observe in countries like the UK or Denmark. Some more related numbers from the paper:
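The two claims above (presenteeism roughly an order of magnitude larger than absenteeism, and presenteeism accounting for ~30% of indirect costs) can be checked directly against the dollar figures quoted from the paper. A quick sanity check, using only the numbers already given:

```python
# Indirect-cost components (in billions of USD), as quoted from the paper.
absenteeism = 3.3
presenteeism = 26.9
indirect_total = 90.0  # the abstract's rounded total for reduced productivity

ratio = presenteeism / absenteeism
share = presenteeism / indirect_total
print(f"presenteeism / absenteeism: {ratio:.1f}x")            # ~8.2x
print(f"presenteeism share of indirect costs: {share:.0%}")   # ~30%
```

The ~8x ratio is "an order of magnitude" only loosely speaking, but the 30% share matches the paper's own statement that reduced performance at work accounted for 30% of the indirect cost.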

“Inability to work associated with diabetes is estimated using a conservative approach that focuses on unemployment related to long-term disability. Logistic regression with 2014–2016 NHIS data suggests that people aged 18–65 years with diabetes are significantly less likely to be in the workforce than people without diabetes. […] we use a conservative approach (which likely underestimates the cost associated with inability to work) to estimate the economic burden associated with reduced labor force participation. […] Study results suggest that people with diabetes have a 3.1 percentage point higher rate of being out of the workforce and receiving disability payments compared with their peers without diabetes. The diabetes effect increases with age and varies by demographic — ranging from 2.1 percentage points for non-Hispanic white males aged 60–64 years to 10.6 percentage points for non-Hispanic black females aged 55–59 years.”

“In 2017, an estimated 24.7 million people in the U.S. are diagnosed with diabetes, representing ∼7.6% of the total population (and 9.7% of the adult population). The estimated national cost of diabetes in 2017 is $327 billion, of which $237 billion (73%) represents direct health care expenditures attributed to diabetes and $90 billion (27%) represents lost productivity from work-related absenteeism, reduced productivity at work and at home, unemployment from chronic disability, and premature mortality. Particularly noteworthy is that excess costs associated with medications constitute 43% of the total direct medical burden. This includes nearly $15 billion for insulin, $15.9 billion for other antidiabetes agents, and $71.2 billion in excess use of other prescription medications attributed to higher disease prevalence associated with diabetes. […] A large portion of medical costs associated with diabetes costs is for comorbidities.”

Insulin is ~$15 billion/year, out of a total estimated cost of $327 billion. This is less than 5% of the total cost. Take note also of the $71.2 billion in excess use of other prescription medications attributed to diabetes. I know I’ve said this before, but it bears repeating: Most diabetes-related costs are not related to insulin.
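To make the point concrete, here is the arithmetic spelled out, using only figures quoted from the paper above:

```python
# All figures in billions of USD, as quoted from the 2017 cost study.
total_cost = 327.0
insulin = 15.0  # "nearly $15 billion for insulin"

insulin_share = insulin / total_cost
print(f"insulin share of total cost: {insulin_share:.1%}")  # ~4.6%, i.e. under 5%

# The five indirect-cost components quoted earlier should also sum to ~$90 billion:
indirect_components = [3.3, 26.9, 2.3, 37.5, 19.9]
print(f"indirect components sum: ${sum(indirect_components):.1f} billion")  # $89.9 billion
```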

“…of the projected 162 million hospital inpatient days in the U.S. in 2017, an estimated 40.3 million days (24.8%) are incurred by people with diabetes [who make up ~7.6% of the population – see above], of which 22.6 million days are attributed to diabetes. About one-fourth of all nursing/residential facility days are incurred by people with diabetes. About half of all physician office visits, emergency department visits, hospital outpatient visits, and medication prescriptions (excluding insulin and other antidiabetes agents) incurred by people with diabetes are attributed to their diabetes. […] The largest contributors to the cost of diabetes are higher use of prescription medications beyond antihyperglycemic medications ($71.2 billion), higher use of hospital inpatient services ($69.7 billion), medications and supplies to directly treat diabetes ($34.6 billion), and more office visits to physicians and other health providers ($30.0 billion). Approximately 61% of all health care expenditures attributed to diabetes are for health resources used by the population aged ≥65 years […] we estimate the average annual excess expenditures for the population aged <65 years and ≥65 years, respectively, at $6,675 and $13,239. Health care expenditures attributed to diabetes generally increase with age […] The population with diabetes is older and sicker than the population without diabetes, and consequently annual medical expenditures are much higher (on average) than for people without diabetes”.

“Of the estimated 24.7 million people with diagnosed diabetes, analysis of NHIS data suggests that ∼8.1 million are in the workforce. If people with diabetes participated in the labor force at rates similar to their peers without diabetes, there would be ∼2 million additional people aged 18–64 years in the workforce.”

“Comparing the 2017 estimates with those produced for 2012, the overall cost of diabetes appears to have increased by ∼25% after adjusting for inflation, reflecting an 11% increase in national prevalence of diagnosed diabetes and a 13% increase in the average annual diabetes-attributed cost per person with diabetes.”
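Note that the two growth components quoted combine multiplicatively, not additively: an 11% rise in prevalence times a 13% rise in cost per person gives roughly the stated ~25% overall increase. A one-line check:

```python
# Growth factors as quoted: 11% prevalence increase, 13% cost-per-person increase.
prevalence_growth = 1.11
cost_per_person_growth = 1.13

overall_growth = prevalence_growth * cost_per_person_growth - 1.0
print(f"implied overall cost growth: {overall_growth:.1%}")  # ~25.4%
```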

ii. Current Challenges and Opportunities in the Prevention and Management of Diabetic Foot Ulcers.

“Diabetic foot ulcers remain a major health care problem. They are common, result in considerable suffering, frequently recur, and are associated with high mortality, as well as considerable health care costs. While national and international guidance exists, the evidence base for much of routine clinical care is thin. It follows that many aspects of the structure and delivery of care are susceptible to the beliefs and opinion of individuals. It is probable that this contributes to the geographic variation in outcome that has been documented in a number of countries. This article considers these issues in depth and emphasizes the urgent need to improve the design and conduct of clinical trials in this field, as well as to undertake systematic comparison of the results of routine care in different health economies. There is strong suggestive evidence to indicate that appropriate changes in the relevant care pathways can result in a prompt improvement in clinical outcomes.”

“Despite considerable advances made over the last 25 years, diabetic foot ulcers (DFUs) continue to present a very considerable health care burden — one that is widely unappreciated. DFUs are common, the median time to healing without surgery is of the order of 12 weeks, and they are associated with a high risk of limb loss through amputation (1–4). The 5-year survival following presentation with a new DFU is of the order of only 50–60% and hence worse than that of many common cancers (4,5). While there is evidence that mortality is improving with more widespread use of cardiovascular risk reduction (6), the most recent data — derived from a Veterans Health Administration population—reported that 1-, 2-, and 5-year survival was only 81, 69, and 29%, respectively, and the association between mortality and DFU was stronger than that of any macrovascular disease (7). […] There is […] wide variation in clinical outcome within the same country (13–15), suggesting that some people are being managed considerably less well than others.”

“Data on community-wide ulcer incidence are very limited. Overall incidences of 5.8 and 6.0% have been reported in selected populations of people with diabetes in the U.S. (2,12,20) while incidences of 2.1 and 2.2% have been reported from less selected populations in Europe—either in all people with diabetes (21) or in those with type 2 disease alone (22). It is not known whether the incidence is changing […] Although a number of risk factors associated with the development of ulceration are well recognized (23), there is no consensus on which dominate, and there are currently no reports of any studies that might justify the adoption of any specific strategy for population selection in primary prevention.”

“The incidence of major amputation is used as a surrogate measure of the failure of DFUs to heal. Its main value lies in the relative ease of data capture, but its value is limited because it is essentially a treatment and not a true measure of disease outcome. In no other major disease (including malignancies, cardiovascular disease, or cerebrovascular disease) is the number of treatments used as a measure of outcome. But despite this and other limitations of major amputation as an outcome measure (36), there is evidence that the overall incidence of major amputation is falling in some countries with nationwide databases (37,38). Perhaps the most convincing data come from the U.K., where the unadjusted incidence has fallen dramatically from about 3.0–3.5 per 1,000 people with diabetes per year in the mid-1990s to 1.0 or less per 1,000 per year in both England and Scotland (14,39).”

“New ulceration after healing is high, with ∼40% of people having a new ulcer (whether at the same site or another) within 12 months (10). This is a critical aspect of diabetic foot disease—emphasizing that when an ulcer heals, foot disease must be regarded not as cured, but in remission (10). In this respect, diabetic foot disease is directly analogous to malignancy. It follows that the person whose foot disease is in remission should receive the same structured follow-up as a person who is in remission following treatment for cancer. Of all areas concerned with the management of DFUs, this long-term need for specialist surveillance is arguably the one that should command the greatest attention.”

“There is currently little evidence to justify the adoption of very many of the products and procedures currently promoted for use in clinical practice. Guidelines are required to encourage clinicians to adopt only those treatments that have been shown to be effective in robust studies and principally in RCTs. The design and conduct of such RCTs needs improved governance because many are of low standard and do not always provide the evidence that is claimed.”

Incidence numbers like the ones included above will not always give you the full picture when there are a lot of overlapping data points in the sample (due to recurrence), but sometimes that’s all you have. However, in the type 1 context we also have some additional numbers that make it easier to appreciate the scale of the problem. Here are a few additional data from a related publication I blogged some time ago (do keep in mind that estimates are likely to be lower in community samples of type 2 diabetics, even if perhaps nobody knows precisely how much lower):

“The rate of nontraumatic amputation in T1DM is high, occurring at 0.4–7.2% per year (28). By 65 years of age, the cumulative probability of lower-extremity amputation in a Swedish administrative database was 11% for women with T1DM and 20.7% for men (10). In this Swedish population, the rate of lower-extremity amputation among those with T1DM was nearly 86-fold that of the general population.” (link)

Do keep in mind that people don’t stop getting ulcers once they reach retirement age (the 11%/20.7% figures are cumulative risks to age 65, not lifetime risks – i.e. they are lower bounds on lifetime risk).
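To get a feel for how an annual incidence figure relates to a cumulative probability like the Swedish 11%/20.7%, one can use the standard conversion for a constant, independent annual risk. This is a deliberately stylized sketch (real risk varies with age and competes with mortality, so the numbers below are illustrative only, not a reconstruction of the Swedish estimates):

```python
def cumulative_risk(annual_incidence: float, years: int) -> float:
    """Probability of at least one event over `years`, assuming a constant,
    independent annual risk. Ignores competing mortality and age-varying risk."""
    return 1.0 - (1.0 - annual_incidence) ** years

# Illustrative: 0.4%/year is the low end of the quoted 0.4-7.2% amputation range.
print(f"{cumulative_risk(0.004, 50):.1%}")  # ~18.2% over 50 years
```

Even at the very low end of the quoted annual range, the cumulative risk over a few decades ends up in the same ballpark as the Swedish administrative-database figures.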

iii. Excess Mortality in Patients With Type 1 Diabetes Without Albuminuria — Separating the Contribution of Early and Late Risks.

“The current study investigated whether the risk of mortality in patients with type 1 diabetes without any signs of albuminuria is different than in the general population and matched control subjects without diabetes.”

“Despite significant improvements in management, type 1 diabetes remains associated with an increase in mortality relative to the age- and sex-matched general population (1,2). Acute complications of diabetes may initially account for this increased risk (3,4). However, with increasing duration of disease, the leading contributor to excess mortality is its vascular complications including diabetic kidney disease (DKD) and cardiovascular disease (CVD). Consequently, patients who subsequently remain free of complications may have little or no increased risk of mortality (1,2,5).”

“Mortality was evaluated in a population-based cohort of 10,737 children (aged 0–14 years) with newly diagnosed type 1 diabetes in Finland who were listed on the National Public Health Institute diabetes register, Central Drug Register, and Hospital Discharge Register in 1980–2005 […] We excluded patients with type 2 diabetes and diabetes occurring secondary to other conditions, such as steroid use, Down syndrome, and congenital malformations of the pancreas. […] FinnDiane participants who died were more likely to be male, older, have a longer duration of diabetes, and later age of diabetes onset […]. Notably, none of the conventional variables associated with complications (e.g., HbA1c, hypertension, smoking, lipid levels, or AER) were associated with all-cause mortality in this cohort of patients without albuminuria. […] The most frequent cause of death in the FinnDiane cohort was IHD [ischaemic heart disease, US] […], largely driven by events in patients with long-standing diabetes and/or previously established CVD […]. The mortality rate ratio for IHD was 4.34 (95% CI 2.49–7.57, P < 0.0001). There remained a number of deaths due to acute complications of diabetes, including ketoacidosis and hypoglycemia. This was most significant in patients with a shorter duration of diabetes but still apparent in those with long-standing diabetes[…]. Notably, deaths due to “risk-taking behavior” were lower in adults with type 1 diabetes compared with matched individuals without diabetes: mortality rate ratio was 0.42 (95% CI 0.22–0.79, P = 0.006) […] This was largely driven by the 80% reduction (95% CI 0.06–0.66) in deaths due to alcohol and drugs in males with type 1 diabetes (Table 3). No reduction was observed in female patients (rate ratio 0.90 [95% CI 0.18–4.44]), although the absolute event rate was already more than seven times lower in Finnish women than in men.”

“The chief determinant of excess mortality in patients with type 1 diabetes is its complications. In the first 10 years of type 1 diabetes, the acute complications of diabetes dominate and result in excess mortality — more than twice that observed in the age- and sex-matched general population. This early excess explains why registry studies following patients with type 1 diabetes from diagnosis have consistently reported reduced life expectancy, even in patients free of chronic complications of diabetes (6–8). By contrast, studies of chronic complications, like FinnDiane and the Pittsburgh Epidemiology of Diabetes Complications Study (1,2), have followed participants with, usually, >10 years of type 1 diabetes at baseline. In these patients, the presence or absence of chronic complications of diabetes is critical for survival. In particular, the presence and severity of albuminuria (as a marker of vascular burden) is strongly associated with mortality outcomes in type 1 diabetes (1). […] the FinnDiane normoalbuminuric patients showed increased all-cause mortality compared with the control subjects without diabetes in contrast to when the comparison was made with the Finnish general population, as in our previous publication (1). Two crucial causes behind the excess mortality were acute diabetes complications and IHD. […] Comparisons with the general population, rather than matched control subjects, may overestimate expected mortality, diluting the SMR estimate”.
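For readers wondering what an SMR actually is: it is simply observed deaths divided by the deaths expected if the cohort had the reference population's age- and sex-specific rates, which is why overestimating expected mortality "dilutes" it. A minimal sketch of the computation, with hypothetical numbers and a standard log-transform approximation for the confidence interval (the papers quoted here may well use exact Poisson methods instead):

```python
import math

def smr_with_ci(observed: int, expected: float, z: float = 1.96):
    """Standardized mortality ratio with an approximate 95% CI.
    Uses the log-transform / Poisson approximation; reasonable unless
    the observed count is very small."""
    smr = observed / expected
    se_log = 1.0 / math.sqrt(observed)  # SE of log(SMR) under Poisson counts
    return smr, smr * math.exp(-z * se_log), smr * math.exp(z * se_log)

# Hypothetical example: 30 observed deaths where 10 were expected.
smr, lo, hi = smr_with_ci(30, 10.0)
print(f"SMR = {smr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Note how inflating `expected` (e.g. by comparing against an unhealthier general population rather than matched controls) mechanically pulls the SMR toward 1, which is exactly the dilution the authors describe.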

“Despite major improvements in the delivery of diabetes care and other technological advances, acute complications remain a major cause of death both in children and in adults with type 1 diabetes. Indeed, the proportion of deaths due to acute events has not changed significantly over the last 30 years. […] Even in patients with long-standing diabetes (>20 years), the risk of death due to hypoglycemia or ketoacidosis remains a constant companion. […] If it were possible to eliminate all deaths from acute events, the observed mortality rate would have been no different from the general population in the early cohort. […] In long-term diabetes, avoiding chronic complications may be associated with mortality rates comparable with those of the general population; although death from IHD remains increased, this is offset by reduced risk-taking behavior, especially in men.”

“It is well-known that CVD is strongly associated with DKD (15). However, in the current study, mortality from IHD remained higher in adults with type 1 diabetes without albuminuria compared with matched control subjects in both men and women. This is concordant with other recent studies also reporting increased mortality from CVD in patients with type 1 diabetes in the absence of DKD (7,8) and reinforces the need for aggressive cardiovascular risk reduction even in patients without signs of microvascular disease. However, it is important to note that the risk of death from CVD, though significant, is still at least 10-fold lower than observed in patients with albuminuria (1). Alcohol- and drug-related deaths were substantially lower in patients with type 1 diabetes compared with the age-, sex-, and region-matched control subjects. […] This may reflect a selection bias […] Nonparticipation in health studies is associated with poorer health, stress, and lower socioeconomic status (17,18), which are in turn associated with increased risk of premature mortality. It can be speculated that with inclusion of patients with risk-taking behavior, the mortality rate in patients with diabetes would be even higher and, consequently, the SMR would also be significantly higher compared with the general population. Selection of patients who despite long-standing diabetes remained free of albuminuria may also have included individuals more accepting of general health messages and less prone to depression and nihilism arising from treatment failure.”

I think the selection bias problem is likely to be quite significant, as these results don’t really match what I’ve seen in the past. For example, a recent Norwegian study on young type 1 diabetics found high mortality in their sample, to a significant degree due to alcohol-related causes and suicide: “A relatively high proportion of deaths were related to alcohol. […] Death was related to alcohol in 15% of cases. SMR for alcohol-related death was 6.8 (95% CI 4.5–10.3), for cardiovascular death was 7.3 (5.4–10.0), and for violent death was 3.6 (2.3–5.3).” That doesn’t sound very similar to the study above, and that study is also from Scandinavia. In this study, which used data from diabetic organ donors, the authors found that a large proportion of the diabetics included used illegal drugs: “we observed a high rate of illicit substance abuse: 32% of donors reported or tested positive for illegal substances (excluding marijuana), and multidrug use was common.”

Do keep in mind that one of the main reasons why ‘alcohol-related’ deaths are higher in diabetes is likely to be that ‘drinking while diabetic’ is a lot more risky than ‘drinking while not diabetic’. On a related note, diabetics may not appreciate the level of risk they’re actually exposed to while drinking, due to community norms etc., so there might be a disconnect between risk preferences and observed behaviour (i.e., a diabetic might be risk averse but still engage in risky behaviours because he doesn’t know how risky the behaviours he’s engaging in actually are).

Although the illicit drugs study indicates that diabetics, at least in some samples, are not averse to engaging in risky behaviours, a note of caution is probably warranted in the alcohol context: high mortality from alcohol-mediated acute complications needn’t be an indication that diabetics drink more than non-diabetics; that’s a separate question, and you might see numbers like these even if diabetics in general drink less. And a young type 1 diabetic who suffers a cardiac arrhythmia secondary to long-standing nocturnal hypoglycemia and is subsequently found ‘dead in bed’ after a bout of drinking is conceptually very different from a 50-year-old alcoholic dying from a variceal bleed or acute pancreatitis. Parenthetically, if it is true that illicit drug use is common in type 1 diabetics, one reason might be that they are aware of the risks associated with alcohol (which is particularly nasty in terms of the metabolic/glycemic consequences in diabetes, compared to some other drugs) and thus deliberately substitute it with other drugs less likely to cause acute complications like severe hypoglycemic episodes or DKA (depending on the setting and the specifics, alcohol might be a contributor to both of these complications). If so, classical ‘risk behaviours’ may not always be ‘risk behaviours’ in diabetes. You need to be careful; this stuff’s complicated.

iv. Are All Patients With Type 1 Diabetes Destined for Dialysis if They Live Long Enough? Probably Not.

“Over the past three decades there have been numerous innovations, supported by large outcome trials that have resulted in improved blood glucose and blood pressure control, ultimately reducing cardiovascular (CV) risk and progression to nephropathy in type 1 diabetes (T1D) (1,2). The epidemiological data also support the concept that 25–30% of people with T1D will progress to end-stage renal disease (ESRD). Thus, not everyone develops progressive nephropathy that ultimately requires dialysis or transplantation. This is a result of numerous factors […] Data from two recent studies reported in this issue of Diabetes Care examine the long-term incidence of chronic kidney disease (CKD) in T1D. Costacou and Orchard (7) examined a cohort of 932 people evaluated for 50-year cumulative kidney complication risk in the Pittsburgh Epidemiology of Diabetes Complications study. They used both albuminuria levels and ESRD/transplant data for assessment. By 30 years’ duration of diabetes, ESRD affected 14.5% and by 40 years it affected 26.5% of the group with onset of T1D between 1965 and 1980. For those who developed diabetes between 1950 and 1964, the proportions developing ESRD were substantially higher at 34.6% at 30 years, 48.5% at 40 years, and 61.3% at 50 years. The authors called attention to the fact that ESRD decreased by 45% after 40 years’ duration between these two cohorts, emphasizing the beneficial roles of improved glycemic control and blood pressure control. It should also be noted that at 40 years even in the later cohort (those diagnosed between 1965 and 1980), 57.3% developed >300 mg/day albuminuria (7).”

Numbers like these may seem like ancient history (data from the 60s and 70s), but it’s important to keep in mind that many type 1 diabetics are diagnosed in early childhood, and that they don’t ‘get better’ later on – if they’re still alive, they’re still diabetic. …And very likely macroalbuminuric, at least if they’re from Pittsburgh. I was diagnosed in ’87.

“Gagnum et al. (8), using data from a Norwegian registry, also examined the incidence of CKD development over a 42-year follow-up period in people with childhood-onset (<15 years of age) T1D (8). The data from the Norwegian registry noted that the cumulative incidence of ESRD was 0.7% after 20 years and 5.3% after 40 years of T1D. Moreover, the authors noted the risk of developing ESRD was lower in women than in men and did not identify any difference in risk of ESRD between those diagnosed with diabetes in 1973–1982 and those diagnosed in 1989–2012. They concluded that there is a very low incidence of ESRD among patients with childhood-onset T1D diabetes in Norway, with a lower risk in women than men and among those diagnosed at a younger age. […] Analyses of population-based studies, similar to the Pittsburgh and Norway studies, showed that after 30 years of T1D the cumulative incidences of ESRD were only 10% for those diagnosed with T1D in 1961–1984 and 3% for those diagnosed in 1985–1999 in Japan (11), 3.3% for those diagnosed with T1D in 1977–2007 in Sweden (12), and 7.8% for those diagnosed with T1D in 1965–1999 in Finland (13) (Table 1).”

Do note that ESRD (end stage renal disease) is not the same thing as DKD (diabetic kidney disease), and that e.g. many of the Norwegians who did not develop ESRD nevertheless likely have kidney complications from their diabetes. That 5.3% is not the number of diabetics in that cohort who developed diabetes-related kidney complications; it’s the proportion who did and, as a result, needed dialysis or a kidney transplant in order not to die very soon. Do also keep in mind that both microalbuminuria and macroalbuminuria substantially increase the risk of cardiovascular disease and cardiac death. I recall a study where they looked at the various endpoints and found that more diabetics with microalbuminuria eventually died of cardiovascular disease than ever developed kidney failure – cardiac risk goes up a lot long before end-stage renal disease. ESRD estimates don’t account for the full risk profile, and even if you look only at mortality risk, the number probably accounts for less than half of the total risk attributable to DKD. One thing the ESRD diagnosis does have going for it is that it’s a much more reliable indicator of significant pathology than is e.g. microalbuminuria (see e.g. this paper). The paper is short and not at all detailed, but they do briefly discuss these issues:

“…there is a substantive difference between the numbers of people with stage 3 CKD (estimated glomerular filtration rate [eGFR] 30–59 mL/min/1.73 m2) versus those with stages 4 and 5 CKD (eGFR <30 mL/min/1.73 m2): 6.7% of the National Health and Nutrition Examination Survey (NHANES) population compared with 0.1–0.3%, respectively (14). This is primarily because of competing risks, such as death from CV disease that occurs in stage 3 CKD; hence, only the survivors are progressing into stages 4 and 5 CKD. Overall, these studies are very encouraging. Since the 1980s, risk of ESRD has been greatly reduced, while risk of CKD progression persists but at a slower rate. This reduced ESRD rate and slowed CKD progression is largely due to improvements in glycemic and blood pressure control and probably also to the institution of RAAS blockers in more advanced CKD. These data portend even better future outcomes if treatment guidance is followed. […] many medications are effective in blood pressure control, but RAAS blockade should always be a part of any regimen when very high albuminuria is present.”

v. New Understanding of β-Cell Heterogeneity and In Situ Islet Function.

“Insulin-secreting β-cells are heterogeneous in their regulation of hormone release. While long known, recent technological advances and new markers have allowed the identification of novel subpopulations, improving our understanding of the molecular basis for heterogeneity. This includes specific subpopulations with distinct functional characteristics, developmental programs, abilities to proliferate in response to metabolic or developmental cues, and resistance to immune-mediated damage. Importantly, these subpopulations change in disease or aging, including in human disease. […] We will discuss recent findings revealing functional β-cell subpopulations in the intact islet, the underlying basis for these identified subpopulations, and how these subpopulations may influence in situ islet function.”

I won’t cover this one in much detail, but this part was interesting:

“Gap junction (GJ) channels electrically couple β-cells within mouse and human islets (25), serving two main functions. First, GJ channels coordinate oscillatory dynamics in electrical activity and Ca2+ under elevated glucose or GLP-1, allowing pulsatile insulin secretion (26,27). Second, GJ channels lower spontaneous elevations in Ca2+ under low glucose levels (28). GJ coupling is also heterogeneous within the islet (29), leading to some β-cells being highly coupled and others showing negligible coupling. Several studies have examined how electrically heterogeneous cells interact via GJ channels […] This series of experiments indicate a “bistability” in islet function, where a threshold number of poorly responsive β-cells is sufficient to totally suppress islet function. Notably, when islets lacking GJ channels are treated with low levels of the KATP activator diazoxide or the GCK inhibitor mannoheptulose, a subpopulation of cells are silenced, presumably corresponding to the less functional population (30). Only diazoxide/mannoheptulose concentrations capable of silencing >40% of these cells will fully suppress Ca2+ elevations in normal islets. […] this indicates that a threshold number of poorly responsive cells can inhibit the whole islet. Thus, if there exists a threshold number of functionally competent β-cells (∼60–85%), then the islet will show coordinated elevations in Ca2+ and insulin secretion.

Below this threshold number, the islet will lack Ca2+ elevation and insulin secretion (Fig. 2). The precise threshold depends on the characteristics of the excitable and inexcitable populations: small numbers of inexcitable cells will increase the number of functionally competent cells required for islet activity, whereas small numbers of highly excitable cells will do the opposite. However, if GJ coupling is lowered, then inexcitable cells will exert a reduced suppression, also decreasing the threshold required. […] Paracrine communication between β-cells and other endocrine cells is also important for regulating insulin secretion. […] Little is known how these paracrine and juxtacrine mechanisms impact heterogeneous cells.”

vi. Closing in on the Mechanisms of Pulsatile Insulin Secretion.

“Insulin secretion from pancreatic islet β-cells occurs in a pulsatile fashion, with a typical period of ∼5 min. The basis of this pulsatility in mouse islets has been investigated for more than four decades, and the various theories have been described as either qualitative or mathematical models. In many cases the models differ in their mechanisms for rhythmogenesis, as well as other less important details. In this Perspective, we describe two main classes of models: those in which oscillations in the intracellular Ca2+ concentration drive oscillations in metabolism, and those in which intrinsic metabolic oscillations drive oscillations in Ca2+ concentration and electrical activity. We then discuss nine canonical experimental findings that provide key insights into the mechanism of islet oscillations and list the models that can account for each finding. Finally, we describe a new model that integrates features from multiple earlier models and is thus called the Integrated Oscillator Model. In this model, intracellular Ca2+ acts on the glycolytic pathway in the generation of oscillations, and it is thus a hybrid of the two main classes of models. It alone among models proposed to date can explain all nine key experimental findings, and it serves as a good starting point for future studies of pulsatile insulin secretion from human islets.”

This one covers material closely related to the study above, so if you find one of these papers interesting you might want to check out the other one as well. The paper is quite technical, but if you were wondering why people are interested in this kind of stuff, one reason is that there’s good evidence at this point that insulin pulsatility is disturbed in type 2 diabetics, and it’d be nice to know why so that new drugs can be developed to correct this.

April 25, 2018 Posted by | Biology, Cardiology, Diabetes, Epidemiology, Health Economics, Medicine, Nephrology, Pharmacology, Studies | Leave a comment

Medical Statistics (I)

I was more than a little critical of the book in my review on goodreads, and the review is sufficiently detailed that I thought it would be worth including it in this post. Here’s what I wrote on goodreads (slightly edited to take full advantage of the better editing options on wordpress):

“The coverage is excessively focused on significance testing. The book also provides very poor coverage of model selection topics, where the authors not once but repeatedly recommend employing statistically invalid approaches to model selection (the authors recommend using hypothesis testing mechanisms to guide model selection, as well as using adjusted R-squared for model selection decisions – both of which are frankly awful ideas, for reasons which are obvious to people familiar with the field of model selection. “Generally, hypothesis testing is a very poor basis for model selection […] There is no statistical theory that supports the notion that hypothesis testing with a fixed α level is a basis for model selection.” “While adjusted R2 is useful as a descriptive statistic, it is not useful in model selection” – quotes taken directly from Burnham & Anderson’s book Model Selection and Multi-Model Inference: A Practical Information-Theoretic Approach).

The authors do not at any point in the coverage even mention the option of using statistical information criteria to guide model selection decisions, and frankly repeatedly recommend doing things which are known to be deeply problematic. The authors also cover material from Borenstein and Hedges’ meta-analysis text in the book, yet still somehow manage to give poor advice in the context of meta-analysis along similar lines (implicitly advising people to base the choice between fixed effects and random effects models on the results of heterogeneity tests, despite this approach being criticized as problematic in the aforementioned text).

Basic and not terrible, but there are quite a few problems with this text.”

I’ll add a few more details about the above-mentioned problems before moving on to the main coverage. As for the model selection topic I refer specifically to my coverage of Burnham and Anderson’s book here and here – these guys spent a lot of pages talking about why you shouldn’t do what the authors of this book recommend, and I’m sort of flabbergasted medical statisticians don’t know this kind of stuff by now. To people who’ve read both these books, it’s not really in question who’s in the right here.
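To make the contrast concrete, here’s a minimal sketch (in Python, using made-up simulated data) of the kind of information-criterion-based model selection Burnham and Anderson advocate: fit the candidate models, rank them by AIC, and let the criterion – not a chain of significance tests or adjusted R-squared values – decide. Everything below (the data, the model names) is purely illustrative.

```python
import math
import random

random.seed(42)

# Hypothetical simulated data: y depends on x1; x2 is irrelevant noise.
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * a + random.gauss(0, 1) for a in x1]

def rss_univariate(x, y):
    """Residual sum of squares for y = a + b*x, fitted by least squares."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    b = sxy / sxx
    a0 = my - b * mx
    return sum((yi - (a0 + b * xi)) ** 2 for xi, yi in zip(x, y))

def aic(rss, n, k):
    """Gaussian-likelihood AIC up to an additive constant; k = no. of parameters."""
    return n * math.log(rss / n) + 2 * k

rss0 = sum((yi - sum(y) / n) ** 2 for yi in y)  # intercept-only model
candidates = {
    "intercept only": aic(rss0, n, 2),
    "y ~ x1": aic(rss_univariate(x1, y), n, 3),
    "y ~ x2": aic(rss_univariate(x2, y), n, 3),
}
best = min(candidates, key=candidates.get)
print(best)  # the model containing the real predictor wins
```

The point is not the arithmetic but the workflow: all candidate models are compared on the same footing, and there is no fixed-α accept/reject step anywhere in the process.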

I believe part of the reason why I was very annoyed at the authors at times was that they seem to promote exactly the sort of blind, unthinking hypothesis-testing approach that is unfortunately very common – the entire book is saturated with hypothesis testing stuff, which means that many other topics are woefully insufficiently covered. The meta-analysis example is probably quite illustrative; the authors spend multiple pages on study heterogeneity and how to deal with it, but the entire coverage there is centered around the discussion of a most-likely underpowered test, the result of which should perhaps, in the best case scenario, direct the researcher’s attention to topics he should have been thinking carefully about from the very start of his data analysis. You don’t need to quote many words from Borenstein and Hedges (here’s a relevant link) to get to the heart of the matter here:

“It makes sense to use the fixed-effect model if two conditions are met. First, we believe that all the studies included in the analysis are functionally identical. Second, our goal is to compute the common effect size for the identified population, and not to generalize to other populations. […] this situation is relatively rare. […] By contrast, when the researcher is accumulating data from a series of studies that had been performed by researchers operating independently, it would be unlikely that all the studies were functionally equivalent. Typically, the subjects or interventions in these studies would have differed in ways that would have impacted on the results, and therefore we should not assume a common effect size. Therefore, in these cases the random-effects model is more easily justified than the fixed-effect model.

A report should state the computational model used in the analysis and explain why this model was selected. A common mistake is to use the fixed-effect model on the basis that there is no evidence of heterogeneity. As [already] explained […], the decision to use one model or the other should depend on the nature of the studies, and not on the significance of this test [because the test will often have low power anyway].”

Yet these guys spend their efforts here talking about a test that is unlikely to yield useful information and which if anything probably distracts the reader from the main issues at hand: Are the studies functionally equivalent? Do we assume there’s one (‘true’) effect size, or many? What do those coefficients we’re calculating actually mean? The authors do in fact include a lot of cautionary notes about how to interpret the test, but in my view all this means is that they’re devoting critical pages to peripheral issues – and perhaps even reinforcing the view that the test is important, or why else would they spend so much effort on it? – rather than promoting good thinking about the key questions.
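For the curious, the two computational models Borenstein and Hedges discuss are easy to sketch. Below a small pure-Python illustration with made-up effect sizes and variances, using the standard inverse-variance (fixed-effect) estimator and the DerSimonian–Laird random-effects estimator; note that the choice between them is a modelling decision, not something the heterogeneity statistic Q should settle on its own.

```python
import math

# Hypothetical effect sizes (e.g. log odds ratios) and within-study
# variances from five studies -- made-up numbers for illustration only.
effects = [0.10, 0.30, 0.35, 0.60, 0.90]
variances = [0.04, 0.02, 0.05, 0.03, 0.06]

def fixed_effect(es, v):
    """Inverse-variance weighted pooled estimate (fixed-effect model)."""
    w = [1 / vi for vi in v]
    est = sum(wi * ei for wi, ei in zip(w, es)) / sum(w)
    return est, 1 / sum(w)  # pooled estimate and its variance

def dersimonian_laird(es, v):
    """Random-effects pooling: add the DL estimate of tau^2 to each variance."""
    w = [1 / vi for vi in v]
    est_f, _ = fixed_effect(es, v)
    q = sum(wi * (ei - est_f) ** 2 for wi, ei in zip(w, es))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(es) - 1)) / c)  # between-study variance
    w_star = [1 / (vi + tau2) for vi in v]
    est = sum(wi * ei for wi, ei in zip(w_star, es)) / sum(w_star)
    return est, 1 / sum(w_star)

fe, fe_var = fixed_effect(effects, variances)
re, re_var = dersimonian_laird(effects, variances)
print(f"fixed: {fe:.3f} (var {fe_var:.4f})  random: {re:.3f} (var {re_var:.4f})")
# The random-effects variance is never smaller than the fixed-effect one:
# wider, more honest intervals when studies are not functionally identical.
```

With heterogeneous inputs like these, the random-effects estimate comes with a visibly larger variance, which is exactly the behaviour Borenstein and Hedges argue should be the default when the studies were performed by independent researchers.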

Anyway, enough of the critical comments. Below a few links related to the first chapter of the book, as well as some quotes.

Declaration of Helsinki.
Randomized controlled trial.
Minimization (clinical trials).
Blocking (statistics).
Informed consent.
Blinding (RCTs). (…related xkcd link).
Parallel study. Crossover trial.
Zelen’s design.
Superiority, equivalence, and non-inferiority trials.
Intention-to-treat concept: A review.
Case-control study. Cohort study. Nested case-control study. Cross-sectional study.
Bradford Hill criteria.
Research protocol.
Type 1 and type 2 errors.
Clinical audit. A few quotes on this topic:

“‘Clinical audit’ is a quality improvement process that seeks to improve the patient care and outcomes through systematic review of care against explicit criteria and the implementation of change. Aspects of the structures, processes and outcomes of care are selected and systematically evaluated against explicit criteria. […] The aim of audit is to monitor clinical practice against agreed best practice standards and to remedy problems. […] the choice of topic is guided by indications of areas where improvement is needed […] Possible topics [include] *Areas where a problem has been identified […] *High volume practice […] *High risk practice […] *High cost […] *Areas of clinical practice where guidelines or firm evidence exists […] The organization carrying out the audit should have the ability to make changes based on their findings. […] In general, the same methods of statistical analysis are used for audit as for research […] The main difference between audit and research is in the aim of the study. A clinical research study aims to determine what practice is best, whereas an audit checks to see that best practice is being followed.”

A few more quotes from the end of the chapter:

“In clinical medicine and in medical research it is fairly common to categorize a biological measure into two groups, either to aid diagnosis or to classify an outcome. […] It is often useful to categorize a measurement in this way to guide decision-making, and/or to summarize the data but doing this leads to a loss of information which in turn has statistical consequences. […] If a continuous variable is used for analysis in a research study, a substantially smaller sample size will be needed than if the same variable is categorized into two groups […] *Categorization of a continuous variable into two groups loses much data and should be avoided whenever possible *Categorization of a continuous variable into several groups is less problematic”
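The statistical cost mentioned in the quote – that categorizing a continuous variable means you need a substantially larger sample – is easy to demonstrate with a toy simulation (pure Python, made-up data): a median split of a normally distributed predictor attenuates its observed correlation with the outcome by a factor of roughly sqrt(2/π) ≈ 0.80.

```python
import random
import statistics

random.seed(1)

def corr(xs, ys):
    """Pearson correlation, plain Python."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (len(xs) * sx * sy)

# Simulated data with true correlation 0.5 between x and y.
n = 20000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.5 * a + random.gauss(0, 0.75 ** 0.5) for a in x]

med = statistics.median(x)
x_split = [1.0 if a > med else 0.0 for a in x]  # median split: 'two groups'

r_full = corr(x, y)
r_split = corr(x_split, y)
print(f"continuous: r = {r_full:.3f}   dichotomized: r = {r_split:.3f}")
# The median split shrinks the observed correlation by roughly 20%,
# i.e. information is thrown away before the analysis even starts.
```

Since the required sample size for detecting a correlation scales roughly with 1/r², that ~0.80 attenuation translates into needing on the order of 1.5 times as many subjects for the same power – which is the ‘statistical consequences’ the quote alludes to.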

“Research studies require certain specific data which must be collected to fulfil the aims of the study, such as the primary and secondary outcomes and main factors related to them. Beyond these data there are often other data that could be collected and it is important to weigh the costs and consequences of not collecting data that will be needed later against the disadvantages of collecting too much data. […] collecting too much data is likely to add to the time and cost to data collection and processing, and may threaten the completeness and/or quality of all of the data so that key data items are threatened. For example if a questionnaire is overly long, respondents may leave some questions out or may refuse to fill it out at all.”

Stratified samples are used when fixed numbers are needed from particular sections or strata of the population in order to achieve balance across certain important factors. For example a study designed to estimate the prevalence of diabetes in different ethnic groups may choose a random sample with equal numbers of subjects in each ethnic group to provide a set of estimates with equal precision for each group. If a simple random sample is used rather than a stratified sample, then estimates for minority ethnic groups may be based on small numbers and have poor precision. […] Cluster samples may be chosen where individuals fall naturally into groups or clusters. For example, patients on a hospital wards or patients in a GP practice. If a sample is needed of these patients, it may be easier to list the clusters and then to choose a random sample of clusters, rather than to choose a random sample of the whole population. […] Cluster sampling is less efficient statistically than simple random sampling […] the ICC summarizes the extent of the ‘clustering effect’. When individuals in the same cluster are much more alike than individuals in different clusters with respect to an outcome, then the clustering effect is greater and the impact on the required sample size is correspondingly greater. In practice there can be a substantial effect on the sample size even when the ICC is quite small. […] As well as considering how representative a sample is, it is important […] to consider the size of the sample. A sample may be unbiased and therefore representative, but too small to give reliable estimates. […] Prevalence estimates from small samples will be imprecise and therefore may be misleading. […] The greater the variability of a measure, the greater the number of subjects needed in the sample to estimate it precisely. […] the power of a study is the ability of the study to detect a difference if one exists.”
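The ‘clustering effect’ mentioned above is usually quantified via the design effect, which for equal-sized clusters takes a particularly simple form: deff = 1 + (m − 1) × ICC, where m is the cluster size. A small illustrative sketch (hypothetical numbers) showing how even a small ICC inflates the required sample size once clusters are large:

```python
def design_effect(cluster_size, icc):
    """Design effect for equal-sized clusters: deff = 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

# A cluster sample needs deff times as many subjects as a simple
# random sample to achieve the same precision.
for icc in (0.01, 0.05):
    for m in (10, 50):
        print(f"ICC={icc}, cluster size={m}: "
              f"{design_effect(m, icc):.2f}x the simple-random-sample size")
```

So an ICC of just 0.05 with 50 subjects per cluster more than triples the required sample size, which is the ‘substantial effect … even when the ICC is quite small’ point from the quote.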

April 9, 2018 Posted by | Books, Epidemiology, Medicine, Statistics | Leave a comment


I actually think this was a really nice book, considering the format – I gave it four stars on goodreads. One of the things I noticed people didn’t like about it in the reviews is that it ‘jumps’ a bit in terms of topic coverage; it covers a wide variety of applications and analytical settings. I mostly don’t consider this a weakness of the book – even if it occasionally does get a bit excessive – and I can definitely understand the authors’ choice of approach; it’s sort of hard to illustrate the potential of the analytical techniques described within this book if you’re not allowed to talk about all the areas in which they have been – or could be gainfully – applied. A related point is that many people who read the book might be familiar with the application of these tools in specific contexts but have perhaps not thought about the fact that similar methods are applied in many other areas (and any of them might be a bit annoyed that the authors don’t talk more about computer science applications, or foodweb analyses, or infectious disease applications, or perhaps sociometry…). Most of the book is about graph-theory-related stuff, but a very decent amount of the coverage deals with applications, in a broad sense of the word at least, not theory. The discussion of theoretical constructs in the book always felt to me driven to a large degree by their usefulness in specific contexts.

I have covered related topics before here on the blog, also quite recently – e.g. there’s at least some overlap between this book and Holland’s book about complexity theory in the same series (I incidentally think these books probably go well together) – and as I found the book slightly difficult to blog as it was, I decided against covering it in as much detail as I sometimes do when covering these texts; this means that I decided to leave out the links I usually include in posts like these.

Below some quotes from the book.

“The network approach focuses all the attention on the global structure of the interactions within a system. The detailed properties of each element on its own are simply ignored. Consequently, systems as different as a computer network, an ecosystem, or a social group are all described by the same tool: a graph, that is, a bare architecture of nodes bounded by connections. […] Representing widely different systems with the same tool can only be done by a high level of abstraction. What is lost in the specific description of the details is gained in the form of universality – that is, thinking about very different systems as if they were different realizations of the same theoretical structure. […] This line of reasoning provides many insights. […] The network approach also sheds light on another important feature: the fact that certain systems that grow without external control are still capable of spontaneously developing an internal order. […] Network models are able to describe in a clear and natural way how self-organization arises in many systems. […] In the study of complex, emergent, and self-organized systems (the modern science of complexity), networks are becoming increasingly important as a universal mathematical framework, especially when massive amounts of data are involved. […] networks are crucial instruments to sort out and organize these data, connecting individuals, products, news, etc. to each other. […] While the network approach eliminates many of the individual features of the phenomenon considered, it still maintains some of its specific features. Namely, it does not alter the size of the system — i.e. the number of its elements — or the pattern of interaction — i.e. the specific set of connections between elements. Such a simplified model is nevertheless enough to capture the properties of the system. 
[…] The network approach [lies] somewhere between the description by individual elements and the description by big groups, bridging the two of them. In a certain sense, networks try to explain how a set of isolated elements are transformed, through a pattern of interactions, into groups and communities.”

“[T]he random graph model is very important because it quantifies the properties of a totally random network. Random graphs can be used as a benchmark, or null case, for any real network. This means that a random graph can be used in comparison to a real-world network, to understand how much chance has shaped the latter, and to what extent other criteria have played a role. The simplest recipe for building a random graph is the following. We take all the possible pair of vertices. For each pair, we toss a coin: if the result is heads, we draw a link; otherwise we pass to the next pair, until all the pairs are finished (this means drawing the link with a probability p = ½, but we may use whatever value of p). […] Nowadays [the random graph model] is a benchmark of comparison for all networks, since any deviations from this model suggests the presence of some kind of structure, order, regularity, and non-randomness in many real-world networks.”
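The coin-toss recipe described in the quote translates almost directly into code; a minimal sketch (parameters are arbitrary):

```python
import random
from itertools import combinations

random.seed(0)

def random_graph(n, p):
    """Erdos-Renyi G(n, p): toss a (biased) coin for every pair of vertices."""
    edges = set()
    for u, v in combinations(range(n), 2):
        if random.random() < p:  # 'heads' -> draw the link
            edges.add((u, v))
    return edges

n, p = 1000, 0.01
edges = random_graph(n, p)
mean_degree = 2 * len(edges) / n
print(f"{len(edges)} edges, mean degree {mean_degree:.2f}")
# Expected mean degree is p*(n-1), here ~9.99. A real-world network whose
# degree distribution departs strongly from this benchmark has structure.
```

This is exactly the ‘null case’ use described above: compare a real network’s statistics against what G(n, p) produces with the same number of nodes and edges, and any large deviation points to non-randomness.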

“…in networks, topology is more important than metrics. […] In the network representation, the connections between the elements of a system are much more important than their specific positions in space and their relative distances. The focus on topology is one of its biggest strengths of the network approach, useful whenever topology is more relevant than metrics. […] In social networks, the relevance of topology means that social structure matters. […] Sociology has classified a broad range of possible links between individuals […]. The tendency to have several kinds of relationships in social networks is called multiplexity. But this phenomenon appears in many other networks: for example, two species can be connected by different strategies of predation, two computers by different cables or wireless connections, etc. We can modify a basic graph to take into account this multiplexity, e.g. by attaching specific tags to edges. […] Graph theory [also] allows us to encode in edges more complicated relationships, as when connections are not reciprocal. […] If a direction is attached to the edges, the resulting structure is a directed graph […] In these networks we have both in-degree and out-degree, measuring the number of inbound and outbound links of a node, respectively. […] in most cases, relations display a broad variation or intensity [i.e. they are not binary/dichotomous]. […] Weighted networks may arise, for example, as a result of different frequencies of interactions between individuals or entities.”
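The directed and weighted bookkeeping described above is also simple to sketch. Below a toy foodweb-style edge list (made-up species and weights, purely illustrative) with in-degree, out-degree, and weighted out-degree (often called node strength):

```python
from collections import defaultdict

# Hypothetical directed, weighted edge list: (source, target, weight),
# e.g. predator -> prey relations with interaction strengths.
edges = [("fox", "rabbit", 3.0), ("fox", "mouse", 1.5),
         ("owl", "mouse", 2.0), ("rabbit", "grass", 5.0),
         ("mouse", "grass", 2.5)]

out_degree = defaultdict(int)   # number of outbound links per node
in_degree = defaultdict(int)    # number of inbound links per node
strength = defaultdict(float)   # weighted out-degree ('node strength')
for src, dst, w in edges:
    out_degree[src] += 1
    in_degree[dst] += 1
    strength[src] += w

print(dict(out_degree))
print(dict(in_degree))
print(dict(strength))
```

Multiplexity would simply add another tag to each edge tuple (the kind of relation), without changing any of the machinery – which is part of why the graph representation is so flexible.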

“An organism is […] the outcome of several layered networks and not only the deterministic result of the simple sequence of genes. Genomics has been joined by epigenomics, transcriptomics, proteomics, metabolomics, etc., the disciplines that study these layers, in what is commonly called the omics revolution. Networks are at the heart of this revolution. […] The brain is full of networks where various web-like structures provide the integration between specialized areas. In the cerebellum, neurons form modules that are repeated again and again: the interaction between modules is restricted to neighbours, similarly to what happens in a lattice. In other areas of the brain, we find random connections, with a more or less equal probability of connecting local, intermediate, or distant neurons. Finally, the neocortex — the region involved in many of the higher functions of mammals — combines local structures with more random, long-range connections. […] typically, food chains are not isolated, but interwoven in intricate patterns, where a species belongs to several chains at the same time. For example, a specialized species may predate on only one prey […]. If the prey becomes extinct, the population of the specialized species collapses, giving rise to a set of co-extinctions. An even more complicated case is where an omnivore species predates a certain herbivore, and both eat a certain plant. A decrease in the omnivore’s population does not imply that the plant thrives, because the herbivore would benefit from the decrease and consume even more plants. As more species are taken into account, the population dynamics can become more and more complicated. This is why a more appropriate description than ‘foodchains’ for ecosystems is the term foodwebs […]. These are networks in which nodes are species and links represent relations of predation. Links are usually directed (big fishes eat smaller ones, not the other way round). 
These networks provide the interchange of food, energy, and matter between species, and thus constitute the circulatory system of the biosphere.”

“In the cell, some groups of chemicals interact only with each other and with nothing else. In ecosystems, certain groups of species establish small foodwebs, without any connection to external species. In social systems, certain human groups may be totally separated from others. However, such disconnected groups, or components, are a strikingly small minority. In all networks, almost all the elements of the systems take part in one large connected structure, called a giant connected component. […] In general, the giant connected component includes not less than 90 to 95 per cent of the system in almost all networks. […] In a directed network, the existence of a path from one node to another does not guarantee that the journey can be made in the opposite direction. Wolves eat sheep, and sheep eat grass, but grass does not eat sheep, nor do sheep eat wolves. This restriction creates a complicated architecture within the giant connected component […] according to an estimate made in 1999, more than 90 per cent of the WWW is composed of pages connected to each other, if the direction of edges is ignored. However, if we take direction into account, the proportion of nodes mutually reachable is only 24 per cent, the giant strongly connected component. […] most networks are sparse, i.e. they tend to be quite frugal in connections. Take, for example, the airport network: the personal experience of every frequent traveller shows that direct flights are not that common, and intermediate stops are necessary to reach several destinations; thousands of airports are active, but each city is connected to less than 20 other cities, on average. The same happens in most networks. A measure of this is given by the mean number of connections of their nodes, that is, their average degree.”
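
Both quantities mentioned here, the share of nodes in the giant connected component and the average degree, fall out of a breadth-first search over a toy graph (the example network is my own):

```python
# Find connected components by breadth-first search, then measure the share of
# nodes in the largest ("giant") component and the average degree (sparsity).
from collections import deque

def components(adj):
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    comp.add(v)
                    queue.append(v)
        comps.append(comp)
    return comps

# Toy undirected network: a 5-node connected part plus one isolated node.
adj = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2, 4}, 4: {3}, 5: set()}
giant = max(components(adj), key=len)
giant_share = len(giant) / len(adj)                       # 5/6 here
avg_degree = sum(len(n) for n in adj.values()) / len(adj) # 10/6 here
```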

“[A] puzzling contradiction — a sparse network can still be very well connected — […] attracted the attention of the Hungarian mathematicians […] Paul Erdős and Alfréd Rényi. They tackled it by producing different realizations of their random graph. In each of them, they changed the density of edges. They started with a very low density: less than one edge per node. It is natural to expect that, as the density increases, more and more nodes will be connected to each other. But what Erdős and Rényi found instead was a quite abrupt transition: several disconnected components coalesced suddenly into a large one, encompassing almost all the nodes. The sudden change happened at one specific critical density: when the average number of links per node (i.e. the average degree) was greater than one, then the giant connected component suddenly appeared. This result implies that networks display a very special kind of economy, intrinsic to their disordered structure: a small number of edges, even randomly distributed between nodes, is enough to generate a large structure that absorbs almost all the elements. […] Social systems seem to be very tightly connected: in a large enough group of strangers, it is not unlikely to find pairs of people with quite short chains of relations connecting them. […] The small-world property consists of the fact that the average distance between any two nodes (measured as the shortest path that connects them) is very small. Given a node in a network […], few nodes are very close to it […] and few are far from it […]: the majority are at the average — and very short — distance. This holds for all networks: starting from one specific node, almost all the nodes are at very few steps from it; the number of nodes within a certain distance increases exponentially fast with the distance. 
Another way of explaining the same phenomenon […] is the following: even if we add many nodes to a network, the average distance will not increase much; one has to increase the size of a network by several orders of magnitude to notice that the paths to new nodes are (just a little) longer. The small-world property is crucial to many network phenomena. […] The small-world property is something intrinsic to networks. Even the completely random Erdős–Rényi graphs show this feature. By contrast, regular grids do not display it. If the Internet was a chessboard-like lattice, the average distance between two routers would be of the order of 1,000 jumps, and the Net would be much slower [the authors note elsewhere that “The Internet is composed of hundreds of thousands of routers, but just about ten ‘jumps’ are enough to bring an information packet from one of them to any other.”] […] The key ingredient that transforms a structure of connections into a small world is the presence of a little disorder. No real network is an ordered array of elements. On the contrary, there are always connections ‘out of place’. It is precisely thanks to these connections that networks are small worlds. […] Shortcuts are responsible for the small-world property in many […] situations.”
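
The abrupt transition Erdős and Rényi found is easy to reproduce numerically: fix n, vary the average degree ⟨k⟩ via p = ⟨k⟩/(n − 1), and compare the largest component just below and just above ⟨k⟩ = 1. A self-contained sketch (all names and parameter choices are mine):

```python
# Erdos-Renyi phase transition: share of nodes in the largest component,
# below and above the critical average degree of 1.
import random
from collections import deque

def er_giant_share(n, avg_k, seed):
    rng = random.Random(seed)
    p = avg_k / (n - 1)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    seen, best = set(), 0
    for start in adj:                      # BFS over each component
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best / n

below = er_giant_share(2000, avg_k=0.5, seed=7)  # subcritical: small fragments
above = er_giant_share(2000, avg_k=3.0, seed=7)  # supercritical: giant component
```

At ⟨k⟩ = 3 the theory predicts a giant component covering roughly 94% of the nodes, while at ⟨k⟩ = 0.5 the largest component stays vanishingly small; and in the supercritical regime, average distances grow only logarithmically with n, which is the small-world property the authors describe.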

“Body size, IQ, road speed, and other magnitudes have a characteristic scale: that is, an average value that in the large majority of cases is a rough predictor of the actual value that one will find. […] While height is a homogeneous magnitude, the number of social connection[s] is a heterogeneous one. […] A system with this feature is said to be scale-free or scale-invariant, in the sense that it does not have a characteristic scale. This can be rephrased by saying that the individual fluctuations with respect to the average are too large for us to make a correct prediction. […] In general, a network with heterogeneous connectivity has a set of clear hubs. When a graph is small, it is easy to find whether its connectivity is homogeneous or heterogeneous […]. In the first case, all the nodes have more or less the same connectivity, while in the latter it is easy to spot a few hubs. But when the network to be studied is very big […] things are not so easy. […] the distribution of the connectivity of the nodes of the […] network […] is the degree distribution of the graph. […] In homogeneous networks, the degree distribution is a bell curve […] while in heterogeneous networks, it is a power law […]. The power law implies that there are many more hubs (and much more connected) in heterogeneous networks than in homogeneous ones. Moreover, hubs are not isolated exceptions: there is a full hierarchy of nodes, each of them being a hub compared with the less connected ones.”
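
When a network is too big to eyeball, the homogeneous/heterogeneous distinction comes down to computing the degree distribution, which is just a histogram of node degrees. A toy illustration (the example graphs are mine):

```python
# Degree distribution: how many nodes have each degree. A star (one hub) is a
# caricature of a heterogeneous network; a ring is a homogeneous one.
from collections import Counter

def degree_distribution(adj):
    return Counter(len(neighbours) for neighbours in adj.values())

star = {0: set(range(1, 11)), **{v: {0} for v in range(1, 11)}}  # one hub
ring = {v: {(v - 1) % 10, (v + 1) % 10} for v in range(10)}      # no hubs

star_dist = degree_distribution(star)  # one node of degree 10, ten of degree 1
ring_dist = degree_distribution(ring)  # every node has degree 2
```

In a real heterogeneous network the histogram would be fat tailed (approximately a power law) rather than this two-point caricature, but the computation is the same.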

“Looking at the degree distribution is the best way to check if a network is heterogeneous or not: if the distribution is fat tailed, then the network will have hubs and heterogeneity. A mathematically perfect power law is never found, because this would imply the existence of hubs with an infinite number of connections. […] Nonetheless, a strongly skewed, fat-tailed distribution is a clear signal of heterogeneity, even if it is never a perfect power law. […] While the small-world property is something intrinsic to networked structures, hubs are not present in all kinds of networks. For example, power grids usually have very few of them. […] hubs are not present in random networks. A consequence of this is that, while random networks are small worlds, heterogeneous ones are ultra-small worlds. That is, the distance between their vertices is relatively smaller than in their random counterparts. […] Heterogeneity is not equivalent to randomness. On the contrary, it can be the signature of a hidden order, not imposed by a top-down project, but generated by the elements of the system. The presence of this feature in widely different networks suggests that some common underlying mechanism may be at work in many of them. […] the Barabási–Albert model gives an important take-home message. A simple, local behaviour, iterated through many interactions, can give rise to complex structures. This arises without any overall blueprint”.
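
The take-home message is easy to see in code: the sketch below is in the spirit of the Barabási–Albert model (growth plus preferential attachment, "the rich get richer"), with each new node linking to m existing nodes chosen proportionally to their degree. The implementation details (core size, the repeated-targets sampling trick) are my own simplifications:

```python
# Preferential attachment sketch: hubs emerge from a simple local rule.
import random

def barabasi_albert(n, m, seed=None):
    rng = random.Random(seed)
    # Start from a small fully connected core of m + 1 nodes.
    adj = {v: set() for v in range(m + 1)}
    for u in range(m + 1):
        for v in range(u + 1, m + 1):
            adj[u].add(v)
            adj[v].add(u)
    # "targets" lists each node once per link it has, so a uniform choice from
    # it picks nodes with probability proportional to their degree.
    targets = [v for u in adj for v in adj[u]]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        adj[new] = set(chosen)
        for t in chosen:
            adj[t].add(new)
            targets.extend([new, t])
    return adj

g = barabasi_albert(500, 2, seed=42)
max_degree = max(len(nbrs) for nbrs in g.values())
avg_degree = sum(len(nbrs) for nbrs in g.values()) / len(g)
# A clear hub hierarchy appears: the maximum degree far exceeds the average.
```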

“Homogamy, the tendency of like to marry like, is very strong […] Homogamy is a specific instance of homophily: this consists of a general trend of like to link to like, and is a powerful force in shaping social networks […] assortative mixing [is] a special form of homophily, in which nodes tend to connect with others that are similar to them in the number of connections. By contrast [when] high- and low-degree nodes are more connected to each other [it] is called disassortative mixing. Both cases display a form of correlation in the degrees of neighbouring nodes. When the degrees of neighbours are positively correlated, then the mixing is assortative; when negatively, it is disassortative. […] In random graphs, the neighbours of a given node are chosen completely at random: as a result, there is no clear correlation between the degrees of neighbouring nodes […]. On the contrary, correlations are present in most real-world networks. Although there is no general rule, most natural and technological networks tend to be disassortative, while social networks tend to be assortative. […] Degree assortativity and disassortativity are just an example of the broad range of possible correlations that bias how nodes tie to each other.”
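
One standard way to quantify this is the Pearson correlation between the degrees at the two ends of every edge: positive means assortative, negative disassortative. A sketch (function and example are mine), using a star graph as the extreme disassortative case:

```python
# Degree assortativity as the Pearson correlation of endpoint degrees,
# computed over every (directed) occurrence of each edge.
def degree_assortativity(adj):
    xs, ys = [], []
    for u in adj:
        for v in adj[u]:
            xs.append(len(adj[u]))
            ys.append(len(adj[v]))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# A star is maximally disassortative: the degree-4 hub links only to
# degree-1 leaves, so endpoint degrees are perfectly anticorrelated.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
r = degree_assortativity(star)
```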

“[N]etworks (neither ordered lattices nor random graphs), can have both large clustering and small average distance at the same time. […] in almost all networks, the clustering of a node depends on the degree of that node. Often, the larger the degree, the smaller the clustering coefficient. Small-degree nodes tend to belong to well-interconnected local communities. Similarly, hubs connect with many nodes that are not directly interconnected. […] Central nodes usually act as bridges or bottlenecks […]. For this reason, centrality is an estimate of the load handled by a node of a network, assuming that most of the traffic passes through the shortest paths (this is not always the case, but it is a good approximation). For the same reason, damaging central nodes […] can impair radically the flow of a network. Depending on the process one wants to study, other definitions of centrality can be introduced. For example, closeness centrality computes the distance of a node to all others, and reach centrality factors in the portion of all nodes that can be reached in one step, two steps, three steps, and so on.”
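
The clustering coefficient mentioned here is simple to compute: for a given node, count what fraction of its neighbour pairs are themselves directly connected. A minimal sketch (the example graph is mine):

```python
# Local clustering coefficient: share of a node's neighbour pairs that are
# connected to each other (0 if the node has fewer than two neighbours).
def clustering(adj, node):
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return links / (k * (k - 1) / 2)

# A triangle (0-1-2) plus a pendant node 3 hanging off node 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
c0 = clustering(adj, 0)  # only 1 of node 0's 3 neighbour pairs is closed
c1 = clustering(adj, 1)  # node 1's neighbours (0 and 2) are connected
```

Note how the higher-degree node 0 already has the lower clustering here, a tiny echo of the degree-clustering relation described in the quote.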

“Domino effects are not uncommon in foodwebs. Networks in general provide the backdrop for large-scale, sudden, and surprising dynamics. […] most of the real-world networks show a double-edged kind of robustness. They are able to function normally even when a large fraction of the network is damaged, but suddenly certain small failures, or targeted attacks, bring them down completely. […] networks are very different from engineered systems. In an airplane, damaging one element is enough to stop the whole machine. In order to make it more resilient, we have to use strategies such as duplicating certain pieces of the plane: this makes it almost 100 per cent safe. In contrast, networks, which are mostly not blueprinted, display a natural resilience to a broad range of errors, but when certain elements fail, they collapse. […] A random graph of the size of most real-world networks is destroyed after the removal of half of the nodes. On the other hand, when the same procedure is performed on a heterogeneous network (either a map of a real network or a scale-free model of a similar size), the giant connected component resists even after removing more than 80 per cent of the nodes, and the distance within it is practically the same as at the beginning. The scene is different when researchers simulate a targeted attack […] In this situation the collapse happens much faster […]. However, now the most vulnerable is the second [the heterogeneous network]: while in the homogeneous network it is necessary to remove about one-fifth of its more connected nodes to destroy it, in the heterogeneous one this happens after removing the first few hubs. Highly connected nodes seem to play a crucial role, in both errors and attacks. […] hubs are mainly responsible for the overall cohesion of the graph, and removing a few of them is enough to destroy it.”
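
The error-versus-attack asymmetry can be reproduced on a toy heterogeneous network. Below, a ring of five hubs each serving twenty leaves (105 nodes); the network and the particular "random" failure set are my own illustrative choices, not data from the book:

```python
# Random error versus targeted attack: remove five nodes either arbitrarily
# (one hub, four leaves) or by targeting all five hubs, then compare the size
# of the largest surviving component.
from collections import deque

def largest_component(adj):
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def remove(adj, nodes):
    return {u: {v for v in nbrs if v not in nodes}
            for u, nbrs in adj.items() if u not in nodes}

hubs = list(range(5))
adj = {h: {(h - 1) % 5, (h + 1) % 5} for h in hubs}  # ring of hubs
leaf = 5
for h in hubs:
    for _ in range(20):                               # twenty leaves per hub
        adj[h].add(leaf)
        adj[leaf] = {h}
        leaf += 1

after_errors = largest_component(remove(adj, {0, 10, 30, 50, 70}))
after_attack = largest_component(remove(adj, set(hubs)))
# The arbitrary failure leaves a large connected core; removing just the five
# hubs shatters the network into isolated leaves.
```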

“Studies of errors and attacks have shown that hubs keep different parts of a network connected. This implies that they also act as bridges for spreading diseases. Their numerous ties put them in contact with both infected and healthy individuals: so hubs become easily infected, and they infect other nodes easily. […] The vulnerability of heterogeneous networks to epidemics is bad news, but understanding it can provide good ideas for containing diseases. […] if we can immunize just a fraction, it is not a good idea to choose people at random. Most of the time, choosing at random implies selecting individuals with a relatively low number of connections. Even if they block the disease from spreading in their surroundings, hubs will always be there to put it back into circulation. A much better strategy would be to target hubs. Immunizing hubs is like deleting them from the network, and the studies on targeted attacks show that eliminating a small fraction of hubs fragments the network: thus, the disease will be confined to a few isolated components. […] in the epidemic spread of sexually transmitted diseases the timing of the links is crucial. Establishing an unprotected link with a person before they establish an unprotected link with another person who is infected is not the same as doing so afterwards.”
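
Targeted immunization can be illustrated on a toy hub-and-spoke network (a ring of five hubs with twenty leaves each) using a crude deterministic spread model in which every non-immune contact is infected; the network, numbers, and names are all mine:

```python
# Compare immunizing five low-degree individuals with immunizing the five hubs.
from collections import deque

def outbreak_size(adj, immune, patient_zero):
    """Deterministic SI spread: infection passes along every non-immune contact."""
    if patient_zero in immune:
        return 0
    infected = {patient_zero}
    queue = deque([patient_zero])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in infected and v not in immune:
                infected.add(v)
                queue.append(v)
    return len(infected)

hubs = set(range(5))
adj = {h: {(h - 1) % 5, (h + 1) % 5} for h in hubs}  # ring of hubs
leaf = 5
for h in sorted(hubs):
    for _ in range(20):                               # twenty leaves per hub
        adj[h].add(leaf)
        adj[leaf] = {h}
        leaf += 1

# Immunizing five arbitrary leaves barely helps: the hubs keep the disease
# circulating, and almost everyone else is reached.
untargeted = outbreak_size(adj, immune={30, 31, 32, 33, 34}, patient_zero=5)
# Immunizing the hubs fragments the network: the disease is confined.
targeted = outbreak_size(adj, immune=hubs, patient_zero=5)
```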

April 3, 2018 Posted by | Biology, Books, Ecology, Engineering, Epidemiology, Genetics, Mathematics, Statistics | Leave a comment

A few (more) diabetes papers of interest

Earlier this week I covered a couple of papers, but the second turned out to include so much interesting material that I cut the post short and postponed coverage of the remaining papers; this post includes some of them.

i. TCF7L2 Genetic Variants Contribute to Phenotypic Heterogeneity of Type 1 Diabetes.

“Although the autoimmune destruction of β-cells has a major role in the development of type 1 diabetes, there is growing evidence that the differences in clinical, metabolic, immunologic, and genetic characteristics among patients (1) likely reflect diverse etiology and pathogenesis (2). Factors that govern this heterogeneity are poorly understood, yet these may have important implications for prognosis, therapy, and prevention.

“The transcription factor 7 like 2 (TCF7L2) locus contains the single nucleotide polymorphism (SNP) most strongly associated with type 2 diabetes risk, with an ∼30% increase per risk allele (3). In a U.S. cohort, heterozygous and homozygous carriers of the at-risk alleles comprised 40.6% and 7.9%, respectively, of the control subjects and 44.3% and 18.3%, respectively, of the individuals with type 2 diabetes (3). The locus has no known association with type 1 diabetes overall (4–8), with conflicting reports in latent autoimmune diabetes in adults (8–16). […] Our studies in two separate cohorts have shown that the type 2 diabetes–associated TCF7L2 genetic variant is more frequent among specific subsets of individuals with autoimmune type 1 diabetes, specifically those with fewer markers of islet autoimmunity (22,23). These observations support a role of this genetic variant in the pathogenesis of diabetes at least in a subset of individuals with autoimmune diabetes. However, whether individuals with type 1 diabetes and this genetic variant have distinct metabolic abnormalities has not been investigated. We aimed to study the immunologic and metabolic characteristics of individuals with type 1 diabetes who carry a type 2 diabetes–associated allele of the TCF7L2 locus.”

“We studied 810 TrialNet participants with newly diagnosed type 1 diabetes and found that among individuals 12 years and older, the type 2 diabetes–associated TCF7L2 genetic variant is more frequent in those presenting with a single autoantibody than in participants who had multiple autoantibodies. These TCF7L2 variants were also associated with higher mean C-peptide AUC and lower mean glucose AUC levels at the onset of type 1 diabetes. […] These findings suggest that, besides the well-known link with type 2 diabetes, the TCF7L2 locus may play a role in the development of type 1 diabetes. The type 2 diabetes–associated TCF7L2 genetic variant identifies a subset of individuals with autoimmune type 1 diabetes and fewer markers of islet autoimmunity, lower glucose, and higher C-peptide at diagnosis. […] A possible interpretation of these data is that TCF7L2-encoded diabetogenic mechanisms may contribute to diabetes development in individuals with limited autoimmunity […]. Because the risk of progression to type 1 diabetes is lower in individuals with single compared with multiple autoantibodies, it is possible that in the absence of this type 2 diabetes–associated TCF7L2 variant, these individuals may have not manifested diabetes. If that is the case, we would postulate that disease development in these patients may have a type 2 diabetes–like pathogenesis in which islet autoimmunity is a significant component but not necessarily the primary driver.”

“The association between this genetic variant and single autoantibody positivity was present in individuals 12 years or older but not in children younger than 12 years. […] The results in the current study suggest that the type 2 diabetes–associated TCF7L2 genetic variant plays a larger role in older individuals. There is mounting evidence that the pathogenesis of type 1 diabetes varies by age (31). Younger individuals appear to have a more aggressive form of disease, with faster decline of β-cell function before and after onset of disease, higher frequency and severity of diabetic ketoacidosis, which is a clinical correlate of severe insulin deficiency, and lower C-peptide at presentation (31–35). Furthermore, older patients are less likely to have type 1 diabetes–associated HLA alleles and islet autoantibodies (28). […] Taken together, we have demonstrated that individuals with autoimmune type 1 diabetes who carry the type 2 diabetes–associated TCF7L2 genetic variant have a distinct phenotype characterized by milder immunologic and metabolic characteristics than noncarriers, closer to those of type 2 diabetes, with an important effect of age.”

ii. Heart Failure: The Most Important, Preventable, and Treatable Cardiovascular Complication of Type 2 Diabetes.

“Concerns about cardiovascular disease in type 2 diabetes have traditionally focused on atherosclerotic vasculo-occlusive events, such as myocardial infarction, stroke, and limb ischemia. However, one of the earliest, most common, and most serious cardiovascular disorders in patients with diabetes is heart failure (1). Following its onset, patients experience a striking deterioration in their clinical course, which is marked by frequent hospitalizations and eventually death. Many sudden deaths in diabetes are related to underlying ventricular dysfunction rather than a new ischemic event. […] Heart failure and diabetes are linked pathophysiologically. Type 2 diabetes and heart failure are each characterized by insulin resistance and are accompanied by the activation of neurohormonal systems (norepinephrine, angiotensin II, aldosterone, and neprilysin) (3). The two disorders overlap; diabetes is present in 35–45% of patients with chronic heart failure, whether they have a reduced or preserved ejection fraction.”

“Treatments that lower blood glucose do not exert any consistently favorable effect on the risk of heart failure in patients with diabetes (6). In contrast, treatments that increase insulin signaling are accompanied by an increased risk of heart failure. Insulin use is independently associated with an enhanced likelihood of heart failure (7). Thiazolidinediones promote insulin signaling and have increased the risk of heart failure in controlled clinical trials (6). With respect to incretin-based secretagogues, liraglutide increases the clinical instability of patients with existing heart failure (8,9), and the dipeptidyl peptidase 4 inhibitors saxagliptin and alogliptin are associated with an increased risk of heart failure in diabetes (10). The likelihood of heart failure with the use of sulfonylureas may be comparable to that with thiazolidinediones (11). Interestingly, the only two classes of drugs that ameliorate hyperinsulinemia (metformin and sodium–glucose cotransporter 2 inhibitors) are also the only two classes of antidiabetes drugs that appear to reduce the risk of heart failure and its adverse consequences (12,13). These findings are consistent with experimental evidence that insulin exerts adverse effects on the heart and kidneys that can contribute to heart failure (14). Therefore, physicians can prevent many cases of heart failure in type 2 diabetes by careful consideration of the choice of agents used to achieve glycemic control. Importantly, these decisions have an immediate effect; changes in risk are seen within the first few months of changes in treatment. This immediacy stands in contrast to the years of therapy required to see a benefit of antidiabetes drugs on microvascular risk.”

“As reported by van den Berge et al. (4), the prognosis of patients with heart failure has improved over the past two decades; heart failure with a reduced ejection fraction is a treatable disease. Inhibitors of the renin-angiotensin system are a cornerstone of the management of both disorders; they prevent the onset of heart failure and the progression of nephropathy in patients with diabetes, and they reduce the risk of cardiovascular death and hospitalization in those with established heart failure (3,15). Diabetes does not influence the magnitude of the relative benefit of ACE inhibitors in patients with heart failure, but patients with diabetes experience a greater absolute benefit from treatment (16).”

“The totality of evidence from randomized trials […] demonstrates that in patients with diabetes, heart failure is not only common and clinically important, but it can also be prevented and treated. This conclusion is particularly significant because physicians have long ignored heart failure in their focus on glycemic control and their concerns about the ischemic macrovascular complications of diabetes (1).”

iii. Closely related to the above study: Mortality Reduction Associated With β-Adrenoceptor Inhibition in Chronic Heart Failure Is Greater in Patients With Diabetes.

“Diabetes increases mortality in patients with chronic heart failure (CHF) and reduced left ventricular ejection fraction. Studies have questioned the safety of β-adrenoceptor blockers (β-blockers) in some patients with diabetes and reduced left ventricular ejection fraction. We examined whether β-blockers and ACE inhibitors (ACEIs) are associated with differential effects on mortality in CHF patients with and without diabetes. […] We conducted a prospective cohort study of 1,797 patients with CHF recruited between 2006 and 2014, with mean follow-up of 4 years.”

“RESULTS Patients with diabetes were prescribed larger doses of β-blockers and ACEIs than were patients without diabetes. Increasing β-blocker dose was associated with lower mortality in patients with diabetes (8.9% per mg/day; 95% CI 5–12.6) and without diabetes (3.5% per mg/day; 95% CI 0.7–6.3), although the effect was larger in people with diabetes (interaction P = 0.027). Increasing ACEI dose was associated with lower mortality in patients with diabetes (5.9% per mg/day; 95% CI 2.5–9.2) and without diabetes (5.1% per mg/day; 95% CI 2.6–7.6), with similar effect size in these groups (interaction P = 0.76).”

“Our most important findings are:

  • Higher-dose β-blockers are associated with lower mortality in patients with CHF and LVSD, but patients with diabetes may derive more benefit from higher-dose β-blockers.

  • Higher-dose ACEIs were associated with comparable mortality reduction in people with and without diabetes.

  • The association between higher β-blocker dose and reduced mortality is most pronounced in patients with diabetes who have more severely impaired left ventricular function.

  • Among patients with diabetes, the relationship between β-blocker dose and mortality was not associated with glycemic control or insulin therapy.”

“We make the important observation that patients with diabetes may derive more prognostic benefit from higher β-blocker doses than patients without diabetes. These data should provide reassurance to patients and health care providers and encourage careful but determined uptitration of β-blockers in this high-risk group of patients.”

iv. Diabetes, Prediabetes, and Brain Volumes and Subclinical Cerebrovascular Disease on MRI: The Atherosclerosis Risk in Communities Neurocognitive Study (ARIC-NCS).

“Diabetes and prediabetes are associated with accelerated cognitive decline (1), and diabetes is associated with an approximately twofold increased risk of dementia (2). Subclinical brain pathology, as defined by small vessel disease (lacunar infarcts, white matter hyperintensities [WMH], and microhemorrhages), large vessel disease (cortical infarcts), and smaller brain volumes also are associated with an increased risk of cognitive decline and dementia (3–7). The mechanisms by which diabetes contributes to accelerated cognitive decline and dementia are not fully understood, but contributions of hyperglycemia to both cerebrovascular disease and primary neurodegenerative disease have been suggested in the literature, although results are inconsistent (2,8). Given that diabetes is a vascular risk factor, brain atrophy among individuals with diabetes may be driven by increased cerebrovascular disease. Brain magnetic resonance imaging (MRI) provides a noninvasive opportunity to study associations of hyperglycemia with small vessel disease (lacunar infarcts, WMH, microhemorrhages), large vessel disease (cortical infarcts), and brain volumes (9).”

“Overall, the mean age of participants [(n = 1,713)] was 75 years, 60% were women, 27% were black, 30% had prediabetes (HbA1c 5.7 to <6.5%), and 35% had diabetes. Compared with participants without diabetes and HbA1c <5.7%, those with prediabetes (HbA1c 5.7 to <6.5%) were of similar age (75.2 vs. 75.0 years; P = 0.551), were more likely to be black (24% vs. 11%; P < 0.001), have less than a high school education (11% vs. 7%; P = 0.017), and have hypertension (71% vs. 63%; P = 0.012) (Table 1). Among participants with diabetes, those with HbA1c <7.0% versus ≥7.0% were of similar age (75.4 vs. 75.1 years; P = 0.481), but those with diabetes and HbA1c ≥7.0% were more likely to be black (39% vs. 28%; P = 0.020) and to have less than a high school education (23% vs. 16%; P = 0.031) and were more likely to have a longer duration of diabetes (12 vs. 8 years; P < 0.001).”

“Compared with participants without diabetes and HbA1c <5.7%, those with diabetes and HbA1c ≥7.0% had smaller total brain volume (β −0.20 SDs; 95% CI −0.31, −0.09) and smaller regional brain volumes, including frontal, temporal, occipital, and parietal lobes; deep gray matter; Alzheimer disease signature region; and hippocampus (all P < 0.05) […]. Compared with participants with diabetes and HbA1c <7.0%, those with diabetes and HbA1c ≥7.0% had smaller total brain volume (P < 0.001), frontal lobe volume (P = 0.012), temporal lobe volume (P = 0.012), occipital lobe volume (P = 0.008), parietal lobe volume (P = 0.015), deep gray matter volume (P < 0.001), Alzheimer disease signature region volume (P = 0.031), and hippocampal volume (P = 0.016). Both participants with diabetes and HbA1c <7.0% and those with prediabetes (HbA1c 5.7 to <6.5%) had similar total and regional brain volumes compared with participants without diabetes and HbA1c <5.7% (all P > 0.05). […] No differences in the presence of lobar microhemorrhages, subcortical microhemorrhages, cortical infarcts, and lacunar infarcts were observed among the diabetes-HbA1c categories (all P > 0.05) […]. Compared with participants without diabetes and HbA1c <5.7%, those with diabetes and HbA1c ≥7.0% had increased WMH volume (P = 0.016). The WMH volume among participants with diabetes and HbA1c ≥7.0% was also significantly greater than among those with diabetes and HbA1c <7.0% (P = 0.017).”

“Those with diabetes duration ≥10 years were older than those with diabetes duration <10 years (75.9 vs. 75.0 years; P = 0.041) but were similar in terms of race and sex […]. Compared with participants with diabetes duration <10 years, those with diabetes duration ≥10 years had smaller adjusted total brain volume (β −0.13 SDs; 95% CI −0.20, −0.05) and smaller temporal lobe (β −0.14 SDs; 95% CI −0.24, −0.03), parietal lobe (β −0.11 SDs; 95% CI −0.21, −0.01), and hippocampal (β −0.16 SDs; 95% CI −0.30, −0.02) volumes […]. Participants with diabetes duration ≥10 years also had a 2.44 times increased odds (95% CI 1.46, 4.05) of lacunar infarcts compared with those with diabetes duration <10 years”.

“In this community-based population, we found that ARIC-NCS participants with diabetes with HbA1c ≥7.0% have smaller total and regional brain volumes and an increased burden of WMH, but those with prediabetes (HbA1c 5.7 to <6.5%) and diabetes with HbA1c <7.0% have brain volumes and markers of subclinical cerebrovascular disease similar to those without diabetes. Furthermore, among participants with diabetes, those with more-severe disease (as measured by higher HbA1c and longer disease duration) had smaller total and regional brain volumes and an increased burden of cerebrovascular disease compared with those with lower HbA1c and shorter disease duration. However, we found no evidence that associations of diabetes with smaller brain volumes are mediated by cerebrovascular disease.

The findings of this study extend the current literature that suggests that diabetes is strongly associated with brain volume loss (11,25–27). Global brain volume loss (11,25–27) has been consistently reported, but associations of diabetes with smaller specific brain regions have been less robust (27,28). Similar to prior studies, the current results show that compared with individuals without diabetes, those with diabetes have smaller total brain volume (11,25–27) and regional brain volumes, including frontal and occipital lobes, deep gray matter, and the hippocampus (25,27). Furthermore, the current study suggests that greater severity of disease (as measured by HbA1c and diabetes duration) is associated with smaller total and regional brain volumes. […] Mechanisms whereby diabetes may contribute to brain volume loss include accelerated amyloid-β and hyperphosphorylated tau deposition as a result of hyperglycemia (29). Another possible mechanism involves pancreatic amyloid (amylin) infiltration of the brain, which then promotes amyloid-β deposition (29). […] Taken together, […] the current results suggest that diabetes is associated with both lower brain volumes and increased cerebrovascular pathology (WMH and lacunes).”

v. Interventions to increase attendance for diabetic retinopathy screening (Cochrane review).

“The primary objective of the review was to assess the effectiveness of quality improvement (QI) interventions that seek to increase attendance for DRS in people with type 1 and type 2 diabetes.

Secondary objectives were:
To use validated taxonomies of QI intervention strategies and behaviour change techniques (BCTs) to code the description of interventions in the included studies and determine whether interventions that include particular QI strategies or component BCTs are more effective in increasing screening attendance;
To explore heterogeneity in effect size within and between studies to identify potential explanatory factors for variability in effect size;
To explore differential effects in subgroups to provide information on how equity of screening attendance could be improved;
To critically appraise and summarise current evidence on the resource use, costs and cost effectiveness.”

“We included 66 RCTs conducted predominantly (62%) in the USA. Overall we judged the trials to be at low or unclear risk of bias. QI strategies were multifaceted and targeted patients, healthcare professionals or healthcare systems. Fifty-six studies (329,164 participants) compared intervention versus usual care (median duration of follow-up 12 months). Overall, DRS [diabetic retinopathy screening] attendance increased by 12% (risk difference (RD) 0.12, 95% confidence interval (CI) 0.10 to 0.14; low-certainty evidence) compared with usual care, with substantial heterogeneity in effect size. Both DRS-targeted (RD 0.17, 95% CI 0.11 to 0.22) and general QI interventions (RD 0.12, 95% CI 0.09 to 0.15) were effective, particularly where baseline DRS attendance was low. All BCT combinations were associated with significant improvements, particularly in those with poor attendance. We found higher effect estimates in subgroup analyses for the BCTs ‘goal setting (outcome)’ (RD 0.26, 95% CI 0.16 to 0.36) and ‘feedback on outcomes of behaviour’ (RD 0.22, 95% CI 0.15 to 0.29) in interventions targeting patients, and ‘restructuring the social environment’ (RD 0.19, 95% CI 0.12 to 0.26) and ‘credible source’ (RD 0.16, 95% CI 0.08 to 0.24) in interventions targeting healthcare professionals.”
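[As a side note on the effect measure used throughout the review: below is a minimal sketch, with invented counts of my own choosing rather than data from any included trial, of how a risk difference (RD) with a 95% Wald confidence interval is computed.]

```python
# Illustration only: a risk difference (RD) between an intervention arm and a
# control arm, with a Wald 95% confidence interval. The counts are invented,
# chosen so the RD comes out near the 0.12 headline figure quoted above.
import math

def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
    """RD = risk(intervention) - risk(control), with a Wald 95% CI."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rd = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return rd, rd - z * se, rd + z * se

# 620/1000 attended with the intervention vs. 500/1000 with usual care:
print(risk_difference_ci(620, 1000, 500, 1000))  # RD = 0.12, CI excludes 0
```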

“Ten studies (23,715 participants) compared a more intensive (stepped) intervention versus a less intensive intervention. In these studies DRS attendance increased by 5% (RD 0.05, 95% CI 0.02 to 0.09; moderate-certainty evidence).”

“Overall, we found that there is insufficient evidence to draw robust conclusions about the relative cost effectiveness of the interventions compared to each other or against usual care.”

“The results of this review provide evidence that QI interventions targeting patients, healthcare professionals or the healthcare system are associated with meaningful improvements in DRS attendance compared to usual care. There was no statistically significant difference between interventions specifically aimed at DRS and those which were part of a general QI strategy for improving diabetes care.”

vi. Diabetes in China: Epidemiology and Genetic Risk Factors and Their Clinical Utility in Personalized Medication.

“The incidence of type 2 diabetes (T2D) has rapidly increased over recent decades, and T2D has become a leading public health challenge in China. Compared with European descents, Chinese patients with T2D are diagnosed at a relatively young age and low BMI. A better understanding of the factors contributing to the diabetes epidemic is crucial for determining future prevention and intervention programs. In addition to environmental factors, genetic factors contribute substantially to the development of T2D. To date, more than 100 susceptibility loci for T2D have been identified. Individually, most T2D genetic variants have a small effect size (10–20% increased risk for T2D per risk allele); however, a genetic risk score that combines multiple T2D loci could be used to predict the risk of T2D and to identify individuals who are at a high risk. […] In this article, we review the epidemiological trends and recent progress in the understanding of T2D genetic etiology and further discuss personalized medicine involved in the treatment of T2D.”

“Over the past three decades, the prevalence of diabetes in China has sharply increased. The prevalence of diabetes was reported to be less than 1% in 1980 (2), 5.5% in 2001 (3), 9.7% in 2008 (4), and 10.9% in 2013, according to the latest published nationwide survey (5) […]. The prevalence of diabetes was higher in the senior population, men, urban residents, individuals living in economically developed areas, and overweight and obese individuals. The estimated prevalence of prediabetes in 2013 was 35.7%, which was much higher than the estimate of 15.5% in the 2008 survey. Similarly, the prevalence of prediabetes was higher in the senior population, men, and overweight and obese individuals. However, prediabetes was more prevalent in rural residents than in urban residents. […] the 2013 survey also compared the prevalence of diabetes among different races. The crude prevalence of diabetes was 14.7% in the majority group, i.e., Chinese Han, which was higher than that in most minority ethnic groups, including Tibetan, Zhuang, Uyghur, and Muslim. The crude prevalence of prediabetes was also higher in the Chinese Han ethnic group. The Tibetan participants had the lowest prevalence of diabetes and prediabetes (4.3% and 31.3%).”

“[T]he prevalence of diabetes in young people is relatively high and increasing. The prevalence of diabetes in the 20- to 39-year age-group was 3.2%, according to the 2008 national survey (4), and was 5.9%, according to the 2013 national survey (5). The prevalence of prediabetes also increased from 9.0% in 2008 to 28.8% in 2013 […]. Young people suffering from diabetes have a higher risk of chronic complications, which are the major cause of mortality and morbidity in diabetes. According to a study conducted in Asia (6), patients with young-onset diabetes had higher mean concentrations of HbA1c and LDL cholesterol and a higher prevalence of retinopathy (20% vs. 18%, P = 0.011) than those with late-onset diabetes. In the Chinese, patients with early-onset diabetes had a higher risk of nonfatal cardiovascular disease (7) than did patients with late-onset diabetes (odds ratio [OR] 1.91, 95% CI 1.81–2.02).”

“As approximately 95% of patients with diabetes in China have T2D, the rapid increase in the prevalence of diabetes in China may be attributed to the increasing rates of overweight and obesity and the reduction in physical activity, which is driven by economic development, lifestyle changes, and diet (3,11). According to a series of nationwide surveys conducted by the China Physical Fitness Surveillance Center (12), the prevalence of overweight (BMI ≥23.0 to <27.5 kg/m2) in Chinese adults aged 20–59 years increased from 37.4% in 2000 to 39.2% in 2005, 40.7% in 2010, and 41.2% in 2014, with an estimated increase of 0.27% per year. The prevalence of obesity (BMI ≥27.5 kg/m2) increased from 8.6% in 2000 to 10.3% in 2005, 12.2% in 2010, and 12.9% in 2014, with an estimated increase of 0.32% per year […]. The prevalence of central obesity increased from 13.9% in 2000 to 18.3% in 2005, 22.1% in 2010, and 24.9% in 2014, with an estimated increase of 0.78% per year. Notably, T2D develops at a considerably lower BMI in the Chinese population than that in European populations. […] The relatively high risk of diabetes at a lower BMI could be partially attributed to the tendency toward visceral adiposity in East Asian populations, including the Chinese population (13). Moreover, East Asian populations have been found to have a higher insulin sensitivity with a much lower insulin response than European descent and African populations, implying a lower compensatory β-cell function, which increases the risk of progressing to overt diabetes (14).”
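[The survey quoted above uses the Asian-specific BMI cut-offs (overweight ≥23.0 to <27.5 kg/m², obese ≥27.5 kg/m²) rather than the conventional Western thresholds. A small sketch of my own, just to make the classification concrete:]

```python
# Sketch of the BMI categories used in the survey quoted above
# (Asian cut-offs: overweight 23.0 to <27.5 kg/m^2, obese >= 27.5 kg/m^2).
# My own illustration, not code from the surveillance centre.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def survey_category(bmi_value):
    if bmi_value >= 27.5:
        return "obese"
    if bmi_value >= 23.0:
        return "overweight"
    return "not overweight"

print(survey_category(bmi(75, 1.70)))  # BMI ~26 -> "overweight"
```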

“Over the past two decades, linkage analyses, candidate gene approaches, and large-scale GWAS have successfully identified more than 100 genes that confer susceptibility to T2D among the world’s major ethnic populations […], most of which were discovered in European populations. However, less than 50% of these European-derived loci have been successfully confirmed in East Asian populations. […] there is a need to identify specific genes that are associated with T2D in other ethnic populations. […] Although many genetic loci have been shown to confer susceptibility to T2D, the mechanism by which these loci participate in the pathogenesis of T2D remains unknown. Most T2D loci are located near genes that are related to β-cell function […] most single nucleotide polymorphisms (SNPs) contributing to the T2D risk are located in introns, but whether these SNPs directly modify gene expression or are involved in linkage disequilibrium with unknown causal variants remains to be investigated. Furthermore, the loci discovered thus far collectively account for less than 15% of the overall estimated genetic heritability.”

“The areas under the receiver operating characteristic curves (AUCs) are usually used to assess the discriminative accuracy of an approach. The AUC values range from 0.5 to 1.0, where an AUC of 0.5 represents a lack of discrimination and an AUC of 1 represents perfect discrimination. An AUC ≥0.75 is considered clinically useful. The dominant conventional risk factors, including age, sex, BMI, waist circumference, blood pressure, family history of diabetes, physical activity level, smoking status, and alcohol consumption, can be combined to construct conventional risk factor–based models (CRM). Several studies have compared the predictive capacities of models with and without genetic information. The addition of genetic markers to a CRM could slightly improve the predictive performance. For example, one European study showed that the addition of an 11-SNP GRS to a CRM marginally improved the risk prediction (AUC was 0.74 without and 0.75 with the genetic markers, P < 0.001) in a prospective cohort of 16,000 individuals (37). A meta-analysis (38) consisting of 23 studies investigating the predictive performance of T2D risk models also reported that the AUCs only slightly increased with the addition of genetic information to the CRM (median AUC was increased from 0.78 to 0.79). […] Despite great advances in genetic studies, the clinical utility of genetic information in the prediction, early identification, and prevention of T2D remains in its preliminary stage.”
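[To make the two quantities in that excerpt concrete: an unweighted genetic risk score is just the sum of risk-allele counts across SNPs, and the AUC has a direct probabilistic reading (the chance that a random case scores above a random control). A toy sketch of my own, with made-up genotypes:]

```python
# Illustrative sketch (not from the study): an unweighted genetic risk score
# (GRS) built from per-SNP risk-allele counts, and the Mann-Whitney estimate
# of the AUC. All genotypes below are made up.

def auc(scores_cases, scores_controls):
    """Probability that a randomly chosen case scores higher than a
    randomly chosen control (ties count as half)."""
    wins = 0.0
    for c in scores_cases:
        for k in scores_controls:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(scores_cases) * len(scores_controls))

def grs(risk_allele_counts):
    """Unweighted GRS: sum of risk-allele counts (0, 1, or 2 per SNP)."""
    return sum(risk_allele_counts)

# Toy data: GRS over 5 SNPs for 4 cases and 4 controls.
cases    = [grs(g) for g in [(2,1,2,1,2), (1,1,2,2,1), (2,2,1,1,1), (1,2,2,1,2)]]
controls = [grs(g) for g in [(0,1,1,0,1), (1,0,0,1,1), (0,0,1,1,0), (1,1,0,0,1)]]

print(auc(cases, controls))  # 1.0 here: the toy groups separate perfectly
```

In real cohorts the separation is of course far from perfect, which is exactly why adding a GRS to a conventional risk model moves the AUC only from ~0.78 to ~0.79 in the meta-analysis quoted above.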

“An increasing number of studies have highlighted that early nutrition has a persistent effect on the risk of diabetes in later life (40,41). China’s Great Famine of 1959–1962 is considered to be the largest and most severe famine of the 20th century […] Li et al. (43) found that offspring of mothers exposed to the Chinese famine have a 3.9-fold increased risk of diabetes or hyperglycemia as adults. A more recent study (the Survey on Prevalence in East China for Metabolic Diseases and Risk Factors [SPECT-China]) conducted in 2014, among 6,897 adults from Shanghai, Jiangxi, and Zhejiang provinces, had the same conclusion that famine exposure during the fetal period (OR 1.53, 95% CI 1.09–2.14) and childhood (OR 1.82, 95% CI 1.21–2.73) was associated with diabetes (44). These findings indicate that undernutrition during early life increases the risk of hyperglycemia in adulthood and this association is markedly exaggerated when facing overnutrition in later life.”

February 23, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Health Economics, Immunology, Medicine, Neurology, Ophthalmology, Pharmacology, Studies

Endocrinology (part 5 – calcium and bone metabolism)

Some observations from chapter 6:

“*Osteoclasts – derived from the monocytic cells; resorb bone. *Osteoblasts – derived from the fibroblast-like cells; make bone. *Osteocytes – buried osteoblasts; sense mechanical strain in bone. […] In order to ensure that bone can undertake its mechanical and metabolic functions, it is in a constant state of turnover […] Bone is laid down rapidly during skeletal growth at puberty. Following this, there is a period of stabilization of bone mass in early adult life. After the age of ~40, there is a gradual loss of bone in both sexes. This occurs at the rate of approximately 0.5% annually. However, in ♀ after the menopause, there is a period of rapid bone loss. The accelerated loss is maximal in the first 2-5 years after the cessation of ovarian function and then gradually declines until the previous gradual rate of loss is once again established. The excess bone loss associated with the menopause is of the order of 10% of skeletal mass. This menopause-associated loss, coupled with higher peak bone mass acquisition in ♂, largely explains why osteoporosis and its associated fractures are more common in ♀.”
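[A rough back-of-the-envelope sketch of my own, compounding the ~0.5%/year loss quoted above together with the ~10% menopause-associated loss; the two overlap in time in reality, so treat this purely as arithmetic on the quoted figures:]

```python
# Rough arithmetic on the bone-loss figures quoted above: ~0.5% loss per year
# after age ~40, plus ~10% extra loss around the menopause in women.
# My own simplification; the chapter gives only the individual figures.

def bone_mass_fraction(years_after_40, menopause_loss=0.0, annual_loss=0.005):
    """Fraction of peak bone mass remaining, compounding the annual loss."""
    return (1 - annual_loss) ** years_after_40 * (1 - menopause_loss)

# e.g. a woman at ~70: 30 years of gradual loss plus the ~10% menopausal loss
print(bone_mass_fraction(30, menopause_loss=0.10))  # roughly 0.77 of peak mass
```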

“The clinical utility of routine measurements of bone turnover markers is not yet established. […] Skeletal radiology[:] *Useful for: *Diagnosis of fracture. *Diagnosis of specific diseases (e.g. Paget’s disease and osteomalacia). *Identification of bone dysplasia. *Not useful for assessing bone density. […] Isotope bone scans are useful for identifying localized areas of bone disease, such as fracture, metastases, or Paget’s disease. […] Isotope bone scans are particularly useful in Paget’s disease to establish the extent and sites of skeletal involvement and the underlying disease activity. […] Bone biopsy is occasionally necessary for the diagnosis of patients with complex metabolic bone diseases. […] Bone biopsy is not indicated for the routine diagnosis of osteoporosis. It should only be undertaken in highly specialist centres with appropriate expertise. […] Measurement of 24h urinary excretion of calcium provides a measure of risk of renal stone formation or nephrocalcinosis in states of chronic hypercalcaemia. […] 25OH vitamin D […] is the main storage form of vitamin D, and the measurement of ‘total vitamin D’ is the most clinically useful measure of vitamin D status. Internationally, there remains controversy around a ‘normal’ or ‘optimal’ concentration of vitamin D. Levels over 50nmol/L are generally accepted as satisfactory, with values <25nmol/L representing deficiency. True osteomalacia occurs with vitamin D values <15 nmol/L. Low levels of 25OHD can result from a variety of causes […] Bone mass is quoted in terms of the number of standard deviations from an expected mean. […] A reduction of one SD in bone density will approximately double the risk of fracture.”

[I should perhaps add a cautionary note here that while this variable is very useful in general, it is more useful in some contexts than in others; and in some specific disease process contexts it is quite clear that it will tend to underestimate the fracture risk. Type 1 diabetes is a clear example. For more details, see this post.]
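[The "one SD doubles the fracture risk" rule of thumb quoted above has a simple closed form: if bone density is expressed as a T-score (SDs below the young-adult mean), relative risk versus a T-score of 0 is roughly 2^(−T). A sketch of my own of that heuristic, emphatically not a clinical tool:]

```python
# Sketch of the rule of thumb quoted above: each 1-SD reduction in bone
# density (one T-score unit) roughly doubles fracture risk, so relative risk
# vs. T = 0 is ~2**(-T). An illustration of the stated heuristic only.

def relative_fracture_risk(t_score, risk_multiplier_per_sd=2.0):
    """Approximate fracture risk relative to a person with T-score 0."""
    return risk_multiplier_per_sd ** (-t_score)

for t in (0.0, -1.0, -2.5):  # -2.5 is the usual densitometric osteoporosis threshold
    print(t, relative_fracture_risk(t))  # 1.0, 2.0, ~5.7
```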

“Hypercalcaemia is found in 5% of hospital patients and in 0.5% of the general population. […] Many different disease states can lead to hypercalcaemia. […] In asymptomatic community-dwelling subjects, the vast majority of hypercalcaemia is the result of hyperparathyroidism. […] The clinical features of hypercalcaemia are well recognized […]; unfortunately, they are non-specific […] [They include:] *Polyuria. *Polydipsia. […] *Anorexia. *Vomiting. *Constipation. *Abdominal pain. […] *Confusion. *Lethargy. *Depression. […] Clinical signs of hypercalcaemia are rare. […] the presence of bone pain or fracture and renal stones […] indicate the presence of chronic hypercalcaemia. […] Hypercalcaemia is usually a late manifestation of malignant disease, and the primary lesion is usually evident by the time hypercalcaemia is expressed (50% of patients die within 30 days).”

“Primary hyperparathyroidism [is] [p]resent in up to 1 in 500 of the general population where it is predominantly a disease of post-menopausal ♀ […] The normal physiological response to hypocalcaemia is an increase in PTH secretion. This is termed 2° hyperparathyroidism and is not pathological in as much as the PTH secretion remains under feedback control. Continued stimulation of the parathyroid glands can lead to autonomous production of PTH. This, in turn, causes hypercalcaemia which is termed tertiary hyperparathyroidism. This is usually seen in the context of renal disease […] In majority of patients [with hyperparathyroidism] without end-organ damage, disease is benign and stable. […] Investigation is, therefore, primarily aimed at determining the presence of end-organ damage from hypercalcaemia in order to determine whether operative intervention is indicated. […] It is generally accepted that all patients with symptomatic hyperparathyroidism or evidence of end-organ damage should be considered for parathyroidectomy. This would include: *Definite symptoms of hypercalcaemia. […] *Impaired renal function. *Renal stones […] *Parathyroid bone disease, especially osteitis fibrosis cystica. *Pancreatitis. […] Patients not managed with surgery require regular follow-up. […] <5% fail to become normocalcaemic [after surgery], and these should be considered for a second operation. […] Patients rendered permanently hypoparathyroid by surgery require lifelong supplements of active metabolites of vitamin D with calcium. This can lead to hypercalciuria, and the risk of stone formation may still be present in these patients. […] In hypoparathyroidism, the target serum calcium should be at the low end of the reference range. […] any attempt to raise the plasma calcium well into the normal range is likely to result in unacceptable hypercalciuria”.

“Although hypocalcaemia can result from failure of any of the mechanisms by which serum calcium concentration is maintained, it is usually the result of either failure of PTH secretion or because of the inability to release calcium from bone. […] The clinical features of hypocalcaemia are largely as a result of neuromuscular excitability. In order of severity, these include: *Tingling – especially of fingers, toes, or lips. *Numbness – especially of fingers, toes, or lips. *Cramps. *Carpopedal spasm. *Stridor due to laryngospasm. *Seizures. […] symptoms of hypocalcaemia tend to reflect the severity and rapidity of onset of the metabolic abnormality. […] there may be clinical signs and symptoms associated with the underlying condition: *Vitamin D deficiency may be associated with generalized bone pain, fractures, or proximal myopathy […] *Hypoparathyroidism can be accompanied by mental slowing and personality disturbances […] *If hypocalcaemia is present during the development of permanent teeth, these may show areas of enamel hypoplasia. This can be a useful physical sign, indicating that the hypocalcaemia is long-standing. […] Acute symptomatic hypocalcaemia is a medical emergency and demands urgent treatment whatever the cause […] *Patients with tetany or seizures require urgent IV treatment with calcium gluconate […] Care must be taken […] as too rapid elevation of the plasma calcium can cause arrhythmias. […] *Treatment of chronic hypocalcaemia is more dependent on the cause. […] In patients with mild parathyroid dysfunction, it may be possible to achieve acceptable calcium concentrations by using calcium supplements alone. […] The majority of patients will not achieve adequate control with such treatment. In those cases, it is necessary to use vitamin D or its metabolites in pharmacological doses to maintain plasma calcium.”

“Pseudohypoparathyroidism[:] *Resistance to parathyroid hormone action. *Due to defective signalling of PTH action via cell membrane receptor. *Also affects TSH, LH, FSH, and GH signalling. […] Patients with the most common type of pseudohypoparathyroidism (type 1a) have a characteristic set of skeletal abnormalities, known as Albright’s hereditary osteodystrophy. This comprises: *Short stature. *Obesity. *Round face. *Short metacarpals. […] The principles underlying the treatment of pseudohypoparathyroidism are the same as those underlying hypoparathyroidism. *Patients with the most common form of pseudohypoparathyroidism may have resistance to the action of other hormones which rely on G protein signalling. They, therefore, need to be assessed for thyroid and gonadal dysfunction (because of defective TSH and gonadotrophin action). If these deficiencies are present, they need to be treated in the conventional manner.”

“Osteomalacia occurs when there is inadequate mineralization of mature bone. Rickets is a disorder of the growing skeleton where there is inadequate mineralization of bone as it is laid down at the epiphysis. In most instances, osteomalacia leads to build-up of excessive unmineralized osteoid within the skeleton. In rickets, there is build-up of unmineralized osteoid in the growth plate. […] These two related conditions may coexist. […] Clinical features [of osteomalacia:] *Bone pain. *Deformity. *Fracture. *Proximal myopathy. *Hypocalcaemia (in vitamin D deficiency). […] The majority of patients with osteomalacia will show no specific radiological abnormalities. *The most characteristic abnormality is the Looser’s zone or pseudofracture. If these are present, they are virtually pathognomonic of osteomalacia. […] Oncogenic osteomalacia[:] Certain tumours appear to be able to produce FGF23 which is phosphaturic. This is rare […] Clinically, such patients usually present with profound myopathy as well as bone pain and fracture. […] Complete removal of the tumour results in resolution of the biochemical and skeletal abnormalities. If this is not possible […], treatment with vitamin D metabolites and phosphate supplements […] may help the skeletal symptoms.”

“Hypophosphataemia[:] Phosphate is important for normal mineralization of bone. In the absence of sufficient phosphate, osteomalacia results. […] In addition, phosphate is important in its own right for neuromuscular function, and profound hypophosphataemia can be accompanied by encephalopathy, muscle weakness, and cardiomyopathy. It must be remembered that, as phosphate is primarily an intracellular anion, a low plasma phosphate does not necessarily represent actual phosphate depletion. […] Mainstay [of treatment] is phosphate replacement […] *Long-term administration of phosphate supplements stimulates parathyroid activity. This can lead to hypercalcaemia, a further fall in phosphate, with worsening of the bone disease […] To minimize parathyroid stimulation, it is usual to give one of the active metabolites of vitamin D in conjunction with phosphate.”

“Although the term osteoporosis refers to the reduction in the amount of bony tissue within the skeleton, this is generally associated with a loss of structural integrity of the internal architecture of the bone. The combination of both these changes means that osteoporotic bone is at high risk of fracture, even after trivial injury. […] Historically, there has been a primary reliance on bone mineral density as a threshold for treatment, whereas currently there is far greater emphasis on assessing individual patients’ risk of fracture that incorporates multiple clinical risk factors as well as bone mineral density. […] Osteoporosis may arise from a failure of the body to lay down sufficient bone during growth and maturation; an earlier than usual onset of bone loss following maturity; or an accelerated rate of that loss. […] Early menopause or late puberty (in ♂ or ♀) is associated with risk of osteoporosis. […] Lifestyle factors affecting bone mass [include:] *weight-bearing exercise [increases bone mass] […] *Smoking. *Excessive alcohol. *Nulliparity. *Poor calcium nutrition. [These all decrease bone mass] […] The risk of osteoporotic fracture increases with age. Fracture rates in ♂ are approximately half of those seen in ♀ of the same age. A ♀ aged 50 has approximately a 1:2 chance [risk, surely… – US] of sustaining an osteoporotic fracture in the rest of her life. The corresponding figure for a ♂ is 1:5. […] One-fifth of hip fracture victims will die within 6 months of the injury, and only 50% will return to their previous level of independence.”

“Any fracture, other than those affecting fingers, toes, or face, which is caused by a fall from standing height or less is called a fragility (low-trauma) fracture, and underlying osteoporosis should be considered. Patients suffering such a fracture should be considered for investigation and/or treatment for osteoporosis. […] [Osteoporosis is] [u]sually clinically silent until an acute fracture. *Two-thirds of vertebral fractures do not come to clinical attention. […] Osteoporotic vertebral fractures only rarely lead to neurological impairment. Any evidence of spinal cord compression should prompt a search for malignancy or other underlying cause. […] Osteoporosis does not cause generalized skeletal pain. […] Biochemical markers of bone turnover may be helpful in the calculation of fracture risk and in judging the response to drug therapies, but they have no role in the diagnosis of osteoporosis. […] An underlying cause for osteoporosis is present in approximately 10-30% of women and up to 50% of men with osteoporosis. […] 2° causes of osteoporosis are more common in ♂ and need to be excluded in all ♂ with osteoporotic fracture. […] Glucocorticoid treatment is one of the major 2° causes of osteoporosis.”

February 22, 2018 Posted by | Books, Cancer/oncology, Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Pharmacology

Prevention of Late-Life Depression (I)

“Late-life depression is a common and highly disabling condition and is also associated with higher health care utilization and overall costs. The presence of depression may complicate the course and treatment of comorbid major medical conditions that are also highly prevalent among older adults — including diabetes, hypertension, and heart disease. Furthermore, a considerable body of evidence has demonstrated that, for older persons, residual symptoms and functional impairment due to depression are common — even when appropriate depression therapies are being used. Finally, the worldwide phenomenon of a rapidly expanding older adult population means that unprecedented numbers of seniors — and the providers who care for them — will be facing the challenge of late-life depression. For these reasons, effective prevention of late-life depression will be a critical strategy to lower overall burden and cost from this disorder. […] This textbook will illustrate the imperative for preventing late-life depression, introduce a broad range of approaches and key elements involved in achieving effective prevention, and provide detailed examples of applications of late-life depression prevention strategies”.

I gave the book two stars on goodreads. There are 11 chapters in the book, written by 22 different contributors/authors, so of course there’s a lot of variation in the quality of the material included; the two star rating was an overall assessment of the quality of the material, and the last two chapters – but in particular chapter 10 – did a really good job convincing me that the book did not deserve a 3rd star (if you decide to read the book, I advise you to skip chapter 10). In general I think many of the authors are way too focused on statistical significance and much too hesitant to report actual effect sizes, which are much more interesting. Gender is mentioned repeatedly throughout the coverage as an important variable, to the extent that people who do not read the book carefully might think this is one of the most important variables at play; but when you look at actual effect sizes, you get reported ORs of ~1.4 for this variable, compared to e.g. ORs in the ~8-9 range for the bereavement variable (see below). You can quibble about population attributable fraction and so on here, but if the effect size is that small it’s unlikely to be all that useful in terms of directing prevention efforts/resource allocation (especially considering that women make up the majority of the total population in these older age groups anyway, as they have higher life expectancy than their male counterparts).

Anyway, below I’ve added some quotes and observations from the first few chapters of the book.

“Meta-analyses of more than 30 randomized trials conducted in the High Income Countries show that the incidence of new depressive and anxiety disorders can be reduced by 25–50 % over 1–2 years, compared to usual care, through the use of learning-based psychotherapies (such as interpersonal psychotherapy, cognitive behavioral therapy, and problem solving therapy) […] The case for depression prevention is compelling and represents the key rationale for this volume: (1) Major depression is both prevalent and disabling, typically running a relapsing or chronic course. […] (2) Major depression is often comorbid with other chronic conditions like diabetes, amplifying the disability associated with these conditions and worsening family caregiver burden. (3) Depression is associated with worse physical health outcomes, partly mediated through poor treatment adherence, and it is associated with excess mortality after myocardial infarction, stroke, and cancer. It is also the major risk factor for suicide across the life span and particularly in old age. (4) Available treatments are only partially effective in reducing symptom burden, sustaining remission, and averting years lived with disability.”

“[M]any people suffering from depression do not receive any care and approximately a third of those receiving care do not respond to current treatments. The risk of recurrence is high, also in older persons: half of those who have experienced a major depression will experience one or even more recurrences [4]. […] Depression increases the risk of death: among people suffering from depression the risk of dying is 1.65 times higher than among people without a depression [7], with a dose-response relation between severity and duration of depression and the resulting excess mortality [8]. In adults, the average length of a depressive episode is 8 months but among 20 % of people the depression lasts longer than 2 years [9]. […] It has been estimated that in Australia […] 60 % of people with an affective disorder receive treatment, and using guidelines and standards only 34 % receives effective treatment [14]. This translates into preventing 15 % of Years Lived with Disability [15], a measure of disease burden [14] and stresses the need for prevention [16]. Primary health care providers frequently do not recognize depression, in particular among elderly. Older people may present their depressive symptoms differently from younger adults, with more emphasis on physical complaints [17, 18]. Adequate diagnosis of late-life depression can also be hampered by comorbid conditions such as Parkinson and dementia that may have similar symptoms, or by the fact that elderly people as well as care workers may assume that “feeling down” is part of becoming older [17, 18]. […] Many people suffering from depression do not seek professional help or are not identified as depressed [21]. Almost 14 % of elderly people living in community-type living suffer from a severe depression requiring clinical attention [22] and more than 50 % of those have a chronic course [4, 23]. Smit et al. reported an incidence of 6.1 % of chronic or recurrent depression among a sample of 2,200 elderly people (ages 55–85) [21].”

“Prevention differs from intervention and treatment as it is aimed at general population groups who vary in risk level for mental health problems such as late-life depression. The Institute of Medicine (IOM) has introduced a prevention framework, which provides a useful model for comprehending the different objectives of the interventions [29]. The overall goal of prevention programs is reducing risk factors and enhancing protective factors.
The IOM framework distinguishes three types of prevention interventions: (1) universal preventive interventions, (2) selective preventive interventions, and (3) indicated preventive interventions. Universal preventive interventions are targeted at the general audience, regardless of their risk status or the presence of symptoms. Selective preventive interventions serve those sub-populations who have a significantly higher than average risk of a disorder, either imminently or over a lifetime. Indicated preventive interventions target identified individuals with minimal but detectable signs or symptoms suggesting a disorder. This type of prevention consists of early recognition and early intervention of the diseases to prevent deterioration [30]. For each of the three types of interventions, the goal is to reduce the number of new cases. The goal of treatment, on the other hand, is to reduce prevalence or the total number of cases. By reducing incidence you also reduce prevalence [5]. […] prevention research differs from treatment research in various ways. One of the most important differences is the fact that participants in treatment studies already meet the criteria for the illness being studied, such as depression. The intervention is targeted at improvement or remission of the specific condition quicker than if no intervention had taken place. In prevention research, the participants do not meet the specific criteria for the illness being studied and the overall goal of the intervention is to prevent the development of a clinical illness at a lower rate than a comparison group [5].”
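The incidence/prevalence distinction above ("By reducing incidence you also reduce prevalence") can be made concrete with the standard steady-state approximation from epidemiology, prevalence ≈ incidence rate × mean duration of illness. This is a textbook relation, not a formula from the book, and the numbers below are illustrative only:

```python
# Steady-state approximation: prevalence ~= incidence rate * mean duration.
# Illustrative numbers only, not taken from the quoted text.

def steady_state_prevalence(incidence_per_year, mean_duration_years):
    """Approximate point prevalence of a condition in equilibrium."""
    return incidence_per_year * mean_duration_years

# Example: 6 new cases per 1,000 person-years; mean episode length 8 months.
baseline = steady_state_prevalence(6 / 1000, 8 / 12)

# A preventive intervention that cuts incidence by 25% lowers prevalence
# proportionally under this approximation.
after_prevention = steady_state_prevalence(0.75 * 6 / 1000, 8 / 12)

print(f"baseline prevalence: {baseline:.4f}")
print(f"after prevention:    {after_prevention:.4f}")
```

The approximation also shows why chronicity matters: the 20 % of cases lasting beyond 2 years contribute disproportionately to prevalence even at a fixed incidence.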

“A couple of risk factors [for depression] occur more frequently among the elderly than among young adults. The loss of a loved one or the loss of a social role (e.g., employment), decrease of social support and network, and the increasing chance of isolation occur more frequently among the elderly. Many elderly also suffer from physical diseases: 64 % of elderly aged 65–74 have a chronic disease [36] […]. It is important to note that depression often co-occurs with other disorders such as physical illness and other mental health problems (comorbidity). Losing a spouse can have significant mental health effects. Almost half of all widows and widowers during the first year after the loss meet the criteria for depression according to the DSM-IV [37]. Depression after loss of a loved one is normal in times of mourning. However, when depressive symptoms persist during a longer period of time it is possible that a depression is developing. Zisook and Shuchter found that a year after the loss of a spouse 16 % of widows and widowers met the criteria of a depression compared to 4 % of those who did not lose their spouse [38]. […] People with a chronic physical disease are also at a higher risk of developing a depression. An estimated 12–36 % of those with a chronic physical illness also suffer from clinical depression [40]. […] around 25 % of cancer patients suffer from depression [40]. […] Depression is relatively common among elderly residing in hospitals and retirement- and nursing homes. An estimated 6–11 % of residents have a depressive illness and 30 % have depressive symptoms [41]. […] Loneliness is common among the elderly. Among those of 60 years or older, 43 % reported being lonely in a study conducted by Perissinotto et al. […] Loneliness is often associated with physical and mental complaints; apart from depression it also increases the chance of developing dementia and excess mortality [43].”

“From the public health perspective it is important to know what the potential health benefits would be if the harmful effect of certain risk factors could be removed. What health benefits would arise from this, at which efforts and costs? To measure this the population attributable fraction (PAF) can be used. The PAF is expressed as a percentage and demonstrates the decrease in the percentage of incidences (number of new cases) when the harmful effects of the targeted risk factors are fully taken away. For public health it would be more effective to design an intervention targeted at a risk factor with a high PAF than a low PAF. […] An intervention needs to be efficacious in order to be implemented; this means that it has to show a statistically significant difference with placebo or other treatment. Secondly, it needs to be effective; it needs to prove its benefits also in real life (“everyday care”) circumstances. Thirdly, it needs to be efficient. The measure to address this is the Number Needed to Be Treated (NNT). The NNT expresses how many people need to be treated to prevent the onset of one new case with the disorder; the lower the number, the more efficient the intervention [45]. To summarize, an indicated preventative intervention would ideally be targeted at a relatively small group of people with a high, absolute chance of developing the disease, and a risk profile that is responsible for a high PAF. Furthermore, there needs to be an intervention that is both effective and efficient. […] a more detailed and specific description of the target group results in a higher absolute risk, a lower NNT, and also a lower PAF. This is helpful in determining the costs and benefits of interventions aiming at more specific or broader subgroups in the population. […] Unfortunately very large samples are required to demonstrate reductions [in incidence] in universal or selective interventions [46]. 
[…] If the incidence rate is higher in the target population, which is usually the case in selective and even more so in indicated prevention, the number of participants needed to prove an effect is much smaller [5]. This shows that, even though universal interventions may be effective, their effect is harder to prove than that of indicated prevention. […] Indicated and selective preventions appear to be the most successful in preventing depression to date; however, more research needs to be conducted in larger samples to determine which prevention method is really most effective.”
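The two summary measures discussed above can be sketched with their standard textbook formulas: Levin's formula for the PAF and NNT = 1 / absolute risk reduction. The formulas are the conventional ones; the example numbers are invented for illustration and do not come from the quoted text:

```python
# Sketch of the PAF and NNT as described above; illustrative numbers only.

def population_attributable_fraction(exposure_prevalence, relative_risk):
    """Levin's formula: the fraction of incidence that would disappear if
    the risk factor's harmful effect were fully taken away."""
    excess = exposure_prevalence * (relative_risk - 1)
    return excess / (1 + excess)

def number_needed_to_treat(risk_control, risk_intervention):
    """How many people must receive the intervention to prevent one new case;
    the lower the number, the more efficient the intervention."""
    return 1 / (risk_control - risk_intervention)

# Example: a risk factor present in 20% of the population that doubles risk.
paf = population_attributable_fraction(0.20, 2.0)  # ~0.167

# Example: one-year incidence of 15% without the intervention, 10% with it.
nnt = number_needed_to_treat(0.15, 0.10)  # ~20

print(f"PAF = {paf:.1%}, NNT = {nnt:.0f}")
```

This also makes the trade-off in the quote visible: narrowing the target group raises the absolute risk (lowering the NNT) but, because fewer cases in the whole population are attributable to that narrow profile, lowers the PAF.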

“Groffen et al. [6] recently conducted an investigation among a sample of 4,809 participants from the Reykjavik Study (aged 66–93 years). Similar to the findings presented by Vink and colleagues [3], education level was related to depression risk: participants with lower education levels were more likely to report depressed mood in late-life than those with a college education (odds ratio [OR] = 1.87, 95 % confidence interval [CI] = 1.35–2.58). […] Results from a meta-analysis by Lorant and colleagues [8] showed that lower SES individuals had greater odds of developing depression than those in the highest SES group (OR = 1.24, p= 0.004); however, the studies involved in this review did not focus on older populations. […] Cole and Dendukuri [10] performed a meta-analysis of studies involving middle-aged and older adult community residents, and determined that female gender was a risk factor for depression in this population (Pooled OR = 1.4, 95 % CI = 1.2–1.8), but not old age. Blazer and colleagues [11] found a significant positive association between older age and depressive symptoms in a sample consisting of community-dwelling older adults; however, when potential confounders such as physical disability, cognitive impairment, and gender were included in the analysis, the relationship between chronological age and depressive symptoms was reversed (p< 0.01). A study by Schoevers and colleagues [14] had similar results […] these findings suggest that the higher incidence of depression observed among the oldest-old may be explained by other relevant factors. By contrast, the association of female gender with increased risk of late-life depression has been observed to be a highly consistent finding.”
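The odds ratios with 95 % confidence intervals reported in these studies are typically derived from 2×2 exposure-by-outcome tables. A minimal sketch using Woolf's log-OR method follows; the cell counts are invented for illustration and are not from any of the cited studies:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-scale) 95% CI from a 2x2 table:
    a/b = exposed cases/non-cases, c/d = unexposed cases/non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 120/880 cases/non-cases among the lower-education
# group vs 70/930 among the college-educated group.
or_, lo, hi = odds_ratio_ci(a=120, b=880, c=70, d=930)
print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
```

A CI that excludes 1 (as in the Groffen et al. result, 1.35–2.58) corresponds to a statistically significant association at the 5 % level.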

“In an examination of marital bereavement, Turvey et al. [16] analyzed data among 5,449 participants aged 70 years […] recently bereaved participants had nearly nine times the odds of developing syndromal depression as married participants (OR = 8.8, 95 % CI = 5.1–14.9, p<0.0001), and they also had significantly higher risk of depressive symptoms 2 years after the spousal loss. […] Caregiving burden is well-recognized as a predisposing factor for depression among older adults [18]. Many older persons are coping with physically and emotionally challenging caregiving roles (e.g., caring for a spouse/partner with a serious illness or with cognitive or physical decline). Additionally, many caregivers experience elements of grief, as they mourn the loss of relationship with or the decline of valued attributes of their care recipients. […] Concepts of social isolation have also been examined with regard to late-life depression risk. For example, among 892 participants aged 65 years […], Gureje et al. [13] found that women with a poor social network and rural residential status were more likely to develop major depressive disorder […] Harlow and colleagues [21] assessed the association between social network and depressive symptoms in a study involving both married and recently widowed women between the ages of 65 and 75 years; they found that number of friends at baseline had an inverse association with CES-D (Centers for Epidemiologic Studies Depression Scale) score after 1 month (p< 0.05) and 12 months (p= 0.06) of follow-up. In a study that explicitly addressed the concept of loneliness, Jaremka et al. [22] conducted a study relating this factor to late-life depression; importantly, loneliness has been validated as a distinct construct, distinguishable among older adults from depression. 
Among 229 participants (mean age = 70 years) in a cohort of older adults caring for a spouse with dementia, loneliness (as measured by the NYU scale) significantly predicted incident depression (p<0.001). Finally, social support has been identified as important to late-life depression risk. For example, Cui and colleagues [23] found that low perceived social support significantly predicted worsening depression status over a 2-year period among 392 primary care patients aged 65 years and above.”

“Saunders and colleagues [26] reported […] findings with alcohol drinking behavior as the predictor. Among 701 community-dwelling adults aged 65 years and above, the authors found a significant association between prior heavy alcohol consumption and late-life depression among men: compared to those who were not heavy drinkers, men with a history of heavy drinking had a nearly fourfold higher odds of being diagnosed with depression (OR = 3.7, 95 % CI = 1.3–10.4, p< 0.05). […] Almeida et al. found that obese men were more likely than non-obese (body mass index [BMI] < 30) men to develop depression (HR = 1.31, 95 % CI = 1.05–1.64). Consistent with these results, presence of the metabolic syndrome was also found to increase risk of incident depression (HR = 2.37, 95 % CI = 1.60–3.51). Finally, leisure-time activities are also important to study with regard to late-life depression risk, as these too are readily modifiable behaviors. For example, Magnil et al. [30] examined such activities among a sample of 302 primary care patients aged 60 years. The authors observed that those who lacked leisure activities had an increased risk of developing depressive symptoms over the 2-year study period (OR = 12, 95 % CI = 1.1–136, p= 0.041). […] an important future direction in addressing social and behavioral risk factors in late-life depression is to make more progress in trials that aim to alter those risk factors that are actually modifiable.”

February 17, 2018 Posted by | Books, Epidemiology, Health Economics, Medicine, Psychiatry, Psychology, Statistics

Endocrinology (part 3 – adrenal glands)

Some observations from chapter 3 below.

“The normal adrenal gland weighs 4-5g. The cortex represents 90% of the normal gland and surrounds the medulla. […] Glucocorticoid (cortisol […]) production occurs from the zona fasciculata, and adrenal androgens arise from the zona reticularis. Both of these are under the control of ACTH [see also my previous post about the book – US], which regulates both steroid synthesis and also adrenocortical growth. […] Mineralocorticoid (aldosterone […]) synthesis occurs in the zona glomerulosa, predominantly under the control of the renin-angiotensin system […], although ACTH also contributes to its regulation. […] The adrenal gland […] also produces sex steroids in the form of dehydroepiandrostenedione (DHEA) and androstenedione. The synthetic pathway is under the control of ACTH. Urinary steroid profiling provides quantitative information on the biosynthetic and catabolic pathways. […] CT is the most widely used modality for imaging the adrenal glands. […] MRI can also reliably detect adrenal masses >5-10mm in diameter and, in some circumstances, provides additional information to CT […] PET can be useful in locating tumours and metastases. […] Adrenal vein sampling (AVS) […] can be useful to lateralize an adenoma or to differentiate an adenoma from bilateral hyperplasia. […] AVS is of particular value in lateralizing small aldosterone-producing adenomas that cannot easily be visualized on CT or MRI. […] The procedure should only be undertaken in patients in whom surgery is feasible and desired […] [and] should be carried out in specialist centres only; centres with <20 procedures per year have been shown to have poor success rates”.

“The majority of cases of mineralocorticoid excess are due to excess aldosterone production, […] typically associated with hypertension and hypokalemia. *Primary hyperaldosteronism is a disorder of autonomous aldosterone hypersecretion with suppressed renin levels. *Secondary hyperaldosteronism occurs when aldosterone hypersecretion occurs 2° [secondary, US] to elevated circulating renin levels. This is typical of heart failure, cirrhosis, or nephrotic syndrome but can also be due to renal artery stenosis and, occasionally, a very rare renin-producing tumour (reninoma). […] Primary hyperaldosteronism is present in around 10% of hypertensive patients. It is the most prevalent form of secondary hypertension. […] Aldosterone causes renal sodium retention and potassium loss. This results in expansion of body sodium content, leading to suppression of renal renin synthesis. The direct action of aldosterone on the distal nephron causes sodium retention and loss of hydrogen and potassium ions, resulting in a hypokalaemic alkalosis, although serum potassium […] may be normal in up to 50% of cases. Aldosterone has pathophysiological effects on a range of other tissues, causing cardiac fibrosis, vascular endothelial dysfunction, and nephrosclerosis. […] hypertension […] is often resistant to conventional therapy. […] Hypokalaemia is usually asymptomatic. […] Occasionally, the clinical syndrome of hyperaldosteronism is not associated with excess aldosterone. […] These conditions are rare.”

“Bilateral adrenal hyperplasia [makes up] 60% [of cases of primary hyperaldosteronism]. […] Conn’s syndrome (aldosterone-producing adrenal adenoma) [makes up] 35%. […] The pathophysiology of bilateral adrenal hyperplasia is not understood, and it is possible that it represents an extreme end of the spectrum of low renin essential hypertension. […] Aldosterone-producing carcinoma[s] [are] [r]are and usually associated with excessive secretion of other corticosteroids (cortisol, androgen, oestrogen). […] Indications [for screening include:] *Patients resistant to conventional antihypertensive medication (i.e. not controlled on three agents). *Hypertension associated with hypokalaemia […] *Hypertension developing before age of 40 years. […] Confirmation of autonomous aldosterone production is made by demonstrating failure to suppress aldosterone in the face of sodium/volume loading. […] A number of tests have been described that are said to differentiate between the various subtypes of 1° [primary, US] aldosteronism […]. However, none of these are sufficiently specific to influence management decisions”.

“Laparoscopic adrenalectomy is the treatment of choice for aldosterone-secreting adenomas […] and laparoscopic adrenalectomy […] has become the procedure of choice for removal of most adrenal tumours. *Hypertension is cured in about 70%. *If it persists […], it is more amenable to medical treatment. *Overall, 50% become normotensive in 1 month and 70% within 1 year. […] Medical therapy remains an option for patients with bilateral disease and those with a solitary adrenal adenoma who are unlikely to be cured by surgery, who are unfit for operation, or who express a preference for medical management. *The mineralocorticoid receptor antagonist spironolactone […] has been used successfully for many years to treat hypertension and hypokalaemia associated with bilateral adrenal hyperplasia […] Side effects are common – particularly gynaecomastia and impotence in ♂, menstrual irregularities in ♀, and GI effects. […] Eplerenone […] is a mineralocorticoid receptor antagonist without antiandrogen effects and hence greater selectivity and fewer side effects than spironolactone. *Alternative drugs include the potassium-sparing diuretics amiloride and triamterene.”

“Cushing’s syndrome results from chronic excess cortisol [see also my second post in this series] […] The causes may be classified as ACTH-dependent and ACTH-independent. […] ACTH-independent Cushing’s syndrome […] is due to adrenal tumours (benign and malignant), and is responsible for 10-15% of cases of Cushing’s syndrome. […] Benign adrenocortical adenomas (ACA) are usually encapsulated and <4cm in diameter. They are usually associated with pure glucocorticoid excess. *Adrenocortical carcinomas (ACC) are usually >6cm in diameter, […] and are not infrequently associated with local invasion and metastases at the time of diagnosis. Adrenal carcinomas are characteristically associated with the excess secretion of several hormones; most frequently found is the combination of cortisol and androgen (precursors) […] ACTH-dependent Cushing’s results in bilateral adrenal hyperplasia, thus one has to firmly differentiate between ACTH-dependent and independent causes of Cushing’s before assuming bilateral adrenal hyperplasia as the primary cause of disease. […] It is important to note that, in patients with adrenal carcinoma, there may also be features related to excessive androgen production in ♀ and also a relatively more rapid time course of development of the syndrome. […] Patients with ACTH-independent Cushing’s syndrome do not suppress cortisol […] on high-dose dexamethasone testing and fail to show a rise in cortisol and ACTH following administration of CRH. […] ACTH-independent causes are adrenal in origin, and the mainstay of further investigation is adrenal imaging by CT”.

“Adrenal adenomas, which are successfully treated with surgery, have a good prognosis, and recurrence is unlikely. […] Bilateral adrenalectomy [in the context of bilateral adrenal hyperplasia] is curative. Lifelong glucocorticoid and mineralocorticoid treatment is [however] required. […] The prognosis for adrenal carcinoma is very poor despite surgery. Reports suggest a 5-year survival of 22% and median survival time of 14 months […] Treatment of adrenocortical carcinoma (ACC) should be carried out in a specialist centre, with expert surgeons, oncologists, and endocrinologists with extensive experience in treating ACC. This improves survival.”

“Adrenal insufficiency [AI, US] is defined by the lack of cortisol, i.e. glucocorticoid deficiency, and may be due to destruction of the adrenal cortex (1°, Addison’s disease and congenital adrenal hyperplasia (CAH) […] or due to disordered pituitary and hypothalamic function (2°). […] *Permanent adrenal insufficiency is found in 5 in 10,000 population. *The most frequent cause is hypothalamic-pituitary damage, which is the cause of AI in 60% of affected patients. *The remaining 40% of cases are due to primary failure of the adrenal to synthesize cortisol, with almost equal prevalence of Addison’s disease (mostly of autoimmune origin, prevalence 0.9-1.4 in 10,000) and congenital adrenal hyperplasia (0.7-1.0 in 10,000). *2° adrenal insufficiency due to suppression of pituitary-hypothalamic function by exogenously administered, supraphysiological glucocorticoid doses for treatment of, for example, COPD or rheumatoid arthritis, is much more common (50-200 in 10,000 population). However, adrenal function in these patients can recover”.

“[In primary AI] [a]drenal gland destruction or dysfunction occurs due to a disease process which usually involves all three zones of the adrenal cortex, resulting in inadequate glucocorticoid, mineralocorticoid, and adrenal androgen precursor secretion. The manifestations of insufficiency do not usually appear until at least 90% of the gland has been destroyed and are usually gradual in onset […] Acute adrenal insufficiency may occur in the context of acute septicaemia […] Mineralocorticoid deficiency leads to reduced sodium retention and hyponatraemia and hypotension […] Androgen deficiency presents in ♀ with reduced axillary and pubic hair and reduced libido. (Testicular production of androgens is more important in ♂). [In secondary AI] [i]nadequate ACTH results in deficient cortisol production (and ↓ androgens in ♀). […] Mineralocorticoid secretion remains normal […] The onset is usually gradual, with partial ACTH deficiency resulting in reduced response to stress. […] Lack of stimulation of skin MC1R due to ACTH deficiency results in pale skin appearance. […] [In 1° adrenal insufficiency] hyponatraemia is present in 90% and hyperkalaemia in 65%. […] Undetectable serum cortisol is diagnostic […], but the basal cortisol is often in the normal range. A cortisol >550nmol/L precludes the diagnosis. At times of acute stress, an inappropriately low cortisol is very suggestive of the diagnosis.”

“Autoimmune adrenalitis[:] Clinical features[:] *Anorexia and weight loss (>90%). *Tiredness. *Weakness – generalized, no particular muscle groups. […] Dizziness and postural hypotension. *GI symptoms – nausea and vomiting, abdominal pain, diarrhea. *Arthralgia and myalgia. […] *Mediated by humoral and cell-mediated immune mechanisms. Autoimmune insufficiency associated with polyglandular autoimmune syndrome is more common in ♀ (70%). *Adrenal cortex antibodies are present in the majority of patients at diagnosis, and […] they are still found in approximately 70% of patients 10 years later. Up to 20% of patients/year with [positive] antibodies develop adrenal insufficiency. […] *Antiadrenal antibodies are found in <2% of patients with other autoimmune endocrine disease (Hashimoto’s thyroiditis, diabetes mellitus, autoimmune hypothyroidism, hypoparathyroidism, pernicious anemia). […] antibodies to other endocrine glands are commonly found in patients with autoimmune adrenal insufficiency […] However, the presence of antibodies does not predict subsequent manifestation of organ-specific autoimmunity. […] Patients with type 1 diabetes mellitus and autoimmune thyroid disease only rarely develop autoimmune adrenal insufficiency. Approximately 60% of patients with Addison’s disease have other autoimmune or endocrine disorders. […] The adrenals are small and atrophic in chronic autoimmune adrenalitis.”

“Autoimmune polyglandular syndrome (APS) type 1[:] *Also known as autoimmune polyendocrinopathy, candidiasis, and ectodermal dystrophy (APECED). […] [C]hildhood onset. *Chronic mucocutaneous candidiasis. *Hypoparathyroidism (90%), 1° adrenal insufficiency (60%). *1° gonadal failure (41%) – usually after Addison’s diagnosis. *1° hypothyroidism. *Rarely hypopituitarism, diabetes insipidus, type 1 diabetes mellitus. […] APS type 2[:] *Adult onset. *Adrenal insufficiency (100%). 1° autoimmune thyroid disease (70%) […] Type 1 diabetes mellitus (5-20%) – often before Addison’s diagnosis. *1° gonadal failure in affected women (5-20%). […] Schmidt’s syndrome: *Addison’s disease, and *Autoimmune hypothyroidism. *Carpenter syndrome: *Addison’s disease, and *Autoimmune hypothyroidism, and/or *Type 1 diabetes mellitus.”

“An adrenal incidentaloma is an adrenal mass that is discovered incidentally upon imaging […] carried out for reasons other than a suspected adrenal pathology. […] *Autopsy studies suggest a prevalence of adrenal masses of 1-6% in the general population. *Imaging studies suggest that adrenal masses are present in 2-3% of the general population. Prevalence increases with ageing, and 8-10% of 70-year olds harbour an adrenal mass. […] It is important to determine whether the incidentally discovered adrenal mass is: *Malignant. *Functioning and associated with excess hormonal secretion.”

January 17, 2018 Posted by | Books, Cancer/oncology, Diabetes, Epidemiology, Immunology, Medicine, Nephrology, Pharmacology

Endocrinology (part 2 – pituitary)

Below I have added some observations from the second chapter of the book, which covers the pituitary gland.

“The pituitary gland is centrally located at the base of the brain in the sella turcica within the sphenoid bone. It is attached to the hypothalamus by the pituitary stalk and a fine vascular network. […] The pituitary measures around 13mm transversely, 9mm anteroposteriorly, and 6mm vertically and weighs approximately 100mg. It increases during pregnancy to almost twice its normal size, and it decreases in the elderly. *Magnetic resonance imaging (MRI) currently provides the optimal imaging of the pituitary gland. *Computed tomography (CT) scans may still be useful in demonstrating calcification in tumours […] and hyperostosis in association with meningiomas or evidence of bone destruction. […] T1-weighted images demonstrate cerebrospinal fluid (CSF) as dark grey and brain as much whiter. This imaging is useful for demonstrating anatomy clearly. […] On T1-weighted images, pituitary adenomas are of lower signal intensity than the remainder of the normal gland. […] The presence of microadenomas may be difficult to demonstrate.”

“Hypopituitarism refers to either partial or complete deficiency of anterior and/or posterior pituitary hormones and may be due to [primary] pituitary disease or to hypothalamic pathology which interferes with the hypothalamic control of the pituitary. Causes: *Pituitary tumours. *Parapituitary tumours […] *Radiotherapy […] *Pituitary infarction (apoplexy), Sheehan’s syndrome. *Infiltration of the pituitary gland […] *Infection […] *Trauma […] *Subarachnoid haemorrhage. *Isolated hypothalamic-releasing hormone deficiency, e.g. Kallmann’s syndrome […] *Genetic causes [Let’s stop here: Point is, lots of things can cause pituitary problems…] […] The clinical features depend on the type and degree of hormonal deficits, and the rate of its development, in addition to whether there is intercurrent illness. In the majority of cases, the development of hypopituitarism follows a characteristic order, with secretion of GH [growth hormone, US] and then gonadotrophins being affected first, followed by TSH [Thyroid-Stimulating Hormone, US] and ACTH [Adrenocorticotropic Hormone, US] secretion at a later stage. PRL [prolactin, US] deficiency is rare, except in Sheehan’s syndrome associated with failure of lactation. ADH [antidiuretic hormone, US] deficiency is virtually unheard of with pituitary adenomas but may be seen rarely with infiltrative disorders and trauma. The majority of the clinical features are similar to those occurring when there is target gland insufficiency. […] NB Houssay phenomenon. Amelioration of diabetes mellitus in patients with hypopituitarism due to reduction in counter-regulatory hormones. […] The aims of investigation of hypopituitarism are to biochemically assess the extent of pituitary hormone deficiency and also to elucidate the cause. […] Treatment involves adequate and appropriate hormone replacement […] and management of the underlying cause.”

“Apoplexy refers to infarction of the pituitary gland due to either haemorrhage or ischaemia. It occurs most commonly in patients with pituitary adenomas, usually macroadenomas […] It is a medical emergency, and rapid hydrocortisone replacement can be lifesaving. It may present with […] sudden onset headache, vomiting, meningism, visual disturbance, and cranial nerve palsy.”

“Anterior pituitary hormone replacement therapy is usually performed by replacing the target hormone rather than the pituitary or hypothalamic hormone that is actually deficient. The exceptions to this are GH replacement […] and when fertility is desired […] [In the context of thyroid hormone replacement:] In contrast to replacement in [primary] hypothyroidism, the measurement of TSH cannot be used to assess adequacy of replacement in TSH deficiency due to hypothalamo-pituitary disease. Therefore, monitoring of treatment in order to avoid under- and over-replacement should be via both clinical assessment and by measuring free thyroid hormone concentrations […] [In the context of sex hormone replacement:] Oestrogen/testosterone administration is the usual method of replacement, but gonadotrophin therapy is required if fertility is desired […] Patients with ACTH deficiency usually need glucocorticoid replacement only and do not require mineralocorticoids, in contrast to patients with Addison’s disease. […] Monitoring of replacement [is] important to avoid over-replacement which is associated with ↑ BP, elevated glucose and insulin, and reduced bone mineral density (BMD). Under-replacement leads to non-specific symptoms, as seen in Addison’s disease […] Conventional replacement […] may overtreat patients with partial ACTH deficiency.”

“There is now a considerable amount of evidence that there are significant and specific consequences of GH deficiency (GHD) in adults and that many of these features improve with GH replacement therapy. […] It is important to differentiate between adult and childhood onset GHD. […] the commonest cause in childhood is an isolated variable deficiency of GH-releasing hormone (GHRH) which may resolve in adult life […] It is, therefore, important to retest patients with childhood onset GHD when linear growth is completed (50% recovery of this group). Adult onset GHD usually occurs [secondarily] to a structural pituitary or parapituitary condition or due to the effects of surgical treatment or radiotherapy. Prevalence[:] *Adult onset GHD 1/10,000 *Adult GHD due to adult and childhood onset GHD 3/10,000. Benefits of GH replacement[:] *Improved QoL and psychological well-being. *Improved exercise capacity. *↑ lean body mass and reduced fat mass. *Prolonged GH replacement therapy (>12-24 months) has been shown to increase BMD, which would be expected to reduce fracture rate. *There are, as yet, no outcome studies in terms of cardiovascular mortality. However, GH replacement does lead to a reduction (~15%) in cholesterol. GH replacement also leads to improved ventricular function and ↑ left ventricular mass. […] All patients with GHD should be considered for GH replacement therapy. […] adverse effects experienced with GH replacement usually resolve with dose reduction […] GH treatment may be associated with impairment of insulin sensitivity, and therefore markers of glycemia should be monitored. […] Contraindications to GH replacement[:] *Active malignancy. *Benign intracranial hypertension. *Pre-proliferative/proliferative retinopathy in diabetes mellitus.”

“*Pituitary adenomas are the most common pituitary disease in adults and constitute 10-15% of primary brain tumours. […] *The incidence of clinically apparent pituitary disease is 1 in 10,000. *Pituitary carcinoma is very rare (<0.1% of all tumours) and is most commonly ACTH- or prolactin-secreting. […] *Microadenoma <1cm. *Macroadenoma >1cm. [In terms of the functional status of tumours, the break-down is as follows:] *Prolactinoma 35-40%. *Non-functioning 30-35%. *Growth hormone (acromegaly) 10-15%. *ACTH adenoma (Cushing’s disease) 5-10% *TSH adenoma <5%. […] Pituitary disease is associated with an increased mortality, predominantly due to vascular disease. This may be due to oversecretion of GH or ACTH, hormone deficiencies or excessive replacement (e.g. of hydrocortisone).”

“*Prolactinomas are the commonest functioning pituitary tumour. […] Malignant prolactinomas are very rare […] [Clinical features of hyperprolactinaemia:] *Galactorrhoea (up to 90% ♀, <10% ♂). *Disturbed gonadal function [menstrual disturbance, infertility, reduced libido, ED in ♂] […] Hyperprolactinaemia is associated with a long-term risk of reduced BMD. […] Hypothyroidism and chronic renal failure are causes of hyperprolactinaemia. […] Antipsychotic agents are the most likely psychotropic agents to cause hyperprolactinaemia. […] Macroadenomas are space-occupying tumours, often associated with bony erosion and/or cavernous sinus invasion. […] *Invasion of the cavernous sinus may lead to cranial nerve palsies. *Occasionally, very invasive tumours may erode bone and present with a CSF leak or [secondary] meningitis. […] Although microprolactinomas may expand in size without treatment, the vast majority do not. […] Macroprolactinomas, however, will continue to expand and lead to pressure effects. Definitive treatment of the tumour is, therefore, necessary.”

“Dopamine agonist treatment […] leads to suppression of PRL in most patients [with prolactinoma], with [secondary] effects of normalization of gonadal function and termination of galactorrhoea. Tumour shrinkage occurs at a variable rate (from 24h to 6-12 months) and to a variable extent, and must be carefully monitored. Continued shrinkage may occur for years. Slow chiasmal decompression will correct visual field defects in the majority of patients, and immediate surgical decompression is not necessary. […] Cabergoline is more effective in normalization of PRL in microprolactinoma […], with fewer side effects than bromocriptine. […] Tumour enlargement following initial shrinkage on treatment is usually due to non-compliance. […] Since the introduction of dopamine agonist treatment, transsphenoidal surgery is indicated only for patients who are resistant to, or intolerant of, dopamine agonist treatment. The cure rate for macroprolactinomas treated with surgery is poor (30%), and, therefore, drug treatment is first-line in tumours of all sizes. […] Standard pituitary irradiation leads to slow reduction (over years) of PRL in the majority of patients. […] Radiotherapy is not indicated in the management of patients with microprolactinomas. It is useful in the treatment of macroprolactinomas resistant to dopamine agonists, once the tumour has been shrunk away from the chiasm.”

“Acromegaly is the clinical condition resulting from prolonged excessive GH and hence IGF-1 secretion in adults. GH secretion is characterized by blunting of pulsatile secretion and failure of GH to become undetectable during the 24h day, unlike normal controls. […] *Prevalence 40-86 cases/million population. Annual incidence of new cases in the UK is 4/million population. *Onset is insidious, and there is, therefore, often a considerable delay between onset of clinical features and diagnosis. Most cases are diagnosed at 40-60 years. […] Pituitary gigantism [is] [t]he clinical syndrome resulting from excess GH secretion in children prior to fusion of the epiphyses. […] [Increased] growth velocity without premature pubertal manifestations should arouse suspicion of pituitary gigantism. […] Causes of acromegaly[:] *Pituitary adenoma (>99% of cases). Macroadenomas 60-80%, microadenomas 20-40%. […] The clinical features arise from the effects of excess GH/IGF-1, excess PRL in some (as there is co-secretion of PRL in a minority (30%) of tumours […] and the tumour mass. [Signs and symptoms:] *Sweating – >80% of patients. *Headaches […] *Tiredness and lethargy. *Joint pains. *Change in ring or shoe size. *Facial appearance. Coarse features […] enlarged nose […] prognathism […] interdental separation. […] Enlargement of hands and feet […] [Complications:] *Hypertension (40%). *Insulin resistance and impaired glucose tolerance (40%)/diabetes mellitus (20%). *Obstructive sleep apnea – due to soft tissue swelling […] *Ischaemic heart disease and cerebrovascular disease.”

“Management of acromegaly[:] The management strategy depends on the individual patient and also on the tumour size. Lowering of GH is essential in all situations […] Transsphenoidal surgery […] is usually first-line treatment in most centres. *Reported cure rates vary: 40-91% for microadenomas and 10-48% for macroadenomas, depending on surgical expertise. […] Using the definition of post-operative cure as mean GH <2.5 micrograms/L, the reported recurrence rate is low (6% at 5 years). Radiotherapy […] is usually reserved for patients following unsuccessful transsphenoidal surgery; only occasionally is it used as [primary] therapy. […] normalization of mean GH may take several years and, during this time, adjunctive medical treatment (usually with somatostatin analogues) is required. […] Radiotherapy can induce GH deficiency which may need GH therapy. […] Somatostatin analogues lead to suppression of GH secretion in 20-60% of patients with acromegaly. […] some patients are partial responders, and although somatostatin analogues will lead to lowering of mean GH, they do not suppress to normal despite dose escalation. These drugs may be used as [primary] therapy where the tumour does not cause mass effects or in patients who have received surgery and/or radiotherapy who have elevated mean GH. […] Dopamine agonists […] lead to lowering of GH levels but only rarely lead to normalization of GH or IGF-1 (<30%). They may be helpful, particularly if there is coexistent secretion of PRL, and, in these cases, there may be significant tumour shrinkage. […] GH receptor antagonists [are] [i]ndicated for somatostatin non-responders.”

“Cushing’s syndrome is an illness resulting from excess cortisol secretion, which has a high mortality if left untreated. There are several causes of hypercortisolaemia which must be differentiated, and the commonest cause is iatrogenic (oral, inhaled, or topical steroids). […] ACTH-dependent Cushing’s must be differentiated from ACTH-independent disease (usually due to an adrenal adenoma, or, rarely, carcinoma […]). Once a diagnosis of ACTH-dependent disease has been established, it is important to differentiate between pituitary-dependent (Cushing’s disease) and ectopic secretion. […] [Cushing’s disease is rare;] annual incidence approximately 2/million. The vast majority of Cushing’s syndrome is due to a pituitary ACTH-secreting corticotroph microadenoma. […] The features of Cushing’s syndrome are progressive and may be present for several years prior to diagnosis. […] *Facial appearance – round plethoric complexion, acne and hirsutism, thinning of scalp hair. *Weight gain – truncal obesity, buffalo hump […] *Skin – thin and fragile […] easy bruising […] *Proximal muscle weakness. *Mood disturbance – labile, depression, insomnia, psychosis. *Menstrual disturbance. *Low libido and impotence. […] Associated features [include:] *Hypertension (>50%) due to mineralocorticoid effects of cortisol […] *Impaired glucose tolerance/diabetes mellitus (30%). *Osteopenia and osteoporosis […] *Vascular disease […] *Susceptibility to infections. […] Cushing’s is associated with a hypercoagulable state, with increased cardiovascular thrombotic risks. […] Hypercortisolism suppresses the thyroidal, gonadal, and GH axes, leading to lowered levels of TSH and thyroid hormones as well as reduced gonadotrophins, gonadal steroids, and GH.”

“Treatment of Cushing’s disease[:] Transsphenoidal surgery [is] the first-line option in most cases. […] Pituitary radiotherapy [is] usually administered as second-line treatment, following unsuccessful transsphenoidal surgery. […] Medical treatment [is] indicated during the preoperative preparation of patients, while awaiting radiotherapy to take effect, or if surgery or radiotherapy are contraindicated. *Inhibitors of steroidogenesis: metyrapone is usually used first-line, but ketoconazole should be used as first-line in children […] A disadvantage of these steroidogenesis inhibitors is the need to increase the dose to maintain control, as ACTH secretion will increase as cortisol concentrations decrease. […] Successful treatment (surgery or radiotherapy) of Cushing’s disease leads to cortisol deficiency and, therefore, glucocorticoid replacement therapy is essential. […] *Untreated [Cushing’s] disease leads to an approximately 30-50% mortality at 5 years, owing to vascular disease and susceptibility to infections. *Treated Cushing’s syndrome has a good prognosis […] *Although the physical features and severe psychological disorders associated with Cushing’s improve or resolve within weeks or months of successful treatment, more subtle mood disturbance may persist for longer. Adults may also have impaired cognitive function. […] it is likely that there is an increased cardiovascular risk. *Osteoporosis will usually resolve in children but may not improve significantly in older patients. […] *Hypertension has been shown to resolve in 80% and diabetes mellitus in up to 70%. *Recent data suggest that mortality, even with successful treatment of Cushing’s, is increased significantly.”

“The term incidentaloma refers to an incidentally detected lesion that is unassociated with hormonal hyper- or hyposecretion and has a benign natural history. The increasingly frequent detection of these lesions with technological improvements and more widespread use of sophisticated imaging has led to a management challenge – which, if any, lesions need investigation and/or treatment, and what is the optimal follow-up strategy (if required at all)? […] *Imaging studies using MRI demonstrate pituitary microadenomas in approximately 10% of normal volunteers. […] Clinically significant pituitary tumours are present in about 1 in 1,000 patients. […] Incidentally detected microadenomas are very unlikely (<10%) to increase in size whereas larger incidentally detected meso- and macroadenomas are more likely (40-50%) to enlarge. Thus, conservative management in selected patients may be appropriate for microadenomas which are incidentally detected […]. Macroadenomas should be treated, if possible.”

“Non-functioning pituitary tumours […] are unassociated with clinical syndromes of anterior pituitary hormone excess. […] Non-functioning pituitary tumours (NFAs) are the commonest pituitary macroadenoma. They represent around 28% of all pituitary tumours. […] 50% enlarge, if left untreated, at 5 years. […] Tumour behaviour is variable, with some tumours behaving in a very indolent, slow-growing manner and others invading the sphenoid and cavernous sinus. […] At diagnosis, approximately 50% of patients are gonadotrophin-deficient. […] The initial definitive management in virtually every case is surgical. This removes mass effects and may lead to some recovery of pituitary function in around 10%. […] The use of post-operative radiotherapy remains controversial. […] The regrowth rate at 10 years without radiotherapy approaches 45% […] administration of post-operative radiotherapy reduces this regrowth rate to <10%. […] however, there are sequelae to radiotherapy – with a significant long-term risk of hypopituitarism and a possible risk of visual deterioration and malignancy in the field of radiation. […] Unlike the case for GH- and PRL-secreting tumours, medical therapy for NFAs is usually unhelpful […] Gonadotrophinomas […] are tumours that arise from the gonadotroph cells of the pituitary gland and produce FSH, LH, or the α subunit. […] they are usually silent and unassociated with excess detectable secretion of LH and FSH […] [they] present in the same manner as other non-functioning pituitary tumours, with mass effects and hypopituitarism […] These tumours are managed as non-functioning tumours.”

“The posterior lobe of the pituitary gland arises from the forebrain and comprises up to 25% of the normal adult pituitary gland. It produces arginine vasopressin and oxytocin. […] Oxytocin has no known role in ♂ […] In ♀, oxytocin contracts the pregnant uterus and also causes breast duct smooth muscle contraction, leading to breast milk ejection during breastfeeding. […] However, oxytocin deficiency has no known adverse effect on parturition or breastfeeding. […] Arginine vasopressin is the major determinant of renal water excretion and, therefore, fluid balance. Its main action is to reduce free water clearance. […] Many substances modulate vasopressin secretion, including the catecholamines and opioids. *The main site of action of vasopressin is in the collecting duct and the thick ascending limb of the loop of Henle […] Diabetes Insipidus (DI) […] is defined as the passage of large volumes (>3L/24h) of dilute urine (osmolality <300mOsm/kg). [It may be] [d]ue to deficiency of circulating arginine vasopressin [or] [d]ue to renal resistance to vasopressin.” […lots of other causes as well – trauma, tumours, inflammation, infection, vascular, drugs, genetic conditions…]

“Hyponatraemia […] Incidence[:] *1-6% of hospital admissions Na<130mmol/L. *15-22% of hospital admissions Na<135mmol/L. […] True clinically apparent hyponatraemia is associated with either excess water or salt deficiency. […] Features *Depend on the underlying cause and also on the rate of development of hyponatraemia. Symptoms may develop once sodium reaches 115mmol/L, or earlier if the fall is rapid. A level of 100mmol/L or less is life-threatening. *Features of excess water are mainly neurological because of brain injury […] They include confusion and headache, progressing to seizures and coma. […] SIADH [Syndrome of Inappropriate ADH, US] is a common cause of hyponatraemia. […] The elderly are more prone to SIADH, as they are unable to suppress ADH as efficiently […] ↑ risk of hyponatraemia with SSRIs. […] rapid overcorrection of hyponatraemia may cause central pontine myelinolysis (demyelination).”

“The hypothalamus releases hormones that act as releasing hormones at the anterior pituitary gland. […] The commonest syndrome to be associated with the hypothalamus is abnormal GnRH secretion, leading to reduced gonadotrophin secretion and hypogonadism. Common causes are stress, weight loss, and excessive exercise.”

January 14, 2018 Posted by | Books, Cancer/oncology, Cardiology, Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Ophthalmology, Pharmacology

A few diabetes papers of interest

i. Type 2 Diabetes in the Real World: The Elusive Nature of Glycemic Control.

“Despite U.S. Food and Drug Administration (FDA) approval of over 40 new treatment options for type 2 diabetes since 2005, the latest data from the National Health and Nutrition Examination Survey show that the proportion of patients achieving glycated hemoglobin (HbA1c) <7.0% (<53 mmol/mol) remains around 50%, with a negligible decline between the periods 2003–2006 and 2011–2014. The Healthcare Effectiveness Data and Information Set reports even more alarming rates, with only about 40% and 30% of patients achieving HbA1c <7.0% (<53 mmol/mol) in the commercially insured (HMO) and Medicaid populations, respectively, again with virtually no change over the past decade. A recent retrospective cohort study using a large U.S. claims database explored why clinical outcomes are not keeping pace with the availability of new treatment options. The study found that HbA1c reductions fell far short of those reported in randomized clinical trials (RCTs), with poor medication adherence emerging as the key driver behind the disconnect. In this Perspective, we examine the implications of these findings in conjunction with other data to highlight the discrepancy between RCT findings and the real world, all pointing toward the underrealized promise of FDA-approved therapies and the critical importance of medication adherence. While poor medication adherence is not a new issue, it has yet to be effectively addressed in clinical practice — often, we suspect, because it goes unrecognized. To support the busy health care professional, innovative approaches are sorely needed.”

“To better understand the differences between usual care and clinical trial HbA1c results, multivariate regression analysis assessed the relative contributions of key biobehavioral factors, including baseline patient characteristics, drug therapy, and medication adherence (21). Significantly, the key driver was poor medication adherence, accounting for 75% of the gap […]. Adherence was defined […] as the filling of one’s diabetes prescription often enough to cover ≥80% of the time one was recommended to be taking the medication (34). By this metric, proportion of days covered (PDC) ≥80%, only 29% of patients were adherent to GLP-1 RA treatment and 37% to DPP-4 inhibitor treatment. […] These data are consistent with previous real-world studies, which have demonstrated that poor medication adherence to both oral and injectable antidiabetes agents is very common (35-37). For example, a retrospective analysis [of] adults initiating oral agents in the DPP-4 inhibitor (n = 61,399), sulfonylurea (n = 134,961), and thiazolidinedione (n = 42,012) classes found that adherence rates, as measured by PDC ≥80% at the 1-year mark after the initial prescription, were below 50% for all three classes, at 47.3%, 41.2%, and 36.7%, respectively (36). Rates dropped even lower at the 2-year follow-up (36)”

“Our current ability to assess adherence and persistence is based primarily on review of pharmacy records, which may underestimate the extent of the problem. For example, using the definition of adherence of the Centers for Medicare & Medicaid Services — PDC ≥80% — a patient could miss up to 20% of days covered and still be considered adherent. In retrospective studies of persistence, the permissible gap after the last expected refill date often extends up to 90 days (39,40). Thus, a patient may have a gap of up to 90 days and still be considered persistent.

Additionally, one must also consider the issue of primary nonadherence; adherence and persistence studies typically only include patients who have completed a first refill. A recent study of e-prescription data among 75,589 insured patients found that nearly one-third of new e-prescriptions for diabetes medications were never filled (41). Finally, none of these measures take into account if the patient is actually ingesting or injecting the medication after acquiring his or her refills.”
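As a concrete illustration of the PDC metric discussed in the quotes above, here is a minimal sketch. The fill dates and days-supply values are hypothetical; real claims analyses often shift overlapping supply forward to the next uncovered day, which this simple day-set version approximates by counting each covered day only once:

```python
from datetime import date, timedelta

def proportion_of_days_covered(fills, period_start, period_end):
    """PDC: fraction of days in [period_start, period_end] covered by
    at least one fill; overlapping supplies are counted only once."""
    covered = set()
    for fill_date, days_supply in fills:
        for i in range(days_supply):
            d = fill_date + timedelta(days=i)
            if period_start <= d <= period_end:
                covered.add(d)
    total_days = (period_end - period_start).days + 1
    return len(covered) / total_days

# Hypothetical pharmacy record: three 30-day fills over a 6-month period
fills = [(date(2015, 1, 1), 30), (date(2015, 2, 15), 30), (date(2015, 5, 1), 30)]
pdc = proportion_of_days_covered(fills, date(2015, 1, 1), date(2015, 6, 30))
adherent = pdc >= 0.80   # the PDC >=80% threshold quoted in the text
```

Here 90 of 181 days are covered (PDC ≈ 0.50), so this patient would count as non-adherent; note how, exactly as the quote warns, a patient could leave 20% of days uncovered and still clear the threshold.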

“Acknowledging and addressing the problem of poor medication adherence is pivotal because of the well-documented dire consequences: a greater likelihood of long-term complications, more frequent hospitalizations, higher health care costs, and elevated mortality rates (42-45). In patients younger than 65, hospitalization risk in one study (n = 137,277) was found to be 30% at the lowest level of adherence to antidiabetes medications (1–19%) versus 13% at the highest adherence quintile (80–100%) […]. In patients over 65, a separate study (n = 123,235) found that all-cause hospitalization risk was 37.4% in adherent cohorts (PDC ≥80%) versus 56.2% in poorly adherent cohorts (PDC <20%) (45). […] Furthermore, for every 1,000 patients who increased adherence to their antidiabetes medications by just 1%, the total medical cost savings was estimated to be $65,464 over 3 years (45). […] “for reasons that are still unclear, the N.A. [North American] patient groups tend to have lower compliance and adherence compared to global rates during large cardiovascular studies” (46,47).”
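The quoted savings estimate lends itself to a quick back-of-the-envelope scaling. This is my own arithmetic on the figure in the text; the linear extrapolation is an assumption for illustration, not something the cited study claims:

```python
# Quoted estimate: $65,464 total medical cost savings over 3 years
# for every 1,000 patients who increase adherence by 1 percentage point.
SAVINGS_PER_1000_PER_PCT = 65_464

def projected_savings(n_patients, pct_point_increase):
    """Linear extrapolation of the quoted estimate (an assumption;
    real savings need not scale linearly with adherence)."""
    return SAVINGS_PER_1000_PER_PCT * (n_patients / 1000) * pct_point_increase

# e.g. 10,000 patients improving adherence by 5 percentage points:
projected = projected_savings(10_000, 5)   # ≈ $3.27 million over 3 years
```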

“There are many potential contributors to poor medication adherence, including depressive affect, negative treatment perceptions, lack of patient-physician trust, complexity of the medication regimen, tolerability, and cost (48). […] A recent review of interventions addressing problematic medication adherence in type 2 diabetes found that few strategies have been shown consistently to have a marked positive impact, particularly with respect to HbA1c lowering, and no single intervention was identified that could be applied successfully to all patients with type 2 diabetes (53). Additional evidence indicates that improvements resulting from the few effective interventions, such as pharmacy-based counseling or nurse-managed home telemonitoring, often wane once the programs end (54,55). We suspect that the efficacy of behavioral interventions to address medication adherence will continue to be limited until there are more focused efforts to address three common and often unappreciated patient obstacles. First, taking diabetes medications is a burdensome and often difficult activity for many of our patients. Rather than just encouraging patients to do a better job of tolerating this burden, more work is needed to make the process easier and more convenient. […] Second, poor medication adherence often represents underlying attitudinal problems that may not be a strictly behavioral issue. Specifically, negative beliefs about prescribed medications are pervasive among patients, and behavioral interventions cannot be effective unless these beliefs are addressed directly (35). […] Third, the issue of access to medications remains a primary concern. A study by Kurlander et al. (51) found that patients selectively forgo medications because of cost; however, noncost factors, such as beliefs, satisfaction with medication-related information, and depression, are also influential.”

ii. Diabetes Research and Care Through the Ages. An overview article which might be of interest especially to people who aren’t very familiar with the history of diabetes research and -treatment (a topic which is also very nicely covered in Tattersall’s book). Despite including a historical review of various topics, it also includes many observations about e.g. current (and future?) practice. Some random quotes:

“Arnoldo Cantani established a new strict level of treatment (9). He isolated his patients “under lock and key, and allowed them absolutely no food but lean meat and various fats. In the less severe cases, eggs, liver, and shell-fish were permitted. For drink the patients received water, plain or carbonated, and dilute alcohol for those accustomed to liquors, the total fluid intake being limited to one and one-half to two and one-half liters per day” (6).

Bernhard Naunyn encouraged a strict carbohydrate-free diet (6,10). He locked patients in their rooms for 5 months when necessary for “sugar-freedom” (6).” […let’s just say that treatment options have changed slightly over time – US]

“The characteristics of insulin preparations include the purity of the preparation, the concentration of insulin, the species of origin, and the time course of action (onset, peak, duration) (25). From the 1930s to the early 1950s, one of the major efforts made was to develop an insulin with extended action […]. Most preparations contained 40 (U-40) or 80 (U-80) units of insulin per mL, with U-10 and U-20 eliminated in the early 1940s. U-100 was introduced in 1973 and was meant to be a standard concentration, although U-500 had been available since the early 1950s for special circumstances. Preparations were either of mixed beef and pork origin, pure beef, or pure pork. There were progressive improvements in the purity of preparations as chemical techniques improved. Prior to 1972, conventional preparations contained 8% noninsulin proteins. […] In the early 1980s, “human” insulins were introduced (26). These were made either by recombinant DNA technology in bacteria (Escherichia coli) or yeast (Saccharomyces cerevisiae) or by enzymatic conversion of pork insulin to human insulin, since pork differed by only one amino acid from human insulin. The powerful nature of recombinant DNA technology also led to the development of insulin analogs designed for specific effects. These include rapid-acting insulin analogs and basal insulin analogs.”

“Until 1996, the only oral medications available were biguanides and sulfonylureas. Since that time, there has been an explosion of new classes of oral and parenteral preparations. […] The management of type 2 diabetes (T2D) has undergone rapid change with the introduction of several new classes of glucose-lowering therapies. […] the treatment guidelines are generally clear in the context of using metformin as the first oral medication for T2D and present a menu approach with respect to the second and third glucose-lowering medication (30-32). In order to facilitate this decision, the guidelines list the characteristics of each medication including side effects and cost, and the health care provider is expected to make a choice that would be most suited for patient comorbidities and health care circumstances. This can be confusing and contributes to the clinical inertia characteristic of the usual management of T2D (33).”

“Perhaps the most frustrating barrier to optimizing diabetes management is the frequent occurrence of clinical inertia (whenever the health care provider does not initiate or intensify therapy appropriately and in a timely fashion when therapeutic goals are not reached). More broadly, the failure to advance therapy in an appropriate manner can be traced to physician behaviors, patient factors, or elements of the health care system. […] Despite clear evidence from multiple studies, health care providers fail to fully appreciate that T2D is a progressive disease. T2D is associated with ongoing β-cell failure and, as a consequence, we can safely predict that for the majority of patients, glycemic control will deteriorate with time despite metformin therapy (35). Continued observation and reinforcement of the current therapeutic regimen is not likely to be effective. As an example of real-life clinical inertia for patients with T2D on monotherapy metformin and an HbA1c of 7 to <8%, it took on average 19 months before additional glucose-lowering therapy was introduced (36). The fear of hypoglycemia and weight gain are appropriate concerns for both patient and physician, but with newer therapies these undesirable effects are significantly diminished. In addition, health care providers must appreciate that achieving early and sustained glycemic control has been demonstrated to have long-term benefits […]. Clinicians have been schooled in the notion of a stepwise approach to therapy and are reluctant to initiate combination therapy early in the course of T2D, even if the combination intervention is formulated as a fixed-dose combination. […] monotherapy metformin failure rates with a starting HbA1c >7% are ∼20% per year (35). […] To summarize the current status of T2D at this time, it should be clearly emphasized that, first and foremost, T2D is characterized by a progressive deterioration of glycemic control.
A stepwise medication introduction approach results in clinical inertia and frequently fails to meet long-term treatment goals. Early/initial combination therapies that are not associated with hypoglycemia and/or weight gain have been shown to be safe and effective. The added value of reducing CV outcomes with some of these newer medications should elevate them to a more prominent place in the treatment paradigm.”

iii. Use of Adjuvant Pharmacotherapy in Type 1 Diabetes: International Comparison of 49,996 Individuals in the Prospective Diabetes Follow-up and T1D Exchange Registries.

“The majority of those with type 1 diabetes (T1D) have suboptimal glycemic control (14); therefore, use of adjunctive pharmacotherapy to improve control has been of clinical interest. While noninsulin medications approved for type 2 diabetes have been reported in T1D research and clinical practice (5), little is known about their frequency of use. The T1D Exchange (T1DX) registry in the U.S. and the Prospective Diabetes Follow-up (DPV) registry in Germany and Austria are two large consortia of diabetes centers; thus, they provide a rich data set to address this question.

For the analysis, 49,996 pediatric and adult patients with diabetes duration ≥1 year and a registry update from 1 April 2015 to 1 July 2016 were included (19,298 individuals from 73 T1DX sites and 30,698 individuals from 354 DPV sites). Adjuvant medication use (metformin, glucagon-like peptide 1 [GLP-1] receptor agonists, dipeptidyl peptidase 4 [DPP-4] inhibitors, sodium–glucose cotransporter 2 [SGLT2] inhibitors, and other noninsulin diabetes medications including pramlintide) was extracted from participant medical records. […] Adjunctive agents, whose proposed benefits may include the ability to improve glycemic control, reduce insulin doses, promote weight loss, and suppress dysregulated postprandial glucagon secretion, have had little penetrance as part of the daily medical regimen of those in the registries studied. […] The use of any adjuvant medication was 5.4% in T1DX and 1.6% in DPV (P < 0.001). Metformin was the most commonly reported medication in both registries, with 3.5% in the T1DX and 1.3% in the DPV (P < 0.001). […] Use of adjuvant medication was associated with older age, higher BMI, and longer diabetes duration in both registries […] it is important to note that registry data did not capture the intent of adjuvant medications, which may have been to treat polycystic ovarian syndrome in women […here’s a relevant link, US].”

iv. Prevalence of and Risk Factors for Diabetic Peripheral Neuropathy in Youth With Type 1 and Type 2 Diabetes: SEARCH for Diabetes in Youth Study. I recently covered a closely related paper here (paper # 2) but the two papers cover different data sets so I decided it would be worth including this one in this post anyway. Some quotes:

“We previously reported results from a small pilot study comparing the prevalence of DPN in a subset of youth enrolled in the SEARCH for Diabetes in Youth (SEARCH) study and found that 8.5% of 329 youth with T1D (mean ± SD age 15.7 ± 4.3 years and diabetes duration 6.2 ± 0.9 years) and 25.7% of 70 youth with T2D (age 21.6 ± 4.1 years and diabetes duration 7.6 ± 1.8 years) had evidence of DPN (9). […this is the paper I previously covered here, US] Recently, we also reported the prevalence of microvascular and macrovascular complications in youth with T1D and T2D in the entire SEARCH cohort (10).

In the current study, we examined the cross-sectional and longitudinal risk factors for DPN. The aims were 1) to estimate prevalence of DPN in youth with T1D and T2D, overall and by age and diabetes duration, and 2) to identify risk factors (cross-sectional and longitudinal) associated with the presence of DPN in a multiethnic cohort of youth with diabetes enrolled in the SEARCH study.”

“The SEARCH Cohort Study enrolled 2,777 individuals. For this analysis, we excluded participants aged <10 years (n = 134), those with no antibody measures for etiological definition of diabetes (n = 440), and those with incomplete neuropathy assessment […] (n = 213), which reduced the analysis sample size to 1,992 […] There were 1,734 youth with T1D and 258 youth with T2D who participated in the SEARCH study and had complete data for the variables of interest. […] Seven percent of the participants with T1D and 22% of those with T2D had evidence of DPN.”

“Among youth with T1D, those with DPN were older (21 vs. 18 years, P < 0.0001), had a longer duration of diabetes (8.7 vs. 7.8 years, P < 0.0001), and had higher DBP (71 vs. 69 mmHg, P = 0.02), BMI (26 vs. 24 kg/m2, P < 0.001), and LDL-c levels (101 vs. 96 mg/dL, P = 0.01); higher triglycerides (85 vs. 74 mg/dL, P = 0.005); and lower HDL-c levels (51 vs. 55 mg/dL, P = 0.01) compared to those without DPN. The prevalence of DPN was 5% among nonsmokers vs. 10% among the current and former smokers (P = 0.001). […] Among youth with T2D, those with DPN were older (23 vs. 22 years, P = 0.01), had longer duration of diabetes (8.6 vs. 7.6 years; P = 0.002), and had lower HDL-c (40 vs. 43 mg/dL, P = 0.04) compared with those without DPN. The prevalence of DPN was higher among males than among females: 30% of males had DPN compared with 18% of females (P = 0.02). The prevalence of DPN was twofold higher in current smokers (33%) compared with nonsmokers (15%) and former smokers (17%) (P = 0.01). […] [T]he prevalence of DPN was further assessed by 5-year increment of diabetes duration in individuals with T1D or T2D […]. There was an approximately twofold increase in the prevalence of DPN with an increase in duration of diabetes from 5–10 years to >10 years for both the T1D group (5–13%) (P < 0.0001) and the T2D group (19–36%) (P = 0.02). […] in an unadjusted logistic regression model, youth with T2D were four times more likely to develop DPN compared with those with T1D, and though this association was attenuated, it remained significant independent of age, sex, height, and glycemic control (OR 2.99 [1.91; 4.67], P < 0.001)”.
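The reported “four times more likely” figure is consistent with an unadjusted odds ratio computed directly from the two prevalences quoted earlier (7% DPN in T1D, 22% in T2D). As a quick sanity check (my own arithmetic, not taken from the paper):

```python
def odds(p):
    """Convert a prevalence (proportion) to odds."""
    return p / (1 - p)

p_t1d, p_t2d = 0.07, 0.22   # DPN prevalence figures reported in the text
unadjusted_or = odds(p_t2d) / odds(p_t1d)
# ≈ 3.7, i.e. roughly the fourfold unadjusted difference the paper reports
```

The adjusted estimate quoted above (OR 2.99 [1.91; 4.67]) is smaller, as expected once age, sex, height, and glycemic control are accounted for.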

“The prevalence estimates for DPN found in our study for youth with T2D are similar to those in the Australian cohort (8) but lower for youth with T1D than those reported in the Danish (7) and Australian (8) cohorts. The nationwide Danish Study Group for Diabetes in Childhood reported a prevalence of 62% among 339 adolescents and youth with T1D (age 12–27 years, duration 9–25 years, and HbA1c 9.7 ± 1.7%) using the vibration perception threshold to assess DPN (7). The higher prevalence in this cohort compared with ours (62 vs. 7%) could be due to the longer duration of diabetes (9–25 vs. 5–13 years) and reliance on a single measure of neuropathy (vibration perception threshold) as opposed to our use of the MNSI, which includes vibration as well as other indicators of neuropathy. In the Australian study, Eppens et al. (8) reported abnormalities in peripheral nerve function in 27% of the 1,433 adolescents with T1D (median age 15.7 years, median diabetes duration 6.8 years, and mean HbA1c 8.5%) and 21% of the 68 adolescents with T2D (median age 15.3 years, median diabetes duration 1.3 years, and mean HbA1c 7.3%) based on thermal and vibration perception threshold. These data are thus reminiscent of the persistent inconsistencies in the definition of DPN, which are reflected in the wide range of prevalence estimates being reported.”

“The alarming rise in rates of DPN for every 5-year increase in duration, coupled with poor glycemic control and dyslipidemia, in this cohort reinforces the need for clinicians rendering care to youth with diabetes to be vigilant in screening for DPN and identifying any risk factors that could potentially be modified to alter the course of the disease (28–30). The modifiable risk factors that could be targeted in this young population include better glycemic control, treatment of dyslipidemia, and smoking cessation (29,30) […]. The sharp increase in rates of DPN over time is a reminder that DPN is one of the complications of diabetes that must be a part of the routine annual screening for youth with diabetes.”

v. Diabetes and Hypertension: A Position Statement by the American Diabetes Association.

“Hypertension is common among patients with diabetes, with the prevalence depending on type and duration of diabetes, age, sex, race/ethnicity, BMI, history of glycemic control, and the presence of kidney disease, among other factors (1–3). Furthermore, hypertension is a strong risk factor for atherosclerotic cardiovascular disease (ASCVD), heart failure, and microvascular complications. ASCVD — defined as acute coronary syndrome, myocardial infarction (MI), angina, coronary or other arterial revascularization, stroke, transient ischemic attack, or peripheral arterial disease presumed to be of atherosclerotic origin — is the leading cause of morbidity and mortality for individuals with diabetes and is the largest contributor to the direct and indirect costs of diabetes. Numerous studies have shown that antihypertensive therapy reduces ASCVD events, heart failure, and microvascular complications in people with diabetes (4–8). Large benefits are seen when multiple risk factors are addressed simultaneously (9). There is evidence that ASCVD morbidity and mortality have decreased for people with diabetes since 1990 (10,11) likely due in large part to improvements in blood pressure control (12–14). This Position Statement is intended to update the assessment and treatment of hypertension among people with diabetes, including advances in care since the American Diabetes Association (ADA) last published a Position Statement on this topic in 2003 (3).”

“Hypertension is defined as a sustained blood pressure ≥140/90 mmHg. This definition is based on unambiguous data that levels above this threshold are strongly associated with ASCVD, death, disability, and microvascular complications (1,2,24–27) and that antihypertensive treatment in populations with baseline blood pressure above this range reduces the risk of ASCVD events (4–6,28,29). The “sustained” aspect of the hypertension definition is important, as blood pressure has considerable normal variation. The criteria for diagnosing hypertension should be differentiated from blood pressure treatment targets.

Hypertension diagnosis and management can be complicated by two common conditions: masked hypertension and white-coat hypertension. Masked hypertension is defined as a normal blood pressure in the clinic or office (<140/90 mmHg) but an elevated home blood pressure of ≥135/85 mmHg (30); the lower home blood pressure threshold is based on outcome studies (31) demonstrating that lower home blood pressures correspond to higher office-based measurements. White-coat hypertension is elevated office blood pressure (≥140/90 mmHg) and normal (untreated) home blood pressure (<135/85 mmHg) (32). Identifying these conditions with home blood pressure monitoring can help prevent overtreatment of people with white-coat hypertension who are not at elevated risk of ASCVD and, in the case of masked hypertension, allow proper use of medications to reduce side effects during periods of normal pressure (33,34).”

“Diabetic autonomic neuropathy or volume depletion can cause orthostatic hypotension (35), which may be further exacerbated by antihypertensive medications. The definition of orthostatic hypotension is a decrease in systolic blood pressure of 20 mmHg or a decrease in diastolic blood pressure of 10 mmHg within 3 min of standing when compared with blood pressure from the sitting or supine position (36). Orthostatic hypotension is common in people with type 2 diabetes and hypertension and is associated with an increased risk of mortality and heart failure (37).

It is important to assess for symptoms of orthostatic hypotension to individualize blood pressure goals, select the most appropriate antihypertensive agents, and minimize adverse effects of antihypertensive therapy.”

“Taken together, […] meta-analyses consistently show that treating patients with baseline blood pressure ≥140 mmHg to targets <140 mmHg is beneficial, while more intensive targets may offer additional though probably less robust benefits. […] Overall, compared with people without diabetes, the relative benefits of antihypertensive treatment are similar, and absolute benefits may be greater (5,8,40). […] Multiple-drug therapy is often required to achieve blood pressure targets, particularly in the setting of diabetic kidney disease. However, the use of both ACE inhibitors and ARBs in combination is not recommended given the lack of added ASCVD benefit and increased rate of adverse events — namely, hyperkalemia, syncope, and acute kidney injury (71–73). Titration of and/or addition of further blood pressure medications should be made in a timely fashion to overcome clinical inertia in achieving blood pressure targets. […] there is an absence of high-quality data available to guide blood pressure targets in type 1 diabetes. […] Of note, diastolic blood pressure, as opposed to systolic blood pressure, is a key variable predicting cardiovascular outcomes in people under age 50 years without diabetes and may be prioritized in younger adults (46,47). Though convincing data are lacking, younger adults with type 1 diabetes might more easily achieve intensive blood pressure levels and may derive substantial long-term benefit from tight blood pressure control.”

“Lifestyle management is an important component of hypertension treatment because it lowers blood pressure, enhances the effectiveness of some antihypertensive medications, promotes other aspects of metabolic and vascular health, and generally leads to few adverse effects. […] Lifestyle therapy consists of reducing excess body weight through caloric restriction, restricting sodium intake (<2,300 mg/day), increasing consumption of fruits and vegetables […] and low-fat dairy products […], avoiding excessive alcohol consumption […] (53), smoking cessation, reducing sedentary time (54), and increasing physical activity levels (55). These lifestyle strategies may also positively affect glycemic and lipid control and should be encouraged in those with even mildly elevated blood pressure.”

“Initial treatment for hypertension should include drug classes demonstrated to reduce cardiovascular events in patients with diabetes: ACE inhibitors (65,66), angiotensin receptor blockers (ARBs) (65,66), thiazide-like diuretics (67), or dihydropyridine CCBs (68). For patients with albuminuria (urine albumin-to-creatinine ratio [UACR] ≥30 mg/g creatinine), initial treatment should include an ACE inhibitor or ARB in order to reduce the risk of progressive kidney disease […]. In the absence of albuminuria, risk of progressive kidney disease is low, and ACE inhibitors and ARBs have not been found to afford superior cardioprotection when compared with other antihypertensive agents (69). β-Blockers may be used for the treatment of coronary disease or heart failure but have not been shown to reduce mortality as blood pressure–lowering agents in the absence of these conditions (5,70).”

vi. High Illicit Drug Abuse and Suicide in Organ Donors With Type 1 Diabetes.

“Organ donors with type 1 diabetes represent a unique population for research. Through a combination of immunological, metabolic, and physiological analyses, researchers utilizing such tissues seek to understand the etiopathogenic events that result in this disorder. The Network for Pancreatic Organ Donors with Diabetes (nPOD) program collects, processes, and distributes pancreata and disease-relevant tissues to investigators throughout the world for this purpose (1). Information is also available, through medical records of organ donors, related to causes of death and psychological factors, including drug use and suicide, that impact life with type 1 diabetes.

We reviewed the terminal hospitalization records for the first 100 organ donors with type 1 diabetes in the nPOD database, noting cause, circumstance, and mechanism of death; laboratory results; and history of illicit drug use. Donors were 45% female and 79% Caucasian. Mean age at time of death was 28 years (range 4–61) with mean disease duration of 16 years (range 0.25–52).”

“Documented suicide was found in 8% of the donors, with an average age at death of 21 years and average diabetes duration of 9 years. […] Similarly, a type 1 diabetes registry from the U.K. found that 6% of subjects’ deaths were attributed to suicide (2). […] Additionally, we observed a high rate of illicit substance abuse: 32% of donors reported or tested positive for illegal substances (excluding marijuana), and multidrug use was common. Cocaine was the most frequently abused substance. Alcohol use was reported in 35% of subjects, with marijuana use in 27%. By comparison, 16% of deaths in the U.K. study were deemed related to drug misuse (2).”

“We fully recognize the implicit biases of an organ donor–based population, which may not be […’may not be’ – well, I guess that’s one way to put it! – US] directly comparable to the general population. Nevertheless, the high rate of suicide and drug use should continue to spur our energy and resources toward caring for the emotional and psychological needs of those living with type 1 diabetes. The burden of type 1 diabetes extends far beyond checking blood glucose and administering insulin.”

January 10, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Pharmacology, Psychiatry, Studies

Occupational Epidemiology (III)

This will be my last post about the book.

Some observations from the final chapters:

“Often there is confusion about the difference between systematic reviews and meta-analyses. A meta-analysis is a quantitative synthesis of two or more studies […] A systematic review is a synthesis of evidence on the effects of an intervention or an exposure which may also include a meta-analysis, but this is not a prerequisite. It may be that the results of the studies which have been included in a systematic review are reported in such a way that it is impossible to synthesize them quantitatively. They can then be reported in a narrative manner.10 However, a meta-analysis always requires a systematic review of the literature. […] There is a long history of debate about the value of meta-analysis for occupational cohort studies or other occupational aetiological studies. In 1994, Shapiro argued that ‘meta-analysis of published non-experimental data should be abandoned’. He reasoned that ‘relative risks of low magnitude (say, less than 2) are virtually beyond the resolving power of the epidemiological microscope because we can seldom demonstrably eliminate all sources of bias’.13 Because the pooling of studies in a meta-analysis increases statistical power, the pooled estimate may easily become significant and thus incorrectly taken as an indication of causality, even though the biases in the included studies may not have been taken into account. Others have argued that the method of meta-analysis is important but should be applied appropriately, taking into account the biases in individual studies.14 […] We believe that the synthesis of aetiological studies should be based on the same general principles as for intervention studies, and the existing methods adapted to the particular challenges of cohort and case-control studies. […] Since 2004, there is a special entity, the Cochrane Occupational Safety and Health Review Group, that is responsible for the preparing and updating of reviews of occupational safety and health interventions […]. 
There were over 100 systematic reviews on these topics in the Cochrane Library in 2012.”
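Shapiro's worry about low-magnitude relative risks is easy to illustrate with a toy fixed-effect (inverse-variance) meta-analysis — a hypothetical sketch, not any specific study: three studies, each individually non-significant, pool to a "significant" estimate even though pooling does nothing to remove any shared bias.

```python
import math

def pool_fixed_effect(log_rrs, ses):
    """Inverse-variance fixed-effect pooling of log relative risks."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
    return pooled, math.sqrt(1 / sum(weights))

# Three hypothetical studies, each reporting RR = 1.3 with SE(log RR) = 0.15:
log_rrs = [math.log(1.3)] * 3
ses = [0.15] * 3
pooled, se = pool_fixed_effect(log_rrs, ses)
# Each study alone: z = log(1.3)/0.15 ≈ 1.75 (not significant at 0.05).
# Pooled: z = pooled/se ≈ 3.0 (significant) — power, not freedom from bias.
```

The pooled relative risk is still 1.3; only the precision has changed, which is precisely why a significant pooled estimate of a low-magnitude risk says nothing about whether bias has been eliminated.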

“The believability of a systematic review’s results depends largely on the quality of the included studies. Therefore, assessing and reporting on the quality of the included studies is important. For intervention studies, randomized trials are regarded as of higher quality than observational studies, and the conduct of the study (e.g. in terms of response rate or completeness of follow-up) also influences quality. A conclusion derived from a few high-quality studies will be more reliable than when the conclusion is based on even a large number of low-quality studies. Some form of quality assessment is nowadays commonplace in intervention reviews but is still often missing in reviews of aetiological studies. […] It is tempting to use quality scores, such as the Jadad scale for RCTs34 and the Downs and Black scale for non-RCT intervention studies35 but these, in their original format, are insensitive to variation in the importance of risk areas for a given research question. The score system may give the same value to two studies (say, 10 out of 12) when one, for example, lacked blinding and the other did not randomize, thus implying that their quality is equal. This would not be a problem if randomization and blinding were equally important for all questions in all reviews, but this is not the case. For RCTs an important development in this regard has been the Cochrane risk of bias tool.36 This is a checklist of six important domains that have been shown to be important areas of bias in RCTs: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, and selective reporting.”

“[R]isks of bias tools developed for intervention studies cannot be used for reviews of aetiological studies without relevant modification. This is because, unlike interventions, exposures are usually more complicated to assess when we want to attribute the outcome to them alone. These scales do not cover all items that may need assessment in an aetiological study, such as confounding and information bias relating to exposures. […] Surprisingly little methodological work has been done to develop validated tools for aetiological epidemiology and most tools in use are not validated,38 […] Two separate checklists, for observational studies of incidence and prevalence and for risk factor assessment, have been developed and validated recently.40 […] Publication and other reporting bias is probably a much bigger issue for aetiological studies than for intervention studies. This is because, for clinical trials, the introduction of protocol registration, coupled with the regulatory system for new medications, has helped in assessing and preventing publication and reporting bias. No such checks exist for observational studies.”

“Most ill health that arises from occupational exposures can also arise from non-occupational exposures, and the same type of exposure can occur in occupational and non-occupational settings. With the exception of malignant mesothelioma (which is essentially only caused by exposure to asbestos), there is no way to determine which exposure caused a particular disorder, nor where the causative exposure occurred. This means that usually it is not possible to determine the burden just by counting the number of cases. Instead, approaches to estimating this burden have been developed. There are also several ways to define burden and how best to measure it.”

“The population attributable fraction (PAF) is the proportion of cases that would not have occurred in the absence of an occupational exposure. It can be estimated by combining two measures — a risk estimate (usually relative risk (RR) or odds ratio) of the disorder of interest that is associated with exposure to the substance of concern; and an estimate of the proportion of the population exposed to the substance at work (p(E)). This approach has been used in several studies, particularly for estimating cancer burden […] There are several possible equations that can be used to calculate the PAF, depending on the available data […] PAFs cannot in general be combined by summing directly because: (1) summing PAFs for overlapping exposures (i.e. agents to which the same ‘ever exposed’ workers may have been exposed) may give an overall PAF exceeding 100%, and (2) summing disjoint (not concurrently occurring) exposures also introduces upward bias. Strategies to avoid this include partitioning exposed numbers between overlapping exposures […] or estimating only for the ‘dominant’ carcinogen with the highest risk. Where multiple exposures remain, one approach is to assume that the exposures are independent and their joint effects are multiplicative. The PAFs can then be combined to give an overall PAF for that cancer using a product sum. […] Potential sources of bias for PAFs include inappropriate choice of risk estimates, imprecision in the risk estimates and estimates of proportions exposed, inaccurate risk exposure period and latency assumptions, and a lack of separate risk estimates in some cases for women and/or cancer incidence. In addition, a key decision is the choice of which diseases and exposures are to be included.”
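The PAF arithmetic described above can be sketched compactly (Levin's formula, plus the product-sum combination for exposures assumed independent with multiplicative joint effects):

```python
def paf_levin(p_exposed, rr):
    """Levin's formula: PAF = p(E)(RR - 1) / (1 + p(E)(RR - 1))."""
    excess = p_exposed * (rr - 1)
    return excess / (1 + excess)

def combine_pafs(pafs):
    """Product-sum combination for independent, multiplicative exposures:
    overall PAF = 1 - prod(1 - PAF_i). Unlike direct summing, the result
    can never exceed 100%."""
    remaining = 1.0
    for p in pafs:
        remaining *= 1 - p
    return 1 - remaining

# Hypothetical numbers: 10% of the population exposed, RR = 2.0:
single = paf_levin(0.10, 2.0)          # ≈ 0.091, i.e. ~9% of cases attributable
# Three exposures with PAF 0.5 each: naive sum gives 150%,
# the product-sum gives 1 - 0.5**3 = 87.5%.
overall = combine_pafs([0.5, 0.5, 0.5])
```

This makes the quoted upward-bias point concrete: summing the three PAFs of 0.5 would claim 150% of cases, while the multiplicative combination caps the overall fraction below 100%.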

“The British Cancer Burden study is perhaps the most detailed study of occupationally related cancers in that it includes all those relevant carcinogens classified at the end of 2008 […] In the British study the attributable fractions ranged from less than 0.01% to 95% overall, the most important cancer sites for occupational attribution being, for men, mesothelioma (97%), sinonasal (46%), lung (21.1%), bladder (7.1%), and non-melanoma skin cancer (7.1%) and, for women, mesothelioma (83%), sinonasal (20.1%), lung (5.3%), breast (4.6%), and nasopharynx (2.5%). Occupation also contributed 2% or more overall to cancers of the larynx, oesophagus, and stomach, and soft tissue sarcoma with, in addition for men, melanoma of the eye (due to welding), and non-Hodgkin lymphoma. […] The overall results from the occupational risk factors component of the Global Burden of Disease 2010 study illustrate several important aspects of burden studies.14 Of the estimated 850 000 occupationally related deaths worldwide, the top three causes were: (1) injuries (just over a half of all deaths); (2) particulate matter, gases, and fumes leading to COPD; and (3) carcinogens. When DALYs were used as the burden measure, injuries still accounted for the highest proportion (just over one-third), but ergonomic factors leading to low back pain resulted in almost as many DALYs, and both were almost an order of magnitude higher than the DALYs from carcinogens. The difference in relative contributions of the various risk factors between deaths and DALYs arises because of the varying ages of those affected, and the differing chronicity of the resulting conditions. Both measures are valid, but they represent a different aspect of the burden arising from the hazardous exposures […]. 
Both the British and Global Burden of Disease studies draw attention to the important issues of: (1) multiple occupational carcinogens causing specific types of cancer, for example, the British study evaluated 21 lung carcinogens; and (2) specific carcinogens causing several different cancers, for example, IARC now defines asbestos as a group 1 or 2A carcinogen for seven cancer sites. These issues require careful consideration for burden estimation and for prioritizing risk reduction strategies. […] The long latency of many cancers means that estimates of current burden are based on exposures occurring in the past, often much higher than those existing today. […] long latency [also] means that risk reduction measures taken now will take a considerable time to be reflected in reduced disease incidence.”

“Exposures and effects are linked by dynamic processes occurring across time. These processes can often be usefully decomposed into two distinct biological relationships, each with several components: 1. The exposure-dose relationship […] 2. The dose-effect relationship […] These two component relationships are sometimes represented by two different mathematical models: a toxicokinetic model […], and a disease process model […]. Depending on the information available, these models may be relatively simple or highly complex. […] Often the various steps in the disease process do not occur at the same rate, some of these processes are ‘fast’, such as cell killing, while others are ‘slow’, such as damage repair. Frequently a few slow steps in a process become limiting to the overall rate, which sets the temporal pattern for the entire exposure-response relationship. […] It is not necessary to know the full mechanism of effects to guide selection of an exposure-response model or exposure metric. Because of the strong influence of the rate-limiting steps, often it is only necessary to have observations on the approximate time course of effects. This is true whether the effects appear to be reversible or irreversible, and whether damage progresses proportionately with each unit of exposure (actually dose) or instead occurs suddenly, and seemingly without regard to the amount of exposure, such as an asthma attack.”
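As a purely illustrative example of the exposure-dose step (a generic sketch, not a model taken from the book), a one-compartment first-order toxicokinetic model shows how a slow elimination step acts as the rate-limiting process that sets the temporal pattern of internal dose:

```python
def body_burden(exposure, k_in, k_out, dt=0.1):
    """Euler integration of a one-compartment toxicokinetic model:
    dD/dt = k_in * E(t) - k_out * D(t).
    A small k_out (slow elimination) makes dose lag far behind exposure
    and accumulate toward the steady state k_in * E / k_out."""
    d, series = 0.0, []
    for e in exposure:
        d += (k_in * e - k_out * d) * dt
        series.append(d)
    return series

# Constant unit exposure: the burden approaches k_in / k_out = 2.0.
burden = body_burden([1.0] * 2000, k_in=1.0, k_out=0.5, dt=0.1)
```

Under these made-up rate constants the burden climbs toward 2.0 with a half-life of about 1.4 time units; the same exposure history with a tenfold smaller `k_out` would yield a tenfold higher plateau reached far more slowly, which is why the slow step dominates the exposure-response relationship.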

“In this chapter, we argue that formal disease process models have the potential to improve the sensitivity of epidemiology for detecting new and emerging occupational and environmental risks where there is limited mechanistic information. […] In our approach, these models are often used to create exposure or dose metrics, which are in turn used in epidemiological models to estimate exposure-disease associations. […] Our goal is a methodology to formulate strong tests of our exposure-disease hypotheses in which a hypothesis is developed in as much biological detail as it can be, expressed in a suitable dynamic (temporal) model, and tested by its fit with a rich data set, so that its flaws and misperceptions of reality are fully displayed. Rejecting such a fully developed biological hypothesis is more informative than either rejecting or failing to reject a generic or vaguely defined hypothesis.” For example, the hypothesis ‘truck drivers have more risk of lung cancer than non-drivers’13 is of limited usefulness for prevention […]. Hypothesizing that a particular chemical agent in truck exhaust is associated with lung cancer — whether the hypothesis is refuted or supported by data — is more likely to lead to successful prevention activities. […] we believe that the choice of models against which to compare the data should, so far as possible, be guided by explicit hypotheses about the underlying biological processes. In other words, you can get as much as possible from epidemiology by starting from well-thought-out hypotheses that are formalized as mathematical models into which the data will be placed. The disease process models can serve this purpose.2”

“The basic idea of empirical Bayes (EB) and semi-Bayes (SB) adjustments for multiple associations is that the observed variation of the estimated relative risks around their geometric mean is larger than the variation of the true (but unknown) relative risks. In SB adjustments, an a priori value for the extra variation is chosen which assigns a reasonable range of variation to the true relative risks and this value is then used to adjust the observed relative risks.7 The adjustment consists in shrinking outlying relative risks towards the overall mean (of the relative risks for all the different exposures being considered). The larger the individual variance of the relative risks, the stronger the shrinkage, so that the shrinkage is stronger for less reliable estimates based on small numbers. Typical applications in which SB adjustments are a useful alternative to traditional methods of adjustment for multiple comparisons are in large occupational surveillance studies, where many relative risks are estimated with few or no a priori beliefs about which associations might be causal.7”

“The advantage of [the SB adjustment] approach over classical Bonferroni corrections is that on the average it produces more valid estimates of the odds ratio for each occupation/exposure. If we do a study which involves assessing hundreds of occupations, the problem is not only that we get many ‘false positive’ results by chance. A second problem is that even the ‘true positives’ tend to have odds ratios that are too high. For example, if we have a group of occupations with true odds ratios around 1.5, then the ones that stand out in the analysis are those with the highest odds ratios (e.g. 2.5) which will be elevated partly because of real effects and partly by chance. The Bonferroni correction addresses the first problem (too many chance findings) but not the second, that the strongest odds ratios are probably too high. In contrast, SB adjustment addresses the second problem by correcting for the anticipated regression to the mean that would have occurred if the study had been repeated, and thereby on the average produces more valid odds ratio estimates for each occupation/exposure. […] most epidemiologists write their Methods and Results sections as frequentists and their Introduction and Discussion sections as Bayesians. In their Methods and Results sections, they ‘test’ their findings as if their data are the only data that exist. In the Introduction and Discussion, they discuss their findings with regard to their consistency with previous studies, as well as other issues such as biological plausibility. This creates tensions when a small study has findings which are not statistically significant but which are consistent with prior knowledge, or when a study finds statistically significant findings which are inconsistent with prior knowledge. […] In some (but not all) instances, things can be made clearer if we include Bayesian methods formally in the Methods and Results sections of our papers”.
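The shrinkage mechanics described in these two passages fit in a few lines. Below is a hedged sketch on the log scale, with made-up numbers; `tau2` stands in for the a priori "extra variation" assigned to the true log relative risks, and a simple (rather than precision-weighted) overall mean is used for clarity:

```python
import math

def semi_bayes_shrink(log_rrs, variances, tau2):
    """Shrink each observed log relative risk toward the overall mean.
    tau2: a priori variance of the true log RRs (chosen, not estimated).
    Estimates with larger sampling variance (small numbers) are shrunk harder."""
    mean = sum(log_rrs) / len(log_rrs)  # simple mean; precision weighting is also common
    return [mean + (tau2 / (tau2 + s2)) * (x - mean)
            for x, s2 in zip(log_rrs, variances)]

# Four hypothetical occupations; the outlying OR of 2.5 is also the least
# precisely estimated (variance 0.25 vs. 0.01 for the others):
observed = [math.log(r) for r in (1.0, 1.1, 1.2, 2.5)]
shrunk = semi_bayes_shrink(observed, [0.01, 0.01, 0.01, 0.25], tau2=0.05)
# The noisy outlier keeps only tau2/(tau2+0.25) = 1/6 of its distance from
# the mean, while the precise estimates barely move.
```

This is exactly the correction for regression to the mean the quote describes: the odds ratio that "stands out" is pulled back hardest precisely because its size is partly a chance artifact of imprecision.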

“In epidemiology, risk is most often quantified in terms of relative risk — i.e. the ratio of the probability of an adverse outcome in someone with a specified exposure to that in someone who is unexposed, or exposed at a different specified level. […] Relative risks can be estimated from a wider range of study designs than individual attributable risks. They have the advantage that they are often stable across different groups of people (e.g. of different ages, smokers, and non-smokers) which makes them easier to estimate and quantify. Moreover, high relative risks are generally unlikely to be explained by unrecognized bias or confounding. […] However, individual attributable risks are a more relevant measure by which to quantify the impact of decisions in risk management on individuals. […] Individual attributable risk is the difference in the probability of an adverse outcome between someone with a specified exposure and someone who is unexposed, or exposed at a different specified level. It is the critical measure when considering the impact of decisions in risk management on individuals. […] Population attributable risk is the difference in the frequency of an adverse outcome between a population with a given distribution of exposures to a hazardous agent, and that in a population with no exposure, or some other specified distribution of exposures. It depends on the prevalence of exposure at different levels within the population, and on the individual attributable risk for each level of exposure. It is a measure of the impact of the agent at a population level, and is relevant to decisions in risk management for populations. […] Population attributable risks are highest when a high proportion of a population is exposed at levels which carry high individual attributable risks. On the other hand, an exposure which carries a high individual attributable risk may produce only a small population attributable risk if the prevalence of such exposure is low.”
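The contrast drawn in that passage between individual and population attributable risk is simple arithmetic; here is a toy sketch with invented risks showing how a high individual risk plus a rare exposure yields a small population impact:

```python
def individual_attributable_risk(risk_exposed, risk_unexposed):
    """Risk difference for one person:
    P(outcome | exposed) - P(outcome | unexposed)."""
    return risk_exposed - risk_unexposed

def population_attributable_risk(prevalence, risk_exposed, risk_unexposed):
    """Excess outcome frequency in the population:
    p(E) * individual attributable risk, for a single exposure level."""
    return prevalence * (risk_exposed - risk_unexposed)

# Hypothetical exposure A: large individual risk (0.20 vs. 0.01) but only
# 0.1% of the population exposed:
rare = population_attributable_risk(0.001, 0.20, 0.01)   # ≈ 0.00019
# Hypothetical exposure B: modest individual risk (0.02 vs. 0.01) but half
# the population exposed:
common = population_attributable_risk(0.5, 0.02, 0.01)   # ≈ 0.005
```

Despite an individual attributable risk nearly twenty times larger, exposure A accounts for far fewer cases in the population than exposure B, which is the point about prevalence driving the population-level measure.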

“Hazard characterization entails quantification of risks in relation to routes, levels, and durations of exposure. […] The findings from individual studies are often used to determine a no observed adverse effect level (NOAEL), lowest observed effect level (LOEL), or benchmark dose lower 95% confidence limit (BMDL) for relevant effects […] [NOAEL] is the highest dose or exposure concentration at which there is no discernible adverse effect. […] [LOEL] is the lowest dose or exposure concentration at which a discernible effect is observed. If comparison with unexposed controls indicates adverse effects at all of the dose levels in an experiment, a NOAEL cannot be derived, but the lowest dose constitutes a LOEL, which might be used as a comparator for estimated exposures or to derive a toxicological reference value […] A BMDL is defined in relation to a specified adverse outcome that is observed in a study. Usually, this is the outcome which occurs at the lowest levels of exposure and which is considered critical to the assessment of risk. Statistical modelling is applied to the experimental data to estimate the dose or exposure concentration which produces a specified small level of effect […]. The BMDL is the lower 95% confidence limit for this estimate. As such, it depends both on the toxicity of the test chemical […], and also on the sample sizes used in the study (other things being equal, larger sample sizes will produce more precise estimates, and therefore higher BMDLs). In addition to accounting for sample size, BMDLs have the merit that they exploit all of the data points in a study, and do not depend so critically on the spacing of doses that is adopted in the experimental design (by definition a NOAEL or LOEL can only be at one of the limited number of dose levels used in the experiment). On the other hand, BMDLs can only be calculated where an adverse effect is observed. 
Even if there are no clear adverse effects at any dose level, a NOAEL can be derived (it will be the highest dose administered).”
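The NOAEL/LOEL logic above reduces to a small lookup over the tested dose levels. A sketch, assuming hypothetical doses and a simple monotone reading of the definitions (BMDL estimation needs actual dose-response modelling and is not attempted here):

```python
def noael_loel(doses, adverse):
    """doses: ascending dose levels tested against unexposed controls;
    adverse: whether a discernible adverse effect was seen at each dose.
    Returns (NOAEL, LOEL); either may be None, per the definitions above."""
    loel = next((d for d, a in zip(doses, adverse) if a), None)
    if loel is None:
        return doses[-1], None            # no effect at any dose: NOAEL = highest dose
    no_effect = [d for d in doses if d < loel]
    return (max(no_effect) if no_effect else None), loel

# Hypothetical experiment: effects seen at 10 and 30 mg/kg but not below.
print(noael_loel([1, 3, 10, 30], [False, False, True, True]))  # (3, 10)
```

The two edge cases in the quote fall out directly: effects at every dose give `(None, lowest_dose)` (no NOAEL derivable), and effects at no dose give `(highest_dose, None)` — and in both cases the result is constrained to the dose levels actually tested, unlike a BMDL.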

December 8, 2017 Posted by | Books, Cancer/oncology, Epidemiology, Medicine, Statistics

Occupational Epidemiology (II)

Some more observations from the book below.

“RD [Retinal detachment] is the separation of the neurosensory retina from the underlying retinal pigment epithelium.1 RD is often preceded by posterior vitreous detachment — the separation of the posterior vitreous from the retina as a result of vitreous degeneration and shrinkage2 — which gives rise to the sudden appearance of floaters and flashes. Late symptoms of RD may include visual field defects (shadows, curtains) or even blindness. The success rate of RD surgery has been reported to be over 90%;3 however, a loss of visual acuity is frequently reported by patients, particularly if the macula is involved.4 Since the natural history of RD can be influenced by early diagnosis, patients experiencing symptoms of posterior vitreous detachment are advised to undergo an ophthalmic examination.5 […] Studies of the incidence of RD give estimates ranging from 6.3 to 17.9 cases per 100 000 person-years.6 […] Age is a well-known risk factor for RD. In most studies the peak incidence was recorded among subjects in their seventh decade of life. A secondary peak at a younger age (20–30 years) has been identified […] attributed to RD among highly myopic patients.6 Indeed, depending on the severity,
myopia is associated with a four- to ten-fold increase in risk of RD.7 [Diabetics with retinopathy are also at increased risk of RD, US] […] While secondary prevention of RD is current practice, no effective primary prevention strategy is available at present. The idea is widespread among practitioners that RD is not preventable, probably the consequence of our historically poor understanding of the aetiology of RD. For instance, on the website of the Mayo Clinic — one of the top-ranked hospitals for ophthalmology in the US — it is possible to read that ‘There’s no way to prevent retinal detachment’.9”

“Intraocular pressure […] is influenced by physical activity. Dynamic exercise causes an acute reduction in intraocular pressure, whereas physical fitness is associated with a lower baseline value.29 Conversely, a sudden rise in intraocular pressure has been reported during the Valsalva manoeuvre.30-32 […] Occupational physical activity may […] cause both short- and long-term variations in intraocular pressure. On the one hand, physically demanding jobs may contribute to decreased baseline levels by increasing physical fitness but, on the other hand, lifting tasks may cause an important acute increase in pressure. Moreover, the eye of a manual worker who performs repeated lifting tasks involving the Valsalva manoeuvre may undergo several dramatic changes in intraocular pressure within a single working shift. […] A case-control study was carried out to test the hypothesis that repeated lifting tasks involving the Valsalva manoeuvre could be a risk factor for RD. […] heavy lifting was a strong risk factor for RD (OR 4.4, 95% CI 1.6–13). Intriguingly, body mass index (BMI) also showed a clear association with RD (top quartile: OR 6.8, 95% CI 1.6–29). […] Based on their findings, the authors concluded that heavy occupational lifting (involving the Valsalva manoeuvre) may be a relevant risk factor for RD in myopics.”
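As a side note on the statistics quoted above: an odds ratio with a Woolf (log-based) 95% confidence interval can be computed from a 2×2 case-control table in a few lines. The counts below are made up to land near the reported OR of 4.4; the study's actual table is not given in the text:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-based) 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts chosen to give an OR near the reported 4.4.
or_, lo, hi = odds_ratio_ci(16, 20, 10, 55)
print(f"OR {or_:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
```

The interval is wide because the cell counts are small, which is why case-control studies of rare outcomes like RD often report CIs spanning an order of magnitude.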

“The proportion of the world’s population over 60 is forecast to double from 11.6% in 2012 to 21.8% in 2050.1 […] the International Labour Organization notes that, worldwide, just 40% of the working age population has legal pension coverage, and only 26% of the working population is effectively covered by old-age pension schemes. […] in less developed regions, labour force participation in those over 65 is much higher than in more developed regions.8 […] Longer working lives increase cumulative exposures, as well as increasing the time since exposure — important when there is a long latency period between exposure and resultant disease. Further, some exposures may have a greater effect when they occur to older workers, e.g. carcinogens that are promoters rather than initiators. […] Older workers tend to have more chronic health conditions. […] Older workers have fewer injuries, but take longer to recover. […] For some ‘knowledge workers’, like physicians, even a relatively minor cognitive decline […] might compromise their competence. […]  Most past studies have treated age as merely a confounding variable and rarely, if ever, have considered it an effect modifier. […]  Jex and colleagues24 argue that conceptually we should treat age as the variable of interest so that other variables are viewed as moderating the impact of age. […] The single best improvement to epidemiological research on ageing workers is to conduct longitudinal studies, including follow-up of workers into retirement. Cross-sectional designs almost certainly incur the healthy survivor effect, since unhealthy workers may retire early.25 […] Analyses should distinguish ageing per se, genetic factors, work exposures, and lifestyle in order to understand their relative and combined effects on health.”

“Musculoskeletal disorders have long been recognized as an important source of morbidity and disability in many occupational populations.1,2 Most musculoskeletal disorders, for most people, are characterized by recurrent episodes of pain that vary in severity and in their consequences for work. Most episodes subside uneventfully within days or weeks, often without any intervention, though about half of people continue to experience some pain and functional limitations after 12 months.3,4 In working populations, musculoskeletal disorders may lead to a spell of sickness absence. Sickness absence is increasingly used as a health parameter of interest when studying the consequences of functional limitations due to disease in occupational groups. Since duration of sickness absence contributes substantially to the indirect costs of illness, interventions increasingly address return to work (RTW).5 […] The Clinical Standards Advisory Group in the United Kingdom reported RTW within 2 weeks for 75% of all low back pain (LBP) absence episodes and suggested that approximately 50% of all work days lost due to back pain in the working population are from the 85% of people who are off work for less than 7 days.6”

“Any RTW curve over time can be described with a mathematical Weibull function.15 This Weibull function is characterized by a scale parameter λ and a shape parameter k. The scale parameter λ is a function of different covariates that include the intervention effect, preferably expressed as hazard ratio (HR) between the intervention group and the reference group in a Cox’s proportional hazards regression model. The shape parameter k reflects the relative increase or decrease in survival time, thus expressing how much the RTW rate will decrease with prolonged sick leave. […] a HR as measure of effect can be introduced as a covariate in the scale parameter λ in the Weibull model and the difference in areas under the curve between the intervention model and the basic model will give the improvement in sickness absence days due to the intervention. By introducing different times of starting the intervention among those workers still on sick leave, the impact of timing of enrolment can be evaluated. Subsequently, the estimated changes in total sickness absence days can be expressed in a benefit/cost ratio (BC ratio), where benefits are the costs saved due to a reduction in sickness absence and costs are the expenditures relating to the intervention.15”
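The Weibull/hazard-ratio reasoning in that passage can be sketched in code: a Weibull "still on sick leave" curve, an intervention that multiplies the RTW hazard by a HR from some enrolment day onward, and the difference in areas under the two curves as days of absence saved. All parameter values (scale, shape, HR, costs) are purely illustrative, not from the book:

```python
import math

def weibull_surv(t, lam, k):
    """Probability of still being on sick leave at day t (Weibull RTW curve)."""
    return math.exp(-((t / lam) ** k))

def sick_days(lam, k, hr=1.0, horizon=365, start=0, dt=0.25):
    """Expected days on sick leave up to `horizon` (area under the curve).
    The intervention multiplies the RTW hazard by `hr` from day `start`
    onward, mimicking late enrolment of workers still on sick leave."""
    total, t = 0.0, 0.0
    while t < horizon:
        s = weibull_surv(t, lam, k)
        if hr != 1.0 and t > start:
            s_start = weibull_surv(start, lam, k)
            # Proportional hazards from `start`: S(t) = S(start) * (S0(t)/S0(start))**hr
            s = s_start * (s / s_start) ** hr
        total += s * dt
        t += dt
    return total

# Illustrative parameters: scale 30 days, shape 0.9 (RTW rate declines
# with prolonged leave), intervention HR 1.3, assumed costs.
baseline = sick_days(30, 0.9)
for start in (0, 28, 90):
    saved = baseline - sick_days(30, 0.9, hr=1.3, start=start)
    bc_ratio = saved * 150 / 400  # 150/day of absence saved vs 400 per intervention
    print(f"enrolment at day {start:3d}: {saved:5.1f} days saved, B/C ≈ {bc_ratio:.2f}")
```

The later the enrolment, the fewer workers are still on leave to benefit, so the saved days and the B/C ratio shrink, which is exactly the timing effect the quoted passage describes.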

“A crucial factor in understanding why interventions are effective or not is the timing of the enrolment of workers on sick leave into the intervention. The RTW pattern over time […] has important consequences for appropriate timing of the best window for effective clinical and occupational interventions. The evidence presented by Palmer and colleagues clearly suggests that [in the context of LBP] a stepped care approach is required. In the first step of rapid RTW, most workers will return to work even without specific interventions. Simple, short interventions involving effective coordination and cooperation between primary health care and the workplace will be sufficient to help the majority of workers to achieve an early RTW. In the second step, more expensive, structured interventions are reserved for those who are having difficulties returning, typically between 4 weeks and 3 months. However, to date there is little evidence on the optimal timing of such interventions for workers on sick leave due to LBP.14,15 […] the cost-benefits of a structured RTW intervention among workers on sick leave will be determined by the effectiveness of the intervention, the natural speed of RTW in the target population, the timing of the enrolment of workers into the intervention, and the costs of both the intervention and of a day of sickness absence. […] The cost-effectiveness of a RTW intervention will be determined by the effectiveness of the intervention, the costs of the intervention and of a day of sickness absence, the natural course of RTW in the target population, the timing of the enrolment of workers into the RTW intervention, and the time lag before the intervention takes effect. The latter three factors are seldom taken into consideration in systematic reviews and guidelines for management of RTW, although their impact may easily be as important  as classical measures of effectiveness, such as effect size or HR.”

“In order to obtain information of the highest quality and utility, surveillance schemes have to be designed, set up, and managed with the same methodological rigour as high-calibre prospective cohort studies. Whether surveillance schemes are voluntary or not, considerable effort has to be invested to ensure a satisfactory and sufficient denominator, the best numerator quality, and the most complete ascertainment. Although the force of statute is relied upon in some surveillance schemes, even in these the initial and continuing motivation of the reporters (usually physicians) is paramount. […] There is a surveillance ‘pyramid’ within which the patient’s own perception is at the base, the GP is at a higher level, and the clinical specialist is close to the apex. The source of the surveillance reports affects the numerator because case severity and case mix differ according to the level in the pyramid.19 Although incidence rate estimates may be expected to be lower at the higher levels in the surveillance pyramid this is not necessarily always the case. […] Although surveillance undertaken by physicians who specialize in the organ system concerned or in occupational disease (or in both aspects) may be considered to be the medical ‘gold standard’ it can suffer from a more limited patient catchment because of various referral filters. Surveillance by GPs will capture numerator cases as close to the base of the pyramid as possible, but may suffer from greater diagnostic variation than surveillance by specialists. Limiting recruitment to GPs with a special interest, and some training, in occupational medicine is a compromise between the two levels.20”

“When surveillance is part of a statutory or other compulsory scheme then incident case identification is a continuous and ongoing process. However, when surveillance is voluntary, for a research objective, it may be preferable to sample over shorter, randomly selected intervals, so as to reduce the demands associated with the data collection and ‘reporting fatigue’. Evidence so far suggests that sampling over shorter time intervals results in higher incidence estimates than continuous sampling.21 […] Although reporting fatigue is an important consideration in tempering conclusions drawn from […] multilevel models, it is possible to take account of this potential bias in various ways. For example, when evaluating interventions, temporal trends in outcomes resulting from other exposures can be used to control for fatigue.23,24 The phenomenon of reporting fatigue may be characterized by an ‘excess of zeroes’ beyond what is expected of a Poisson distribution and this effect can be quantified.27 […] There are several considerations in determining incidence from surveillance data. It is possible to calculate an incidence rate based on the general population, on the population of working age, or on the total working population,19 since these denominator bases are generally readily available, but such rates are not the most useful in determining risk. Therefore, incidence rates are usually calculated in respect of specific occupations or industries.22 […] Ideally, incidence rates should be expressed in relation to quantitative estimates of exposure but most surveillance schemes would require additional data collection as special exercises to achieve this aim.” [for much more on these topics, see also M’ikanatha & Iskander’s book.]
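The "excess of zeroes beyond what is expected of a Poisson distribution" mentioned above can be quantified directly by comparing the observed fraction of zero reports with the Poisson prediction exp(-mean). A toy illustration with fabricated reporting counts:

```python
import math

# Fabricated monthly case counts from 200 reporting physicians: more zero
# reports than a Poisson distribution with the same mean would predict,
# the signature of reporting fatigue described in the text.
reports = [0] * 140 + [1] * 30 + [2] * 18 + [3] * 8 + [4] * 4

mean = sum(reports) / len(reports)
observed_zero = reports.count(0) / len(reports)
poisson_zero = math.exp(-mean)  # P(X = 0) for a Poisson with that mean

print(f"mean reports/period     : {mean:.2f}")
print(f"observed zero fraction  : {observed_zero:.2f}")
print(f"Poisson-expected zeroes : {poisson_zero:.2f}")
print(f"excess zeroes           : {observed_zero - poisson_zero:.2f}")
```

In practice this comparison motivates fitting a zero-inflated Poisson model, in which the estimated zero-inflation probability is the quantified "fatigue" component.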

“Estimates of lung cancer risk attributable to occupational exposures vary considerably by geographical area and depend on study design, especially on the exposure assessment method, but may account for around 5–20% of cancers among men, but less (<5%) among women;2 among workers exposed to (suspected) lung carcinogens, the percentage will be higher. […] most exposure to known lung carcinogens originates from occupational settings and will affect millions of workers worldwide.  Although it has been established that these agents are carcinogenic, only limited evidence is available about the risks encountered at much lower levels in the general population. […] One of the major challenges in community-based occupational epidemiological studies has been valid assessment of the occupational exposures experienced by the population at large. Contrary to the detailed information usually available for an industrial population (e.g. in a retrospective cohort study in a large chemical company) that often allows for quantitative exposure estimation, community-based studies […] have to rely on less precise and less valid estimates. The choice of method of exposure assessment to be applied in an epidemiological study depends on the study design, but it boils down to choosing between acquiring self-reported exposure, expert-based individual exposure assessment, or linking self-reported job histories with job-exposure matrices (JEMs) developed by experts. […] JEMs have been around for more than three decades.14 Their main distinction from either self-reported or expert-based exposure assessment methods is that exposures are no longer assigned at the individual subject level but at job or task level. As a result, JEMs make no distinction in assigned exposure between individuals performing the same job, or even between individuals performing a similar job in different companies. 
[…] With the great majority of occupational exposures having a rather low prevalence (<10%) in the general population it is […] extremely important that JEMs are developed aiming at a highly specific exposure assessment so that only jobs with a high likelihood (prevalence) and intensity of exposure are considered to be exposed. Aiming at a high sensitivity would be disastrous because a high sensitivity would lead to an enormous number of individuals being assigned an exposure while actually being unexposed […] Combinations of the methods just described exist as well”.
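The arithmetic behind this specificity-versus-sensitivity argument is worth making explicit: with a rare exposure, even a modest false-positive rate swamps the true positives, so the positive predictive value of a "sensitive" JEM collapses. A small illustration (all sensitivity/specificity values hypothetical):

```python
def ppv(prevalence, sensitivity, specificity):
    """Probability that a worker the JEM labels 'exposed' truly is exposed."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

prev = 0.05  # most occupational exposures are rare in the general population
# A JEM tuned for sensitivity versus one tuned for specificity (values illustrative):
print(f"'sensitive' JEM (sens 0.95, spec 0.80): PPV = {ppv(prev, 0.95, 0.80):.2f}")
print(f"'specific'  JEM (sens 0.60, spec 0.99): PPV = {ppv(prev, 0.60, 0.99):.2f}")
```

With these numbers the sensitive JEM labels roughly four unexposed workers "exposed" for every truly exposed one, diluting any exposure-disease association toward the null, whereas the specific JEM keeps most labelled workers truly exposed.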

“Community-based studies, by definition, address a wider range of types of exposure and a much wider range of encountered exposure levels (e.g. relatively high exposures in primary production but often lower in downstream use, or among indirectly exposed individuals). A limitation of single community-based studies is often the relatively low number of exposed individuals. Pooling across studies might therefore be beneficial. […] Pooling projects need careful planning and coordination, because the original studies were conducted for different purposes, at different time periods, using different questionnaires. This heterogeneity is sometimes perceived as a disadvantage but also implies variations that can be studied and thereby provide important insights. Every pooling project has its own dynamics but there are several general challenges that most pooling projects confront. Creating common variables for all studies can stretch from simple re-naming of variables […] or recoding of units […] to the re-categorization of national educational systems […] into years of formal education. Another challenge is to harmonize the different classification systems of, for example, diseases (e.g. International Classification of Disease (ICD)-9 versus ICD-10), occupations […], and industries […]. This requires experts in these respective fields as well as considerable time and money. Harmonization of data may mean losing some information; for example, ISCO-68 contains more detail than ISCO-88, which makes it possible to recode ISCO-68 to ISCO-88 with only a little loss of detail, but it is not possible to recode ISCO-88 to ISCO-68 without losing one or two digits in the job code. […] Making the most of the data may imply that not all studies will qualify for all analyses. For example, if a study did not collect data regarding lung cancer cell type, it can contribute to the overall analyses but not to the cell type-specific analyses. 
It is important to remember that the quality of the original data is critical; poor data do not become better by pooling.”

December 6, 2017 Posted by | Books, Cancer/oncology, Demographics, Epidemiology, Health Economics, Medicine, Ophthalmology, Statistics | Leave a comment

Occupational Epidemiology (I)

Below some observations from the first chapters of the book, which I called ‘very decent’ on goodreads.

“Coal workers were amongst the first occupational groups to be systematically studied in well-designed epidemiological research programmes. As a result, the causes and spectrum of non-malignant respiratory disease among coal workers have been rigorously explored and characterized.1,2 While respirable silica (quartz) in mining has long been accepted as a cause of lung disease, the important contributing role of coal mine dust was questioned until the middle of the twentieth century.3 Occupational exposure to coal mine dust has now been shown unequivocally to cause excess mortality and morbidity from non-malignant respiratory disease, including coal workers’ pneumoconiosis (CWP) and chronic obstructive pulmonary disease (COPD). The presence of respirable quartz, often a component of coal mine dust, contributes to disease incidence and severity, increasing the risk of morbidity and mortality in exposed workers.”

“Coal is classified into three major coal ranks: lignite, bituminous, and anthracite from lowest to highest carbon content and heating value. […] In the US, the Bureau of Mines and the Public Health Service actively studied anthracite and bituminous coal mines and miners throughout the mid-1900s.3 These studies showed significant disease among workers with minimal silica exposure, suggesting that coal dust itself was toxic; however, these results were suppressed and not widely distributed. It was not until the 1960s that a popular movement of striking coal miners and their advocates demanded legislation to prevent, study, and compensate miners for respiratory diseases caused by coal dust exposure. […] CWP [Coal Workers’ Pneumoconiosis] is an interstitial lung disease resulting from the accumulation of coal mine dust in miners’ lungs and the tissue reaction to its presence. […] It is classified […] as simple or complicated; the latter is also known as progressive massive fibrosis (PMF) […] PMF is a progressive, debilitating disease which is predictive of disability and mortality […] A causal exposure-response relationship has been established between cumulative coal mine dust exposure and risk of developing both CWP and PMF,27-31 and with mortality from pneumoconiosis and PMF.23-26, 30 Incidence, the stage of CWP, and progression to PMF, as well as mortality, are positively associated with increasing proportion of respirable silica in the coal mine dust32 and higher coal rank. […] Not only do coal workers experience occupational mortality from CWP and PMF,12, 23-26 they also have excess mortality from COPD compared to the general population. 
Cross-sectional and longitudinal studies […] have demonstrated an exposure-response relationship between cumulative coal mine dust exposure and chronic bronchitis,36-40 respiratory symptoms,41 and pulmonary function even in the presence of normal radiographic findings.42 The relationship between the rate of decline of lung function and coal mine dust exposure is not linear, the greatest reduction occurring in the first few years of exposure.43”

“Like most occupational cohort studies, those of coal workers are affected by the healthy worker effect. A strength of the PFR and NCS studies is the ability to use internal analysis (i.e. comparing workers by exposure level) which controls for selection bias at hire, one component of the effect.59 However, internal analyses may not fully control for ongoing selection bias if symptoms of adverse health effects are related to exposure (referred to as the healthy worker survivor effect) […] Work status is a key component of the healthy worker survivor effect, as are length of time since entering the industry and employment duration.61 Both the PFR and NCS studies have consistently found higher rates of symptoms and disease among former miners compared with current miners, consistent with a healthy worker survivor effect.62,63”

“Coal mining is rapidly expanding in the developing world. From 2007 to 2010 coal production declined in the US by 6% and Europe by 10% but increased in Eurasia by 9%, in Africa by 3%, and in Asia and Oceania by 19%.71 China saw a dramatic increase of 39% from 2007 to 2011. There have been few epidemiological studies published that characterize the disease burden among coal workers during this expansion but, in one study conducted among miners in Liaoning Province, China, rates of CWP were high.72 There are an estimated six million underground miners in China at present;73 hence even low disease rates will cause a high burden of illness and excess premature mortality.”

“Colonization with S. aureus may occur on mucous membranes of the respiratory or intestinal tract, or on other body surfaces, and is usually asymptomatic. Nasal colonization with S. aureus in the human population occurs among around 30% of individuals. Methicillin-resistant S. aureus (MRSA) are strains that have developed resistance to beta-lactam antibiotics […] and, as a result, may cause difficult-to-treat infections in humans. Nasal colonization with MRSA in the general population is low; the highest rate reported in a population-based survey was 1.5%.2,3 Infections with MRSA are associated with treatment failure and increased severity of disease.4,5 […] In 2004 a case of, at that time non-typeable, MRSA was reported in a 6-month-old girl admitted to a hospital in the Netherlands. […] Later on, this strain and some related strains appeared strongly associated with livestock production, and were labelled livestock-associated MRSA (LA-MRSA) and are nowadays referred to as MRSA ST398. […] It is common knowledge that the use of antimicrobial agents in humans, animals, and plants promotes the selection and spread of antimicrobial-resistant bacteria and resistance genes through genetic mutations and gene transfer.15 Antimicrobial agents are widely used in veterinary medicine and modern food animal production depends on the use of large amounts of antimicrobials for disease control. Use of antimicrobials probably played an important role in the emergence of MRSA ST398.”

“MRSA was rarely isolated from animals before 2000. […] Since 2005 onwards, LA-MRSA has been increasingly frequently reported in different food production animals, including cattle, pigs, and poultry […] The MRSA case illustrates the rapid emergence, and transmission from animals to humans, of a new strain of resistant micro-organisms from an animal reservoir, creating risks for different occupational groups. […] High animal-to-human transmission of ST398 has been reported in pig farming, leading to an elevated prevalence of nasal MRSA carriage ranging from a few per cent in Ireland up to 86% in German pig farmers […]. One study showed a clear association between the prevalence of MRSA carriage among participants from farms with MRSA colonized pigs (50%) versus 3% on farms without colonized pigs […] MRSA prevalence is low among animals from alternative breeding systems with low use of antimicrobials, also leading to low carriage rates in farmers.71 […] Veterinarians are […] frequently in direct contact with livestock, and are clearly at elevated risk of LA-MRSA carriage when compared to the general population. […] Of all LA-MRSA carrying individuals, a fraction appear to be persistent carriers. […] Few studies have examined transmission from humans to humans. Generally, studies among family members of livestock farmers show a considerably lower prevalence than among the farmers with more intense animal contact. […] Individuals who are ST398 carriers in the general population usually have direct animal contact.43,44 On the other hand, the emergence of ST398 isolates without known risk factors for acquisition and without a link to livestock has been reported.45 In addition, a human-specific ST398 clone has recently been identified and thus the spread of LA-MRSA from occupational populations to the general population cannot be ruled out.46 Transmission dynamics, especially between humans not directly exposed to animals, remain unclear and might be changing.”

“Enterobacteriaceae that produce ESBLs are an emerging concern in public health. ESBLs inactivate beta-lactam antimicrobials by hydrolysis and therefore cause resistance to various beta-lactam antimicrobials, including penicillins and cephalosporins.54 […] The genes encoding for ESBLs are often located on plasmids which can be transferred between different bacterial species. Also, coexistence with other types of antimicrobial resistance occurs. In humans, infections with ESBL-producing Enterobacteriaceae are associated with increased burden of disease and costs.58 A variety of ESBLs have been identified in bacteria derived from food-producing animals worldwide. The occurrence of different ESBL types depends on the animal species and the geographical area. […] High use of antimicrobials and inappropriate use of cephalosporins in livestock production are considered to be associated with the emergence and high prevalence of ESBL-producers in the animals.59-60 Food-producing animals can serve as a reservoir for ESBL producing Enterobacteriaceae and ESBL genes. […] recent findings suggest that transmission from animals to humans may occur through (in)direct contact with livestock during work. This may thus pose an occupational health risk for farmers and potentially for other humans with regular contact with this working population. […] Compared to MRSA, the dynamics of ESBLs seem more complex. […] The variety of potential ESBL transmission routes makes it complex to determine the role of direct contact with livestock as an occupational risk for ESBL carriage. However, the increasing occurrence of ESBLs in livestock worldwide and the emerging insight into transmission through direct contact suggests that farmers have a higher risk of becoming a carrier of ESBLs. Until now, there have not been sufficient data available to quantify the relevant importance of this route of transmission.”

“Welders die more often from pneumonia than do their social class peers. This much has been revealed by successive analyses of occupational mortality for England and Wales. The pattern can now be traced back more than seven decades. During 1930–32, 285 deaths were observed with 171 expected;3 in 1949–53, 70 deaths versus 31 expected;4 in 1959–63, 101 deaths as compared with 54.9 expected;5 and in 1970–72, 66 deaths with 42.0 expected.6 […] The finding that risks decline after retirement is an argument against confounding by lifestyle variables such as smoking, as is the specificity of effect to lobar rather than bronchopneumonia. […] Analyses of death certificates […] support a case for a hazard that is reversible when exposure stops. […] In line with the mortality data, hospitalized pneumonia [has also] prove[n] to be more common among welders and other workers with exposure to metal fume than in workers from non-exposed jobs. Moreover, risks were confined to exposures in the previous 12 months […] Recently, inhalation experiments have confirmed that welding fume can promote bacterial growth in animals. […] A coherent body of evidence thus indicates that metal fume is a hazard for pneumonia. […] Presently, knowledge is lacking on the exposure-response relationship and what constitutes a ‘safe’ or ‘unsafe’ level or pattern of exposure to metal fume. […]  The pattern of epidemiological evidence […] is generally compatible with a hazard from iron in metal fume. Iron could promote infective risk in at least one of two ways: by acting as a growth nutrient for microorganisms, or as a cause of free radical injury. 
[…] the Joint Committee on Vaccination and Immunisation, on behalf of the Department of Health in England, decided in November 2011 to recommend that ‘welders who have not received the pneumococcal polysaccharide vaccine (PPV23) previously should be offered a single dose of 0.5ml of PPV23 vaccine’ and that ‘employers should ensure that provision is in place for workers to receive PPV23’.”

December 2, 2017 Posted by | Books, Epidemiology, Infectious disease, Medicine | Leave a comment

A few diabetes papers of interest

i. Thirty Years of Research on the Dawn Phenomenon: Lessons to Optimize Blood Glucose Control in Diabetes.

“More than 30 years ago in Diabetes Care, Schmidt et al. (1) defined “dawn phenomenon,” the night-to-morning elevation of blood glucose (BG) before and, to a larger extent, after breakfast in subjects with type 1 diabetes (T1D). Shortly after, a similar observation was made in type 2 diabetes (T2D) (2), and the physiology of glucose homeostasis at night was studied in normal, nondiabetic subjects (3–5). Ever since the first description, the dawn phenomenon has been studied extensively with at least 187 articles published as of today (6). […] what have we learned from the last 30 years of research on the dawn phenomenon? What is the appropriate definition, the identified mechanism(s), the importance (if any), and the treatment of the dawn phenomenon in T1D and T2D?”

“Physiology of glucose homeostasis in normal, nondiabetic subjects indicates that BG and plasma insulin concentrations remain remarkably flat and constant overnight, with a modest, transient increase in insulin secretion just before dawn (3,4) to restrain hepatic glucose production (4) and prevent hyperglycemia. Thus, normal subjects do not exhibit the dawn phenomenon sensu strictiori because they secrete insulin to prevent it.

In T1D, the magnitude of BG elevation at dawn first reported was impressive and largely secondary to the decrease of plasma insulin concentration overnight (1), commonly observed with evening administration of NPH or lente insulins (8) (Fig. 1). Even in early studies with intravenous insulin by the “artificial pancreas” (Biostator) (2), plasma insulin decreased overnight because of progressive inactivation of insulin in the pump (9). This artifact exaggerated the dawn phenomenon, now defined as need for insulin to limit fasting hyperglycemia (2). When the overnight waning of insulin was prevented by continuous subcutaneous insulin infusion (CSII) […] or by the long-acting insulin analogs (LA-IAs) (8), it was possible to quantify the real magnitude of the dawn phenomenon — 15–25 mg/dL BG elevation from nocturnal nadir to before breakfast […]. Nocturnal spikes of growth hormone secretion are the most likely mechanism of the dawn phenomenon in T1D (13,14). The observation from early pioneering studies in T1D (10–12) that insulin sensitivity is higher after midnight until 3 a.m. as compared to the period 4–8 a.m., soon translated into use of more physiological replacement of basal insulin […] to reduce risk of nocturnal hypoglycemia while targeting fasting near-normoglycemia”.

“In T2D, identification of diurnal changes in BG goes back decades, but only quite recently fasting hyperglycemia has been attributed to a transient increase in hepatic glucose production (both glycogenolysis and gluconeogenesis) at dawn in the absence of compensatory insulin secretion (15–17). Monnier et al. (7) report on the overnight (interstitial) glucose concentration (IG), as measured by continuous ambulatory IG monitoring, in three groups of 248 subjects with T2D […] Importantly, the dawn phenomenon had an impact on mean daily IG and A1C (mean increase of 0.39% [4.3 mmol/mol]), which was independent of treatment. […] Two messages from the data of Monnier et al. (7) are important. First, the dawn phenomenon is confirmed as a frequent event across the heterogeneous population of T2D independent of (oral) treatment and studied in everyday life conditions, not only in the setting of specialized clinical research units. Second, the article reaffirms that the primary target of treatment in T2D is to reestablish near-normoglycemia before and after breakfast (i.e., to treat the dawn phenomenon) to lower mean daily BG and A1C (8). […] the dawn phenomenon induces hyperglycemia not only before, but, to a larger extent, after breakfast as well (7,18). Over the years, fasting (and postbreakfast) hyperglycemia in T2D worsens as result of progressively impaired pancreatic β-cell function on the background of continued insulin resistance primarily at dawn (8,15–18) and independently of age (19). Because it is an early metabolic abnormality leading over time to the vicious circle of “hyperglycemia begets hyperglycemia” by glucotoxicity and lipotoxicity, the dawn phenomenon in T2D should be treated early and appropriately before A1C continues to increase (20).”

“Oral medications do not adequately control the dawn phenomenon, even when given in combination (7,18). […] The evening replacement of basal insulin, which abolishes the dawn phenomenon by restraining hepatic glucose production and lipolysis (21), is an effective treatment as it mimics the physiology of glucose homeostasis in normal, nondiabetic subjects (4). Early use of basal insulin in T2D is an add-on treatment option after failure of metformin to control A1C <7.0% (20). However, […] it would be wise to consider initiation of basal insulin […] before — not after — A1C has increased well beyond 7.0%, as is usually done in current practice.”

ii. Peripheral Neuropathy in Adolescents and Young Adults With Type 1 and Type 2 Diabetes From the SEARCH for Diabetes in Youth Follow-up Cohort.

“Diabetic peripheral neuropathy (DPN) is among the most distressing of all the chronic complications of diabetes and is a cause of significant disability and poor quality of life (4). Depending on the patient population and diagnostic criteria, the prevalence of DPN among adults with diabetes ranges from 30 to 70% (5–7). However, there are insufficient data on the prevalence and predictors of DPN among the pediatric population. Furthermore, early detection and good glycemic control have been proven to prevent or delay adverse outcomes associated with DPN (5,8,9). Near-normal control of blood glucose beginning as soon as possible after the onset of diabetes may delay the development of clinically significant nerve impairment (8,9). […] The American Diabetes Association (ADA) recommends screening for DPN in children and adolescents with type 2 diabetes at diagnosis and 5 years after diagnosis for those with type 1 diabetes, followed by annual evaluations thereafter, using simple clinical tests (10). Since subclinical signs of DPN may precede development of frank neuropathic symptoms, systematic, preemptive screening is required in order to identify DPN in its earliest stages.

There are various measures that can be used for the assessment of DPN. The Michigan Neuropathy Screening Instrument (MNSI) is a simple, sensitive, and specific tool for the screening of DPN (11). It was validated in large independent cohorts (12,13) and has been widely used in clinical trials and longitudinal cohort studies […] The aim of this pilot study was to provide preliminary estimates of the prevalence of and factors associated with DPN among children and adolescents with type 1 and type 2 diabetes.”

“A total of 399 youth (329 with type 1 and 70 with type 2 diabetes) participated in the pilot study. Youth with type 1 diabetes were younger (mean age 15.7 ± 4.3 years) and had a shorter duration of diabetes (mean duration 6.2 ± 0.9 years) compared with youth with type 2 diabetes (mean age 21.6 ± 4.1 years and mean duration 7.6 ± 1.8 years). Participants with type 2 diabetes had a higher BMI z score and waist circumference, were more likely to be smokers, and had higher blood pressure and lipid levels than youth with type 1 diabetes (all P < 0.001). A1C, however, did not significantly differ between the two groups (mean A1C 8.8 ± 1.8% [73 ± 2 mmol/mol] for type 1 diabetes and 8.5 ± 2.9% [72 ± 3 mmol/mol] for type 2 diabetes; P = 0.5) but was higher than that recommended by the ADA for this age-group (A1C ≤7.5%) (10). The prevalence of DPN (defined as the MNSIE score >2) was 8.2% among youth with type 1 diabetes and 25.7% among those with type 2 diabetes. […] Youth with DPN were older and had a longer duration of diabetes, greater central obesity (increased waist circumference), higher blood pressure, an atherogenic lipid profile (low HDL cholesterol and marginally high triglycerides), and microalbuminuria. A1C […] was not significantly different between those with and without DPN (9.0% ± 2.0 […] vs. 8.8% ± 2.1 […], P = 0.58). Although nearly 37% of youth with type 2 diabetes came from lower-income families with annual income <25,000 USD (as opposed to 11% for type 1 diabetes), socioeconomic status was not significantly associated with DPN (P = 0.77).”

“In the unadjusted logistic regression model, the odds of having DPN were nearly four times higher among those with type 2 diabetes compared with youth with type 1 diabetes (odds ratio [OR] 3.8 [95% CI 1.9–7.5], P < 0.0001). This association was attenuated, but remained significant, after adjustment for age and sex (OR 2.3 [95% CI 1.1–5.0], P = 0.03). However, this association was no longer significant (OR 2.1 [95% CI 0.3–15.9], P = 0.47) when additional covariates […] were added to the model […] The loss of the association between diabetes type and DPN with addition of covariates in the fully adjusted model could be due to power loss, given the small number of youth with DPN in the sample, or indicative of stronger associations between these covariates and DPN such that conditioning on them eliminates the observed association between DPN and diabetes type.”
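As a rough consistency check (not the authors' code), the unadjusted OR and its Wald 95% CI can be reconstructed from the quoted prevalences. The 2×2 counts below are back-calculated assumptions: 25.7% of 70 ≈ 18 youth with type 2 diabetes and DPN, and 8.2% of 329 ≈ 27 youth with type 1 diabetes and DPN.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = cases/non-cases in the exposed group, c/d in the reference group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Back-calculated (hypothetical) counts: DPN / no DPN, type 2 vs. type 1
print(odds_ratio_ci(18, 70 - 18, 27, 329 - 27))
```

With these counts the function returns OR ≈ 3.9 (CI ≈ 2.0–7.5), matching the reported 3.8 (1.9–7.5) up to rounding of the back-calculated counts.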

“The prevalence of DPN among type 1 diabetes youth in our pilot study is lower than that reported by Eppens et al. (15) among 1,433 Australian adolescents with type 1 diabetes assessed by thermal threshold testing and VPT (prevalence of DPN 27%; median age and duration 15.7 and 6.8 years, respectively). A much higher prevalence was also reported among Danish (62.5%) and Brazilian (46%) cohorts of type 1 diabetes youth (16,17) despite a younger age (mean age among Danish children 13.7 years and Brazilian cohort 12.9 years). The prevalence of DPN among youth with type 2 diabetes (26%) found in our study is comparable to that reported among the Australian cohort (21%) (15). The wide ranges in the prevalence estimates of DPN among the young cannot solely be attributed to the inherent racial/ethnic differences in this population but could potentially be due to the differing criteria and diagnostic tests used to define and characterize DPN.”

“In our study, the duration of diabetes was significantly longer among those with DPN, but A1C values did not differ significantly between the two groups, suggesting that a longer duration with its sustained impact on peripheral nerves is an important determinant of DPN. […] Cho et al. (22) reported an increase in the prevalence of DPN from 14 to 28% over 17 years among 819 Australian adolescents with type 1 diabetes aged 11–17 years at baseline, despite improvements in care and minor improvements in A1C (8.2–8.7%). The prospective Danish Study Group of Diabetes in Childhood also found no association between DPN (assessed by VPT) and glycemic control (23).”

“In conclusion, our pilot study found evidence that the prevalence of DPN in adolescents with type 2 diabetes approaches rates reported in adults with diabetes. Several CVD risk factors such as central obesity, elevated blood pressure, dyslipidemia, and microalbuminuria, previously identified as predictors of DPN among adults with diabetes, emerged as independent predictors of DPN in this young cohort and likely accounted for the increased prevalence of DPN in youth with type 2 diabetes.”

iii. Disturbed Eating Behavior and Omission of Insulin in Adolescents Receiving Intensified Insulin Treatment.

“Type 1 diabetes appears to be a risk factor for the development of disturbed eating behavior (DEB) (1,2). Estimates of the prevalence of DEB among individuals with type 1 diabetes range from 10 to 49% (3,4), depending on methodological issues such as the definition and measurement of DEB. Some studies only report the prevalence of full-threshold diagnoses of anorexia nervosa, bulimia nervosa, and eating disorders not otherwise specified, whereas others also include subclinical eating disorders (1). […] Although different terminology complicates the interpretation of prevalence rates across studies, the findings are sufficiently robust to indicate that there is a higher prevalence of DEB in type 1 diabetes compared with healthy controls. A meta-analysis reported a three-fold increase of bulimia nervosa, a two-fold increase of eating disorders not otherwise specified, and a two-fold increase of subclinical eating disorders in patients with type 1 diabetes compared with controls (2). No elevated rates of anorexia nervosa were found.”

“When DEB and type 1 diabetes co-occur, rates of morbidity and mortality are dramatically increased. A Danish study of comorbid type 1 diabetes and anorexia nervosa showed that the crude mortality rate at 10-year follow-up was 2.5% for type 1 diabetes and 6.5% for anorexia nervosa, but the rate increased to 34.8% when occurring together (the standardized mortality rates were 4.06, 8.86, and 14.5, respectively) (9). The presence of DEB in general also can severely impair metabolic control and advance the onset of long-term diabetes complications (4). Insulin reduction or omission is an efficient weight loss strategy uniquely available to patients with type 1 diabetes and has been reported in up to 37% of patients (1012). Insulin restriction is associated with poorer metabolic control, and previous research has found that self-reported insulin restriction at baseline leads to a three-fold increased risk of mortality at 11-year follow-up (10).

Few population-based studies have specifically investigated the prevalence of and relationship between DEBs and insulin restriction. The generalizability of existing research remains limited by relatively small samples and a lack of males. Further, many studies have relied on generic measures of DEBs, which may not be appropriate for use in individuals with type 1 diabetes. The Diabetes Eating Problem Survey–Revised (DEPS-R) is a newly developed and diabetes-specific screening tool for DEBs. A recent study demonstrated satisfactory psychometric properties of the Norwegian version of the DEPS-R among children and adolescents with type 1 diabetes 11–19 years of age (13). […] This study of young patients with type 1 diabetes aimed to assess the prevalence of DEBs and the frequency of insulin omission or restriction, to compare the prevalence of DEB between males and females across different categories of weight and age, and to compare the clinical features of participants with and without DEBs and participants who restrict and do not restrict insulin. […] The final sample consisted of 770 […] children and adolescents with type 1 diabetes 11–19 years of age. There were 380 (49.4%) males and 390 (50.6%) females.”

“27.7% of female and 9% of male children and adolescents with type 1 diabetes receiving intensified insulin treatment scored above the predetermined cutoff on the DEPS-R, suggesting a level of disturbed eating that warrants further attention by treatment providers. […] Significant differences emerged across age and weight categories, and notable sex-specific trends were observed. […] For the youngest (11–13 years) and underweight (BMI <18.5) categories, the proportion of DEB was <10% for both sexes […]. Among females, the prevalence of DEB increased dramatically with age to ∼33% among 14 to 16 year olds and to nearly 50% among 17 to 19 year olds. Among males, the rate remained low at 7% for 14 to 16 year olds and doubled to ∼15% for 17 to 19 year olds.

A similar sex-specific pattern was detected across weight categories. Among females, the prevalence of DEB increased steadily and significantly from 9% among the underweight category to 23% for normal weight, 42% for overweight, and 53% for the obese categories, respectively. Among males, ∼6–7% of both the underweight and normal weight groups reported DEB, with rates increasing to ∼15% for both the overweight and obese groups. […] When separated by sex, females scoring above the cutoff on the DEPS-R had significantly higher HbA1c (9.2% [SD, 1.9]) than females scoring below the cutoff (8.4% [SD, 1.3]; P < 0.001). The same trend was observed among males (9.2% [SD, 1.6] vs. 8.4% [SD, 1.3]; P < 0.01). […] A total of 31.6% of the participants reported using less insulin and 6.9% reported skipping their insulin dose entirely at least occasionally after overeating. When assessing the sexes separately, we found that 36.8% of females reported restricting and 26.2% reported skipping insulin because of overeating. The rates for males were 9.4 and 4.5%, respectively.”

“The finding that DEBs are common in young patients with type 1 diabetes is in line with previous literature (2). However, because of different assessment methods and different definitions of DEB, direct comparison with other studies is complicated, especially because this is the first study to have used the DEPS-R in a prevalence study. However, two studies using the original DEPS have reported similar results, with 37.9% (23) and 53.8% (24) of the participants reporting engaging in unhealthy weight control practices. In our study, females scored significantly higher than males, which is not surprising given previous studies demonstrating an increased risk of development of DEB in nondiabetic females compared with males. In addition, the prevalence rates increased considerably by increasing age and weight. A relationship between eating pathology and older age and higher BMI also has been demonstrated in previous research conducted in both diabetic and nondiabetic adolescent populations.”

“Consistent with existing literature (1012,27), we found a high frequency of insulin restriction. For example, Bryden et al. (11) assessed 113 males and females (aged 17–25 years) with type 1 diabetes and found that a total of 37% of the females (no males) reported a history of insulin omission or reduction for weight control purposes. Peveler et al. (12) investigated 87 females with type 1 diabetes aged 11–25 years, and 36% reported intentionally reducing or omitting their insulin doses to control their weight. Finally, Goebel-Fabbri et al. (10) examined 234 females 13–60 years of age and found that 30% reported insulin restriction. Similarly, 36.8% of the participants in our study reported reducing their insulin doses occasionally or more often after overeating.”

iv. Clinical Inertia in People With Type 2 Diabetes. A retrospective cohort study of more than 80,000 people.

“Despite good-quality evidence supporting tight glycemic control, particularly early in the disease trajectory (3), people with type 2 diabetes often do not reach recommended glycemic targets. Baseline characteristics in observational studies indicate that both insulin-experienced and insulin-naïve people may have mean HbA1c above the recommended target levels, reflecting the existence of patients with poor glycemic control in routine clinical care (8–10). […] U.K. data, based on an analysis reflecting previous NICE guidelines, show that it takes a mean of 7.7 years to initiate insulin after the start of the last OAD [oral antidiabetes drugs] (in people taking two or more OADs) and that mean HbA1c is ~10% (86 mmol/mol) at the time of insulin initiation (12). […] This failure to intensify treatment in a timely manner has been termed clinical inertia; however, data are lacking on clinical inertia in the diabetes-management pathway in a real-world primary care setting, and studies that have been carried out are, relatively speaking, small in scale (13,14). This retrospective cohort analysis investigates time to intensification of treatment in people with type 2 diabetes treated with OADs and the associated levels of glycemic control, and compares these findings with recommended treatment guidelines for diabetes.”

“We used the Clinical Practice Research Datalink (CPRD) database. This is the world’s largest computerized database of longitudinal primary care records, representing >13 million patients from across the U.K. The CPRD is representative of the U.K. general population, with age and sex distributions comparable with those reported by the U.K. National Population Census (15). All information collected in the CPRD has been subjected to validation studies and been proven to contain consistent and high-quality data (16).”

“50,476 people taking one OAD, 25,600 people taking two OADs, and 5,677 people taking three OADs were analyzed. Mean baseline HbA1c (the most recent measurement within 6 months before starting OADs) was 8.4% (68 mmol/mol), 8.8% (73 mmol/mol), and 9.0% (75 mmol/mol) in people taking one, two, or three OADs, respectively. […] In people with HbA1c ≥7.0% (≥53 mmol/mol) taking one OAD, median time to intensification with an additional OAD was 2.9 years, whereas median time to intensification with insulin was >7.2 years. Median time to insulin intensification in people with HbA1c ≥7.0% (≥53 mmol/mol) taking two or three OADs was >7.2 and >7.1 years, respectively. In people with HbA1c ≥7.5% or ≥8.0% (≥58 or ≥64 mmol/mol) taking one OAD, median time to intensification with an additional OAD was 1.9 or 1.6 years, respectively; median time to intensification with insulin was >7.1 or >6.9 years, respectively. In those people with HbA1c ≥7.5% or ≥8.0% (≥58 or ≥64 mmol/mol) and taking two OADs, median time to insulin was >7.2 and >6.9 years, respectively; and in those people taking three OADs, median time to insulin intensification was >6.1 and >6.0 years, respectively.”
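The “>7.2 years” medians illustrate a general point about time-to-event data: when fewer than half of a cohort has intensified by the end of follow-up, the median is not reached and can only be bounded from below. A minimal sketch, assuming (unlike the real analysis, which would use survival methods with per-patient censoring) that every patient is observed for the full follow-up period:

```python
def median_time_to_intensification(event_times, n_total, max_followup):
    """Median years to treatment intensification in a cohort of n_total
    patients, all (simplifyingly) followed for max_followup years.
    event_times: years at which the patients who did intensify were intensified.
    Returns the median, or None when fewer than half the cohort intensified --
    the median then exceeds follow-up and is reported as '>max_followup years'."""
    if 2 * len(event_times) < n_total:
        return None                      # median not reached within follow-up
    return sorted(event_times)[(n_total - 1) // 2]

# Toy cohort of 10 patients, 7.2 years of follow-up, only 3 intensified:
print(median_time_to_intensification([1.9, 3.0, 5.5], 10, 7.2))  # None -> '>7.2 years'
```

This is why the insulin medians in the quoted study can only be stated as exceeding the follow-up window, while the additional-OAD medians (reached by more than half the cohort) are exact values.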

“By end of follow-up, treatment of 17.5% of people with HbA1c ≥7.0% (≥53 mmol/mol) taking three OADs was intensified with insulin, treatment of 20.6% of people with HbA1c ≥7.5% (≥58 mmol/mol) taking three OADs was intensified with insulin, and treatment of 22.0% of people with HbA1c ≥8.0% (≥64 mmol/mol) taking three OADs was intensified with insulin. There were minimal differences in the proportion of patients intensified between the groups. […] In people taking one OAD, the probability of an additional OAD or initiation of insulin was 23.9% after 1 year, increasing to 48.7% by end of follow-up; in people taking two OADs, the probability of an additional OAD or initiation of insulin was 11.4% after 1 year, increasing to 30.1% after 2 years; and in people taking three OADs, the probability of an additional OAD or initiation of insulin was 5.7% after 1 year, increasing to 12.0% by the end of follow-up […] Mean ± SD HbA1c in patients taking one OAD was 8.7 ± 1.6% in those intensified with an additional OAD (n = 14,605), 9.4 ± 2.3% (n = 1,228) in those intensified with insulin, and 8.7 ± 1.7% (n = 15,833) in those intensified with additional OAD or insulin. Mean HbA1c in patients taking two OADs was 8.8 ± 1.5% (n = 3,744), 9.8 ± 1.9% (n = 1,631), and 9.1 ± 1.7% (n = 5,405), respectively. In patients taking three OADs, mean HbA1c at intensification with insulin was 9.7 ± 1.6% (n = 514).”

“This analysis shows that there is a delay in intensifying treatment in people with type 2 diabetes with suboptimal glycemic control, with patients remaining in poor glycemic control for >7 years before intensification of treatment with insulin. In patients taking one, two, or three OADs, median time from initiation of treatment to intensification with insulin for any patient exceeded the maximum follow-up time of 7.2–7.3 years, dependent on subcohort. […] Despite having HbA1c levels for which diabetes guidelines recommend treatment intensification, few people appeared to undergo intensification (4,6,7). The highest proportion of people with clinical inertia was for insulin initiation in people taking three OADs. Consequently, these people experienced prolonged periods in poor glycemic control, which is detrimental to long-term outcomes.”

“Previous studies in U.K. general practice have shown similar findings. A retrospective study involving 14,824 people with type 2 diabetes from 154 general practice centers contributing to the Doctors Independent Network Database (DIN-LINK) between 1995 and 2005 observed that median time to insulin initiation for people prescribed multiple OADs was 7.7 years (95% CI 7.4–8.5 years); mean HbA1c before insulin was 9.85% (84 mmol/mol), which decreased by 1.34% (95% CI 1.24–1.44%) after therapy (12). A longitudinal observational study from health maintenance organization data in 3,891 patients with type 2 diabetes in the U.S. observed that, despite continued HbA1c levels >7% (>53 mmol/mol), people treated with sulfonylurea and metformin did not start insulin for almost 3 years (21). Another retrospective cohort study, using data from the Health Improvement Network database of 2,501 people with type 2 diabetes, estimated that only 25% of people started insulin within 1.8 years of multiple OAD failure, if followed for 5 years, and that 50% of people delayed starting insulin for almost 5 years after failure of glycemic control with multiple OADs (22). The U.K. cohort of a recent, 26-week observational study examining insulin initiation in clinical practice reported a large proportion of insulin-naïve people with HbA1c >9% (>75 mmol/mol) at baseline (64%); the mean HbA1c in the global cohort was 8.9% (74 mmol/mol) (10). Consequently, our analysis supports previous findings concerning clinical inertia in both U.K. and U.S. general practice and reflects little improvement in recent years, despite updated treatment guidelines recommending tight glycemic control.”

v. Small- and Large-Fiber Neuropathy After 40 Years of Type 1 Diabetes. Associations with glycemic control and advanced protein glycation: the Oslo Study.

“How hyperglycemia may cause damage to the nervous system is not fully understood. One consequence of hyperglycemia is the generation of advanced glycation end products (AGEs) that can form nonenzymatically between glucose, lipids, and amino groups. It is believed that AGEs are involved in the pathophysiology of neuropathy. AGEs tend to affect cellular function by altering protein function (11). One of the AGEs, N-ε-(carboxymethyl)lysine (CML), has been found in excessive amounts in the human diabetic peripheral nerve (12). High levels of methylglyoxal in serum have been found to be associated with painful peripheral neuropathy (13). In recent years, differentiation of affected nerves is possible by virtue of specific function tests to distinguish which fibers are damaged in diabetic polyneuropathy: large myelinated (Aα, Aβ), small thinly myelinated (Aδ), or small nonmyelinated (C) fibers. […] Our aims were to evaluate large- and small-nerve fiber function in long-term type 1 diabetes and to search for longitudinal associations with HbA1c and the AGEs CML and methylglyoxal-derived hydroimidazolone.”

“27 persons with type 1 diabetes of 40 ± 3 years duration underwent large-nerve fiber examinations, with nerve conduction studies at baseline and years 8, 17, and 27. Small-fiber functions were assessed by quantitative sensory thresholds (QST) and intraepidermal nerve fiber density (IENFD) at year 27. HbA1c was measured prospectively through 27 years. […] Fourteen patients (52%) reported sensory symptoms. Nine patients reported symptoms of a sensory neuropathy (reduced sensibility in feet or impaired balance), while three of these patients described pain. Five patients had symptoms compatible with carpal tunnel syndrome (pain or paresthesias within the innervation territory of the median nerve […]. An additional two had no symptoms but abnormal neurological tests with absent tendon reflexes and reduced sensibility. A total of 16 (59%) of the patients had symptoms or signs of neuropathy. […] No patient with symptoms of neuropathy had normal neurophysiological findings. […] Abnormal autonomic testing was observed in 7 (26%) of the patients and occurred together with neurophysiological signs of peripheral neuropathy. […] Twenty-two (81%) had small-fiber dysfunction by QST. Heat pain thresholds in the foot were associated with hydroimidazolone and HbA1c. IENFD was abnormal in 19 (70%) and significantly lower in diabetic patients than in age-matched control subjects (4.3 ± 2.3 vs. 11.2 ± 3.5 fibers/mm, P < 0.001). IENFD correlated negatively with HbA1c over 27 years (r = −0.4, P = 0.04) and CML (r = −0.5, P = 0.01). After adjustment for age, height, and BMI in a multiple linear regression model, CML was still independently associated with IENFD.”

“Our study shows that small-fiber dysfunction is more prevalent than large-fiber dysfunction in diabetic neuropathy after long duration of type 1 diabetes. Although large-fiber abnormalities were less common than small-fiber abnormalities, almost 60% of the participants had their large nerves affected after 40 years with diabetes. Long-term blood glucose estimated by HbA1c measured prospectively through 27 years and AGEs predict large- and small-nerve fiber function.”

vi. Subarachnoid Hemorrhage in Type 1 Diabetes. A prospective cohort study of 4,083 patients with diabetes.

“Subarachnoid hemorrhage (SAH) is a life-threatening cerebrovascular event, which is usually caused by a rupture of a cerebrovascular aneurysm. These aneurysms are mostly found in relatively large-caliber (≥1 mm) vessels and can often be considered as macrovascular lesions. The overall incidence of SAH has been reported to be 10.3 per 100,000 person-years (1), even though the variation in incidence between countries is substantial (1). Notably, the population-based incidence of SAH is 35 per 100,000 person-years in the adult (≥25 years of age) Finnish population (2). The incidence of nonaneurysmal SAH is globally unknown, but it is commonly believed that 5–15% of all SAHs are of nonaneurysmal origin. Prospective, long-term, population-based SAH risk factor studies suggest that smoking (2–4), high blood pressure (2–4), age (2,3), and female sex (2,4) are the most important risk factors for SAH, whereas diabetes (both types 1 and 2) does not appear to be associated with an increased risk of SAH (2,3).

An increased risk of cardiovascular disease is well recognized in people with diabetes. There are, however, very few studies on the risk of cerebrovascular disease in type 1 diabetes since most studies have focused on type 2 diabetes alone or together with type 1 diabetes. Cerebrovascular mortality in the 20–39-year age-group of people with type 1 diabetes is increased five- to sevenfold in comparison with the general population but accounts only for 15% of all cardiovascular deaths (5). Of the cerebrovascular deaths in patients with type 1 diabetes, 23% are due to hemorrhagic strokes (5). However, the incidence of SAH in type 1 diabetes is unknown. […] In this prospective cohort study of 4,083 patients with type 1 diabetes, we aimed to determine the incidence and characteristics of SAH.”

“52% [of participants] were men, the mean age was 37.4 ± 11.8 years, and the duration of diabetes was 21.6 ± 12.1 years at enrollment. The FinnDiane Study is a nationwide multicenter cohort study of genetic, clinical, and environmental risk factors for microvascular and macrovascular complications in type 1 diabetes. […] all type 1 diabetic patients in the FinnDiane database with follow-up data and without a history of stroke at baseline were included. […] Fifteen patients were confirmed to have an SAH, and thus the crude incidence of SAH was 40.9 (95% CI 22.9–67.4) per 100,000 person-years. Ten out of these 15 SAHs were nonaneurysmal SAHs […] The crude incidence of nonaneurysmal SAH was 27.3 (13.1–50.1) per 100,000 person-years. None of the 10 nonaneurysmal SAHs were fatal. […] Only 3 out of 10 patients did not have verified diabetic microvascular or macrovascular complications prior to the nonaneurysmal SAH event. […] Four patients with type 1 diabetes had a fatal SAH, and all these patients died within 24 h after SAH.”
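The quoted rate and CI follow the standard recipe for crude incidence: events divided by person-years, with an exact (Garwood) Poisson interval for the event count. A sketch in Python; the person-years denominator is back-calculated from the reported 40.9 per 100,000 and is therefore an assumption:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), summing terms iteratively."""
    term = total = math.exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def poisson_exact_ci(x, alpha=0.05):
    """Exact (Garwood) confidence interval for a Poisson count x, by bisection."""
    def solve(f, lo, hi):                 # f is decreasing in lam; find its root
        for _ in range(200):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
        return (lo + hi) / 2
    hi_bound = 4.0 * x + 10
    lower = 0.0 if x == 0 else solve(
        lambda lam: alpha / 2 - (1 - poisson_cdf(x - 1, lam)), 0.0, hi_bound)
    upper = solve(lambda lam: poisson_cdf(x, lam) - alpha / 2, 0.0, hi_bound)
    return lower, upper

events = 15
person_years = events / 40.9 * 1e5        # ~36,700: back-calculated assumption
lo, hi = poisson_exact_ci(events)
scale = 1e5 / person_years
print(f"{events * scale:.1f} ({lo * scale:.1f}-{hi * scale:.1f}) per 100,000 person-years")
```

Running this yields approximately 40.9 (22.9–67.5) per 100,000 person-years, matching the reported 40.9 (22.9–67.4) up to rounding of the back-calculated denominator.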

“The presented study results suggest that the incidence of nonaneurysmal SAH is high among patients with type 1 diabetes. […] It is of note that smoking type 1 diabetic patients had a significantly increased risk of nonaneurysmal and all-cause SAHs. Smoking also increases the risk of microvascular complications in insulin-treated diabetic patients, and these patients more often have retinal and renal microangiopathy than never-smokers (8). […] Given the high incidence of nonaneurysmal SAH in patients with type 1 diabetes and microvascular changes (i.e., diabetic retinopathy and nephropathy), the results support the hypothesis that nonaneurysmal SAH is a microvascular rather than macrovascular subtype of stroke.”

“Only one patient with type 1 diabetes had a confirmed aneurysmal SAH. Four other patients died suddenly due to an SAH. If these four patients with type 1 diabetes and a fatal SAH had an aneurysmal SAH, which, taking into account the autopsy reports and imaging findings, is very likely, aneurysmal SAH may be an exceptionally deadly event in type 1 diabetes. Population-based evidence suggests that up to 45% of people die during the first 30 days after SAH, and 18% die at emergency rooms or outside hospitals (9). […] Contrary to aneurysmal SAH, nonaneurysmal SAH is virtually always a nonfatal event (10–14). This also supports the view that nonaneurysmal SAH is a disease of small intracranial vessels, i.e., a microvascular disease. Diabetic retinopathy, a chronic microvascular complication, has been associated with an increased risk of stroke in patients with diabetes (15,16). Embryonically, the retina is an outgrowth of the brain and is similar in its microvascular properties to the brain (17). Thus, it has been suggested that assessments of the retinal vasculature could be used to determine the risk of cerebrovascular diseases, such as stroke […] Most interestingly, the incidence of nonaneurysmal SAH was at least two times higher than the incidence of aneurysmal SAH in type 1 diabetic patients. In comparison, the incidence of nonaneurysmal SAH is >10 times lower than the incidence of aneurysmal SAH in the general adult population (21).”

vii. HbA1c and the Risks for All-Cause and Cardiovascular Mortality in the General Japanese Population.

Keep in mind when looking at these data that this is type 2 data. Type 1 diabetes is very rare in Japan and the rest of East Asia.

“The risk for cardiovascular death was evaluated in a large cohort of participants selected randomly from the overall Japanese population. A total of 7,120 participants (2,962 men and 4,158 women; mean age 52.3 years) free of previous CVD were followed for 15 years. Adjusted hazard ratios (HRs) and 95% CIs among categories of HbA1c (<5.0%, 5.0–5.4%, 5.5–5.9%, 6.0–6.4%, and ≥6.5%) for participants without treatment for diabetes and HRs for participants with diabetes were calculated using a Cox proportional hazards model.

RESULTS During the study, there were 1,104 deaths, including 304 from CVD, 61 from coronary heart disease, and 127 from stroke (78 from cerebral infarction, 25 from cerebral hemorrhage, and 24 from unclassified stroke). Relations of HbA1c with all-cause mortality and CVD death were graded and continuous, and multivariate-adjusted HRs for CVD death in participants with HbA1c 6.0–6.4% and ≥6.5% were 2.18 (95% CI 1.22–3.87) and 2.75 (1.43–5.28), respectively, compared with participants with HbA1c <5.0%. Similar associations were observed between HbA1c and death from coronary heart disease and death from cerebral infarction.

CONCLUSIONS High HbA1c levels were associated with increased risk for all-cause mortality and death from CVD, coronary heart disease, and cerebral infarction in general East Asian populations, as in Western populations.”

November 15, 2017 Posted by | Cardiology, Diabetes, Epidemiology, Medicine, Neurology, Pharmacology, Studies | Leave a comment