Econstudentlog

A few diabetes papers of interest

i. Identical and Nonidentical Twins: Risk and Factors Involved in Development of Islet Autoimmunity and Type 1 Diabetes.

Some observations from the paper:

“Type 1 diabetes is preceded by the presence of preclinical, persistent islet autoantibodies (1). Autoantibodies against insulin (IAA) (2), GAD (GADA), insulinoma-associated antigen 2 (IA-2A) (3), and/or zinc transporter 8 (ZnT8A) (4) are typically present prior to development of symptomatic hyperglycemia and progression to clinical disease. These autoantibodies may develop many years before onset of type 1 diabetes, and increasing autoantibody number and titers have been associated with increased risk of progression to disease (5–7).

Identical twins have an increased risk of progression of islet autoimmunity and type 1 diabetes after one twin is diagnosed, although reported rates have been highly variable (30–70%) (8–11). This risk is increased if the proband twin develops diabetes at a young age (12). Concordance rates for type 1 diabetes in monozygotic twins with long-term follow-up are >50% (13). Risk for development of islet autoimmunity and type 1 diabetes for nonidentical twins is thought to be similar to non-twin siblings (risk of 6–10% for diabetes) (14). Full siblings who inherit both high-risk HLA (HLA DQA1*05:01 DR3/4*0302) haplotypes identical to their proband sibling with type 1 diabetes have a much higher risk for development of diabetes than those who share only one or zero haplotypes (55% vs. 5% by 12 years of age, respectively; P = 0.03) (15). Despite sharing both HLA haplotypes with their proband, siblings without the HLA DQA1*05:01 DR3/4*0302 genotype had only a 25% risk for type 1 diabetes by 12 years of age (15).”

“The TrialNet Pathway to Prevention Study (previously the TrialNet Natural History Study; 16) has been screening relatives of patients with type 1 diabetes since 2004 and follows these subjects with serial autoantibody testing for the development of islet autoantibodies and type 1 diabetes. The study offers longitudinal monitoring for autoantibody-positive subjects through HbA1c testing and oral glucose tolerance tests (OGTTs).”

“The purpose of this study was to evaluate the prevalence of islet autoantibodies and analyze a logistic regression model to test the effects of genetic factors and common twin environment on the presence or absence of islet autoantibodies in identical twins, nonidentical twins, and full siblings screened in the TrialNet Pathway to Prevention Study. In addition, this study analyzed the presence of islet autoantibodies (GADA, IA-2A, and IAA) and risk of type 1 diabetes over time in identical twins, nonidentical twins, and full siblings followed in the TrialNet Pathway to Prevention Study. […] A total of 48,051 sibling subjects were initially screened (288 identical twins, 630 nonidentical twins, and 47,133 full siblings). Of these, 48,026 had an initial screening visit with GADA, IA-2A, and IAA results (287 identical twins, 630 nonidentical twins, and 47,109 full siblings). A total of 17,226 participants (157 identical twins, 283 nonidentical twins, and 16,786 full siblings) were followed for a median of 2.1 years (25th percentile 1.1 year and 75th percentile 4.0 years), with follow-up defined as ≥12 months after the initial screening visit.”

“At the initial screening visit, GADA was present in 20.2% of identical twins (58 out of 287), 5.6% of nonidentical twins (35 out of 630), and 4.7% of full siblings (2,205 out of 47,109) (P < 0.0001). Additionally, IA-2A was present primarily in identical twins (9.4%; 27 out of 287) and less so in nonidentical twins (3.3%; 21 out of 630) and full siblings (2.2%; 1,042 out of 47,109) (P = 0.0001). Nearly 12% of identical twins (34 out of 287) were positive for IAA at initial screen, whereas 4.6% of nonidentical twins (29 out of 630) and 2.5% of full siblings (1,152 out of 47,109) were initially IAA positive (P < 0.0001).”

“At 3 years of follow-up, the risk for development of GADA was 16% for identical twins, 5% for nonidentical twins, and 4% for full siblings (P < 0.0001) (Fig. 1A). The risk for development of IA-2A by 3 years of follow-up was 7% for identical twins, 4% for nonidentical twins, and 2% for full siblings (P = 0.0005) (Fig. 1B). At 3 years of follow-up, the risk of development of IAA was 10% for identical twins, 5% for nonidentical twins, and 4% for full siblings (P = 0.006) […] In initially autoantibody-negative subjects, 1.5% of identical twins, 0% of nonidentical twins, and 0.5% of full siblings progressed to diabetes at 3 years of follow-up (P = 0.18) […] For initially single autoantibody–positive subjects, at 3 years of follow-up, 69% of identical twins, 13% of nonidentical twins, and 12% of full siblings developed type 1 diabetes (P < 0.0001) […] Subjects who were positive for multiple autoantibodies at screening had a higher risk of developing type 1 diabetes at 3 years of follow-up with 69% of identical twins, 72% of nonidentical twins, and 47% of full siblings developing type 1 diabetes (P = 0.079)”

“Because TrialNet is not a birth cohort and the median age at screening visit was 11 years overall, this study would not capture subjects who had initial seroconversion at a young age and then progressed through the intermediate stage of multiple antibody positivity before developing diabetes.”

“This study of >48,000 siblings of patients with type 1 diabetes shows that at initial screening, identical twins were more likely to have at least one positive autoantibody and be positive for GADA, IA-2A, and IAA than either nonidentical twins or full siblings. […] risk for development of type 1 diabetes at 3 years of follow-up was high for both single and multiple autoantibody–positive identical twins (62–69%) and multiple autoantibody–positive nonidentical twins (72%) compared with 47% for initially multiple autoantibody–positive full siblings and 12–13% for initially single autoantibody–positive nonidentical twins and full siblings. To our knowledge, this is the largest prediagnosis study to evaluate the effects of genetic factors and common twin environment on the presence or absence of islet autoantibodies.

In this study, younger age, male sex, and genetic factors were significantly associated with expression of IA-2A, IAA, more than one autoantibody, and more than two autoantibodies, whereas only genetic factors were significant for GADA. An influence of common twin environment (E) was not seen. […] Previous studies have shown that identical twin siblings of patients with type 1 diabetes have a higher concordance rate for development of type 1 diabetes compared with nonidentical twins, although reported rates for identical twins have been highly variable (30–70%) […]. Studies from various countries (Australia, Denmark, Finland, Great Britain, and U.S.) have reported concordance rates for nonidentical twins ∼5–15% […]. Concordance rates have been higher when the proband was diagnosed at a younger age (8), which may explain the variability in these reported rates. In this study, autoantibody-negative nonidentical and identical twins had a low risk of type 1 diabetes by 3 years of follow-up. In contrast, once twins developed autoantibodies, risk for type 1 diabetes was high for multiple autoantibody nonidentical twins and both single and multiple autoantibody identical twins.”
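
As an aside: for readers curious what the kind of model described above might look like in practice, here is a minimal sketch in Python of a logistic regression of autoantibody positivity on relationship type plus age and sex. To be clear, this is not the authors’ code; the data are simulated and the coefficients are made up purely for illustration.

```python
# Minimal sketch (not the authors' code): logistic regression of autoantibody
# positivity on sibling type, age and sex. Data and coefficients are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
sib_type = rng.choice(["identical", "nonidentical", "full"], size=n, p=[0.05, 0.10, 0.85])
age = rng.uniform(1, 20, size=n)   # age at screening, years
male = rng.integers(0, 2, size=n)

# Hypothetical true model: higher baseline odds of GADA positivity for identical twins.
logit_p = (-3.0
           + 1.6 * (sib_type == "identical")
           + 0.2 * (sib_type == "nonidentical")
           - 0.02 * age
           + 0.1 * male)
gada_positive = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame(dict(gada_positive=gada_positive, sib_type=sib_type, age=age, male=male))

# Full siblings as the reference category, mirroring the comparison in the paper.
fit = smf.logit("gada_positive ~ C(sib_type, Treatment(reference='full')) + age + male",
                data=df).fit(disp=0)
print(np.exp(fit.params))  # exponentiated coefficients; sib_type terms are odds ratios vs. full siblings
```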

ii. A Type 1 Diabetes Genetic Risk Score Can Identify Patients With GAD65 Autoantibody–Positive Type 2 Diabetes Who Rapidly Progress to Insulin Therapy.

This is another paper from the February edition of Diabetes Care – multiple other papers on related topics were also included in that edition, so if you’re interested in the genetics of diabetes it may be worth checking out.

Some observations from the paper:

“Type 2 diabetes is a progressive disease due to a gradual reduction in the capacity of the pancreatic islet cells (β-cells) to produce insulin (1). The clinical course of this progression is highly variable, with some patients progressing very rapidly to requiring insulin treatment, whereas others can be successfully treated with lifestyle changes or oral agents for many years (1,2). Being able to identify patients likely to rapidly progress may have clinical utility in prioritizing monitoring and treatment escalation and in choice of therapy.

It has previously been shown that many patients with clinical features of type 2 diabetes have positive GAD65 autoantibodies (GADA) and that the presence of this autoantibody is associated with faster progression to insulin (3,4). This is often termed latent autoimmune diabetes in adults (LADA) (5,6). However, the predictive value of GADA testing is limited in a population with clinical type 2 diabetes, with many GADA-positive patients not requiring insulin treatment for many years (4,7). Previous research has suggested that genetic variants in the HLA region associated with type 1 diabetes are associated with more rapid progression to insulin in patients with clinically defined type 2 diabetes and positive GADA (8).

We have recently developed a type 1 diabetes genetic risk score (T1D GRS), which provides an inexpensive ($70 in our local clinical laboratory and <$20 where DNA has been previously extracted), integrated assessment of a person’s genetic susceptibility to type 1 diabetes (9). The score is composed of 30 type 1 diabetes risk variants weighted for effect size and aids discrimination of type 1 diabetes from type 2 diabetes. […] We aimed to determine if the T1D GRS could predict rapid progression to insulin (within 5 years of diagnosis) over and above GADA testing in patients with a clinical diagnosis of type 2 diabetes treated without insulin at diagnosis.”
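
It may be worth spelling out the basic idea behind a score like this: a genetic risk score is essentially a weighted sum of risk-allele counts, with the weights reflecting each variant’s effect size (e.g. its log odds ratio). The sketch below is purely illustrative; the variant names, weights and dosages are made up, and this is not the published 30-variant T1D GRS.

```python
# Toy illustration of a weighted genetic risk score. Variant names, weights
# and dosages are hypothetical; this is not the published T1D GRS.
weights = {        # hypothetical log(OR) per risk allele
    "variant_1": 1.3,
    "variant_2": 1.1,
    "variant_3": 0.8,
    "variant_4": 0.6,
}
dosages = {        # risk-allele counts (0, 1 or 2) for one individual
    "variant_1": 1,
    "variant_2": 0,
    "variant_3": 2,
    "variant_4": 1,
}

grs = sum(weights[v] * dosages[v] for v in weights)
print(f"Toy genetic risk score: {grs:.2f}")  # higher score = greater genetic susceptibility
```

Individuals can then be ranked by their score or split into high- and low-score groups, which is essentially what the high/low T1D GRS comparisons below rely on.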

“We examined the relationship between GADA, T1D GRS, and progression to insulin therapy using survival analysis in 8,608 participants with clinical type 2 diabetes initially treated without insulin therapy. […] In this large study of participants with a clinical diagnosis of type 2 diabetes, we have found that type 1 genetic susceptibility alters the clinical implications of a positive GADA when predicting rapid time to insulin. GADA-positive participants with high T1D GRS were more likely to require insulin within 5 years of diagnosis, with 48% progressing to insulin in this time in contrast to only 18% in participants with low T1D GRS. The T1D GRS was independent of and additive to participant’s age of diagnosis and BMI. However, T1D GRS was not associated with rapid insulin requirement in participants who were GADA negative.”

“Our findings have clear implications for clinical practice. The T1D GRS represents a novel clinical test that can be used to enhance the prognostic value of GADA testing. For predicting future insulin requirement in patients with apparent type 2 diabetes who are GADA positive, T1D GRS may be clinically useful and can be used as an additional test in the screening process. However, in patients with type 2 diabetes who are GADA negative, there is no benefit gained from genetic testing. This is unsurprising, as the prevalence of underlying autoimmunity in patients with a clinical phenotype of type 2 diabetes who are GADA negative is likely to be extremely low; therefore, most GADA-negative participants with high T1D GRS will have nonautoimmune diabetes. The use of this two-step testing approach may facilitate a precision medicine approach to patients with apparent type 2 diabetes; patients who are likely to progress rapidly are identified for targeted management, which may include increased monitoring, early therapy intensification, and/or interventions aimed at slowing progression (36,37).

The costs of analyzing the T1D GRS are relatively modest and may fall further, as genetic testing is rapidly becoming less expensive (38). […] In conclusion, a T1D GRS alters the clinical implications of a positive GADA test in patients with clinical type 2 diabetes and is independent of and additive to clinical features. This therefore represents a novel test for identifying patients with rapid progression in this population.”
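
As for the survival analysis mentioned above (time from diagnosis to insulin requirement, compared across GADA/GRS groups), a hedged sketch of what such an analysis might look like with the lifelines library is shown below; the column names and data are made up for illustration and are unrelated to the study’s actual dataset.

```python
# Hedged sketch: Kaplan-Meier curves for time to insulin by GADA/GRS group.
# Hypothetical data: years from diagnosis to insulin start, censored at 5 years.
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

df = pd.DataFrame({
    "years_to_insulin": [1.2, 4.8, 5.0, 2.1, 5.0, 3.4, 5.0, 0.9, 5.0, 2.8],
    "started_insulin":  [1,   1,   0,   1,   0,   1,   0,   1,   0,   1],  # 1 = event, 0 = censored
    "group": ["GADA+ / high GRS", "GADA+ / low GRS", "GADA-", "GADA+ / high GRS",
              "GADA-", "GADA+ / low GRS", "GADA-", "GADA+ / high GRS",
              "GADA-", "GADA+ / low GRS"],
})

ax = plt.gca()
kmf = KaplanMeierFitter()
for name, g in df.groupby("group"):
    kmf.fit(g["years_to_insulin"], event_observed=g["started_insulin"], label=name)
    kmf.plot_survival_function(ax=ax)
ax.set_xlabel("Years since diagnosis")
ax.set_ylabel("Proportion not yet on insulin")
plt.show()
```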

iii. Retinopathy and RAAS Activation: Results From the Canadian Study of Longevity in Type 1 Diabetes.

“Diabetic retinopathy is the most common cause of preventable blindness in individuals ages 20–74 years and is the most common vascular complication in type 1 and type 2 diabetes (1–3). On the basis of increasing severity, diabetic retinopathy is classified into nonproliferative diabetic retinopathy (NPDR), defined in early stages by the presence of microaneurysms, retinal vascular closure, and alteration, or proliferative diabetic retinopathy (PDR), defined by the growth of new aberrant blood vessels (neovascularization) susceptible to hemorrhage, leakage, and fibrosis (4). Diabetic macular edema (DME) can be present at any stage of retinopathy and is characterized by increased vascular permeability leading to retinal thickening.

Important risk factors for the development of retinopathy continue to be chronic hyperglycemia, hyperlipidemia, hypertension, and diabetes duration (5,6). Given the systemic nature of these risk factors, cooccurrence of retinopathy with other vascular complications is common in patients with diabetes.”

“A key pathway implicated in diabetes-related small-vessel disease is overactivation of neurohormones. Activation of the neurohormonal renin-angiotensin-aldosterone system (RAAS) pathway predominates in diabetes in response to hyperglycemia and sodium retention. The RAAS plays a pivotal role in regulating systemic BP through vasoconstriction and fluid-electrolyte homeostasis. At the tissue level, angiotensin II (ANGII), the principal mediator of the RAAS, is implicated in fibrosis, oxidative stress, endothelial damage, thrombosis, inflammation, and vascular remodeling. Of note, systemic RAAS blockers reduce the risk of progression of eye disease but not DKD [Diabetic Kidney Disease, US] in adults with type 1 diabetes with normoalbuminuria (12).

Several longitudinal epidemiologic studies of diabetic retinopathy have been completed in type 1 diabetes; however, few have studied the relationships between eye, nerve, and renal complications and the influence of RAAS activation after prolonged duration (≥50 years) in adults with type 1 diabetes. As a result, less is known about mechanisms that persist in diabetes-related microvascular complications after long-standing diabetes. Accordingly, in this cross-sectional analysis from the Canadian Study of Longevity in Type 1 Diabetes involving adults with type 1 diabetes for ≥50 years, our aims were to phenotype retinopathy stage and determine associations between the presence of retinopathy and other vascular complications. In addition, we examined the relationship between retinopathy stage and renal and systemic hemodynamic function, including arterial stiffness, at baseline and dynamically after RAAS activation with an infusion of exogenous ANGII.”

“Of the 75 participants, 12 (16%) had NDR [no diabetic retinopathy], 24 (32%) had NPDR, and 39 (52%) had PDR […]. At baseline, those with NDR had lower mean HbA1c compared with those with NPDR and PDR (7.4 ± 0.7% and 7.5 ± 0.9%, respectively; P for trend = 0.019). Of note, those with more severe eye disease (PDR) had lower systolic and diastolic BP values but a significantly higher urine albumin-to-creatinine ratio (UACR) […] compared with those with less severe eye disease (NPDR) or with NDR despite higher use of RAAS inhibitors among those with PDR compared with NPDR or NDR. History of cardiovascular and peripheral vascular disease was significantly higher in participants with PDR (33.3%) than in those with NPDR (8.3%) or NDR (0%). Diabetic sensory polyneuropathy was prevalent across all groups irrespective of retinopathy status but was numerically higher in the PDR group (95%) than in the NPDR (86%) or NDR (75%) groups. No significant differences were observed in retinal thickness across the three groups.”

One quick note: This was mainly an eye study, but some of the other figures here are well worth taking note of. Three out of four people in the supposedly low-risk group without eye complications had sensory polyneuropathy after 50 years of diabetes.

Conclusions

“Hyperglycemia contributes to the pathogenesis of diabetic retinopathy through multiple interactive pathways, including increased production of advanced glycation end products, IGF-I, vascular endothelial growth factor, endothelin, nitric oxide, oxidative damage, and proinflammatory cytokines (29–33). Overactivation of the RAAS in response to hyperglycemia also is implicated in the pathogenesis of diabetes-related complications in the retina, nerves, and kidney and is an important therapeutic target in type 1 diabetes. Despite what is known about these underlying pathogenic mechanisms in the early development of diabetes-related complications, whether the same mechanisms are active in the setting of long-standing type 1 diabetes is not known. […] In this study, we observed that participants with PDR were more likely to be taking RAAS inhibitors, to have a higher frequency of cardiovascular or peripheral vascular disease, and to have higher UACR levels, likely reflecting the higher overall risk profile of this group. Although it is not possible to determine why some patients in this cohort developed PDR while others did not after similar durations of type 1 diabetes, it seems unlikely that glycemic control alone is sufficient to fully explain the observed between-group differences and differing vascular risk profiles. Whereas the NDR group had significantly lower mean HbA1c levels than the NPDR and PDR groups, differences between participants with NPDR and those with PDR were modest. Accordingly, other factors, such as differences in vascular function, neurohormones, growth factors, genetics, and lifestyle, may play a role in determining retinopathy severity at the individual level.

The association between retinopathy and risk for DKD is well established in diabetes (34). In the setting of type 2 diabetes, patients with high levels of UACR have twice the risk of developing diabetic retinopathy compared with those with normal UACR levels. For example, Rodríguez-Poncelas et al. (35) demonstrated that impaired renal function is linked with increased diabetic retinopathy risk. Consistent with these studies and others, the PDR group in this Canadian Study of Longevity in Type 1 Diabetes demonstrated significantly higher UACR, which is associated with an increased risk of DKD progression, illustrating that the interaction between eye and kidney disease progression also may exist in patients with long-standing type 1 diabetes. […] In conclusion, retinopathy was prevalent after prolonged type 1 diabetes duration, and retinopathy severity was associated with several measures of neuropathy and with higher UACR. Differential exaggerated responses to RAAS activation in the peripheral vasculature of the PDR group highlight that even in the absence of DKD, neurohormonal abnormalities are likely still operant, and perhaps accentuated, in patients with PDR even after long-standing type 1 diabetes duration.”

iv. Clinical and MRI Features of Cerebral Small-Vessel Disease in Type 1 Diabetes.

“Type 1 diabetes is associated with a fivefold increased risk of stroke (1), with cerebral small-vessel disease (SVD) as the most common etiology (2). Cerebral SVD in type 1 diabetes, however, remains scarcely investigated and is challenging to study in vivo per se owing to the size of affected vasculature (3); instead, MRI signs of SVD are studied. In this study, we aimed to assess the prevalence of cerebral SVD in subjects with type 1 diabetes compared with healthy control subjects and to characterize diabetes-related variables associated with SVD in stroke-free people with type 1 diabetes.”

“RESEARCH DESIGN AND METHODS This substudy was cross-sectional in design and included 191 participants with type 1 diabetes and median age 40.0 years (interquartile range 33.0–45.1) and 30 healthy age- and sex-matched control subjects. All participants underwent clinical investigation and brain MRIs, assessed for cerebral SVD.

RESULTS Cerebral SVD was more common in participants with type 1 diabetes than in healthy control subjects: any marker 35% vs. 10% (P = 0.005), cerebral microbleeds (CMBs) 24% vs. 3.3% (P = 0.008), white matter hyperintensities 17% vs. 6.7% (P = 0.182), and lacunes 2.1% vs. 0% (P = 1.000). Presence of CMBs was independently associated with systolic blood pressure (odds ratio 1.03 [95% CI 1.00–1.05], P = 0.035).”

Conclusions

“Cerebral SVD is more common in participants with type 1 diabetes than in healthy control subjects. CMBs especially are more prevalent and are independently associated with hypertension. Our results indicate that cerebral SVD starts early in type 1 diabetes but is not explained solely by diabetes-related vascular risk factors or the generalized microvascular disease that takes place in diabetes (7).

There are only small-scale studies on cerebral SVD, especially CMBs, in type 1 diabetes. One study with diabetes characteristics similar to those in the current study (i.e., diabetes duration, glycemic control, and blood pressure levels), but lacking a control population, showed a higher prevalence of white matter hyperintensities (WMHs), with more than half of the participants affected, but a similar prevalence of lacunes and a lower prevalence of CMBs (8). In another study, including 67 participants with type 1 diabetes and 33 control subjects, there was no difference in WMH prevalence but a higher prevalence of CMBs in participants with type 1 diabetes and retinopathy compared with control subjects (9). […] In type 1 diabetes, albuminuria and systolic blood pressure independently increase the risk for both ischemic and hemorrhagic stroke (12). […] We conclude that cerebral SVD is more common in subjects with type 1 diabetes than in healthy control subjects. Future studies will focus on longitudinal development of SVD in type 1 diabetes and the associations with brain health and cognition.”

v. The Legacy Effect in Type 2 Diabetes: Impact of Early Glycemic Control on Future Complications (The Diabetes & Aging Study).

“In the U.S., an estimated 1.4 million adults are newly diagnosed with diabetes every year and present an important intervention opportunity for health care systems. In patients newly diagnosed with type 2 diabetes, the benefits of maintaining an HbA1c <7.0% (<53 mmol/mol) are well established. The UK Prospective Diabetes Study (UKPDS) found that a mean HbA1c of 7.0% (53 mmol/mol) lowers the risk of diabetes-related end points by 12–32% compared with a mean HbA1c of 7.9% (63 mmol/mol) (1,2). Long-term observational follow-up of this trial revealed that this early glycemic control has durable effects: Reductions in microvascular events persisted and reductions in cardiovascular events and mortality were observed 10 years after the trial ended, even though HbA1c values converged (1). Similar findings were observed in the Diabetes Control and Complications Trial (DCCT) in patients with type 1 diabetes (2–4). These posttrial observations have been called legacy effects (also metabolic memory) (5), and they suggest the importance of early glycemic control for the prevention of future complications of diabetes. Although these clinical trial long-term follow-up studies demonstrated legacy effects, whether legacy effects exist in real-world populations, how soon after diabetes diagnosis legacy effects may begin, or for what level of glycemic control legacy effects may exist are not known.

In a previous retrospective cohort study, we found that patients with newly diagnosed diabetes and an initial 10-year HbA1c trajectory that was unstable (i.e., changed substantially over time) had an increased risk for future microvascular events, even after adjusting for HbA1c exposure (6). In the same cohort population, this study evaluates associations between the duration and intensity of glycemic control immediately after diagnosis and the long-term incidence of future diabetic complications and mortality. We hypothesized that a glycemic legacy effect exists in real-world populations, begins as early as the 1st year after diabetes diagnosis, and depends on the level of glycemic exposure.”

“RESEARCH DESIGN AND METHODS This cohort study of managed care patients with newly diagnosed type 2 diabetes and 10 years of survival (1997–2013, average follow-up 13.0 years, N = 34,737) examined associations between HbA1c <6.5% (<48 mmol/mol), 6.5% to <7.0% (48 to <53 mmol/mol), 7.0% to <8.0% (53 to <64 mmol/mol), 8.0% to <9.0% (64 to <75 mmol/mol), or ≥9.0% (≥75 mmol/mol) for various periods of early exposure (0–1, 0–2, 0–3, 0–4, 0–5, 0–6, and 0–7 years) and incident future microvascular (end-stage renal disease, advanced eye disease, amputation) and macrovascular (stroke, heart disease/failure, vascular disease) events and death, adjusting for demographics, risk factors, comorbidities, and later HbA1c.

RESULTS Compared with HbA1c <6.5% (<48 mmol/mol) for the 0-to-1-year early exposure period, HbA1c levels ≥6.5% (≥48 mmol/mol) were associated with increased microvascular and macrovascular events (e.g., HbA1c 6.5% to <7.0% [48 to <53 mmol/mol] microvascular: hazard ratio 1.204 [95% CI 1.063–1.365]), and HbA1c levels ≥7.0% (≥53 mmol/mol) were associated with increased mortality (e.g., HbA1c 7.0% to <8.0% [53 to <64 mmol/mol]: 1.290 [1.104–1.507]). Longer periods of exposure to HbA1c levels ≥8.0% (≥64 mmol/mol) were associated with increasing microvascular event and mortality risk.

CONCLUSIONS Among patients with newly diagnosed diabetes and 10 years of survival, HbA1c levels ≥6.5% (≥48 mmol/mol) for the 1st year after diagnosis were associated with worse outcomes. Immediate, intensive treatment for newly diagnosed patients may be necessary to avoid irremediable long-term risk for diabetic complications and mortality.”

Do note that the effect sizes here are very large and this stuff seems really quite important. Judging from the results of this study, if you’re newly diagnosed and you only obtain an HbA1c of, say, 7.3% in the first year, that may translate into a close to 30% increased risk of death more than 10 years into the future, compared to a scenario of an HbA1c of 6.3%. People who did not get their HbA1c measured within the first 3 months after diagnosis had a more than 20% increased risk of mortality during the study period. This seems like critical stuff to get right.

vi. Event Rates and Risk Factors for the Development of Diabetic Ketoacidosis in Adult Patients With Type 1 Diabetes: Analysis From the DPV Registry Based on 46,966 Patients.

“Diabetic ketoacidosis (DKA) is a life-threatening complication of type 1 diabetes mellitus (T1DM) that results from absolute insulin deficiency and is marked by acidosis, ketosis, and hyperglycemia (1). Therefore, prevention of DKA is one goal in T1DM care, but recent data indicate increased incidence (2).

For adult patients, only limited data are available on rates and risk factors for development of DKA, and this complication remains epidemiologically poorly characterized. The Diabetes Prospective Follow-up Registry (DPV) has followed patients with diabetes from 1995. Data for this study were collected from 2000 to 2016. Inclusion criteria were diagnosis of T1DM, age at diabetes onset ≥6 months, patient age at follow-up ≥18 years, and diabetes duration ≥1 year to exclude DKA at manifestation. […] In total, 46,966 patients were included in this study (average age 38.5 years [median 21.2], 47.6% female). The median HbA1c was 7.7% (61 mmol/mol), median diabetes duration was 13.6 years, and 58.3% of the patients were treated in large diabetes centers.

On average, 2.5 DKA-related hospital admissions per 100 patient-years (PY) were observed (95% CI 2.1–3.0). The rate was highest in patients aged 18–30 years (4.03/100 PY) and gradually declined with increasing age […] No significant differences between males (2.46/100 PY) and females (2.59/100 PY) were found […] Patients with HbA1c levels <7% (53 mmol/mol) had significantly fewer DKA admissions than patients with HbA1c ≥9% (75 mmol/mol) (0.88/100 PY vs. 6.04/100 PY; P < 0.001)”

“Regarding therapy, use of an insulin pump (continuous subcutaneous insulin infusion [CSII]) was not associated with higher DKA rates […], while patients aged 31–50 years on CSII showed lower rates than patients using multiple daily injections (2.21 vs. 3.12/100 PY; adjusted P < 0.05) […]. Treatment in a large center was associated with lower DKA-related hospital admissions […] In both adults and children, poor metabolic control was the strongest predictor of hospital admission due to DKA. […] In conclusion, the results of this study identify patients with T1DM at risk for DKA (high HbA1c, diabetes duration 5–10 years, migrants, age 30 years and younger) in real-life diabetes care. These at-risk individuals may need specific attention since structured diabetes education has been demonstrated to specifically reduce and prevent this acute complication.”
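
For readers not used to incidence rates expressed per 100 patient-years: such a rate is just the event count divided by total follow-up time, and a simple exact (Poisson) confidence interval can be attached to it. The sketch below uses made-up numbers, and the registry analysis will of course have handled adjustment and clustering in ways this toy example ignores.

```python
# Hedged sketch: event rate per 100 patient-years with an exact Poisson
# (Garwood) 95% CI. The counts are made up.
from scipy.stats import chi2

events = 250            # hypothetical number of DKA-related hospital admissions
patient_years = 10_000  # hypothetical total follow-up

rate = 100 * events / patient_years
lower = chi2.ppf(0.025, 2 * events) / 2          # exact Poisson bounds on the count
upper = chi2.ppf(0.975, 2 * (events + 1)) / 2
print(f"{rate:.2f} per 100 PY "
      f"(95% CI {100 * lower / patient_years:.2f}-{100 * upper / patient_years:.2f})")
```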

August 13, 2019 Posted by | Cardiology, Diabetes, Genetics, Immunology, Medicine, Molecular biology, Nephrology, Neurology, Ophthalmology, Studies

Learning Phylogeny Through Simple Statistical Genetics

From a brief skim I concluded that a lot of what Patterson talks about in this lecture, particularly the concepts-and-methods part (which, as he also notes in his introduction, makes up a substantial proportion of the talk), is covered in this Ancient Admixture in Human History paper he coauthored. So if you’re either curious to know more or just wondering what the talk might be about, that paper is probably worth checking out. In the latter case I would also recommend just watching the first few minutes of the talk; he provides a very informative outline of it in the first four and a half minutes of the video.

A few other links of relevance:

Martingale (probability theory).
GitHub – DReichLab/AdmixTools.
Human Genome Diversity Project.
Jackknife resampling.
Ancient North Eurasian.
Upper Palaeolithic Siberian genome reveals dual ancestry of Native Americans (Raghavan et al., 2014).
General theory for stochastic admixture graphs and F-statistics. This one is only very slightly related to the talk; I came across it while looking for stuff about admixture graphs, a topic he does briefly discuss in the lecture.

July 29, 2019 Posted by | Archaeology, Biology, Genetics, Lectures, Molecular biology, Statistics

A few diabetes papers of interest

i. The dynamic origins of type 1 diabetes.

“Over a century ago, there was diabetes and only diabetes. Subsequently, diabetes came to be much more discretely defined (1) by age at onset (childhood or adult onset), clinical phenotype (lean or obese), treatment (insulin dependent or not insulin dependent), and, more recently, immune genotype (type 1 or type 2 diabetes). Although these categories broadly describe groups, they are often insufficient to categorize specific individuals, such as children having non–insulin-dependent diabetes and adults having type 1 diabetes (T1D) even when not requiring insulin. Indeed, ketoacidosis at presentation can be a feature of either T1D or type 2 diabetes. That heterogeneity extends to the origins and character of both major types of diabetes. In this issue of Diabetes Care, Redondo et al. (2) leverage the TrialNet study of subjects with a single diabetes-associated autoantibody at screening in order to explore factors determining progression to multiple autoantibodies and, subsequently, the pathogenesis of T1D.

T1D is initiated by presumed nongenetic event(s) operating in children with potent genetic susceptibility. But there is substantial heterogeneity even within the origins of this disease. Those nongenetic events evoke different autoantibodies such that T1D patients with insulin autoantibodies (IAA) have different features from those with GAD autoantibodies (GADA) (3,4). The former, in contrast with the latter, are younger both at seroconversion and at development of clinical diabetes, the two groups having different genetic risk and those with IAA having greater insulin secretory loss […]. These observations hint at distinct disease-associated networks leading to T1D, perhaps induced by distinct nongenetic events. Such disease-associated pathways could operate in unison, especially in children with T1D, who often have multiple autoantibodies. […]

Genetic analyses of autoimmune diseases suggest that only a small number of pathways contribute to disease risk. These pathways include NF-κB signaling, T-cell costimulation, interleukin-2, and interleukin-21 pathways and type 1 interferon antiviral responses (5,6). T1D shares most risk loci with celiac disease and rheumatoid arthritis (5), while paradoxically most risk loci shared with inflammatory bowel disease are protective or involve different haplotypes at the same locus. […] Events leading to islet autoimmunity may be encountered very early in life and invoke disease risk or disease protection (4,7) […]. Islet autoantibodies rarely appear before age 6 months, and among children with a family history of T1D there are two peaks for autoantibody seroconversion (3,4), the first for IAA at approximately age 1–2 years, while GADA-restricted autoimmunity develops after age 3 years up to adolescence, with a peak at about age 11 years”

“The precise nature of […] disease-associated nongenetic events remains unclear, but knowledge of the disease heterogeneity (1,9) has cast light on their character. Nongenetic events are implicated in increasing disease incidence, disease discordance even between identical twins, and geographical variation; e.g., Finland has 100-fold greater childhood T1D incidence than China (9,10). That effect likely increases with older age at onset […] disease incidence in Finland is sixfold greater than in an adjacent, relatively impoverished Russian province, despite similar racial origins and frequencies of high-risk HLA DQ genotypes […] Viruses, especially enteroviruses, and dietary factors have been invoked (12–15). The former have been implicated because of the genetic association with antiviral interferon networks, seasonal pattern of autoantibody conversion, seroconversion being associated with enterovirus infections, and protection from seroconversion by maternal gestational respiratory infection, while respiratory infections even in the first year of life predispose to seroconversion (14) […]. Dietary factors also predispose to seroconversion and include the time of introduction of solid foods and the use of vitamin C and vitamin D (13,15). The Diabetes Autoimmunity Study in the Young (DAISY) found that early exposure to solid food (1–3 months of age) and vitamin C and late exposure to vitamin D and gluten (after 6 and 9 months of age, respectively) are T1D risk factors, leading the researchers to suggest that genetically at-risk children should have solid foods introduced at about 4 months of age with a diet high in dairy and fruit (13).” [my bold, US]

“This TCF7L2 locus is of particular interest in the context of T1D (9) as it is usually seen as the major type 2 diabetes signal worldwide. The rs7903146 SNP optimally captures that TCF7L2 disease association and is likely the causal variant. Intriguingly, this locus is associated, in some populations, with those adult-onset autoimmune diabetes patients with GADA alone who masquerade as having type 2 diabetes, since they initially do not require insulin therapy, and also markedly increases the diabetes risk in cystic fibrosis patients. One obvious explanation for these associations is that adult-onset autoimmune diabetes is simply a heterogeneous disease, an admixture of both T1D and type 2 diabetes (9), in which shared genes alter the threshold for diabetes. […] A high proportion of T1D cases present in adulthood (17,18), likely more than 50%, and many do not require insulin initially. The natural history, phenotype, and metabolic changes in adult-onset diabetes with GADA resemble a separate cluster of cases with type 2 diabetes but without GADA, which together constitute up to 24% of adult-onset diabetes (19). […] Knowledge of heterogeneity enables understanding of disease processes. In particular, identification of distinct pathways to clinical diabetes offers the possibility of defining distinct nongenetic events leading to T1D and, by implication, modulating those events could limit or eliminate disease progression. There is a growing appreciation that the two major types of diabetes may share common etiopathological factors. Just as there are a limited number of genes and pathways contributing to autoimmunity risk, there may also be a restricted number of pathways contributing to β-cell fragility.”

ii. The Association of Severe Diabetic Retinopathy With Cardiovascular Outcomes in Long-standing Type 1 Diabetes: A Longitudinal Follow-up.

OBJECTIVE It is well established that diabetic nephropathy increases the risk of cardiovascular disease (CVD), but how severe diabetic retinopathy (SDR) impacts this risk has yet to be determined.

RESEARCH DESIGN AND METHODS The cumulative incidence of various CVD events, including coronary heart disease (CHD), peripheral artery disease (PAD), and stroke, retrieved from registries, was evaluated in 1,683 individuals with at least a 30-year duration of type 1 diabetes drawn from the Finnish Diabetic Nephropathy Study (FinnDiane).”

“RESULTS During 12,872 person-years of follow-up, 416 incident CVD events occurred. Even in the absence of DKD [Diabetic Kidney Disease], SDR increased the risk of any CVD (hazard ratio 1.46 [95% CI 1.11–1.92]; P < 0.01), after adjustment for diabetes duration, age at diabetes onset, sex, smoking, blood pressure, waist-to-hip ratio, history of hypoglycemia, and serum lipids. In particular, SDR alone was associated with the risk of PAD (1.90 [1.13–3.17]; P < 0.05) and CHD (1.50 [1.09–2.07]; P < 0.05) but not with any stroke. Moreover, DKD increased the CVD risk further (2.85 [2.13–3.81]; P < 0.001). […]

CONCLUSIONS SDR alone, even without DKD, increases cardiovascular risk, particularly for PAD, independently of common cardiovascular risk factors in long-standing type 1 diabetes. More remains to be done to fully understand the link between SDR and CVD. This knowledge could help combat the enhanced cardiovascular risk beyond currently available regimens.”

“The 15-year cumulative incidence of any CVD in patients with and without SDR was 36.8% (95% CI 33.4–40.1) and 27.3% (23.3–31.0), respectively (P = 0.0004 for log-rank test) […] Patients without DKD and SDR at baseline had 4.0-fold (95% CI 3.3–4.7) increased risk of CVD compared with control subjects without diabetes up to 70 years of age […]. Intriguingly, after this age, the CVD incidence was similar to that in the matched control subjects (SIR 0.9 [95% CI 0.3–1.9]) in this subgroup of patients with diabetes. However, in patients without DKD but with SDR, the CVD risk was still increased after the patients had reached 70 years of age (SIR 3.4 [95% CI 1.8–6.2]) […]. Of note, in patients with both DKD and SDR, the CVD burden was high already at young ages.”
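
The SIRs quoted above are standardized incidence ratios: observed event counts in the cohort divided by the counts one would expect if the reference (control) population’s age-specific rates applied to the cohort’s person-years. A minimal sketch with made-up numbers:

```python
# Hedged sketch of a standardized incidence ratio (SIR). All numbers are illustrative.
observed = 12  # hypothetical CVD events after age 70 in the diabetes cohort

# hypothetical age bands: (person-years in the cohort, reference rate per 1,000 PY)
strata = {
    "70-74": (900, 8.0),
    "75-79": (400, 12.0),
}
expected = sum(py * ref_rate / 1000 for py, ref_rate in strata.values())

sir = observed / expected
print(f"Expected events: {expected:.1f}, SIR = {sir:.2f}")  # SIR > 1 means excess incidence
```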

“This study highlights the role of SDR on a complete range of CVD outcomes in a large sample of patients with long-standing T1D and longitudinal follow-up. We showed that SDR alone, without concomitant DKD, increases the risk of macrovascular disease, independently of the traditional risk factors. The risk is further increased in case of accompanying DKD, especially if SDR is present together with DKD. Findings from this large and well-characterized cohort of patients have a direct impact on clinical practice, emphasizing the importance of regular screening for SDR in individuals with T1D and intensive multifactorial interventions for CVD prevention throughout their life span.

This study also confirms and complements previous data on the continuum of diabetic vascular disease, by which microvascular and macrovascular disease do not seem to be separate diseases, but rather interconnected (10,12,18). The link is most obvious for DKD, which clearly emerges as a major predictor of cardiovascular morbidity and mortality (2,24,25). The association of SDR with CVD is less clear. However, our recent cross-sectional study with the Joslin Medalist Study showed that the CVD risk was in fact increased in patients with SDR on top of DKD compared with DKD alone (19). In the present longitudinal study, we were able to extend those results also to show that SDR alone, without DKD and after the adjustment for other traditional risk factors, increases CVD risk substantially. SDR further increases CVD risk in case DKD is present as well. In addition, the role of SDR as an independent CVD risk predictor is also supported by our data using albuminuria as a marker of DKD. This is important because albuminuria is a known predictor of diabetic retinopathy progression (26) as well as a recognized biomarker for CVD.”

“A novel finding is that, independently of any signs of DKD, the risk of PAD is increased twofold in the presence of SDR. Although this association has recently been highlighted in individuals with type 2 diabetes (10,29), the data in T1D are scarce (16,30). Notably, the previous studies mostly lack adjustments for DKD, the major predictor of mortality in patients with shorter diabetes duration. Both complications, besides sharing some conventional cardiovascular risk factors, may be linked by additional pathological processes involving changes in the microvasculature in both the retina and the vasa vasorum of the conductance vessels (31). […] Patients with T1D duration of >30 years face a continuously increased CVD risk that is further increased by the occurrence of advanced PDR. Therefore, by examining the retina, additional insight into individual CVD risk is gained and can guide the clinician to a more tailored approach to CVD prevention. Moreover, our findings suggest that the link between SDR and CVD is at least partially independent of traditional risk factors, and the mechanism behind the phenomenon warrants further research, aiming to find new therapies to alleviate the CVD burden more efficiently.”

The model selection method employed in the paper is far from optimal [“Variables for the model were chosen based on significant univariable associations.” – This is not the way to do things!], but regardless these are interesting results.

iii. Fasting Glucose Variability in Young Adulthood and Cognitive Function in Middle Age: The Coronary Artery Risk Development in Young Adults (CARDIA) Study.

“Individuals with type 2 diabetes (T2D) have 50% greater risk for the development of neurocognitive dysfunction relative to those without T2D (1–3). The American Diabetes Association recommends screening for the early detection of cognitive impairment for adults ≥65 years of age with diabetes (4). Coupled with the increasing prevalence of prediabetes and diabetes, this calls for better understanding of the impact of diabetes on cerebral structure and function (5,6). Among older individuals with diabetes, higher intraindividual variability in glucose levels around the mean is associated with worse cognition and the development of Alzheimer disease (AD) (7,8). […] Our objectives were to characterize fasting glucose (FG) variability during young adulthood before the onset of diabetes and to assess whether such variability in FG is associated with cognitive function in middle adulthood. We hypothesized that a higher variability of FG during young adulthood would be associated with a lower level of cognitive function in midlife compared with lower FG variability.”

“We studied 3,307 CARDIA (Coronary Artery Risk Development in Young Adults) Study participants (age range 18–30 years and enrolled in 1985–1986) at baseline and calculated two measures of long-term glucose variability: the coefficient of variation about the mean FG (CV-FG) and the absolute difference between successive FG measurements (average real variability [ARV-FG]) before the onset of diabetes over 25 and 30 years of follow-up. Cognitive function was assessed at years 25 (2010–2011) and 30 (2015–2016) with the Digit Symbol Substitution Test (DSST), Rey-Auditory Verbal Learning Test (RAVLT), Stroop Test, Montreal Cognitive Assessment, and category and letter fluency tests. We estimated the association between glucose variability and cognitive function test score with adjustment for clinical and behavioral risk factors, mean FG level, change in FG level, and diabetes development, medication use, and duration.

RESULTS After multivariable adjustment, 1-SD increment of CV-FG was associated with worse cognitive scores at year 25: DSST, standardized regression coefficient −0.95 (95% CI −1.54, −0.36); RAVLT, −0.14 (95% CI −0.27, −0.02); and Stroop Test, 0.49 (95% CI 0.04, 0.94). […] We did not find evidence for effect modification by race or sex for any variability-cognitive function association”

“CONCLUSIONS Higher intraindividual FG variability during young adulthood below the threshold of diabetes was associated with worse processing speed, memory, and language fluency in midlife independent of FG levels. […] In this cohort of black and white adults followed from young adulthood into middle age, we observed that greater intraindividual variability in FG below a diabetes threshold was associated with poorer cognitive function independent of behavioral and clinical risk factors. This association was observed above and beyond adjustment for concurrent glucose level; change in FG level during young adulthood; and diabetes status, duration, and medication use. Intraindividual glucose variability as determined by CV was more strongly associated with cognitive function than was absolute average glucose variability.”
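
The two variability measures used in this paper are simple to compute: CV-FG is the standard deviation of the fasting glucose measurements divided by their mean, and ARV-FG is the mean absolute difference between successive measurements. A minimal sketch with made-up glucose values:

```python
# Minimal sketch of CV-FG and ARV-FG; the glucose series is made up.
import numpy as np

fg = np.array([92.0, 101.0, 88.0, 97.0, 105.0, 90.0])  # fasting glucose (mg/dL) at successive visits

cv_fg = fg.std(ddof=1) / fg.mean()     # coefficient of variation about the mean FG
arv_fg = np.abs(np.diff(fg)).mean()    # average real variability

print(f"CV-FG  = {cv_fg:.3f}")
print(f"ARV-FG = {arv_fg:.1f} mg/dL")
```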

iv. Maternal Antibiotic Use During Pregnancy and Type 1 Diabetes in Children — A National Prospective Cohort Study. It is important that papers like these get published and read, even if the results may not sound particularly exciting:

“Prenatal prescription of antibiotics is common but may perturb the composition of the intestinal microbiota in the offspring. In childhood the latter may alter the developing immune system to affect the pathogenesis of type 1 diabetes (1). Previous epidemiological studies reported conflicting results regarding the association between early exposure to antibiotics and childhood type 1 diabetes (2,3). Here we investigated the association in a Danish register setting.

The Danish National Birth Cohort (DNBC) provided data from 100,418 pregnant women recruited between 1996 and 2002 and their children born between 1997 and 2003 (n = 96,840). The women provided information on exposures during and after pregnancy. Antibiotic prescription during pregnancy was obtained from the Danish National Prescription Registry (anatomical therapeutic chemical code J01) [it is important to note that: “In Denmark, purchasing antibiotics requires a prescription, and all purchases are registered at the Danish National Prescription Registry”], and type 1 diabetes diagnoses (diagnostic codes DE10 and DE14) during childhood and adolescence were obtained from the Danish National Patient Register. The children were followed until 2014 (mean follow-up time 14.3 years [range 11.5–18.4 years, SD 1.4]).”

“A total of 336 children developed type 1 diabetes during follow-up. Neither overall exposure (hazard ratio [HR] 0.90; 95% CI 0.68–1.18), number of courses (HR 0.36–0.97 […]), nor trimester-specific exposure (HR 0.81–0.89 […]) of antibiotics in utero was associated with childhood diabetes. Moreover, exposure to specific types of antibiotics in utero did not change the risk of childhood type 1 diabetes […] This large prospective Danish cohort study demonstrated that maternal use of antibiotics during pregnancy was not associated with childhood type 1 diabetes. Thus, the results from this study do not support a revision of the clinical recommendations on treatment with antibiotics during pregnancy.”

v. Decreasing Cumulative Incidence of End-Stage Renal Disease in Young Patients With Type 1 Diabetes in Sweden: A 38-Year Prospective Nationwide Study.

“Diabetic nephropathy is a devastating complication of diabetes. It can lead to end-stage renal disease (ESRD), which demands renal replacement therapy (RRT) with dialysis or kidney transplantation. In addition, diabetic nephropathy is associated with increased risk of cardiovascular morbidity and mortality (1,2). As a nation, Sweden, next to Finland, has the highest incidence of type 1 diabetes in the world (3), and the incidence of childhood-onset diabetes is increasing globally (4,5). The incidence of ESRD caused by diabetic nephropathy in these Nordic countries is fairly low as shown in recent studies, 3–8% at a maximum of 30 years of diabetes duration (6,7). This is to be compared with studies from Denmark in the 1980s that showed a cumulative incidence of diabetic nephropathy of 41% at 40 years of diabetes duration. Older, hospital-based cohort studies found that the incidence of persistent proteinuria seemed to peak at 25 years of diabetes duration; after that, the incidence levels off (8,9). This implies the importance of genetic susceptibility as a risk factor for diabetic nephropathy, which has also been indicated in recent genome-wide scan studies (10,11). Still, modifiable factors such as metabolic control are clearly of major importance in the development of diabetic nephropathy (12–15). Already in 1994, a decreasing incidence of diabetic nephropathy was seen in a hospital-based study in Sweden, and the authors concluded that this was mainly driven by better metabolic control (16). Young age at onset of diabetes has previously been found to protect against, or postpone, the development of ESRD caused by diabetic nephropathy, while diabetes onset at older ages is associated with increased risk (7,9,17). In a previous study, we found that age at onset of diabetes affects men and women differently (7). Earlier studies have indicated a male predominance (8,18), while our previous study showed that the incidence of ESRD was similar in men and women with diabetes onset before 20 years of age, but with diabetes onset after 20 years of age, men had increased risk of developing ESRD compared with women. The current study analyzes the incidence of ESRD due to type 1 diabetes, and changes over time, in a large Swedish population-based cohort with a maximum follow-up of 38 years.”

“Earlier studies have shown that it takes ∼15 years to develop persistent proteinuria and another 10 to proceed to ESRD (9,25). In the current study population, no patients developed ESRD because of type 1 diabetes at a duration <14 years; thus only patients with diabetes duration of ≥14 years were included in the study. […] A total of 18,760 unique patients were included in the study: 10,560 (56%) men and 8,200 (44%) women. The mean age at the end of the study was somewhat lower for women, 38.9 years, compared with 40.2 years for men. Women tend to develop type 1 diabetes about a year earlier than men: mean age 15.0 years for women compared with 16.5 years for men. There was no difference regarding mean diabetes duration between men and women in the study (23.8 years for women and 23.7 years for men). A total of 317 patients had developed ESRD due to diabetes. The maximum diabetes duration was 38.1 years for patients in the SCDR and 32.6 years for the NDR and the DISS. The median time from onset of diabetes to ESRD was 22.9 years (minimum 14.1 and maximum 36.6). […] At follow-up, 77 patients with ESRD and 379 without ESRD had died […]. The risk of dying during the course of the study was almost 12 times higher among the ESRD patients (HR 11.9 [95% CI 9.3–15.2]) when adjusted for sex and age. Males had almost twice as high a risk of dying as female patients (HR 1.7 [95% CI 1.4–2.1]), adjusted for ESRD and age.”

“The overall incidence rate of ESRD during 445,483 person-years of follow-up was 0.71 per 1,000 person-years. […] The incidence rate increases with diabetes duration. For patients with diabetes onset at 0–9 and 10–19 years of age, there was an increase in incidence up to 36 years of duration; at longer durations, the number of cases is too small and results must be interpreted with caution. With diabetes onset at 20–34 years of age the incidence rate increases until 25 years of diabetes duration, and then a decrease can be observed […] In comparison of different time periods, the risk of developing ESRD was lower in patients with diabetes onset in 1991–2001 compared with onset in 1977–1984 (HR 3.5 [95% CI 2.3–5.3]) and 1985–1990 (HR 2.6 [95% CI 1.7–3.8]), adjusted for age at follow-up and sex. […] The lowest risk of developing ESRD was found in the group with onset of diabetes before the age of 10 years — both for males and females […]. With this group as reference, males diagnosed with diabetes at 10–19 or 20–34 years of age had increased risk of ESRD (HR 2.4 [95% CI 1.6–3.5] and HR 2.2 [95% CI 1.4–3.3]), respectively. For females, the risk of developing ESRD was also increased with diabetes onset at 10–19 years of age (HR 2.4 [95% CI 1.5–3.6]); however, when diabetes was diagnosed after the age of 20 years, the risk of developing ESRD was not increased compared with an early onset of diabetes (HR 1.4 [95% CI 0.8–3.4]).”

“By combining data from the SCDR, DISS, and NDR registers and identifying ESRD cases via the SRR, we have included close to all patients with type 1 diabetes in Sweden with diabetes duration >14 years who developed ESRD since 1991. The cumulative incidence of ESRD in this study is low: 5.6% (5.9% and 5.3% for males and females, respectively) at maximum 38 years of diabetes duration. For the first time, we could see a clear decrease in ESRD incidence in Sweden by calendar year of diabetes onset. The results are in line with a recent study from Norway that reported a modest incidence of 5.3% after 40 years of diabetes duration (27). In the current study, we found a decrease in the incidence rate after 25 years of diabetes duration in the group with diabetes onset at 20–34 years. With age at onset of diabetes 0–9 or 10–19 years, the ESRD incidence rate increases until 35 years of diabetes duration, but owing to the limited number of patients with longer duration we cannot determine whether the peak incidence has been reached or not. We can, however, conclude that the onset of ESRD has been postponed at least 10 years compared with that in older prospective cohort studies (8,9). […] In conclusion, this large population-based study shows a low incidence of ESRD in Swedish patients with onset of type 1 diabetes after 1977 and an encouraging decrease in risk of ESRD, which is probably an effect of improved diabetes care. We confirm that young age at onset of diabetes protects against, or prolongs, the time until development of severe complications.”

vi. Hypoglycemia and Incident Cognitive Dysfunction: A Post Hoc Analysis From the ORIGIN Trial. Another potentially important negative result, this one related to the link between hypoglycemia and cognitive impairment:

“Epidemiological studies have reported a relationship between severe hypoglycemia, cognitive dysfunction, and dementia in middle-aged and older people with type 2 diabetes. However, whether severe or nonsevere hypoglycemia precedes cognitive dysfunction is unclear. Thus, the aim of this study was to analyze the relationship between hypoglycemia and incident cognitive dysfunction in a group of carefully followed patients using prospectively collected data in the Outcome Reduction with Initial Glargine Intervention (ORIGIN) trial.”

“This prospective cohort analysis of data from a randomized controlled trial included individuals with dysglycemia who had additional cardiovascular risk factors and a Mini-Mental State Examination (MMSE) score ≥24 (N = 11,495). Severe and nonsevere hypoglycemic events were collected prospectively during a median follow-up time of 6.2 years. Incident cognitive dysfunction was defined as either reported dementia or an MMSE score of <24. The hazard of at least one episode of severe or nonsevere hypoglycemia for incident cognitive dysfunction (i.e., the dependent variable) from the time of randomization was estimated using a Cox proportional hazards model after adjusting for baseline cardiovascular disease, diabetes status, treatment allocation, and a propensity score for either form of hypoglycemia.

RESULTS This analysis did not demonstrate an association between severe hypoglycemia and incident cognitive impairment either before (hazard ratio [HR] 1.16; 95% CI 0.89, 1.52) or after (HR 1.00; 95% CI 0.76, 1.31) adjusting for the severe hypoglycemia propensities. Conversely, nonsevere hypoglycemia was inversely related to incident cognitive impairment both before (HR 0.59; 95% CI 0.52, 0.68) and after (HR 0.58; 95% CI 0.51, 0.67) adjustment.

CONCLUSIONS Hypoglycemia did not increase the risk of incident cognitive dysfunction in 11,495 middle-aged individuals with dysglycemia. […] These findings provide no support for the hypothesis that hypoglycemia causes long-term cognitive decline and are therefore reassuring for patients and their health care providers.”
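The analysis described in the methods paragraph above (a propensity score for hypoglycemia entered as a covariate in a Cox proportional hazards model) is a fairly standard workflow. For readers curious what that looks like in practice, here is a minimal sketch in Python using scikit-learn and lifelines; the file name, column names, and covariate list are hypothetical placeholders chosen for illustration, not the ORIGIN dataset or the authors' code.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

# Hypothetical analysis dataset: one row per participant.
# 'time'  = years from randomization to cognitive dysfunction or censoring
# 'event' = 1 if incident cognitive dysfunction (reported dementia or MMSE < 24)
df = pd.read_csv("origin_like_data.csv")  # placeholder file name

# Step 1: propensity score for severe hypoglycemia from baseline covariates.
baseline = ["age", "baseline_mmse", "hba1c", "cvd_history", "diabetes_status"]
ps_model = LogisticRegression(max_iter=1000).fit(df[baseline], df["severe_hypo"])
df["ps_severe_hypo"] = ps_model.predict_proba(df[baseline])[:, 1]

# Step 2: Cox model for incident cognitive dysfunction, adjusted for baseline
# CVD, diabetes status, treatment allocation, and the propensity score.
covariates = ["severe_hypo", "cvd_history", "diabetes_status",
              "glargine_arm", "ps_severe_hypo"]
cph = CoxPHFitter()
cph.fit(df[["time", "event"] + covariates], duration_col="time", event_col="event")
cph.print_summary()  # the hazard ratio for 'severe_hypo' is the quantity of interest
```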

vii. Effects of Severe Hypoglycemia on Cardiovascular Outcomes and Death in the Veterans Affairs Diabetes Trial.

“The VADT was a large randomized controlled trial aimed at determining the effects of intensive treatment of T2DM in U.S. veterans (9). In the current study, we examine predictors and consequences of severe hypoglycemia within the VADT and report several key findings. First, we identified risk factors for severe hypoglycemia that included intensive therapy, insulin use, proteinuria, and autonomic neuropathy. Consistent with prior reports in glucose-lowering studies, severe hypoglycemia occurred at a threefold significantly greater rate in those assigned to intensive glucose lowering. Second, severe hypoglycemia was associated with an increased risk of cardiovascular events, cardiovascular mortality, and all-cause mortality in both the standard and the intensive treatment groups. Of importance, however, severe hypoglycemia was associated with an even greater risk of all-cause mortality in the standard compared with the intensive treatment group. Third, the association between severe hypoglycemia and serious cardiovascular events was greater in individuals with an elevated risk for CVD at baseline.”

“Mean participant characteristics were as follows: age, 60.4 years; duration of diabetes, 11.5 years; BMI, 31.3 kg/m2; and HbA1c, 9.4%. Seventy-two percent had hypertension, 40% had a previous cardiovascular event, 62% had a microvascular complication, and 52% had baseline insulin use. The standard and intensive treatment groups included 899 and 892 participants, respectively. […] During the study, the standard treatment group averaged 3.7 severe hypoglycemic events per 100 patient-years versus 10.3 events per 100 patient-years in the intensive treatment group (P < 0.001). Overall, the combined rate of severe hypoglycemia during follow-up in the VADT from both study arms was 7.0 per 100 patient-years. […] Severe hypoglycemia within the prior 3 months was associated with an increased risk for composite cardiovascular outcome (HR 1.9 [95% CI 1.1, 3.5]; P = 0.03), cardiovascular mortality (3.7 [1.3, 10.4]; P = 0.01), and all-cause mortality (2.4 [1.1, 5.1]; P = 0.02) […]. More distant hypoglycemia (4–6 months prior) had no independently associated increased risk with adverse events or death. The association of severe hypoglycemia with cardiovascular events or cardiovascular mortality were not significantly different between the intensive and standard treatment groups […]. In contrast, the association of severe hypoglycemia with all-cause mortality was significantly greater in the standard versus the intensive treatment group (6.7 [2.7, 16.6] vs. 0.92 [0.2, 3.8], respectively; P = 0.019 for interaction). Because of the relative paucity of repeated severe hypoglycemic events in either study group, there was insufficient power to determine whether more than one episode of severe hypoglycemia increased the risk of subsequent outcomes.”

“Although recent severe hypoglycemia increased the risk of major cardiovascular events for those with a 10-year cardiovascular risk score of 35% (HR 2.88 [95% CI 1.57, 5.29]; absolute risk increase per 10 episodes = 0.252; number needed to harm = 4), hypoglycemia was not significantly associated with increased major cardiovascular events for those with a risk score of ≤7.5%. The absolute associated risk of major adverse cardiovascular events, cardiovascular mortality, and all-cause mortality increased with higher CVD risk for all three outcomes […]. We were not able to identify, however, any group of patients in either treatment arm in which severe hypoglycemia did not increase the risk of CVD events and mortality at least to some degree.”
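As a quick check on the arithmetic in that quote, the number needed to harm is simply the reciprocal of the absolute risk increase:

NNH = 1 / ARI = 1 / 0.252 ≈ 4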

“Although the explanation for the relatively greater risk of serious adverse events after severe hypoglycemia in the standard treatment group is unknown, we agree with previous reports that milder episodes of hypoglycemia, which are more frequent in the intensive treatment group, may quantitatively blunt the release of neuroendocrine and autonomic nervous system responses and their resultant metabolic and cardiovascular responses to hypoglycemia, thereby lessening the impact of subsequent severe hypoglycemic episodes (18,19). Episodes of prior hypoglycemia have rapid and significant effects on reducing (i.e., blunting) subsequent counterregulatory responses to a falling plasma glucose level (20,21). Thus, if one of the homeostatic counterregulatory responses (e.g., epinephrine) also can initiate unwanted intravascular atherothrombotic consequences, it may follow that severe hypoglycemia in a more intensively treated and metabolically well-controlled individual would provoke a reduced counterregulatory response. Although hypoglycemia frequency may be increased in these individuals, this may also lower unwanted and deleterious effects on the vasculature from counterregulatory responses. On the other hand, an isolated severe hypoglycemic event in a less well-controlled individual could provoke a relatively greater counterregulatory response with a proportionally attendant elevated risk for adverse vascular effects (22). In support of this, we previously reported in a subset of VADT participants that despite more frequent serious hypoglycemia in the intensive therapy group, progression of coronary artery calcium scores after severe hypoglycemia only occurred in the standard treatment group (23).”

“In the current study, we demonstrate that the association of severe hypoglycemia with subsequent serious adverse cardiovascular events and death occurred within the preceding 3 months but not beyond. The temporal relationship and proximity of severe hypoglycemia to a subsequent serious cardiovascular event and/or death has been investigated in a number of recent clinical trials in T2DM (25,13,14). All these trials consistently reported an association between severe hypoglycemia and subsequent serious adverse events. However, the proximity of severe hypoglycemic events to subsequent adverse events and death varies. In ADVANCE, a severe hypoglycemic episode increased the risk of major cardiovascular events for both the next 3 months and the following 6 months. In A Trial Comparing Cardiovascular Safety of Insulin Degludec Versus Insulin Glargine in Subjects With Type 2 Diabetes at High Risk of Cardiovascular Events (DEVOTE) and the Liraglutide Effect and Action in Diabetes: Evaluation of Cardiovascular Outcome Results (LEADER) trial, there was an increased risk of either serious cardiovascular events or all-cause mortality starting 15 days and extending (albeit with decreasing risk) up to 1 year after severe hypoglycemia (13,14).”

June 15, 2019 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Nephrology, Neurology, Ophthalmology, Studies | Leave a comment

Random stuff

i. Your Care Home in 120 Seconds. Some quotes:

“In order to get an overall estimate of mental power, psychologists have chosen a series of tasks to represent some of the basic elements of problem solving. The selection is based on looking at the sorts of problems people have to solve in everyday life, with particular attention to learning at school and then taking up occupations with varying intellectual demands. Those tasks vary somewhat, though they have a core in common.

Most tests include Vocabulary, examples: either asking for the definition of words of increasing rarity; or the names of pictured objects or activities; or the synonyms or antonyms of words.

Most tests include Reasoning, examples: either determining which pattern best completes the missing cell in a matrix (like Raven’s Matrices); or putting in the word which completes a sequence; or finding the odd word out in a series.

Most tests include visualization of shapes, examples: determining the correspondence between a 3-D figure and alternative 2-D figures; determining the pattern of holes that would result from a sequence of folds and a punch through folded paper; determining which combinations of shapes are needed to fill a larger shape.

Most tests include episodic memory, examples: number of idea units recalled across two or three stories; number of words recalled from across 1 to 4 trials of a repeated word list; number of words recalled when presented with a stimulus term in a paired-associate learning task.

Most tests include a rather simple set of basic tasks called Processing Skills. They are rather humdrum activities, like checking for errors, applying simple codes, and checking for similarities or differences in word strings or line patterns. They may seem low grade, but they are necessary when we try to organise ourselves to carry out planned activities. They tend to decline with age, leading to patchy, unreliable performance, and a tendency to muddled and even harmful errors. […]

A brain scan, for all its apparent precision, is not a direct measure of actual performance. Currently, scans are not as accurate in predicting behaviour as is a simple test of behaviour. This is a simple but crucial point: so long as you are willing to conduct actual tests, you can get a good understanding of a person’s capacities even on a very brief examination of their performance. […] There are several tests which have the benefit of being quick to administer and powerful in their predictions. […] All these tests are good at picking up illness related cognitive changes, as in diabetes. (Intelligence testing is rarely criticized when used in medical settings). Delayed memory and working memory are both affected during diabetic crises. Digit Symbol is reduced during hypoglycaemia, as are Digits Backwards. Digit Symbol is very good at showing general cognitive changes from age 70 to 76. Again, although this is a limited time period in the elderly, the decline in speed is a notable feature. […]

The most robust and consistent predictor of cognitive change within old age, even after control for all the other variables, was the presence of the APOE e4 allele. APOE e4 carriers showed over half a standard deviation more general cognitive decline compared to noncarriers, with particularly pronounced decline in their Speed and numerically smaller, but still significant, declines in their verbal memory.

It is rare to have a big effect from one gene. Few people carry it, and it is not good to have.

ii. What are common mistakes junior data scientists make?

Apparently the OP had second thoughts about this query, so s/he deleted the question and marked the thread nsfw (??? …nothing remotely nsfw in that thread…). Fortunately the replies are all still there, and there are quite a few good responses in the thread. I added some examples below:

“I think underestimating the domain/business side of things and focusing too much on tools and methodology. As a fairly new data scientist myself, I found myself humbled during this one project where I had spent a lot of time tweaking parameters and making sure the numbers worked just right. After going into a meeting about it, it became clear pretty quickly that my little micro-optimizations were hardly important, and instead there were X Y Z big picture considerations I was missing in my analysis.”

[…]

  • Forgetting to check how actionable the model (or features) are. It doesn’t matter if you have an amazing model for cancer prediction, if it’s based on features from tests performed as part of the post-mortem. Similarly, predicting account fraud after the money has been transferred is not going to be very useful.

  • Emphasis on lack of understanding of the business/domain.

  • Lack of communication and presentation of the impact. If improving your model (which is a quarter of the overall pipeline) by 10% in reducing customer churn is worth just ~100K a year, then it may not be worth putting into production in a large company.

  • Underestimating how hard it is to productionize models. This includes acting on the model’s outputs; it’s not just “run model, get score out per sample”.

  • Forgetting about model and feature decay over time, concept drift.

  • Underestimating the amount of time for data cleaning.

  • Thinking that data cleaning errors will be complicated.

  • Thinking that data cleaning will be simple to automate.

  • Thinking that automation is always better than heuristics from domain experts.

  • Focusing on modelling at the expense of [everything] else”

“unhealthy attachments to tools. It really doesn’t matter if you use R, Python, SAS or Excel, did you solve the problem?”

“Starting with actual modelling way too soon: you’ll end up with a model that’s really good at answering the wrong question.
First, make sure that you’re trying to answer the right question, with the right considerations. This is typically not what the client initially told you. It’s (mainly) a data scientist’s job to help the client with formulating the right question.”

iii. Some random wikipedia links: Ottoman–Habsburg wars. Planetshine. Anticipation (genetics). Cloze test. Loop quantum gravity. Implicature. Starfish Prime. Stall (fluid dynamics). White Australia policy. Apostatic selection. Deimatic behaviour. Anti-predator adaptation. Lefschetz fixed-point theorem. Hairy ball theorem. Macedonia naming dispute. Holevo’s theorem. Holmström’s theorem. Sparse matrix. Binary search algorithm. Battle of the Bismarck Sea.

iv. 5-HTTLPR: A Pointed Review. This one is hard to quote; you should read all of it. I did however decide to add a few quotes from the post, as well as a few quotes from the comments:

“…what bothers me isn’t just that people said 5-HTTLPR mattered and it didn’t. It’s that we built whole imaginary edifices, whole castles in the air on top of this idea of 5-HTTLPR mattering. We “figured out” how 5-HTTLPR exerted its effects, what parts of the brain it was active in, what sorts of things it interacted with, how its effects were enhanced or suppressed by the effects of other imaginary depression genes. This isn’t just an explorer coming back from the Orient and claiming there are unicorns there. It’s the explorer describing the life cycle of unicorns, what unicorns eat, all the different subspecies of unicorn, which cuts of unicorn meat are tastiest, and a blow-by-blow account of a wrestling match between unicorns and Bigfoot.

This is why I start worrying when people talk about how maybe the replication crisis is overblown because sometimes experiments will go differently in different contexts. The problem isn’t just that sometimes an effect exists in a cold room but not in a hot room. The problem is more like “you can get an entire field with hundreds of studies analyzing the behavior of something that doesn’t exist”. There is no amount of context-sensitivity that can help this. […] The problem is that the studies came out positive when they shouldn’t have. This was a perfectly fine thing to study before we understood genetics well, but the whole point of studying is that, once you have done 450 studies on something, you should end up with more knowledge than you started with. In this case we ended up with less. […] I think we should take a second to remember that yes, this is really bad. That this is a rare case where methodological improvements allowed a conclusive test of a popular hypothesis, and it failed badly. How many other cases like this are there, where there’s no geneticist with a 600,000 person sample size to check if it’s true or not? How many of our scientific edifices are built on air? How many useless products are out there under the guise of good science? We still don’t know.”

A few more quotes from the comment section of the post:

“most things that are obviously advantageous or deleterious in a major way aren’t gonna hover at 10%/50%/70% allele frequency.

Population variance where they claim some gene found in > [non trivial]% of the population does something big… I’ll mostly tend to roll to disbelieve.

But if someone claims a family/village with a load of weirdly depressed people (or almost any other disorder affecting anything related to the human condition in any horrifying way you can imagine) are depressed because of a genetic quirk… believable but still make sure they’ve confirmed it segregates with the condition or they’ve got decent backing.

And a large fraction of people have some kind of rare disorder […]. Long tail. Lots of disorders so quite a lot of people with something odd.

It’s not that single variants can’t have a big effect. It’s that really big effects either win and spread to everyone or lose and end up carried by a tiny minority of families where it hasn’t had time to die out yet.

Very few variants with big effect sizes are going to be half way through that process at any given time.

Exceptions are

1: mutations that confer resistance to some disease as a tradeoff for something else […] 2: Genes that confer a big advantage against something that’s only a very recent issue.”

“I think the summary could be something like:
A single gene determining 50% of the variance in any complex trait is inherently atypical, because variance depends on the population plus environment and the selection for such a gene would be strong, rapidly reducing that variance.
However, if the environment has recently changed or is highly variable, or there is a trade-off against adverse effects it is more likely.
Furthermore – if the test population is specifically engineered to target an observed trait following an apparently Mendelian inheritance pattern – such as a family group or a small genetically isolated population plus controls – 50% of the variance could easily be due to a single gene.”
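The population-genetics intuition in those comments, namely that an allele with a big fitness effect either sweeps to fixation or is eliminated quickly and so spends very little time at the intermediate frequencies where association studies would find it, is easy to illustrate with a toy model. The sketch below is a deterministic one-locus, two-allele selection model with made-up selection coefficients; it ignores drift, mutation, and everything else, and is only meant to show how the window of time spent at intermediate frequency shrinks as the effect size grows.

```python
# Deterministic one-locus selection (random mating, no drift):
# fitnesses AA = 1+s, Aa = 1+h*s, aa = 1.
def allele_trajectory(p0, s, h=0.5, generations=20_000):
    """Frequency of allele A in each generation under directional selection."""
    p, traj = p0, []
    for _ in range(generations):
        q = 1.0 - p
        w_bar = p * p * (1 + s) + 2 * p * q * (1 + h * s) + q * q
        p = (p * p * (1 + s) + p * q * (1 + h * s)) / w_bar
        traj.append(p)
    return traj

def generations_at_intermediate(traj, lo=0.10, hi=0.70):
    """How many generations the allele spends between 10% and 70% frequency."""
    return sum(lo < p < hi for p in traj)

big_effect = allele_trajectory(p0=0.01, s=0.10)     # strong selection
small_effect = allele_trajectory(p0=0.01, s=0.01)   # ten times weaker

print("big effect, generations at intermediate frequency:  ",
      generations_at_intermediate(big_effect))
print("small effect, generations at intermediate frequency:",
      generations_at_intermediate(small_effect))
```

The bigger the fitness effect, the narrower the window during which the allele segregates at the tens-of-per-cent frequencies the quoted comments are talking about; with genuinely tiny effects, drift takes over and a variant can linger at intermediate frequency for a very long time.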

v. Less research is needed.

“The most over-used and under-analyzed statement in the academic vocabulary is surely “more research is needed”. These four words, occasionally justified when they appear as the last sentence in a Masters dissertation, are as often to be found as the coda for a mega-trial that consumed the lion’s share of a national research budget, or that of a Cochrane review which began with dozens or even hundreds of primary studies and progressively excluded most of them on the grounds that they were “methodologically flawed”. Yet however large the trial or however comprehensive the review, the answer always seems to lie just around the next empirical corner.

With due respect to all those who have used “more research is needed” to sum up months or years of their own work on a topic, this ultimate academic cliché is usually an indicator that serious scholarly thinking on the topic has ceased. It is almost never the only logical conclusion that can be drawn from a set of negative, ambiguous, incomplete or contradictory data.” […]

“Here is a quote from a typical genome-wide association study:

“Genome-wide association (GWA) studies on coronary artery disease (CAD) have been very successful, identifying a total of 32 susceptibility loci so far. Although these loci have provided valuable insights into the etiology of CAD, their cumulative effect explains surprisingly little of the total CAD heritability.”  [1]

The authors conclude that not only is more research needed into the genomic loci putatively linked to coronary artery disease, but that – precisely because the model they developed was so weak – further sets of variables (“genetic, epigenetic, transcriptomic, proteomic, metabolic and intermediate outcome variables”) should be added to it. By adding in more and more sets of variables, the authors suggest, we will progressively and substantially reduce the uncertainty about the multiple and complex gene-environment interactions that lead to coronary artery disease. […] We predict tomorrow’s weather, more or less accurately, by measuring dynamic trends in today’s air temperature, wind speed, humidity, barometric pressure and a host of other meteorological variables. But when we try to predict what the weather will be next month, the accuracy of our prediction falls to little better than random. Perhaps we should spend huge sums of money on a more sophisticated weather-prediction model, incorporating the tides on the seas of Mars and the flutter of butterflies’ wings? Of course we shouldn’t. Not only would such a hyper-inclusive model fail to improve the accuracy of our predictive modeling, there are good statistical and operational reasons why it could well make it less accurate.”

vi. Why software projects take longer than you think – a statistical model.

“Anyone who built software for a while knows that estimating how long something is going to take is hard. It’s hard to come up with an unbiased estimate of how long something will take, when fundamentally the work in itself is about solving something. One pet theory I’ve had for a really long time, is that some of this is really just a statistical artifact.

Let’s say you estimate a project to take 1 week. Let’s say there are three equally likely outcomes: either it takes 1/2 week, or 1 week, or 2 weeks. The median outcome is actually the same as the estimate: 1 week, but the mean (aka average, aka expected value) is 7/6 = 1.17 weeks. The estimate is actually calibrated (unbiased) for the median (which is 1), but not for the mean.

A reasonable model for the “blowup factor” (actual time divided by estimated time) would be something like a log-normal distribution. If the estimate is one week, then let’s model the real outcome as a random variable distributed according to the log-normal distribution around one week. This has the property that the median of the distribution is exactly one week, but the mean is much larger […] Intuitively the reason the mean is so large is that tasks that complete faster than estimated have no way to compensate for the tasks that take much longer than estimated. We’re bounded by 0, but unbounded in the other direction.”

I like this way to conceptually frame the problem, and I definitely do not think it only applies to software development.
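The log-normal model is also easy to play with numerically. Here's a minimal simulation of the blowup-factor idea; the sigma value below is an arbitrary choice for illustration, not something estimated from real project data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Blowup factor = actual time / estimated time, modelled as log-normal with
# median 1 (the log has mean 0); sigma captures how uncertain the task is.
sigma = 1.0  # arbitrary illustration value
blowup = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)

print("median blowup factor:", np.median(blowup))  # ~1: the estimate nails the median
print("mean blowup factor:  ", np.mean(blowup))    # ~exp(sigma**2/2), about 1.65: larger

# A 'project' of several such one-week tasks: the total is dominated by the few
# tasks that blow up the most, so it overshoots the sum of the per-task medians.
n_tasks = 20
project_totals = rng.lognormal(0.0, sigma, size=(100_000, n_tasks)).sum(axis=1)
print("median project total (weeks):", np.median(project_totals))
print("sum of per-task medians:     ", float(n_tasks))
```

This is the same qualitative pattern as in the dataset discussed in the quotes below: the median blowup factor is about 1 while the mean is substantially higher.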

“I filed this in my brain under “curious toy models” for a long time, occasionally thinking that it’s a neat illustration of a real world phenomenon I’ve observed. But surfing around on the interwebs one day, I encountered an interesting dataset of project estimation and actual times. Fantastic! […] The median blowup factor turns out to be exactly 1x for this dataset, whereas the mean blowup factor is 1.81x. Again, this confirms the hunch that developers estimate the median well, but the mean ends up being much higher. […]

If my model is right (a big if) then here’s what we can learn:

  • People estimate the median completion time well, but not the mean.
  • The mean turns out to be substantially worse than the median, due to the distribution being skewed (log-normally).
  • When you add up the estimates for n tasks, things get even worse.
  • Tasks with the most uncertainty (rather than the biggest size) can often dominate the mean time it takes to complete all tasks.”

vii. Attraction inequality and the dating economy.

“…the relentless focus on inequality among politicians is usually quite narrow: they tend to consider inequality only in monetary terms, and to treat “inequality” as basically synonymous with “income inequality.” There are so many other types of inequality that get air time less often or not at all: inequality of talent, height, number of friends, longevity, inner peace, health, charm, gumption, intelligence, and fortitude. And finally, there is a type of inequality that everyone thinks about occasionally and that young single people obsess over almost constantly: inequality of sexual attractiveness. […] One of the useful tools that economists use to study inequality is the Gini coefficient. This is simply a number between zero and one that is meant to represent the degree of income inequality in any given nation or group. An egalitarian group in which each individual has the same income would have a Gini coefficient of zero, while an unequal group in which one individual had all the income and the rest had none would have a Gini coefficient close to one. […] Some enterprising data nerds have taken on the challenge of estimating Gini coefficients for the dating “economy.” […] The Gini coefficient for [heterosexual] men collectively is determined by [-ll-] women’s collective preferences, and vice versa. If women all find every man equally attractive, the male dating economy will have a Gini coefficient of zero. If men all find the same one woman attractive and consider all other women unattractive, the female dating economy will have a Gini coefficient close to one.”
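For concreteness, here is one standard way to compute a Gini coefficient from a vector of 'incomes' (or, in the dating-app setting, likes received). The arrays below are toy examples meant only to illustrate the two extreme cases described in the quote; they have nothing to do with the Hinge data discussed next.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative sample (0 = perfect equality)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    # Standard rank-based formula: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

print(gini([10, 10, 10, 10]))   # everyone equal -> 0.0
print(gini([0, 0, 0, 100]))     # one person has everything -> (n-1)/n = 0.75 here,
                                # approaching 1 as the group gets larger
print(gini(np.random.default_rng(0).pareto(2.0, 10_000)))  # a right-skewed toy distribution
```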

“A data scientist representing the popular dating app “Hinge” reported on the Gini coefficients he had found in his company’s abundant data, treating “likes” as the equivalent of income. He reported that heterosexual females faced a Gini coefficient of 0.324, while heterosexual males faced a much higher Gini coefficient of 0.542. So neither sex has complete equality: in both cases, there are some “wealthy” people with access to more romantic experiences and some “poor” who have access to few or none. But while the situation for women is something like an economy with some poor, some middle class, and some millionaires, the situation for men is closer to a world with a small number of super-billionaires surrounded by huge masses who possess almost nothing. According to the Hinge analyst:

On a list of 149 countries’ Gini indices provided by the CIA World Factbook, this would place the female dating economy as 75th most unequal (average—think Western Europe) and the male dating economy as the 8th most unequal (kleptocracy, apartheid, perpetual civil war—think South Africa).”

Btw., I’m reasonably certain “Western Europe” as most people think of it is not average in terms of Gini, and that half-way down the list should rather be represented by some other region or country type, like, say Mongolia or Bulgaria. A brief look at Gini lists seemed to support this impression.

Quartz reported on this finding, and also cited another article about an experiment with Tinder that claimed that “the bottom 80% of men (in terms of attractiveness) are competing for the bottom 22% of women and the top 78% of women are competing for the top 20% of men.” These studies examined “likes” and “swipes” on Hinge and Tinder, respectively, which are required if there is to be any contact (via messages) between prospective matches. […] Yet another study, run by OkCupid on their huge datasets, found that women rate 80 percent of men as “worse-looking than medium,” and that this 80 percent “below-average” block received replies to messages only about 30 percent of the time or less. By contrast, men rate women as worse-looking than medium only about 50 percent of the time, and this 50 percent below-average block received message replies closer to 40 percent of the time or higher.

If these findings are to be believed, the great majority of women are only willing to communicate romantically with a small minority of men while most men are willing to communicate romantically with most women. […] It seems hard to avoid a basic conclusion: that the majority of women find the majority of men unattractive and not worth engaging with romantically, while the reverse is not true. Stated in another way, it seems that men collectively create a “dating economy” for women with relatively low inequality, while women collectively create a “dating economy” for men with very high inequality.”

I think the author goes a bit off the rails later in the post, but the data is interesting. It’s however important to keep in mind in contexts like these that sexual selection pressures apply at multiple levels, not just one, and that partner preferences can be non-trivial to model satisfactorily; for example, as many women have learned the hard way, males may have very different standards for whom to a) ‘engage with romantically’ and b) ‘consider a long-term partner’.

viii. Flipping the Metabolic Switch: Understanding and Applying Health Benefits of Fasting.

“Intermittent fasting (IF) is a term used to describe a variety of eating patterns in which no or few calories are consumed for time periods that can range from 12 hours to several days, on a recurring basis. Here we focus on the physiological responses of major organ systems, including the musculoskeletal system, to the onset of the metabolic switch – the point of negative energy balance at which liver glycogen stores are depleted and fatty acids are mobilized (typically beyond 12 hours after cessation of food intake). Emerging findings suggest the metabolic switch from glucose to fatty acid-derived ketones represents an evolutionarily conserved trigger point that shifts metabolism from lipid/cholesterol synthesis and fat storage to mobilization of fat through fatty acid oxidation and fatty-acid derived ketones, which serve to preserve muscle mass and function. Thus, IF regimens that induce the metabolic switch have the potential to improve body composition in overweight individuals. […] many experts have suggested IF regimens may have potential in the treatment of obesity and related metabolic conditions, including metabolic syndrome and type 2 diabetes.”

“In most studies, IF regimens have been shown to reduce overall fat mass and visceral fat, both of which have been linked to increased diabetes risk. IF regimens ranging in duration from 8 to 24 weeks have consistently been found to decrease insulin resistance. In line with this, many, but not all, large-scale observational studies have also shown a reduced risk of diabetes in participants following an IF eating pattern.”

“…we suggest that future randomized controlled IF trials should use biomarkers of the metabolic switch (e.g., plasma ketone levels) as a measure of compliance and the magnitude of negative energy balance during the fasting period. It is critical for this switch to occur in order to shift metabolism from lipidogenesis (fat storage) to fat mobilization for energy through fatty acid β-oxidation. […] As the health benefits and therapeutic efficacies of IF in different disease conditions emerge from RCTs, it is important to understand the current barriers to widespread use of IF by the medical and nutrition community and to develop strategies for broad implementation. One argument against IF is that, despite the plethora of animal data, some human studies have failed to show such significant benefits of IF over CR [Calorie Restriction]. Adherence to fasting interventions has been variable, some short-term studies have reported over 90% adherence, whereas in a one year ADMF study the dropout rate was 38% vs 29% in the standard caloric restriction group.”

ix. Self-repairing cells: How single cells heal membrane ruptures and restore lost structures.

June 2, 2019 Posted by | Astronomy, Biology, Data, Diabetes, Economics, Evolutionary biology, Genetics, Geography, History, Mathematics, Medicine, Physics, Psychology, Statistics, Wikipedia | Leave a comment

Oncology (I)

I really disliked the ‘Pocket…’ part of this book, so I’ll sort of pretend to overlook this aspect also in my coverage of the book here. This’ll be a hard thing to do, given the way the book is written – I refer to my goodreads review for details; I’ll include only one illustrative quote from that review here:

“In terms of content, the book probably compares favourably with many significantly longer oncology texts (mainly, but certainly not only, because of the publication date). In terms of readability it compares unfavourably to an Egyptian translation of Alan Sokal’s 1996 article in Social Text, if it were translated by a 12-year old dyslexic girl.”

I don’t yet know in how much detail I’ll blog the book; this may end up being the only post about the book, or I may decide to post a longer sequence of posts. The book is hard to blog, which is an argument against covering it in detail – and also the reason why I haven’t already blogged it – but some of the content included in the book is really, really nice stuff to know, which is a strong argument in favour of covering at least some of the material here. The book has a lot of stuff, so regardless of the level of detail of my future coverage a lot of interesting stuff will of necessity have been left out.

My coverage below includes some observations and links related to the first 100 pages of the book.

“Understanding Radiation Response: The 4 Rs of Radiobiology
Repair of sublethal damage
Reassortment of cells w/in the cell cycle
Repopulation of cells during the course of radiotherapy
Reoxygenation of hypoxic cells […]

*Oxygen enhances DNA damage induced by free radicals, thereby facilitating the indirect action of IR [ionizing radiation, US] *Biologically equivalent dose can vary by a factor of 2–3 depending upon the presence or absence of oxygen (referred to as the oxygen enhancement ratio) *Poorly oxygenated postoperative beds frequently require higher doses of RT than preoperative RT [radiation therapy] […] Chemotherapy is frequently used sequentially or concurrently w/radiotherapy to maximize therapeutic benefit. This has improved pt outcomes although also a/w ↑ overall tox. […] [Many chemotherapeutic agents] show significant synergy with RT […] Mechanisms for synergy vary widely: Include cell cycle effects, hypoxic cell sensitization, & modulation of the DNA damage response”.

“Specific dose–volume relationships have been linked to the risk of late organ tox. […] *Dose, volume, underlying genetics, and age of the pt at the time of RT are critical determinants of the risk for 2° malignancy *The likelihood of 2° CA is correlated w/dose, but there is no threshold dose below which there is no additional risk of 2° malignancy *Latent period for radiation-induced solid tumors is generally between 10 and 60 y […]. Latent period for leukemias […] is shorter — peak between 5 & 7 y.”

“The immune system plays an important role in CA surveillance; Rx’s that modulate & amplify the immune system are referred to as immunotherapies […] tumors escape the immune system via loss of molecules on tumor cells important for immune activation […]; tumors can secrete immunosuppressing cytokines (IL-10 & TGF-β) & downregulate IFN-γ; in addition, tumors often express nonmutated self-Ag, w/c the immune system will, by definition, not react against; tumors can express molecules that inhibit T-cell function […] Ubiquitous CD47 (Don’t eat me signal) with ↑ expression on tumor cells mediates escape from phagocytosis. *Tumor microenvironment — immune cells are found in tumors, the exact composition of these cells has been a/w [associated with, US] pt outcomes; eg, high concentration of tumor-infiltrating lymphocytes (CD8+ cells) are a/w better outcomes & ↑ response to chemotherapy, Tregs & myeloid-derived suppressor cells are a/w worse outcomes, the exact role of Th17 in tumors is still being elucidated; the milieu of cytokines & chemokines also plays a role in outcome; some cytokines (VEGF, IL-1, IL-8) lead to endothelial cell proliferation, migration, & activation […] Expression of PD-L1 in tumor microenvironment can be indicator of improved likelihood of response to immune checkpoint blockade. […] Tumor mutational load correlates w/increased response to immunotherapy (NEJM; 2014;371:2189.).”

“Over 200 hereditary CA susceptibility syndromes, most are rare […]. Inherited CAs arise from highly penetrant germline mts [mutations, US]; “familial” CAs may be caused by interaction of low-penetrance genes, gene–environment interactions, or both. […] Genetic testing should be done based on individual’s probability of being a mt carrier & after careful discussion & informed consent”.

“Pharmacogenetics: Effect of heritable genes on response to drugs. Study of single genes & interindividual differences in drug metabolizing enzymes. Pharmacogenomics: Effect of inherited & acquired genetic variation on drug response. Study of the functions & interactions of all genes in the genome & how the overall variability of drug response may be used to predict the right tx in individual pts & to design new drugs. Polymorphisms: Common variations in a DNA sequence that may lead to ↓ or ↑ activity of the encoded gene (SNP, micro- & minisatellites). SNPs: Single nucleotide polymorphisms that may cause an amino acid exchange in the encoded protein, account for >90% of genetic variation in the human genome.”

“Tumor lysis syndrome [TLS] is an oncologic emergency caused by electrolyte abnormalities a/w spontaneous and/or tx-induced cell death that can be potentially fatal. […] 4 key electrolyte abnormalities 2° to excessive tumor/cell lysis: *Hyperkalemia *Hyperphosphatemia *Hypocalcemia *Hyperuricemia (2° to catabolism of nucleic acids) […] Common Malignancies Associated with a High Risk of Developing TLS in Adult Patients [include] *Acute leukemias [and] *High-grade lymphomas such as Burkitt lymphoma & DLBCL […] [Disease] characteristics a/w TLS risk: Rapidly progressive, chemosensitive, myelo- or lymphoproliferative [disease] […] [Patient] characteristics a/w TLS risk: *Baseline impaired renal function, oliguria, exposure to nephrotoxins, hyperuricemia *Volume depletion/inadequate hydration, acidic urine”.

“Hypercalcemia [affects] ~10–30% of all pts w/malignancy […] Symptoms: Polyuria/polydipsia, intravascular volume depletion, AKI, lethargy, AMS [Altered Mental Status, US], rarely coma/seizures; N/V [nausea/vomiting, US] […] Osteolytic Bone Lesions [are seen in] ~20% of all hyperCa of malignancy […] [Treat] underlying malignancy, only way to effectively treat, all other tx are temporizing”.

“National Consensus Project definition: Palliative care means patient and family-centered care that optimizes quality of life by anticipating, preventing, and treating suffering. Palliative care throughout the continuum of illness involves addressing physical, intellectual, emotional, social, and spiritual needs to facilitate patient autonomy, access to information, and choice.” […] *Several RCTs have supported the integration of palliative care w/oncologic care, but specific interventions & models of care have varied. Expert panels at NCCN & ASCO recently reviewed the data to release evidence-based guidelines. *NCCN guidelines (2016): “Palliative care should be initiated by the primary oncology team and then augmented by collaboration with an interdisciplinary team of palliative care experts… All cancer patients should be screened for palliative care needs at their initial visit, at appropriate intervals, and as clinically indicated.” *ASCO guideline update (2016): “Inpatients and outpatients with advanced cancer should receive dedicated palliative care services, early in the disease course, concurrent with active tx. Referral of patients to interdisciplinary palliative care teams is optimal […] Essential Components of Palliative Care (ASCO) *Rapport & relationship building w/pts & family caregivers *Symptom, distress, & functional status mgmt (eg, pain, dyspnea, fatigue, sleep disturbance, mood, nausea, or constipation) *Exploration of understanding & education about illness & prognosis *Clarification of tx goals *Assessment & support of coping needs (eg, provision of dignity therapy) *Assistance w/medical decision making *Coordination w/other care providers *Provision of referrals to other care providers as indicated […] Useful Communication Tips *Use open-ended questions to elicit pt concerns *Clarify how much information the pt would like to know […] Focus on what can be done (not just what can’t be done) […] Remove the phrase “do everything” from your medical vocabulary […] Redefine hope by supporting realistic & achievable goals […] make empathy explicit”.

Some links:

Radiation therapy.
Brachytherapy.
External beam radiotherapy.
Image-guided radiation therapy.
Stereotactic Radiosurgery.
Total body irradiation.
Cancer stem cell.
Cell cycle.
Carcinogenesis. Oncogene. Tumor suppressor gene. Principles of Cancer Therapy: Oncogene and Non-oncogene Addiction.
Cowden syndrome. Peutz–Jeghers syndrome. Familial Atypical Multiple Mole Melanoma Syndrome. Li–Fraumeni syndrome. Lynch syndrome. Turcot syndrome. Muir–Torre syndrome. Von Hippel–Lindau disease. Gorlin syndrome. Werner syndrome. Birt–Hogg–Dubé syndrome. Neurofibromatosis type I. -ll- type 2.
Knudson hypothesis.
DNA sequencing.
Cytogenetics.
Fluorescence in situ hybridization.
CAR T Cell therapy.
Antimetabolite. Alkylating antineoplastic agent. Antimicrotubule agents/mitotic inhibitors. Chemotherapeutic agents. Topoisomerase inhibitor. Monoclonal antibodies. Bisphosphonates. Proteasome inhibitors. [The book covers all of these agents, and others I for one reason or another decided not to include, in great detail, listing many different types of agents and including notes on dosing, pharmacokinetics & pharmacodynamics, associated adverse events and drug interactions etc. These parts of the book were very interesting, but they are impossible to blog – US].
Syndrome of inappropriate antidiuretic hormone secretion.
Acute lactic acidosis (“Often seen w/liver mets or rapidly dividing heme malignancies […] High mortality despite aggressive tx [treatment]”).
Superior vena cava syndrome.

October 12, 2018 Posted by | Biology, Books, Cancer/oncology, Genetics, Immunology, Medicine, Pharmacology | Leave a comment

Circadian Rhythms (II)

Below I have added some more observations from the book, as well as some links of interest.

“Most circadian clocks make use of a sun-based mechanism as the primary synchronizing (entraining) signal to lock the internal day to the astronomical day. For the better part of four billion years, dawn and dusk has been the main zeitgeber that allows entrainment. Circadian clocks are not exactly 24 hours. So to prevent daily patterns of activity and rest from drifting (freerunning) over time, light acts rather like the winder on a mechanical watch. If the clock is a few minutes fast or slow, turning the winder sets the clock back to the correct time. Although light is the critical zeitgeber for much behaviour, and provides the overarching time signal for the circadian system of most organisms, it is important to stress that many, if not all cells within an organism possess the capacity to generate a circadian rhythm, and that these independent oscillators are regulated by a variety of different signals which, in turn, drive countless outputs […]. Colin Pittendrigh was one of the first to study entrainment, and what he found in Drosophila has been shown to be true across all organisms, including us. For example, if you keep Drosophila, or a mouse or bird, in constant darkness it will freerun. If you then expose the animal to a short pulse of light at different times the shifting (phase shifting) effects on the freerunning rhythm vary. Light pulses given when the clock ‘thinks’ it is daytime (subjective day) will have little effect on the clock. However, light falling during the first half of the subjective night causes the animal to delay the start of its activity the following day, while light exposure during the second half of the subjective night advances activity onset. Pittendrigh called this the ‘phase response curve’ […] Remarkably, the PRC of all organisms looks very similar, with light exposure around dusk and during the first half of the night causing a delay in activity the next day, while light during the second half of the night and around dawn generates an advance. The precise shape of the PRC varies between species. Some have large delays and small advances (typical of nocturnal species) while others have small delays and big advances (typical of diurnal species). Light at dawn and dusk pushes and pulls the freerunning rhythm towards an exactly 24-hour cycle. […] Light can act directly to modify behaviour. In nocturnal rodents such as mice, light encourages these animals to seek shelter, reduce activity, and even sleep, while in diurnal species light promotes alertness and vigilance. So circadian patterns of activity are not only entrained by dawn and dusk but also driven directly by light itself. This direct effect of light on activity has been called ‘masking’, and combines with the predictive action of the circadian system to restrict activity to that period of the light/dark cycle to which the organism has evolved and is optimally adapted.”

“[B]irds, reptiles, amphibians, and fish (but not mammals) have ‘extra-ocular’ photoreceptors located within the pineal complex, hypothalamus, and other areas of the brain, and like the invertebrates, eye loss in many cases has little impact upon the ability of these animals to entrain. […] Mammals are strikingly different from all other vertebrates as they possess photoreceptor cells only within their eyes. Eye loss in all groups of mammals […] abolishes the capacity of these animals to entrain their circadian rhythms to the light/dark cycle. But astonishingly, the visual cells of the retina – the rods and cones – are not required for the detection of the dawn/dusk signal. There exists a third class of photoreceptors within the eye […] Studies in the late 1990s by Russell Foster and his colleagues showed that mice lacking all their rod and cone photoreceptors could still regulate their circadian rhythms to light perfectly normally. But when their eyes were covered the ability to entrain was lost […] work on the rodless/coneless mouse, along with [other] studies […], clearly demonstrated that the mammalian retina contains a small population of photosensitive retinal ganglion cells or pRGCs, which comprise approximately 1-2 per cent of all retinal ganglion cells […] Ophthalmologists now appreciate that eye loss deprives us of both vision and a proper sense of time. Furthermore, genetic diseases that result in the loss of the rods and cones and cause visual blindness, often spare the pRGCs. Under these circumstances, individuals who have their eyes but are visually blind, yet possess functional pRGCs, need to be advised to seek out sufficient light to entrain their circadian system. The realization that the eye provides us with both our sense of space and our sense of time has redefined the diagnosis, treatment, and appreciation of human blindness.”

“But where is ‘the’ circadian clock of mammals? […] [Robert] Moore and [Irving] Zucker’s work pinpointed the SCN as the likely neural locus of the light-entrainable circadian pacemaker in mammals […] and a decade later this was confirmed by definitive experiments from Michael Menaker’s laboratory undertaken at the University of Virginia. […] These experiments established the SCN as the ‘master circadian pacemaker’ of mammals. […] There are around 20,000 or so neurons in the mouse SCN, but they are not identical. Some receive light information from the pRGCs and pass this information on to other SCN neurons, while others project to the thalamus and other regions of the brain, and collectively these neurons secrete more than one hundred different neurotransmitters, neuropeptides, cytokines, and growth factors. The SCN itself is composed of several regions or clusters of neurons, which have different jobs. Furthermore, there is considerable variability in the oscillations of the individual cells, ranging from 21.25 to 26.25 hours. Although the individual cells in the SCN have their own clockwork mechanisms with varying periods, the cell autonomous oscillations in neural activity are synchronized at the system level within the SCN, providing a coherent near 24-hour signal to the rest of the mammal. […] SCN neurons exhibit a circadian rhythm of spontaneous action potentials (SAPs), with higher frequency during the daytime than the night which in turn drives many rhythmic changes by alternating stimulatory and inhibitory inputs to the appropriate target neurons in the brain and neuroendocrine systems. […] The SCN projects directly to thirty-five brain regions, mostly located in the hypothalamus, and particularly those regions of the hypothalamus that regulate hormone release. Indeed, many pituitary hormones, such as cortisol, are under tight circadian control. Furthermore, the SCN regulates the activity of the autonomous nervous system, which in turn places multiple aspects of physiology, including the sensitivity of target tissues to hormonal signals, under circadian control. In addition to these direct neuronal connections, the SCN communicates to the rest of the body using diffusible chemical signals.”
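The idea that roughly 20,000 'sloppy' cellular oscillators, with intrinsic periods anywhere between about 21 and 26 hours, can be pulled into a single coherent near-24-hour signal once they are coupled is nicely illustrated by a toy phase-oscillator (Kuramoto-type) model. The sketch below is not a model of SCN physiology; the coupling constant, the uniform period distribution, and the simple mean-field coupling are all arbitrary choices made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500                                   # toy 'SCN neurons'
periods = rng.uniform(21.25, 26.25, n)    # intrinsic periods in hours
omega = 2 * np.pi / periods               # intrinsic angular frequencies (rad/h)
theta = rng.uniform(0, 2 * np.pi, n)      # random initial phases

K, dt, days = 0.3, 0.1, 30                # coupling strength, time step (h), simulated days
for _ in range(int(days * 24 / dt)):
    z = np.mean(np.exp(1j * theta))       # complex mean field of the population
    r, psi = np.abs(z), np.angle(z)
    theta += dt * (omega + K * r * np.sin(psi - theta))

r_final = np.abs(np.mean(np.exp(1j * theta)))
print(f"order parameter r = {r_final:.2f}   (r near 1 means the phases are synchronized)")
print(f"period of the synchronized ensemble ~ {2 * np.pi / np.mean(omega):.1f} h")
```

With the coupling switched off (K = 0) the same population drifts apart and r collapses towards zero, which is roughly the fate of the uncoupled cellular clocks described in the next quoted paragraph.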

“The SCN is the master clock in mammals but it is not the only clock. There are liver clocks, muscle clocks, pancreas clocks, adipose tissue clocks, and clocks of some sort in every organ and tissue examined to date. While lesioning of the SCN disrupts global behavioural rhythms such as locomotor activity, the disruption of clock function within just the liver or lung leads to circadian disorder that is confined to the target organ. In tissue culture, liver, heart, lung, skeletal muscle, and other organ tissues such as mammary glands express circadian rhythms, but these rhythms dampen and disappear after only a few cycles. This occurs because some individual clock cells lose rhythmicity, but more commonly because the individual cellular clocks become uncoupled from each other. The cells continue to tick, but all at different phases so that an overall 24-hour rhythm within the tissue or organ is lost. The discovery that virtually all cells of the body have clocks was one of the big surprises in circadian rhythms research. […] the SCN, entrained by pRGCs, acts as a pacemaker to coordinate, but not drive, the circadian activity of billions of individual peripheral circadian oscillators throughout the tissues and organs of the body. The signalling pathways used by the SCN to phase-entrain peripheral clocks are still uncertain, but we know that the SCN does not send out trillions of separate signals around the body that target specific cellular clocks. Rather there seems to be a limited number of neuronal and humoral signals which entrain peripheral clocks that in turn time their local physiology and gene expression.”

“As in Drosophila […], the mouse clockwork also comprises three transcriptional-translational feedback loops with multiple interacting components. […] [T]he generation of a robust circadian rhythm that can be entrained by the environment is achieved via multiple elements, including the rate of transcription, translation, protein complex assembly, phosphorylation, other post-translation modification events, movement into the nucleus, transcriptional inhibition, and protein degradation. […] [A] complex arrangement is needed because from the moment a gene is switched on, transcription and translation usually takes two hours at most. As a result, substantial delays must be imposed at different stages to produce a near 24-hour oscillation. […] Although the molecular players may differ from Drosophila and mice, and indeed even between different insects, the underlying principles apply across the spectrum of animal life. […] In fungi, plants, and cyanobacteria the clock genes are all different from each other and different again from the animal clock genes, suggesting that clocks evolved independently in the great evolutionary lineages of life on earth. Despite these differences, all these clocks are based upon a fundamental TTFL.”
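A classic toy version of such a delayed negative-feedback loop is the Goodwin oscillator, which is often used as a caricature of a TTFL: mRNA makes protein, protein makes a nuclear repressor, and the repressor shuts down transcription. The sketch below uses arbitrary dimensionless parameters, with a steep repression curve (high Hill coefficient) standing in for the many delaying steps listed in the quote; it is only meant to show that negative feedback plus sufficient delay and nonlinearity yields sustained oscillations, not to reproduce a real 24-hour clock.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import find_peaks

def goodwin(t, y, n_hill=12.0):
    """Minimal TTFL caricature: mRNA -> protein -> repressor -| transcription."""
    m, p, r = y
    dm = 1.0 / (1.0 + r**n_hill) - 0.2 * m   # repressible transcription, mRNA decay
    dp = m - 0.2 * p                          # translation, protein decay
    dr = p - 0.2 * r                          # nuclear entry/activation, decay
    return [dm, dp, dr]

sol = solve_ivp(goodwin, (0, 400), [0.1, 0.2, 0.3], max_step=0.1)

# Estimate the oscillation period from the later (post-transient) mRNA peaks.
half = sol.t.size // 2
peaks, _ = find_peaks(sol.y[0][half:], prominence=0.01)
period = np.diff(sol.t[half:][peaks]).mean()
print(f"approximate period: {period:.1f} (arbitrary time units)")
```

With a shallow repression curve (a low Hill coefficient, i.e. too little effective nonlinearity and delay) the same system settles to a steady state instead of oscillating, which is the point the quoted passage makes about why the real loops need so many slow intermediate steps.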

“Circadian entrainment is surprisingly slow, taking several days to adjust to an advanced or delayed light/dark cycle. In most mammals, including jet-lagged humans, behavioural shifts are limited to approximately one hour (one time zone) per day. […] Changed levels of PER1 and PER2 act to shift the molecular clockwork, advancing the clock at dawn and delaying the clock at dusk. However, per mRNA and PER protein levels fall rapidly even if the animal remains exposed to light. As a result, the effects of light on the molecular clock are limited and entrainment is a gradual process requiring repeated shifting stimuli over multiple days. This phenomenon explains why we get jet lag: the clock cannot move immediately to a new dawn/dusk cycle because there is a ‘brake’ on the effects of light on the clock. […] The mechanism that provides this molecular brake is the production of SIK1 protein. […] Experiments on mice in which SIK1 has been suppressed show very rapid entrainment to simulated jet-lag.”

“We spend approximately 36 per cent of our entire lives asleep, and while asleep we do not eat, drink, or knowingly pass on our genes. This suggests that this aspect of our 24-hour behaviour provides us with something of huge value. If we are deprived of sleep, the sleep drive becomes so powerful that it can only be satisfied by sleep. […] Almost all life shows a 24-hour pattern of activity and rest, as we live on a planet that revolves once every 24 hours causing profound changes in light, temperature, and food availability. […] Life seems to have made an evolutionary ‘decision’ to be active at a specific part of the day/night cycle, and a species specialized to be active during the day will be far less effective at night. Conversely, nocturnal animals that are beautifully adapted to move around and hunt under dim or no light fail miserably during the day. […] no species can operate with the same effectiveness across the 24-hour light/dark environment. Species are adapted to a particular temporal niche just as they are to a physical niche. Activity at the wrong time often means death. […] Sleep may be the suspension of most physical activity, but a huge amount of essential physiology occurs during this time. Many diverse processes associated with the restoration and rebuilding of metabolic pathways are known to be up-regulated during sleep […] During sleep the body performs a broad range of essential ‘housekeeping’ functions without which performance and health during the active phase deteriorates rapidly. But these housekeeping functions would not be why sleep evolved in the first place. […] Evolution has allocated these key activities to the most appropriate time of day. […] In short, sleep has probably evolved as a species-specific response to a 24-hour world in which light, temperature, and food availability change dramatically. Sleep is a period of physical inactivity when individuals avoid movement within an environment to which they are poorly adapted, while using this time to undertake essential housekeeping functions demanded by their biology.”

“Sleep propensity in humans is closely correlated with the melatonin profile but this may be correlation and not causation. Indeed, individuals who do not produce melatonin (e.g. tetraplegic individuals, people on beta-blockers, or pinealectomized patients) still exhibit circadian sleep/wake rhythms with only very minor detectable changes. Another correlation between melatonin and sleep relates to levels of alertness. When melatonin is suppressed by light at night alertness levels increase, suggesting that melatonin and sleep propensity are directly connected. However, increases in alertness occur before a significant drop in blood melatonin. Furthermore, increased light during the day will also improve alertness when melatonin levels are already low. These findings suggest that melatonin is not a direct mediator of alertness and hence sleepiness. Taking synthetic melatonin or synthetic analogues of melatonin produces a mild sleepiness in about 70 per cent of people, especially when no natural melatonin is being released. The mechanism whereby melatonin produces mild sedation remains unclear.”

Links:

Teleost multiple tissue (tmt) opsin.
Melanopsin.
Suprachiasmatic nucleus.
Neuromedin S.
Food-entrainable circadian oscillators in the brain.
John Harrison. Seymour Benzer. Ronald Konopka. Jeffrey C. Hall. Michael Rosbash. Michael W. Young.
Circadian Oscillators: Around the Transcription-Translation Feedback Loop and on to Output.
Period (gene). Timeless (gene). CLOCK. Cycle (gene). Doubletime (gene). Cryptochrome. Vrille Gene.
Basic helix-loop-helix.
The clockwork orange Drosophila protein functions as both an activator and a repressor of clock gene expression.
RAR-related orphan receptor. RAR-related orphan receptor alpha.
BHLHE41.
The two-process model of sleep regulation: a reappraisal.

September 30, 2018 Posted by | Books, Genetics, Medicine, Molecular biology, Neurology, Ophthalmology

A few diabetes papers of interest

i. Islet Long Noncoding RNAs: A Playbook for Discovery and Characterization.

“This review will 1) highlight what is known about lncRNAs in the context of diabetes, 2) summarize the strategies used in lncRNA discovery pipelines, and 3) discuss future directions and the potential impact of studying the role of lncRNAs in diabetes.”

“Decades of mouse research and advances in genome-wide association studies have identified several genetic drivers of monogenic syndromes of β-cell dysfunction, as well as 113 distinct type 2 diabetes (T2D) susceptibility loci (1) and ∼60 loci associated with an increased risk of developing type 1 diabetes (T1D) (2). Interestingly, these studies discovered that most T1D and T2D susceptibility loci fall outside of coding regions, which suggests a role for noncoding elements in the development of disease (3,4). Several studies have demonstrated that many causal variants of diabetes are significantly enriched in regions containing islet enhancers, promoters, and transcription factor binding sites (5,6); however, not all diabetes susceptibility loci can be explained by associations with these regulatory regions. […] Advances in RNA sequencing (RNA-seq) technologies have revealed that mammalian genomes encode tens of thousands of RNA transcripts that have similar features to mRNAs, yet are not translated into proteins (7). […] detailed characterization of many of these transcripts has challenged the idea that the central role for RNA in a cell is to give rise to proteins. Instead, these RNA transcripts make up a class of molecules called noncoding RNAs (ncRNAs) that function either as “housekeeping” ncRNAs, such as transfer RNAs (tRNAs) and ribosomal RNAs (rRNAs), that are expressed ubiquitously and are required for protein synthesis or as “regulatory” ncRNAs that control gene expression. While the functional mechanisms of short regulatory ncRNAs, such as microRNAs (miRNAs), small interfering RNAs (siRNAs), and Piwi-interacting RNAs (piRNAs), have been described in detail (8–10), the most abundant and functionally enigmatic regulatory ncRNAs are called long noncoding RNAs (lncRNAs) that are loosely defined as RNAs larger than 200 nucleotides (nt) that do not encode for protein (11–13). Although using a definition based strictly on size is somewhat arbitrary, this definition is useful both bioinformatically […] and technically […]. While the 200-nt size cutoff has simplified identification of lncRNAs, this rather broad classification means several features of lncRNAs, including abundance, cellular localization, stability, conservation, and function, are inherently heterogeneous (15–17). Although this represents one of the major challenges of lncRNA biology, it also highlights the untapped potential of lncRNAs to provide a novel layer of gene regulation that influences islet physiology and pathophysiology.”
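
As an aside, the 200-nt size cutoff mentioned above is easy to operationalize in code. Below is a minimal, purely illustrative Python sketch of that kind of bioinformatic filter; the transcript records and the ORF-length threshold are invented for the example, and real discovery pipelines would of course work from RNA-seq assemblies and dedicated coding-potential tools rather than a hand-written list:

```python
# Toy illustration of the 200-nt lncRNA cutoff described above.
# Transcript records (IDs, lengths, longest ORF) are invented for the example.

from dataclasses import dataclass

@dataclass
class Transcript:
    tx_id: str
    length_nt: int        # transcript length in nucleotides
    longest_orf_nt: int   # longest open reading frame, in nucleotides

def is_putative_lncrna(tx: Transcript,
                       min_length: int = 200,
                       max_orf: int = 300) -> bool:
    """Keep transcripts >= 200 nt whose longest ORF is too short to suggest
    a protein product (both thresholds are illustrative only)."""
    return tx.length_nt >= min_length and tx.longest_orf_nt < max_orf

transcripts = [
    Transcript("TX0001", 150, 90),     # too short -> excluded
    Transcript("TX0002", 1200, 150),   # long transcript, tiny ORF -> putative lncRNA
    Transcript("TX0003", 2400, 1800),  # long ORF -> likely protein-coding
]

candidates = [t.tx_id for t in transcripts if is_putative_lncrna(t)]
print(candidates)  # ['TX0002']
```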

“Although the role of miRNAs in diabetes has been well established (9), analyses of lncRNAs in islets have lagged behind their short ncRNA counterparts. However, several recent studies provide evidence that lncRNAs are crucial components of the islet regulome and may have a role in diabetes (27). […] misexpression of several lncRNAs has been correlated with diabetes complications, such as diabetic nephropathy and retinopathy (29–31). There are also preliminary studies suggesting that circulating lncRNAs, such as Gas5, MIAT1, and SENCR, may represent effective molecular biomarkers of diabetes and diabetes-related complications (32,33). Finally, several recent studies have explored the role of lncRNAs in the peripheral metabolic tissues that contribute to energy homeostasis […]. In addition to their potential as genetic drivers and/or biomarkers of diabetes and diabetes complications, lncRNAs can be exploited for the treatment of diabetes. For example, although tremendous efforts have been dedicated to generating replacement β-cells for individuals with diabetes (35,36), human pluripotent stem cell–based β-cell differentiation protocols remain inefficient, and the end product is still functionally and transcriptionally immature compared with primary human β-cells […]. This is largely due to our incomplete knowledge of in vivo differentiation regulatory pathways, which likely include a role for lncRNAs. […] Inherent characteristics of lncRNAs have also made them attractive candidates for drug targeting, which could be exploited for developing new diabetes therapies.”

“With the advancement of high-throughput sequencing techniques, the list of islet-specific lncRNAs is growing exponentially; however, functional characterization is missing for the majority of these lncRNAs. […] Tens of thousands of lncRNAs have been identified in different cell types and model organisms; however, their functions largely remain unknown. Although the tools for determining lncRNA function are technically restrictive, uncovering novel regulatory mechanisms will have the greatest impact on understanding islet function and identifying novel therapeutics for diabetes. To date, no biochemical assay has been used to directly determine the molecular mechanisms by which islet lncRNAs function, which highlights both the infancy of the field and the difficulty in implementing these techniques. […] Due to the infancy of the lncRNA field, most of the biochemical and genetic tools used to interrogate lncRNA function have only recently been developed or are adapted from techniques used to study protein-coding genes and we are only beginning to appreciate the limits and challenges of borrowing strategies from the protein-coding world.”

“The discovery of lncRNAs as a novel class of tissue-specific regulatory molecules has spawned an exciting new field of biology that will significantly impact our understanding of pancreas physiology and pathophysiology. As the field continues to grow, there is growing appreciation that lncRNAs will provide many of the missing components to existing molecular pathways that regulate islet biology and contribute to diabetes when they become dysfunctional. However, to date, most of the experimental emphasis on lncRNAs has focused on large-scale discovery using genome-wide approaches, and there remains a paucity of functional analysis.”

ii. Diabetes and Trajectories of Estimated Glomerular Filtration Rate: A Prospective Cohort Analysis of the Atherosclerosis Risk in Communities Study.

“Diabetes is among the strongest common risk factors for end-stage renal disease, and in industrialized countries, diabetes contributes to ∼50% of cases (3). Less is known about the pattern of kidney function decline associated with diabetes that precedes end-stage renal disease. Identifying patterns of estimated glomerular filtration rate (eGFR) decline could inform monitoring practices for people at high risk of chronic kidney disease (CKD) progression. A better understanding of when and in whom eGFR decline occurs would be useful for the design of clinical trials because eGFR decline >30% is now often used as a surrogate end point for CKD progression (4). Trajectories among persons with diabetes are of particular interest because of the possibility for early intervention and the prevention of CKD development. However, eGFR trajectories among persons with new diabetes may be complex due to the hypothesized period of hyperfiltration by which GFR increases, followed by progressive, rapid decline (5). Using data from the Atherosclerosis Risk in Communities (ARIC) study, an ongoing prospective community-based cohort of >15,000 participants initiated in 1987 with serial measurements of creatinine over 26 years, our aim was to characterize patterns of eGFR decline associated with diabetes, identify demographic, genetic, and modifiable risk factors within the population with diabetes that were associated with steeper eGFR decline, and assess for evidence of early hyperfiltration.”

“We categorized people into groups of no diabetes, undiagnosed diabetes, and diagnosed diabetes at baseline (visit 1) and compared baseline clinical characteristics using ANOVA for continuous variables and Pearson χ2 tests for categorical variables. […] To estimate individual eGFR slopes over time, we used linear mixed-effects models with random intercepts and random slopes. These models were fit on diabetes status at baseline as a nominal variable to adjust the baseline level of eGFR and included an interaction term between diabetes status at baseline and time to estimate annual decline in eGFR by diabetes categories. Linear mixed models were run unadjusted and adjusted, with the latter model including the following diabetes and kidney disease–related risk factors: age, sex, race–center, BMI, systolic blood pressure, hypertension medication use, HDL, prevalent coronary heart disease, annual family income, education status, and smoking status, as well as each variable interacted with time. Continuous covariates were centered at the analytic population mean. We tested model assumptions and considered different covariance structures, comparing nested models using Akaike information criteria. We identified the unstructured covariance model as the most optimal and conservative approach. From the mixed models, we described the overall mean annual decline by diabetes status at baseline and used the random effects to estimate best linear unbiased predictions to describe the distributions of yearly slopes in eGFR by diabetes status at baseline and displayed them using kernel density plots.”
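
To make the modelling setup described above a bit more concrete, here is a minimal sketch of that kind of random-intercept/random-slope model in Python (statsmodels). The long-format data frame and the column names (id, egfr, time, diabetes) are hypothetical, and the ARIC covariate set and its time interactions are left out:

```python
# Sketch of a random-intercept/random-slope model of the kind described above.
# Data frame and column names (id, egfr, time, diabetes) are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

def fit_egfr_slopes(df: pd.DataFrame):
    # diabetes: baseline category ("none", "undiagnosed", "diagnosed");
    # time: years since baseline; egfr: mL/min/1.73 m2.
    model = smf.mixedlm(
        "egfr ~ C(diabetes) * time",   # interaction lets the slope differ by group
        data=df,
        groups=df["id"],
        re_formula="~time",            # random intercept and random slope per person
    )
    result = model.fit()
    # A person-specific slope is the fixed-effect slope for that person's group
    # plus the person's random slope (roughly the BLUPs behind the density plots).
    random_effects = pd.DataFrame(result.random_effects).T
    return result, random_effects
```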

“Because of substantial variation in annual eGFR slope among people with diagnosed diabetes, we sought to identify risk factors that were associated with faster decline. Among those with diagnosed diabetes, we compared unadjusted and adjusted mean annual decline in eGFR by race–APOL1 risk status (white, black– APOL1 low risk, and black–APOL1 high risk) [here’s a relevant link, US], systolic blood pressure […], smoking status […], prevalent coronary heart disease […], diabetes medication use […], HbA1c […], and 1,5-anhydroglucitol (≥10 and <10 μg/mL) [relevant link, US]. Because some of these variables were only available at visit 2, we required that participants included in this subgroup analysis attend both visits 1 and 2 and not be missing information on APOL1 or the variables assessed at visit 2 to ensure a consistent sample size. In addition to diabetes and kidney disease–related risk factors in the adjusted model, we also included diabetes medication use and HbA1c to account for diabetes severity in these analyses. […] to explore potential hyperfiltration, we used a linear spline model to allow the slope to change for each diabetes category between the first 3 years of follow-up (visit 1 to visit 2) and the subsequent time period (visit 2 to visit 5).”
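
The linear spline mentioned at the end of the paragraph above is just a piecewise-linear time term. A minimal sketch, assuming a knot at 3 years of follow-up and the same hypothetical variable names as before:

```python
# Minimal sketch of a linear spline that lets the eGFR slope change after the
# first ~3 years of follow-up. Knot location and variable names are assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

KNOT_YEARS = 3.0

def add_spline_terms(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["time_early"] = np.minimum(df["time"], KNOT_YEARS)       # slope in years 0-3
    df["time_late"] = np.maximum(df["time"] - KNOT_YEARS, 0.0)  # additional slope after year 3
    return df

def fit_spline_model(df: pd.DataFrame):
    df = add_spline_terms(df)
    model = smf.mixedlm(
        "egfr ~ C(diabetes) * (time_early + time_late)",
        data=df,
        groups=df["id"],
        re_formula="~time",  # keep a single random slope over overall time
    )
    return model.fit()
```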

“There were 15,517 participants included in the analysis: 13,698 (88%) without diabetes, 634 (4%) with undiagnosed diabetes, and 1,185 (8%) with diagnosed diabetes at baseline. […] At baseline, participants with undiagnosed and diagnosed diabetes were older, more likely to be black or have hypertension and coronary heart disease, and had higher mean BMI and lower mean HDL compared with those without diabetes […]. Income and education levels were also lower among those with undiagnosed and diagnosed diabetes compared with those without diabetes. […] Overall, there was a nearly linear association between eGFR and age over time, regardless of diabetes status […]. The crude mean annual decline in eGFR was slowest among those without diabetes at baseline (decline of −1.6 mL/min/1.73 m2/year [95% CI −1.6 to −1.5]), faster among those with undiagnosed diabetes compared with those without diabetes (decline of −2.1 mL/min/1.73 m2/year [95% CI −2.2 to −2.0][…]), and nearly twice as rapid among those with diagnosed diabetes compared with those without diabetes (decline of −2.9 mL/min/1.73 m2/year [95% CI −3.0 to −2.8][…]). Adjustment for diabetes and kidney disease–related risk factors attenuated the results slightly, but those with undiagnosed and diagnosed diabetes still had statistically significantly steeper declines than those without diabetes (decline among no diabetes −1.4 mL/min/1.73 m2/year [95% CI −1.5 to −1.4] and decline among undiagnosed diabetes −1.8 mL/min/1.73 m2/year [95% CI −2.0 to −1.7], difference vs. no diabetes of −0.4 mL/min/1.73 m2/year [95% CI −0.5 to −0.3; P < 0.001]; decline among diagnosed diabetes −2.5 mL/min/1.73 m2/year [95% CI −2.6 to −2.4], difference vs. no diabetes of −1.1 mL/min/1.73 m2/ year [95% CI −1.2 to −1.0; P < 0.001]). […] The decline in eGFR per year varied greatly across individuals, particularly among those with diabetes at baseline […] Among participants with diagnosed diabetes at baseline, those who were black, had systolic blood pressure ≥140 mmHg, used diabetes medications, had an HbA1c ≥7% [≥53 mmol/mol], or had 1,5-anhydroglucitol <10 μg/mL were at risk for steeper annual declines than their counterparts […]. Smoking status and prevalent coronary heart disease were not associated with significantly steeper eGFR decline in unadjusted analyses. Adjustment for risk factors, diabetes medication use, and HbA1c attenuated the differences in decline for all subgroups with the exception of smoking status, leaving black race along with APOL1-susceptible genotype, systolic blood pressure ≥140 mmHg, current smoking, insulin use, and HbA1c ≥9% [≥75 mmol/mol] as the risk factors indicative of steeper decline.”

CONCLUSIONS Diabetes is an important risk factor for kidney function decline. Those with diagnosed diabetes declined almost twice as rapidly as those without diabetes. Among people with diagnosed diabetes, steeper declines were seen in those with modifiable risk factors, including hypertension and glycemic control, suggesting areas for continued targeting in kidney disease prevention. […] Few other community-based studies have evaluated differences in kidney function decline by diabetes status over a long period through mid- and late life. One study of 10,184 Canadians aged ≥66 years with creatinine measured during outpatient visits showed results largely consistent with our findings but with much shorter follow-up (median of 2 years) (19). Other studies of eGFR change in a general population have found smaller declines than our results (20,21). A study conducted in Japanese participants aged 40–79 years found a decline of only −0.4 mL/min/1.73 m2/year over the course of two assessments 10 years apart (compared with our estimate among those without diabetes: −1.6 mL/min/1.73 m2/year). This is particularly interesting, as Japan is known to have a higher prevalence of CKD and end-stage renal disease than the U.S. (20). However, this study evaluated participants over a shorter time frame and required attendance at both assessments, which may have decreased the likelihood of capturing severe cases and resulted in underestimation of decline.”

“The Baltimore Longitudinal Study of Aging also assessed kidney function over time in a general population of 446 men, ranging in age from 22 to 97 years at baseline, each with up to 14 measurements of creatinine clearance assessed between 1958 and 1981 (21). They also found a smaller decline than we did (−0.8 mL/min/year), although this study also had notable differences. Their main analysis excluded participants with hypertension and history of renal disease or urinary tract infection and those treated with diuretics and/or antihypertensive medications. Without those exclusions, their overall estimate was −1.1 mL/min/year, which better reflects a community-based population and our results. […] In our evaluation of risk factors that might explain the variation in decline seen among those with diagnosed diabetes, we observed that black race, systolic blood pressure ≥140 mmHg, insulin use, and HbA1c ≥9% (≥75 mmol/mol) were particularly important. Although the APOL1 high-risk genotype is a known risk factor for eGFR decline, African Americans with low-risk APOL1 status continued to be at higher risk than whites even after adjustment for traditional risk factors, diabetes medication use, and HbA1c.”

“Our results are relevant to the design and conduct of clinical trials. Hard clinical outcomes like end-stage renal disease are relatively rare, and a 30–40% decline in eGFR is now accepted as a surrogate end point for CKD progression (4). We provide data on patient subgroups that may experience accelerated trajectories of kidney function decline, which has implications for estimating sample size and ensuring adequate power in future clinical trials. Our results also suggest that end points of eGFR decline might not be appropriate for patients with new-onset diabetes, in whom declines may actually be slower than among persons without diabetes. Slower eGFR decline among those with undiagnosed diabetes, who are likely early in the course of diabetes, is consistent with the hypothesis of hyperfiltration. Similar to other studies, we found that persons with undiagnosed diabetes had higher GFR at the outset, but this was a transient phenomenon, as they ultimately experienced larger declines in kidney function than those without diabetes over the course of follow-up (2325). Whether hyperfiltration is a universal aspect of early disease and, if not, whether it portends worse long-term outcomes is uncertain. Existing studies investigating hyperfiltration as a precursor to adverse kidney outcomes are inconsistent (24,26,27) and often confounded by diabetes severity factors like duration (27). We extended this literature by separating undiagnosed and diagnosed diabetes to help address that confounding.”

iii. Saturated Fat Is More Metabolically Harmful for the Human Liver Than Unsaturated Fat or Simple Sugars.

“OBJECTIVE Nonalcoholic fatty liver disease (i.e., increased intrahepatic triglyceride [IHTG] content) predisposes to type 2 diabetes and cardiovascular disease. Adipose tissue lipolysis and hepatic de novo lipogenesis (DNL) are the main pathways contributing to IHTG. We hypothesized that dietary macronutrient composition influences the pathways, mediators, and magnitude of weight gain-induced changes in IHTG.

RESEARCH DESIGN AND METHODS We overfed 38 overweight subjects (age 48 ± 2 years, BMI 31 ± 1 kg/m2, liver fat 4.7 ± 0.9%) 1,000 extra kcal/day of saturated (SAT) or unsaturated (UNSAT) fat or simple sugars (CARB) for 3 weeks. We measured IHTG (1H-MRS), pathways contributing to IHTG (lipolysis ([2H5]glycerol) and DNL (2H2O) basally and during euglycemic hyperinsulinemia), insulin resistance, endotoxemia, plasma ceramides, and adipose tissue gene expression at 0 and 3 weeks.

RESULTS Overfeeding SAT increased IHTG more (+55%) than UNSAT (+15%, P < 0.05). CARB increased IHTG (+33%) by stimulating DNL (+98%). SAT significantly increased while UNSAT decreased lipolysis. SAT induced insulin resistance and endotoxemia and significantly increased multiple plasma ceramides. The diets had distinct effects on adipose tissue gene expression.”

“CONCLUSIONS NAFLD has been shown to predict type 2 diabetes and cardiovascular disease in multiple studies, even independent of obesity (1), and also to increase the risk of progressive liver disease (17). It is therefore interesting to compare effects of different diets on liver fat content and understand the underlying mechanisms. We examined whether provision of excess calories as saturated (SAT) or unsaturated (UNSAT) fats or simple sugars (CARB) influences the metabolic response to overfeeding in overweight subjects. All overfeeding diets increased IHTGs. The SAT diet induced a greater increase in IHTGs than the UNSAT diet. The composition of the diet altered sources of excess IHTGs. The SAT diet increased lipolysis, whereas the CARB diet stimulated DNL. The SAT but not the other diets increased multiple plasma ceramides, which increase the risk of cardiovascular disease independent of LDL cholesterol (18). […] Consistent with current dietary recommendations (36–38), the current study shows that saturated fat is the most harmful dietary constituent regarding IHTG accumulation.”

iv. Primum Non Nocere: Refocusing Our Attention on Severe Hypoglycemia Prevention.

“Severe hypoglycemia, defined as low blood glucose requiring assistance for recovery, is arguably the most dangerous complication of type 1 diabetes as it can result in permanent cognitive impairment, seizure, coma, accidents, and death (1,2). Since the Diabetes Control and Complications Trial (DCCT) demonstrated that intensive intervention to normalize glucose prevents long-term complications but at the price of a threefold increase in the rate of severe hypoglycemia (3), hypoglycemia has been recognized as the major limitation to achieving tight glycemic control. Severe hypoglycemia remains prevalent among adults with type 1 diabetes, ranging from ∼1.4% per year in the DCCT/EDIC (Epidemiology of Diabetes Interventions and Complications) follow-up cohort (4) to ∼8% in the T1D Exchange clinic registry (5).

One of the greatest risk factors for severe hypoglycemia is impaired awareness of hypoglycemia (6), which increases risk up to sixfold (7,8). Hypoglycemia unawareness results from deficient counterregulation (9), where falling glucose fails to activate the autonomic nervous system to produce the warning symptoms that normally help patients identify and respond to episodes (i.e., sweating, palpitations, hunger) (2). An estimated 20–25% of adults with type 1 diabetes have impaired hypoglycemia awareness (8), which increases to more than 50% after 25 years of disease duration (10).

Screening for hypoglycemia unawareness to identify patients at increased risk of severe hypoglycemic events should be part of routine diabetes care. Self-identified impairment in awareness tends to agree with clinical evaluation (11). Therefore, hypoglycemia unawareness can be easily and effectively screened […] Interventions for hypoglycemia unawareness include a range of behavioral and medical options. Avoiding hypoglycemia for at least several weeks may partially reverse hypoglycemia unawareness and reduce risk of future episodes (1). Therefore, patients with hypoglycemia and unawareness may be advised to raise their glycemic and HbA1c targets (1,2). Diabetes technology can play a role, including continuous subcutaneous insulin infusion (CSII) to optimize insulin delivery, continuous glucose monitoring (CGM) to give technological awareness in the absence of symptoms (14), or the combination of the two […] Aside from medical management, structured or hypoglycemia-specific education programs that aim to prevent hypoglycemia are recommended for all patients with severe hypoglycemia or hypoglycemia unawareness (14). In randomized trials, psychoeducational programs that incorporate increased education, identification of personal risk factors, and behavior change support have improved hypoglycemia unawareness and reduced the incidence of both nonsevere and severe hypoglycemia over short periods of follow-up (17,18) and extending up to 1 year (19).”

“Given that the presence of hypoglycemia unawareness increases the risk of severe hypoglycemia, which is the strongest predictor of a future episode (2,4), the implication that intervention can break the life-threatening and traumatizing cycle of hypoglycemia unawareness and severe hypoglycemia cannot be overstated. […] new evidence of durability of effect across treatment regimen without increasing the risk for long-term complications creates an imperative for action. In combination with existing screening tools and a body of literature investigating novel interventions for hypoglycemia unawareness, these results make the approach of screening, recognition, and intervention very compelling as not only a best practice but something that should be incorporated in universal guidelines on diabetes care, particularly for individuals with type 1 diabetes […] Hyperglycemia is […] only part of the puzzle in diabetes management. Long-term complications are decreasing across the population with improved interventions and their implementation (24). […] it is essential to shift our historical obsession with hyperglycemia and its long-term complications to equally emphasize the disabling, distressing, and potentially fatal near-term complication of our treatments, namely severe hypoglycemia. […] The health care providers’ first dictum is primum non nocere — above all, do no harm. ADA must refocus our attention on severe hypoglycemia as an iatrogenic and preventable complication of our interventions.”

v. Anti‐vascular endothelial growth factor combined with intravitreal steroids for diabetic macular oedema.

“Background

The combination of steroid and anti‐vascular endothelial growth factor (VEGF) intravitreal therapeutic agents could potentially have synergistic effects for treating diabetic macular oedema (DMO). On the one hand, if combined treatment is more effective than monotherapy, there would be significant implications for improving patient outcomes. Conversely, if there is no added benefit of combination therapy, then people could be potentially exposed to unnecessary local or systemic side effects.

Objectives

To assess the effects of intravitreal agents that block vascular endothelial growth factor activity (anti‐VEGF agents) plus intravitreal steroids versus monotherapy with macular laser, intravitreal steroids or intravitreal anti‐VEGF agents for managing DMO.”

“There were eight RCTs (703 participants, 817 eyes) that met our inclusion criteria with only three studies reporting outcomes at one year. The studies took place in Iran (3), USA (2), Brazil (1), Czech Republic (1) and South Korea (1). […] When comparing anti‐VEGF/steroid with anti‐VEGF monotherapy as primary therapy for DMO, we found no meaningful clinical difference in change in BCVA [best corrected visual acuity] […] or change in CMT [central macular thickness] […] at one year. […] There was very low‐certainty evidence on intraocular inflammation from 8 studies, with one event in the anti‐VEGF/steroid group (313 eyes) and two events in the anti‐VEGF group (322 eyes). There was a greater risk of raised IOP (Peto odds ratio (OR) 8.13, 95% CI 4.67 to 14.16; 635 eyes; 8 RCTs; moderate‐certainty evidence) and development of cataract (Peto OR 7.49, 95% CI 2.87 to 19.60; 635 eyes; 8 RCTs; moderate‐certainty evidence) in eyes receiving anti‐VEGF/steroid compared with anti‐VEGF monotherapy. There was low‐certainty evidence from one study of an increased risk of systemic adverse events in the anti‐VEGF/steroid group compared with the anti‐VEGF alone group (Peto OR 1.32, 95% CI 0.61 to 2.86; 103 eyes).”

“One study compared anti‐VEGF/steroid versus macular laser therapy. At one year investigators did not report a meaningful difference between the groups in change in BCVA […] or change in CMT […]. There was very low‐certainty evidence suggesting an increased risk of cataract in the anti‐VEGF/steroid group compared with the macular laser group (Peto OR 4.58, 95% CI 0.99 to 21.10; 100 eyes) and an increased risk of elevated IOP in the anti‐VEGF/steroid group compared with the macular laser group (Peto OR 9.49, 95% CI 2.86 to 31.51; 100 eyes).”
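
For readers unfamiliar with the statistic, a Peto odds ratio like the ones quoted above is computed from the observed-minus-expected event count in the treated group and its hypergeometric variance. Here is a small Python illustration for a single 2×2 table; the event counts below are invented, as the review's per-study counts are not reproduced here:

```python
# How a Peto odds ratio is computed from 2x2 event counts.
# The counts used in the example call are invented for illustration.

import math

def peto_or(events_trt, n_trt, events_ctl, n_ctl):
    """Peto one-step odds ratio with a 95% CI for a single 2x2 table."""
    n_total = n_trt + n_ctl
    events_total = events_trt + events_ctl
    observed = events_trt
    expected = events_total * n_trt / n_total
    variance = (events_total * (n_total - events_total) * n_trt * n_ctl
                / (n_total ** 2 * (n_total - 1)))
    log_or = (observed - expected) / variance
    half_width = 1.96 / math.sqrt(variance)
    return (math.exp(log_or),
            math.exp(log_or - half_width),
            math.exp(log_or + half_width))

# Hypothetical counts: 40/313 eyes with raised IOP on anti-VEGF/steroid
# versus 6/322 eyes on anti-VEGF alone.
print(peto_or(40, 313, 6, 322))
```

In a meta-analysis the per-study (observed − expected) terms and variances are summed before exponentiating, but the single-table version above shows the basic arithmetic.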

“Authors’ conclusions

Combination of intravitreal anti‐VEGF plus intravitreal steroids does not appear to offer additional visual benefit compared with monotherapy for DMO; at present the evidence for this is of low‐certainty. There was an increased rate of cataract development and raised intraocular pressure in eyes treated with anti‐VEGF plus steroid versus anti‐VEGF alone. Patients were exposed to potential side effects of both these agents without reported additional benefit.”

vi. Association between diabetic foot ulcer and diabetic retinopathy.

“More than 25 million people in the United States are estimated to have diabetes mellitus (DM), and 15–25% will develop a diabetic foot ulcer (DFU) during their lifetime [1]. DFU is one of the most serious and disabling complications of DM, resulting in significantly elevated morbidity and mortality. Vascular insufficiency and associated neuropathy are important predisposing factors for DFU, and DFU is the most common cause of non-traumatic foot amputation worldwide. Up to 70% of all lower leg amputations are performed on patients with DM, and up to 85% of all amputations are preceded by a DFU [2, 3]. Every year, approximately 2–3% of all diabetic patients develop a foot ulcer, and many require prolonged hospitalization for the treatment of ensuing complications such as infection and gangrene [4, 5].

Meanwhile, a number of studies have noted that diabetic retinopathy (DR) is associated with diabetic neuropathy and microvascular complications [6–10]. Despite the magnitude of the impact of DFUs and their consequences, little research has been performed to investigate the characteristics of patients with a DFU and DR. […] the aim of this study was to investigate the prevalence of DR in patients with a DFU and to elucidate the potential association between DR and DFUs.”

“A retrospective review was conducted on DFU patients who underwent ophthalmic and vascular examinations within 6 months; 100 type 2 diabetic patients with DFU were included. The medical records of 2496 type 2 diabetic patients without DFU served as control data. DR prevalence and severity were assessed in DFU patients. DFU patients were compared with the control group regarding each clinical variable. Additionally, DFU patients were divided into two groups according to DR severity and compared. […] Out of 100 DFU patients, 90 patients (90%) had DR and 55 (55%) had proliferative DR (PDR). There was no significant association between DR and DFU severities (R = 0.034, p = 0.734). A multivariable analysis comparing type 2 diabetic patients with and without DFUs showed that the presence of DR [OR, 226.12; 95% confidence interval (CI), 58.07–880.49; p < 0.001] and proliferative DR [OR, 306.27; 95% CI, 64.35–1457.80; p < 0.001], higher HbA1c (%, OR, 1.97, 95% CI, 1.46–2.67; p < 0.001), higher serum creatinine (mg/dL, OR, 1.62, 95% CI, 1.06–2.50; p = 0.027), older age (years, OR, 1.12; 95% CI, 1.06–1.17; p < 0.001), higher pulse pressure (mmHg, OR, 1.03; 95% CI, 1.00–1.06; p = 0.025), lower cholesterol (mg/dL, OR, 0.94; 95% CI, 0.92–0.97; p < 0.001), lower BMI (kg/m2, OR, 0.87, 95% CI, 0.75–1.00; p = 0.044) and lower hematocrit (%, OR, 0.80, 95% CI, 0.74–0.87; p < 0.001) were associated with DFUs. In a subgroup analysis of DFU patients, the PDR group had a longer duration of diabetes mellitus, higher serum BUN, and higher serum creatinine than the non-PDR group. In the multivariable analysis, only higher serum creatinine was associated with PDR in DFU patients (OR, 1.37; 95% CI, 1.05–1.78; p = 0.021).

Conclusions

Diabetic retinopathy is prevalent in patients with DFU and about half of DFU patients had PDR. No significant association was found in terms of the severity of these two diabetic complications. To prevent blindness, patients with DFU, and especially those with high serum creatinine, should undergo retinal examinations for timely PDR diagnosis and management.”
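
Multivariable odds ratios like the ones quoted above are simply exponentiated logistic-regression coefficients with exponentiated confidence limits. A minimal sketch of that computation in Python (statsmodels), with a hypothetical data frame and a covariate set that only loosely mirrors the paper's model:

```python
# Sketch of how multivariable ORs with 95% CIs are obtained from a logistic
# regression. The data frame and column names below are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def odds_ratios(df: pd.DataFrame) -> pd.DataFrame:
    # dfu: 0/1 outcome; dr: 0/1 retinopathy indicator; other columns continuous.
    model = smf.logit(
        "dfu ~ dr + hba1c + creatinine + age + pulse_pressure"
        " + cholesterol + bmi + hematocrit",
        data=df,
    )
    result = model.fit(disp=False)
    ci = result.conf_int()  # columns 0 and 1 are the lower/upper limits
    out = pd.DataFrame({
        "OR": np.exp(result.params),
        "CI_lower": np.exp(ci[0]),
        "CI_upper": np.exp(ci[1]),
        "p": result.pvalues,
    })
    return out.drop(index="Intercept")
```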

August 29, 2018 Posted by | Diabetes, Epidemiology, Genetics, Medicine, Molecular biology, Nephrology, Ophthalmology, Statistics, Studies

Developmental Biology (II)

Below I have included some quotes from the middle chapters of the book and some links related to the topic coverage. As I pointed out earlier, this is an excellent book on these topics.

Germ cells have three key functions: the preservation of the genetic integrity of the germline; the generation of genetic diversity; and the transmission of genetic information to the next generation. In all but the simplest animals, the cells of the germline are the only cells that can give rise to a new organism. So, unlike body cells, which eventually all die, germ cells in a sense outlive the bodies that produced them. They are, therefore, very special cells […] In order that the number of chromosomes is kept constant from generation to generation, germ cells are produced by a specialized type of cell division, called meiosis, which halves the chromosome number. Unless this reduction by meiosis occurred, the number of chromosomes would double each time the egg was fertilized. Germ cells thus contain a single copy of each chromosome and are called haploid, whereas germ-cell precursor cells and the other somatic cells of the body contain two copies and are called diploid. The halving of chromosome number at meiosis means that when egg and sperm come together at fertilization, the diploid number of chromosomes is restored. […] An important property of germ cells is that they remain pluripotent—able to give rise to all the different types of cells in the body. Nevertheless, eggs and sperm in mammals have certain genes differentially switched off during germ-cell development by a process known as genomic imprinting […] Certain genes in eggs and sperm are imprinted, so that the activity of the same gene is different depending on whether it is of maternal or paternal origin. Improper imprinting can lead to developmental abnormalities in humans. At least 80 imprinted genes have been identified in mammals, and some are involved in growth control. […] A number of developmental disorders in humans are associated with imprinted genes. Infants with Prader-Willi syndrome fail to thrive and later can become extremely obese; they also show mental retardation and mental disturbances […] Angelman syndrome results in severe motor and mental retardation. Beckwith-Wiedemann syndrome is due to a generalized disruption of imprinting on a region of chromosome 7 and leads to excessive foetal overgrowth and an increased predisposition to cancer.”

“Sperm are motile cells, typically designed for activating the egg and delivering their nucleus into the egg cytoplasm. They essentially consist of a nucleus, mitochondria to provide an energy source, and a flagellum for movement. The sperm contributes virtually nothing to the organism other than its chromosomes. In mammals, sperm mitochondria are destroyed following fertilization, and so all mitochondria in the animal are of maternal origin. […] Different organisms have different ways of ensuring fertilization by only one sperm. […] Early development is similar in both male and female mammalian embryos, with sexual differences only appearing at later stages. The development of the individual as either male or female is genetically fixed at fertilization by the chromosomal content of the egg and sperm that fuse to form the fertilized egg. […] Each sperm carries either an X or Y chromosome, while the egg has an X. The genetic sex of a mammal is thus established at the moment of conception, when the sperm introduces either an X or a Y chromosome into the egg. […] In the absence of a Y chromosome, the default development of tissues is along the female pathway. […] Unlike animals, plants do not set aside germ cells in the embryo and germ cells are only specified when a flower develops. Any meristem cell can, in principle, give rise to a germ cell of either sex, and there are no sex chromosomes. The great majority of flowering plants give rise to flowers that contain both male and female sexual organs, in which meiosis occurs. The male sexual organs are the stamens; these produce pollen, which contains the male gamete nuclei corresponding to the sperm of animals. At the centre of the flower are the female sex organs, which consist of an ovary of two carpels, which contain the ovules. Each ovule contains an egg cell.”

“The character of specialized cells such as nerve, muscle, or skin is the result of a particular pattern of gene activity that determines which proteins are synthesized. There are more than 200 clearly recognizable differentiated cell types in mammals. How these particular patterns of gene activity develop is a central question in cell differentiation. Gene expression is under a complex set of controls that include the actions of transcription factors, and chemical modification of DNA. External signals play a key role in differentiation by triggering intracellular signalling pathways that affect gene expression. […] the central feature of cell differentiation is a change in gene expression, which brings about a change in the proteins in the cells. The genes expressed in a differentiated cell include not only those for a wide range of ‘housekeeping’ proteins, such as the enzymes involved in energy metabolism, but also genes encoding cell-specific proteins that characterize a fully differentiated cell: hemoglobin in red blood cells, keratin in skin epidermal cells, and muscle-specific actin and myosin protein filaments in muscle. […] several thousand different genes are active in any given cell in the embryo at any one time, though only a small number of these may be involved in specifying cell fate or differentiation. […] Cell differentiation is known to be controlled by a wide range of external signals but it is important to remember that, while these external signals are often referred to as being ‘instructive’, they are ‘selective’, in the sense that the number of developmental options open to a cell at any given time is limited. These options are set by the cell’s internal state which, in turn, reflects its developmental history. External signals cannot, for example, convert an endodermal cell into a muscle or nerve cell. Most of the molecules that act as developmentally important signals between cells during development are proteins or peptides, and their effect is usually to induce a change in gene expression. […] The same external signals can be used again and again with different effects because the cells’ histories are different. […] At least 1,000 different transcription factors are encoded in the genomes of the fly and the nematode, and as many as 3,000 in the human genome. On average, around five different transcription factors act together at a control region […] In general, it can be assumed that activation of each gene involves a unique combination of transcription factors.”

“Stem cells involve some special features in relation to differentiation. A single stem cell can divide to produce two daughter cells, one of which remains a stem cell while the other gives rise to a lineage of differentiating cells. This occurs in our skin and gut all the time and also in the production of blood cells. It also occurs in the embryo. […] Embryonic stem (ES) cells from the inner cell mass of the early mammalian embryo when the primitive streak forms, can, in culture, differentiate into a wide variety of cell types, and have potential uses in regenerative medicine. […] it is now possible to make adult body cells into stem cells, which has important implications for regenerative medicine. […] The goal of regenerative medicine is to restore the structure and function of damaged or diseased tissues. As stem cells can proliferate and differentiate into a wide range of cell types, they are strong candidates for use in cell-replacement therapy, the restoration of tissue function by the introduction of new healthy cells. […] The generation of insulin-producing pancreatic β cells from ES cells to replace those destroyed in type 1 diabetes is a prime medical target. Treatments that direct the differentiation of ES cells towards making endoderm derivatives such as pancreatic cells have been particularly difficult to find. […] The neurodegenerative Parkinson disease is another medical target. […] To generate […] stem cells of the patient’s own tissue type would be a great advantage, and the recent development of induced pluripotent stem cells (iPS cells) offers […] exciting new opportunities. […] There is [however] risk of tumour induction in patients undergoing cell-replacement therapy with ES cells or iPS cells; undifferentiated pluripotent cells introduced into the patient could cause tumours. Only stringent selection procedures that ensure no undifferentiated cells are present in the transplanted cell population will overcome this problem. And it is not yet clear how stable differentiated ES cells and iPS cells will be in the long term.”

“In general, the success rate of cloning by body-cell nuclear transfer in mammals is low, and the reasons for this are not yet well understood. […] Most cloned mammals derived from nuclear transplantation are usually abnormal in some way. The cause of failure is incomplete reprogramming of the donor nucleus to remove all the earlier modifications. A related cause of abnormality may be that the reprogrammed genes have not gone through the normal imprinting process that occurs during germ-cell development, where different genes are silenced in the male and female parents. The abnormalities in adults that do develop from cloned embryos include early death, limb deformities and hypertension in cattle, and immune impairment in mice. All these defects are thought to be due to abnormalities of gene expression that arise from the cloning process. Studies have shown that some 5% of the genes in cloned mice are not correctly expressed and that almost half of the imprinted genes are incorrectly expressed.”

“Organ development involves large numbers of genes and, because of this complexity, general principles can be quite difficult to distinguish. Nevertheless, many of the mechanisms used in organogenesis are similar to those of earlier development, and certain signals are used again and again. Pattern formation in development in a variety of organs can be specified by position information, which is specified by a gradient in some property. […] Not surprisingly, the vascular system, including blood vessels and blood cells, is among the first organ systems to develop in vertebrate embryos, so that oxygen and nutrients can be delivered to the rapidly developing tissues. The defining cell type of the vascular system is the endothelial cell, which forms the lining of the entire circulatory system, including the heart, veins, and arteries. Blood vessels are formed by endothelial cells and these vessels are then covered by connective tissue and smooth muscle cells. Arteries and veins are defined by the direction of blood flow as well as by structural and functional differences; the cells are specified as arterial or venous before they form blood vessels but they can switch identity. […] Differentiation of the vascular cells requires the growth factor VEGF (vascular endothelial growth factor) and its receptors, and VEGF stimulates their proliferation. Expression of the Vegf gene is induced by lack of oxygen and thus an active organ using up oxygen promotes its own vascularization. New blood capillaries are formed by sprouting from pre-existing blood vessels and proliferation of cells at the tip of the sprout. […] During their development, blood vessels navigate along specific paths towards their targets […]. Many solid tumours produce VEGF and other growth factors that stimulate vascular development and so promote the tumour’s growth, and blocking new vessel formation is thus a means of reducing tumour growth. […] In humans, about 1 in 100 live-born infants has some congenital heart malformation, while in utero, heart malformation leading to death of the embryo occurs in between 5 and 10% of conceptions.”

“Separation of the digits […] is due to the programmed cell death of the cells between these digits’ cartilaginous elements. The webbed feet of ducks and other waterfowl are simply the result of less cell death between the digits. […] the death of cells between the digits is essential for separating the digits. The development of the vertebrate nervous system also involves the death of large numbers of neurons.”

Links:

Budding.
Gonad.
Down Syndrome.
Fertilization. In vitro fertilisation. Preimplantation genetic diagnosis.
SRY gene.
X-inactivation. Dosage compensation.
Cellular differentiation.
MyoD.
Signal transduction. Enhancer (genetics).
Epigenetics.
Hematopoiesis. Hematopoietic stem cell transplantation. Hemoglobin. Sickle cell anemia.
Skin. Dermis. Fibroblast. Epidermis.
Skeletal muscle. Myogenesis. Myoblast.
Cloning. Dolly.
Organogenesis.
Limb development. Limb bud. Progress zone model. Apical ectodermal ridge. Polarizing region/Zone of polarizing activity. Sonic hedgehog.
Imaginal disc. Pax6. Aniridia. Neural tube.
Branching morphogenesis.
Pistil.
ABC model of flower development.

July 16, 2018 Posted by | Biology, Books, Botany, Cancer/oncology, Diabetes, Genetics, Medicine, Molecular biology, Ophthalmology

A few diabetes papers of interest

i. Clinical Inertia in Type 2 Diabetes Management: Evidence From a Large, Real-World Data Set.

“Despite clinical practice guidelines that recommend frequent monitoring of HbA1c (every 3 months) and aggressive escalation of antihyperglycemic therapies until glycemic targets are reached (1,2), the intensification of therapy in patients with uncontrolled type 2 diabetes (T2D) is often inappropriately delayed. The failure of clinicians to intensify therapy when clinically indicated has been termed “clinical inertia.” A recently published systematic review found that the median time to treatment intensification after an HbA1c measurement above target was longer than 1 year (range 0.3 to >7.2 years) (3). We have previously reported a rather high rate of clinical inertia in patients uncontrolled on metformin monotherapy (4). Treatment was not intensified early (within 6 months of metformin monotherapy failure) in 38%, 31%, and 28% of patients when poor glycemic control was defined as an HbA1c >7% (>53 mmol/mol), >7.5% (>58 mmol/mol), and >8% (>64 mmol/mol), respectively.”

Using the electronic health record system at Cleveland Clinic (2005–2016), we identified a cohort of 7,389 patients with T2D who had an HbA1c value ≥7% (≥53 mmol/mol) (“index HbA1c”) despite having been on a stable regimen of two oral antihyperglycemic drugs (OADs) for at least 6 months prior to the index HbA1c. This HbA1c threshold would generally be expected to trigger treatment intensification based on current guidelines. Patient records were reviewed for the 6-month period following the index HbA1c, and changes in diabetes therapy were evaluated for evidence of “intensification” […] almost two-thirds of patients had no evidence of intensification in their antihyperglycemic therapy during the 6 months following the index HbA1c ≥7% (≥53 mmol/mol), suggestive of poor glycemic control. Most alarming was the finding that even among patients in the highest index HbA1c category (≥9% [≥75 mmol/mol]), therapy was not intensified in 44% of patients, and slightly more than half (53%) of those with an HbA1c between 8 and 8.9% (64 and 74 mmol/mol) did not have their therapy intensified.”
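
As a rough illustration of how a "no intensification within 6 months" flag of the kind described above might be derived from EHR-style extracts, here is a small pandas sketch. The table layout, column names, and definition of a medication change are all assumptions for illustration, not the study's actual extraction logic:

```python
# Rough sketch of deriving an intensification-within-6-months flag from
# EHR-style tables. Table layout and column names are assumed for illustration.

import pandas as pd

def flag_intensification(hba1c: pd.DataFrame, meds: pd.DataFrame) -> pd.DataFrame:
    """hba1c: one row per patient [patient_id, index_date, index_hba1c], where the
    index HbA1c is >= 7% on a stable two-OAD regimen for >= 6 months (datetime dtype
    for index_date). meds: medication change events [patient_id, change_date,
    change_type] marking added/switched antihyperglycemic therapy or a dose increase."""
    merged = hba1c.merge(meds, on="patient_id", how="left")
    within_6m = (
        (merged["change_date"] > merged["index_date"])
        & (merged["change_date"] <= merged["index_date"] + pd.DateOffset(months=6))
    )
    intensified = (
        merged.loc[within_6m].groupby("patient_id").size().rename("n_changes")
    )
    out = hba1c.set_index("patient_id").join(intensified)
    out["intensified_6m"] = out["n_changes"].fillna(0) > 0
    return out.reset_index()
```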

“Unfortunately, these real-world findings confirm a high prevalence of clinical inertia with regard to T2D management. The unavoidable conclusion from these data […] is that physicians are not responding quickly enough to evidence of poor glycemic control in a high percentage of patients, even in those with HbA1c levels far exceeding typical treatment targets.”

ii. Gestational Diabetes Mellitus and Diet: A Systematic Review and Meta-analysis of Randomized Controlled Trials Examining the Impact of Modified Dietary Interventions on Maternal Glucose Control and Neonatal Birth Weight.

“Medical nutrition therapy is a mainstay of gestational diabetes mellitus (GDM) treatment. However, data are limited regarding the optimal diet for achieving euglycemia and improved perinatal outcomes. This study aims to investigate whether modified dietary interventions are associated with improved glycemia and/or improved birth weight outcomes in women with GDM when compared with control dietary interventions. […]

From 2,269 records screened, 18 randomized controlled trials involving 1,151 women were included. Pooled analysis demonstrated that for modified dietary interventions when compared with control subjects, there was a larger decrease in fasting and postprandial glucose (−4.07 mg/dL [95% CI −7.58, −0.57]; P = 0.02 and −7.78 mg/dL [95% CI −12.27, −3.29]; P = 0.0007, respectively) and a lower need for medication treatment (relative risk 0.65 [95% CI 0.47, 0.88]; P = 0.006). For neonatal outcomes, analysis of 16 randomized controlled trials including 841 participants showed that modified dietary interventions were associated with lower infant birth weight (−170.62 g [95% CI −333.64, −7.60]; P = 0.04) and less macrosomia (relative risk 0.49 [95% CI 0.27, 0.88]; P = 0.02). The quality of evidence for these outcomes was low to very low. Baseline differences between groups in postprandial glucose may have influenced glucose-related outcomes. […] we were unable to resolve queries regarding potential concerns for sources of bias because of lack of author response to our queries. We have addressed this by excluding these studies in the sensitivity analysis. […] after removal of the studies with the most substantial methodological concerns in the sensitivity analysis, differences in the change in fasting plasma glucose were no longer significant. Although differences in the change in postprandial glucose and birth weight persisted, they were attenuated.”
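
Pooled mean differences like those above are typically obtained by inverse-variance weighting, with a random-effects model (e.g. DerSimonian–Laird) when between-study heterogeneity is expected. A small self-contained Python sketch with invented study-level inputs:

```python
# Inverse-variance random-effects (DerSimonian-Laird) pooling of study-level
# mean differences. The study inputs below are invented for illustration.

import math

def dersimonian_laird(effects, ses):
    """effects: per-study mean differences; ses: their standard errors.
    Returns (pooled estimate, 95% CI lower, 95% CI upper)."""
    w = [1 / se ** 2 for se in ses]
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = [1 / (se ** 2 + tau2) for se in ses]
    pooled = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Hypothetical fasting-glucose mean differences (mg/dL) and SEs from four trials:
print(dersimonian_laird([-3.0, -6.5, -2.0, -5.0], [2.0, 2.5, 1.8, 3.0]))
```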

“This review highlights limitations of the current literature examining dietary interventions in GDM. Most studies are too small to demonstrate significant differences in our primary outcomes. Seven studies had fewer than 50 participants and only two had more than 100 participants (n = 125 and 150). The short duration of many dietary interventions and the late gestational age at which they were started (38) may also have limited their impact on glycemic and birth weight outcomes. Furthermore, we cannot conclude if the improvements in maternal glycemia and infant birth weight are due to reduced energy intake, improved nutrient quality, or specific changes in types of carbohydrate and/or protein. […] These data suggest that dietary interventions modified above and beyond usual dietary advice for GDM have the potential to offer better maternal glycemic control and infant birth weight outcomes. However, the quality of evidence was judged as low to very low due to the limitations in the design of included studies, the inconsistency between their results, and the imprecision in their effect estimates.”

iii. Lifetime Prevalence and Prognosis of Prediabetes Without Progression to Diabetes.

“Impaired fasting glucose, also termed prediabetes, is increasingly prevalent and is associated with adverse cardiovascular risk (1). The cardiovascular risks attributed to prediabetes may be driven primarily by the conversion from prediabetes to overt diabetes (2). Given limited data on outcomes among nonconverters in the community, the extent to which some individuals with prediabetes never go on to develop diabetes and yet still experience adverse cardiovascular risk remains unclear. We therefore investigated the frequency of cardiovascular versus noncardiovascular deaths in people who developed early- and late-onset prediabetes without ever progressing to diabetes.”

“We used data from the Framingham Heart Study collected on the Offspring Cohort participants aged 18–77 years at the time of initial fasting plasma glucose (FPG) assessment (1983–1987) who had serial FPG testing over subsequent examinations with continuous surveillance for outcomes including cause-specific mortality (3). As applied in prior epidemiological investigations (4), we used a case-control design focusing on the cause-specific outcome of cardiovascular death to minimize the competing risk issues that would be encountered in time-to-event analyses. To focus on outcomes associated with a given chronic glycemic state maintained over the entire lifetime, we restricted our analyses to only those participants for whom data were available over the life course and until death. […] We excluded individuals with unknown age of onset of glycemic impairment (i.e., age ≥50 years with prediabetes or diabetes at enrollment). […] We analyzed cause-specific mortality, allowing for relating time-varying exposures with lifetime risk for an event (4). We related glycemic phenotypes to cardiovascular versus noncardiovascular cause of death using a case-control design, where cases were defined as individuals who died of cardiovascular disease (death from stroke, heart failure, or other vascular event) or coronary heart disease (CHD) and controls were those who died of other causes.”

“The mean age of participants at enrollment was 42 ± 7 years (43% women). The mean age at death was 73 ± 10 years. […] In our study, approximately half of the individuals presented with glycemic impairment in their lifetime, of whom two-thirds developed prediabetes but never diabetes. In our study, these individuals had lower cardiovascular-related mortality compared with those who later developed diabetes, even if the prediabetes onset was early in life. However, individuals with early-onset prediabetes, despite lifelong avoidance of overt diabetes, had greater propensity for death due to cardiovascular or coronary versus noncardiovascular disease compared with those who maintained lifelong normal glucose status. […] Prediabetes is a heterogeneous entity. Whereas some forms of prediabetes are precursors to diabetes, other types of prediabetes never progress to diabetes but still confer increased propensity for death from a cardiovascular cause.”

iv. Learning From Past Failures of Oral Insulin Trials.

“Very recently one of the largest type 1 diabetes prevention trials using daily administration of oral insulin or placebo was completed. After 9 years of study enrollment and follow-up, the randomized controlled trial failed to delay the onset of clinical type 1 diabetes, which was the primary end point. The unfortunate outcome follows the previous large-scale trial, the Diabetes Prevention Trial–Type 1 (DPT-1), which again failed to delay diabetes onset with oral insulin or low-dose subcutaneous insulin injections in a randomized controlled trial with relatives at risk for type 1 diabetes. These sobering results raise the important question, “Where does the type 1 diabetes prevention field move next?” In this Perspective, we advocate for a paradigm shift in which smaller mechanistic trials are conducted to define immune mechanisms and potentially identify treatment responders. […] Mechanistic trials will allow for better trial design and patient selection based upon molecular markers prior to large randomized controlled trials, moving toward a personalized medicine approach for the prevention of type 1 diabetes.”

“Before a disease can be prevented, it must be predicted. The ability to assess risk for developing type 1 diabetes (T1D) has been well documented over the last two decades (1). Using genetic markers, human leukocyte antigen (HLA) DQ and DR typing (2), islet autoantibodies (1), and assessments of glucose tolerance (intravenous or oral glucose tolerance tests) has led to accurate prediction models for T1D development (3). Prospective birth cohort studies Diabetes Autoimmunity Study in the Young (DAISY) in Colorado (4), Type 1 Diabetes Prediction and Prevention (DIPP) study in Finland (5), and BABYDIAB studies in Germany have followed genetically at-risk children for the development of islet autoimmunity and T1D disease onset (6). These studies have been instrumental in understanding the natural history of T1D and making T1D a predictable disease with the measurement of antibodies in the peripheral blood directed against insulin and proteins within β-cells […]. Having two or more islet autoantibodies confers an ∼85% risk of developing T1D within 15 years and nearly 100% over time (7). […] T1D can be predicted by measuring islet autoantibodies, and thousands of individuals including young children are being identified through screening efforts, necessitating the need for treatments to delay and prevent disease onset.”

“Antigen-specific immunotherapies hold the promise of potentially inducing tolerance by inhibiting effector T cells and inducing regulatory T cells, which can act locally at tissue-specific sites of inflammation (12). Additionally, side effects are minimal with these therapies. As such, insulin and GAD have both been used as antigen-based approaches in T1D (13). Oral insulin has been evaluated in two large randomized double-blinded placebo-controlled trials over the last two decades. First in the Diabetes Prevention Trial–Type 1 (DPT-1) and then in the TrialNet clinical trials network […] The DPT-1 enrolled relatives at increased risk for T1D having islet autoantibodies […] After 6 years of treatment, there was no delay in T1D onset. […] The TrialNet study screened, enrolled, and followed 560 at-risk relatives over 9 years from 2007 to 2016, and results have been recently published (16). Unfortunately, this trial failed to meet the primary end point of delaying or preventing diabetes onset.”

“Many factors influence the potency and efficacy of antigen-specific therapy such as dose, frequency of dosing, route of administration, and, importantly, timing in the disease process. […] Over the last two decades, most T1D clinical trial designs have randomized participants 1:1 or 2:1, drug to placebo, in a double-blind two-arm design, especially those intervention trials in new-onset T1D (18). Primary end points have been delay in T1D onset for prevention trials or stimulated C-peptide area under the curve at 12 months with new-onset trials. These designs have served the field well and provided reliable human data for efficacy. However, there are limitations including the speed at which these trials can be completed, the number of interventions evaluated, dose optimization, and evaluation of mechanistic hypotheses. Alternative clinical trial designs, such as adaptive trial designs using Bayesian statistics, can overcome some of these issues. Adaptive designs use accumulating data from the trial to modify certain aspects of the study, such as enrollment and treatment group assignments. This “learn as we go” approach relies on biomarkers to drive decisions on planned trial modifications. […] One of the significant limitations for adaptive trial designs in the T1D field, at the present time, is the lack of validated biomarkers for short-term readouts to inform trial adaptations. However, large-scale collaborative efforts are ongoing to define biomarkers of T1D-specific immune dysfunction and β-cell stress and death (9,22).”
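To make the “learn as we go” idea concrete, here is a minimal sketch of Bayesian response-adaptive randomization (Thompson sampling on a binary short-term biomarker endpoint). The arms, response rates, endpoint, and sample size below are invented for illustration; they are not taken from any T1D trial, and real adaptive designs involve much more (pre-specified adaptation rules, interim analyses, type I error control).

```python
import random

# Minimal sketch of Bayesian response-adaptive randomization (Thompson
# sampling). Arm names and response rates are hypothetical.
random.seed(1)
true_response = {"placebo": 0.15, "oral_insulin": 0.30}   # assumed, illustrative
# Beta(1, 1) priors on each arm's response probability, stored as [alpha, beta].
posterior = {arm: [1, 1] for arm in true_response}

allocations = {arm: 0 for arm in true_response}
for _ in range(200):                                      # 200 hypothetical participants
    # Draw a response probability from each arm's posterior and assign
    # the next participant to the arm with the larger draw.
    draws = {arm: random.betavariate(a, b) for arm, (a, b) in posterior.items()}
    arm = max(draws, key=draws.get)
    allocations[arm] += 1
    # Observe a (simulated) short-term biomarker response and update the posterior.
    response = random.random() < true_response[arm]
    posterior[arm][0 if response else 1] += 1

print(allocations)   # allocation drifts toward the better-performing arm
```

The point of the sketch is simply that accumulating outcome data changes the treatment-assignment probabilities as the trial runs, which is exactly why the authors stress the need for validated short-term biomarkers to drive such adaptations.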

“T1D prevention has proven much more difficult than originally thought, challenging the paradigm that T1D is a single disease. T1D is indeed a heterogeneous disease in terms of age of diagnosis, islet autoantibody profiles, and the rate of loss of residual β-cell function after clinical onset. Children have a much more rapid loss of residual insulin production (measured as C-peptide area under the curve following a mixed-meal tolerance test) after diagnosis than older adolescents and adults (23,24), indicating that childhood and adult-onset T1D are not identical. Further evidence for subtypes of T1D comes from studies of human pancreata of T1D organ donors in which children (0–14 years of age) within 1 year of diagnosis had many more inflamed islets compared with older adolescents and adults aged 15–39 years old (25). Additionally, a younger age of T1D onset (<7 years) has been associated with higher numbers of CD20+ B cells within islets and fewer insulin-containing islets compared with an age of onset ≥13 years associated with fewer CD20+ islet infiltrating cells and more insulin-containing islets (26,27). This suggests a much more aggressive autoimmune process in younger children and distinct endotypes (a subtype of a condition defined by a distinct pathophysiologic mechanism), which has recently been proposed for T1D (27).”

“Safe and specific therapies capable of being used in children are needed for T1D prevention. The vast majority of drug development involves small biotechnology companies, specialty pharmaceutical firms, and large pharmaceutical companies, more so than traditional academia. A large amount of preclinical and clinical research (phase 1, 2, and 3 studies) are needed to advance a drug candidate through the development pipeline to achieve U.S. Food and Drug Administration (FDA) approval for a given disease. A recent analysis of over 4,000 drugs from 835 companies in development during 2003–2011 revealed that only 10.4% of drugs that enter clinical development at phase 1 (safety studies) advance to FDA approval (32). However, the success rate increases 50% for the lead indication of a drug, i.e., a drug specifically developed for one given disease (32). Reasons for this include strong scientific rationale and early efficacy signals such as correlating pharmacokinetic (drug levels) to pharmacodynamic (drug target effects) tests for the lead indication. Lead indications also tend to have smaller, better-defined “homogenous” patient populations than nonlead indications for the same drug. This would imply that the T1D field needs more companies developing drugs specifically for T1D, not type 2 diabetes or other autoimmune diseases with later testing to broaden a drug’s indication. […] In a similar but separate analysis, selection biomarkers were found to substantially increase the success rate of drug approvals across all phases of drug development. Using a selection biomarker as part of study inclusion criteria increased drug approval threefold from 8.4% to 25.9% when used in phase 1 trials, 28% to 46% when transitioning from a phase 2 to phase 3 efficacy trial, and 55% to 76% for a phase 3 trial to likelihood of approval (33). These striking data support the concept that enrichment of patient enrollment at the molecular level is a more successful strategy than heterogeneous enrollment in clinical intervention trials. […] Taken together, new drugs designed specifically for children at risk for T1D and a biomarker selecting patients for a treatment response may increase the likelihood for a successful prevention trial; however, experimental confirmation in clinical trials is needed.”
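The quoted biomarker figures imply different fold increases at each stage of development; a quick arithmetic check (mine, using only the percentages in the passage):

```python
# Approval/transition rates quoted above, without vs. with a selection biomarker.
pairs = {"phase 1 to approval": (8.4, 25.9),
         "phase 2 to phase 3":  (28.0, 46.0),
         "phase 3 to approval": (55.0, 76.0)}

for stage, (without, with_biomarker) in pairs.items():
    fold = with_biomarker / without
    print(f"{stage}: {without}% -> {with_biomarker}%  ({fold:.1f}x)")
# phase 1 to approval: 8.4% -> 25.9%  (3.1x)  -- the 'threefold' increase in the text
```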

v. Metabolic Karma — The Atherogenic Legacy of Diabetes: The 2017 Edwin Bierman Award Lecture.

“Cardiovascular (CV) disease remains the major cause of mortality and is associated with significant morbidity in both type 1 and type 2 diabetes (14). Despite major improvements in the management of traditional risk factors, including hypertension, dyslipidemia, and glycemic control, prevention, retardation and reversal of atherosclerosis, as manifested clinically by myocardial infarction, stroke, and peripheral vascular disease, remain a major unmet need in the population with diabetes. For example, in the Steno-2 study and in its most recent report of the follow-up phase, at least a decade after cessation of the active treatment phase, there remained a high risk of death, primarily from CV disease despite aggressive control of the traditional risk factors, in this originally microalbuminuric population with type 2 diabetes (5,6). In a meta-analysis of major CV trials where aggressive glucose lowering was instituted […] the beneficial effect of intensive glycemic control on CV disease was modest, at best (7). […] recent trials with two sodium–glucose cotransporter 2 inhibitors, empagliflozin and canagliflozin (11,12), and two long-acting glucagon-like peptide 1 agonists, liraglutide and semaglutide (13,14), have reported CV benefits that have led in some of these trials to a decrease in CV and all-cause mortality. However, even with these recent positive CV outcomes, CV disease remains the major burden in the population with diabetes (15).”

“This unmet need of residual CV disease in the population with diabetes remains unexplained but may occur as a result of a range of nontraditional risk factors, including low-grade inflammation and enhanced thrombogenicity as a result of the diabetic milieu (16). Furthermore, a range of injurious pathways as a result of chronic hyperglycemia previously studied in vitro in endothelial cells (17) or in models of microvascular complications may also be relevant and are a focus of this review […] [One] major factor that is likely to promote atherosclerosis in the diabetes setting is increased oxidative stress. There is not only increased generation of ROS from diverse sources but also reduced antioxidant defense in diabetes (40). […] vascular ROS accumulation is closely linked to atherosclerosis and vascular inflammation provide the impetus to consider specific antioxidant strategies as a novel therapeutic approach to decrease CV disease, particularly in the setting of diabetes.”

“One of the most important findings from numerous trials performed in subjects with type 1 and type 2 diabetes has been the identification that previous episodes of hyperglycemia can have a long-standing impact on the subsequent development of CV disease. This phenomenon known as “metabolic memory” or the “legacy effect” has been reported in numerous trials […] The underlying explanation at a molecular and/or cellular level for this phenomenon remains to be determined. Our group, as well as others, has postulated that epigenetic mechanisms may participate in conferring metabolic memory (5153). In in vitro studies initially performed in aortic endothelial cells, transient incubation of these cells in high glucose followed by subsequent return of these cells to a normoglycemic environment was associated with increased gene expression of the p65 subunit of NF-κB, NF-κB activation, and expression of NF-κB–dependent proteins, including MCP-1 and VCAM-1 (54).

In further defining a potential epigenetic mechanism that could explain the glucose-induced upregulation of genes implicated in vascular inflammation, a specific histone methylation mark was identified in the promoter region of the p65 gene (54). This histone 3 lysine 4 monomethylation (H3K4m1) occurred as a result of mobilization of the histone methyl transferase, Set7. Furthermore, knockdown of Set7 attenuated glucose-induced p65 upregulation and prevented the persistent upregulation of this gene despite these endothelial cells returning to a normoglycemic milieu (55). These findings, confirmed in animal models exposed to transient hyperglycemia (54), provide the rationale to consider Set7 as an appropriate target for end-organ protective therapies in diabetes. Although specific Set7 inhibitors are currently unavailable for clinical development, the current interest in drugs that block various enzymes, such as Set7, that influence histone methylation in diseases, such as cancer (56), could lead to agents that warrant testing in diabetes. Studies addressing other sites of histone methylation as well as other epigenetic pathways including DNA methylation and acetylation have been reported or are currently in progress (55,57,58), particularly in the context of diabetes complications. […] As in vitro and preclinical studies increase our knowledge and understanding of the pathogenesis of diabetes complications, it is likely that we will identify new molecular targets leading to better treatments to reduce the burden of macrovascular disease. Nevertheless, these new treatments will need to be considered in the context of improved management of traditional risk factors.”

vi. Perceived risk of diabetes seriously underestimates actual diabetes risk: The KORA FF4 study.

“According to the International Diabetes Federation (IDF), almost half of the people with diabetes worldwide are unaware of having the disease, and even in high-income countries, about one in three diabetes cases is not diagnosed [1,2]. In the USA, 28% of diabetes cases are undiagnosed [3]. In DEGS1, a recent population-based German survey, 22% of persons with HbA1c ≥ 6.5% were unaware of their disease [4]. Persons with undiagnosed diabetes mellitus (UDM) have a more than twofold risk of mortality compared to persons with normal glucose tolerance (NGT) [5,6]; many of them also have undiagnosed diabetes complications like retinopathy and chronic kidney disease [7,8]. […] early detection of diabetes and prediabetes is beneficial for patients, but may be delayed by patients’ being overly optimistic about their own health. Therefore, it is important to address how persons with UDM or prediabetes perceive their diabetes risk.”

“The proportion of persons who perceived their risk of having UDM at the time of the interview as “negligible”, “very low” or “low” was 87.1% (95% CI: 85.0–89.0) in NGT [normal glucose tolerance individuals], 83.9% (81.2–86.4) in prediabetes, and 74.2% (64.5–82.0) in UDM […]. The proportion of persons who perceived themselves at risk of developing diabetes in the following years ranged from 14.6% (95% CI: 12.6–16.8) in NGT to 20.6% (17.9–23.6) in prediabetes to 28.7% (20.5–38.6) in UDM […] In univariate regression models, perceiving oneself at risk of developing diabetes was associated with younger age, female sex, higher school education, obesity, self-rated poor general health, and parental diabetes […] the proportion of better educated younger persons (age ≤ 60 years) with prediabetes, who perceived themselves at risk of developing diabetes was 35%, whereas this figure was only 13% in less well educated older persons (age > 60 years).”

“The present study shows that three out of four persons with UDM [undiagnosed diabetes mellitus] believed that the probability of having undetected diabetes was low or very low. In persons with prediabetes, more than 70% believed that they were not at risk of developing diabetes in the next years. People with prediabetes were more inclined to perceive themselves at risk of diabetes if their self-rated general health was poor, their mother or father had diabetes, they were obese, they were female, their educational level was high, and if they were younger. […] People with undiagnosed diabetes or prediabetes considerably underestimate their probability of having or developing diabetes. […] perceived diabetes risk was lower in men, lower educated and older persons. […] Our results showed that people with low and intermediate education strongly underestimate their risk of diabetes and may qualify as target groups for detection of UDM and prediabetes.”
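The group-level percentages quoted above (e.g. 74.2%, 95% CI 64.5–82.0, among persons with UDM) come with confidence intervals for proportions; a minimal sketch of one common way such an interval is computed (the Wilson score method). The group size used below is my own guess for illustration, not the actual KORA FF4 UDM group size, and the paper's interval method is not stated in the excerpt; with this assumed n the result happens to land close to the quoted interval, but treat that as coincidence rather than confirmation.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical example: 69 of 93 participants (74.2%) perceive their risk as low;
# n = 93 is an assumed group size, not a number reported in the paper.
lo, hi = wilson_ci(69, 93)
print(f"{69/93:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```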

“The present results were in line with results from the Dutch Hoorn Study [18,19]. Adriaanse et al. reported that among persons with UDM, only 28.3% perceived their likeliness of having diabetes to be at least 10% [18], and among persons with high risk of diabetes (predicted from a symptom risk questionnaire), the median perceived likeliness of having diabetes was 10.8% [19]. Again, perceived risk did not fully reflect the actual risk profiles. For BMI, there was barely any association with perceived risk of diabetes in the Dutch study [19].”

July 2, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Immunology, Medicine, Molecular biology, Pharmacology, Studies | Leave a comment

Developmental Biology (I)

On goodreads I called the book “[a]n excellent introduction to the field of developmental biology” and I gave it five stars.

Below I have included some sample observations from the first third of the book or so, as well as some supplementary links.

“The major processes involved in development are: pattern formation; morphogenesis or change in form; cell differentiation by which different types of cell develop; and growth. These processes involve cell activities, which are determined by the proteins present in the cells. Genes control cell behaviour by controlling where and when proteins are synthesized, and cell behaviour provides the link between gene action and developmental processes. What a cell does is determined very largely by the proteins it contains. The hemoglobin in red blood cells enables them to transport oxygen; the cells lining the vertebrate gut secrete specialized digestive enzymes. These activities require specialized proteins […] In development we are concerned primarily with those proteins that make cells different from one another and make them carry out the activities required for development of the embryo. Developmental genes typically code for proteins involved in the regulation of cell behaviour. […] An intriguing question is how many genes out of the total genome are developmental genes – that is, genes specifically required for embryonic development. This is not easy to estimate. […] Some studies suggest that in an organism with 20,000 genes, about 10% of the genes may be directly involved in development.”

“The fate of a group of cells in the early embryo can be determined by signals from other cells. Few signals actually enter the cells. Most signals are transmitted through the space outside of cells (the extracellular space) in the form of proteins secreted by one cell and detected by another. Cells may interact directly with each other by means of molecules located on their surfaces. In both these cases, the signal is generally received by receptor proteins in the cell membrane and is subsequently relayed through other signalling proteins inside the cell to produce the cellular response, usually by turning genes on or off. This process is known as signal transduction. These pathways can be very complex. […] The complexity of the signal transduction pathway means that it can be altered as the cell develops so the same signal can have a different effect on different cells. How a cell responds to a particular signal depends on its internal state and this state can reflect the cell’s developmental history — cells have good memories. Thus, different cells can respond to the same signal in very different ways. So the same signal can be used again and again in the developing embryo. There are thus rather few signalling proteins.”

“All vertebrates, despite their many outward differences, have a similar basic body plan — the segmented backbone or vertebral column surrounding the spinal cord, with the brain at the head end enclosed in a bony or cartilaginous skull. These prominent structures mark the antero-posterior axis with the head at the anterior end. The vertebrate body also has a distinct dorso-ventral axis running from the back to the belly, with the spinal cord running along the dorsal side and the mouth defining the ventral side. The antero-posterior and dorso-ventral axes together define the left and right sides of the animal. Vertebrates have a general bilateral symmetry around the dorsal midline so that outwardly the right and left sides are mirror images of each other though some internal organs such as the heart and liver are arranged asymmetrically. How these axes are specified in the embryo is a key issue. All vertebrate embryos pass through a broadly similar set of developmental stages and the differences are partly related to how and when the axes are set up, and how the embryo is nourished. […] A quite rare but nevertheless important event before gastrulation in mammalian embryos, including humans, is the splitting of the embryo into two, and identical twins can then develop. This shows the remarkable ability of the early embryo to regulate [in this context, regulation refers to ‘the ability of an embryo to restore normal development even if some portions are removed or rearranged very early in development’ – US] and develop normally when half the normal size […] In mammals, there is no sign of axes or polarity in the fertilized egg or during early development, and it only occurs later by an as yet unknown mechanism.”

“How is left–right established? Vertebrates are bilaterally symmetric about the midline of the body for many structures, such as eyes, ears, and limbs, but most internal organs are asymmetric. In mice and humans, for example, the heart is on the left side, the right lung has more lobes than the left, the stomach and spleen lie towards the left, and the bulk of the liver is towards the right. This handedness of organs is remarkably consistent […] Specification of left and right is fundamentally different from specifying the other axes of the embryo, as left and right have meaning only after the antero-posterior and dorso-ventral axes have been established. If one of these axes were reversed, then so too would be the left–right axis and this is the reason that handedness is reversed when you look in a mirror—your dorsoventral axis is reversed, and so left becomes right and vice versa. The mechanisms by which left–right symmetry is initially broken are still not fully understood, but the subsequent cascade of events that leads to organ asymmetry is better understood. The ‘leftward’ flow of extracellular fluid across the embryonic midline by a population of ciliated cells has been shown to be critical in mouse embryos in inducing asymmetric expression of genes involved in establishing left versus right. The antero-posterior patterning of the mesoderm is most clearly seen in the differences in the somites that form vertebrae: each individual vertebra has well defined anatomical characteristics depending on its location along the axis. Patterning of the skeleton along the body axis is based on the somite cells acquiring a positional value that reflects their position along the axis and so determines their subsequent development. […] It is the Hox genes that define positional identity along the antero-posterior axis […]. The Hox genes are members of the large family of homeobox genes that are involved in many aspects of development and are the most striking example of a widespread conservation of developmental genes in animals. The name homeobox comes from their ability to bring about a homeotic transformation, converting one region into another. Most vertebrates have clusters of Hox genes on four different chromosomes. A very special feature of Hox gene expression in both insects and vertebrates is that the genes in the clusters are expressed in the developing embryo in a temporal and spatial order that reflects their order on the chromosome. Genes at one end of the cluster are expressed in the head region, while those at the other end are expressed in the tail region. This is a unique feature in development, as it is the only known case where a spatial arrangement of genes on a chromosome corresponds to a spatial pattern in the embryo. The Hox genes provide the somites and adjacent mesoderm with positional values that determine their subsequent development.”

“Many of the genes that control the development of flies are similar to those controlling development in vertebrates, and indeed in many other animals. It seems that once evolution finds a satisfactory way of developing animal bodies, it tends to use the same mechanisms and molecules over and over again with, of course, some important modifications. […] The insect body is bilaterally symmetrical and has two distinct and largely independent axes: the antero-posterior and dorso-ventral axes, which are at right angles to each other. These axes are already partly set up in the fly egg, and become fully established and patterned in the very early embryo. Along the antero-posterior axis the embryo becomes divided into a number of segments, which will become the head, thorax, and abdomen of the larva. A series of evenly spaced grooves forms more or less simultaneously and these demarcate parasegments, which later give rise to the segments of the larva and adult. Of the fourteen larval parasegments, three contribute to mouthparts of the head, three to the thoracic region, and eight to the abdomen. […] Development is initiated by a gradient of the protein Bicoid, along the axis running from anterior to posterior in the egg; this provides the positional information required for further patterning along this axis. Bicoid is a transcription factor and acts as a morphogen—a graded concentration of a molecule that switches on particular genes at different threshold concentrations, thereby initiating a new pattern of gene expression along the axis. Bicoid activates anterior expression of the gene hunchback […]. The hunchback gene is switched on only when Bicoid is present above a certain threshold concentration. The protein of the hunchback gene, in turn, is instrumental in switching on the expression of the other genes, along the antero-posterior axis. […] The dorso-ventral axis is specified by a different set of maternal genes from those that specify the anterior-posterior axis, but by a similar mechanism. […] Once each parasegment is delimited, it behaves as an independent developmental unit, under the control of a particular set of genes. The parasegments are initially similar but each will soon acquire its own unique identity mainly due to Hox genes.”
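A common way to formalize the morphogen-plus-thresholds idea is an exponentially decaying gradient read out against gene-specific thresholds (the so-called French flag model). The sketch below is purely illustrative: the decay length, source concentration, gene names, and thresholds are arbitrary numbers I chose, not measured Bicoid parameters.

```python
import math

# Illustrative morphogen gradient: concentration decays exponentially from
# the anterior source, c(x) = c0 * exp(-x / lam). All numbers are arbitrary.
c0, lam = 100.0, 0.2   # source concentration; decay length (fractions of egg length)

def concentration(x):
    return c0 * math.exp(-x / lam)

# A target gene is switched on wherever c(x) exceeds its threshold, so each
# threshold T maps to an expression boundary at x* = lam * ln(c0 / T).
for gene, T in [("gene_A", 30.0), ("gene_B", 10.0), ("gene_C", 3.0)]:
    x_star = lam * math.log(c0 / T)
    print(f"{gene}: on for x < {x_star:.2f}   (c at boundary = {concentration(x_star):.1f})")
```

Genes with lower thresholds are expressed further from the source, which is how a single smooth gradient can set up several distinct expression domains along the axis.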

“Because plant cells have rigid cell walls and, unlike animal cells, cannot move, a plant’s development is very much the result of patterns of oriented cell divisions and increase in cell size. Despite this difference, cell fate in plant development is largely determined by similar means as in animals – by a combination of positional signals and intercellular communication. […] The logic behind the spatial layouts of gene expression that pattern a developing flower is similar to that of Hox gene action in patterning the body axis in animals, but the genes involved are completely different. One general difference between plant and animal development is that most of the development occurs not in the embryo but in the growing plant. Unlike an animal embryo, the mature plant embryo inside a seed is not simply a smaller version of the organism it will become. All the ‘adult’ structures of the plant – shoots, roots, stalks, leaves, and flowers – are produced in the adult plant from localized groups of undifferentiated cells known as meristems. […] Another important difference between plant and animal cells is that a complete, fertile plant can develop from a single differentiated somatic cell and not just from a fertilized egg. This suggests that, unlike the differentiated cells of adult animals, some differentiated cells of the adult plant may retain totipotency and so behave like animal embryonic stem cells. […] The small organic molecule auxin is one of the most important and ubiquitous chemical signals in plant development and plant growth.”

“All animal embryos undergo a dramatic change in shape during their early development. This occurs primarily during gastrulation, the process that transforms a two-dimensional sheet of cells into the complex three-dimensional animal body, and involves extensive rearrangements of cell layers and the directed movement of cells from one location to another. […] Change in form is largely a problem in cell mechanics and requires forces to bring about changes in cell shape and cell migration. Two key cellular properties involved in changes in animal embryonic form are cell contraction and cell adhesiveness. Contraction in one part of a cell can change the cell’s shape. Changes in cell shape are generated by forces produced by the cytoskeleton, an internal protein framework of filaments. Animal cells stick to one another, and to the external support tissue that surrounds them (the extracellular matrix), through interactions involving cell-surface proteins. Changes in the adhesion proteins at the cell surface can therefore determine the strength of cell–cell adhesion and its specificity. These adhesive interactions affect the surface tension at the cell membrane, a property that contributes to the mechanics of the cell behaviour. Cells can also migrate, with contraction again playing a key role. An additional force that operates during morphogenesis, particularly in plants but also in a few aspects of animal embryogenesis, is hydrostatic pressure, which causes cells to expand. In plants there is no cell movement or change in shape, and changes in form are generated by oriented cell division and cell expansion. […] Localized contraction can change the shape of the cells as well as the sheet they are in. For example, folding of a cell sheet—a very common feature in embryonic development—is caused by localized changes in cell shape […]. Contraction on one side of a cell results in it acquiring a wedge-like form; when this occurs among a few cells locally in a sheet, a bend occurs at the site, deforming the sheet.”

“The integrity of tissues in the embryo is maintained by adhesive interactions between cells and between cells and the extracellular matrix; differences in cell adhesiveness also help maintain the boundaries between different tissues and structures. Cells stick to each other by means of cell adhesion molecules, such as cadherins, which are proteins on the cell surface that can bind strongly to proteins on other cell surfaces. About 30 different types of cadherins have been identified in vertebrates. […] Adhesion of a cell to the extracellular matrix, which contains proteins such as collagen, is by the binding of integrins in the cell membrane to these matrix molecules. […] Convergent extension plays a key role in gastrulation of [some] animals and […] morphogenetic processes. It is a mechanism for elongating a sheet of cells in one direction while narrowing its width, and occurs by rearrangement of cells within the sheet, rather than by cell migration or cell division. […] For convergent extension to take place, the axes along which the cells will intercalate and extend must already have been defined. […] Gastrulation in vertebrates involves a much more dramatic and complex rearrangement of tissues than in sea urchins […] But the outcome is the same: the transformation of a two-dimensional sheet of cells into a three-dimensional embryo, with ectoderm, mesoderm, and endoderm in the correct positions for further development of body structure. […] Directed dilation is an important force in plants, and results from an increase in hydrostatic pressure inside a cell. Cell enlargement is a major process in plant growth and morphogenesis, providing up to a fiftyfold increase in the volume of a tissue. The driving force for expansion is the hydrostatic pressure exerted on the cell wall as a result of the entry of water into cell vacuoles by osmosis. Plant-cell expansion involves synthesis and deposition of new cell-wall material, and is an example of directed dilation. The direction of cell growth is determined by the orientation of the cellulose fibrils in the cell wall.”

Links:

Developmental biology.
August Weismann. Hans Driesch. Hans Spemann. Hilde Mangold. Spemann-Mangold organizer.
Induction. Cleavage.
Developmental model organisms.
Blastula. Embryo. Ectoderm. Mesoderm. Endoderm.
Gastrulation.
Xenopus laevis.
Notochord.
Neurulation.
Organogenesis.
DNA. Gene. Protein. Transcription factor. RNA polymerase.
Epiblast. Trophoblast/trophectoderm. Inner cell mass.
Pluripotency.
Polarity in embryogenesis/animal-vegetal axis.
Primitive streak.
Hensen’s node.
Neural tube. Neural fold. Neural crest cells.
Situs inversus.
Gene silencing. Morpholino.
Drosophila embryogenesis.
Pair-rule gene.
Cell polarity.
Mosaic vs regulative development.
Caenorhabditis elegans.
Fate mapping.
Plasmodesmata.
Arabidopsis thaliana.
Apical-basal axis.
Hypocotyl.
Phyllotaxis.
Primordium.
Quiescent centre.
Filopodia.
Radial cleavage. Spiral cleavage.

June 11, 2018 Posted by | Biology, Books, Botany, Genetics, Molecular biology | Leave a comment

Molecular biology (III)

Below I have added a few quotes and links related to the last few chapters of the book’s coverage.

“Normal ageing results in part from exhaustion of stem cells, the cells that reside in most organs to replenish damaged tissue. As we age DNA damage accumulates and this eventually causes the cells to enter a permanent non-dividing state called senescence. This protective ploy however has its downside as it limits our lifespan. When too many stem cells are senescent the body is compromised in its capacity to renew worn-out tissue, causing the effects of ageing. This has a knock-on effect of poor intercellular communication, mitochondrial dysfunction, and loss of protein balance (proteostasis). Low levels of chronic inflammation also increase with ageing and could be the trigger for changes associated with many age-related disorders.”

“There has been a dramatic increase in ageing research using yeast and invertebrates, leading to the discovery of more ‘ageing genes’ and their pathways. These findings can be extrapolated to humans since longevity pathways are conserved between species. The major pathways known to influence ageing have a common theme, that of sensing and metabolizing nutrients. […] The field was advanced by identification of the mammalian Target Of Rapamycin, aptly named mTOR. mTOR acts as a molecular sensor that integrates growth stimuli with nutrient and oxygen availability. Small molecules such as rapamycin that reduce mTOR signalling act in a similar way to severe dietary restriction in slowing the ageing process in organisms such as yeast and worms. […] Rapamycin and its derivatives (rapalogs) have been involved in clinical trials on reducing age-related pathologies […] Another major ageing pathway is telomere maintenance. […] Telomere attrition is a hallmark of ageing and studies have established an association between shorter telomere length (TL) and the risk of various common age-related ailments […] Telomere loss is accelerated by known determinants of ill health […] The relationship between TL and cancer appears complex.”

“Cancer is not a single disease but a range of diseases caused by abnormal growth and survival of cells that have the capacity to spread. […] One of the early stages in the acquisition of an invasive phenotype is epithelial-mesenchymal transition (EMT). Epithelial cells form skin and membranes and for this they have a strict polarity (a top and a bottom) and are bound in position by close connections with adjacent cells. Mesenchymal cells on the other hand are loosely associated, have motility, and lack polarization. The transition between epithelial and mesenchymal cells is a normal process during embryogenesis and wound healing but is deregulated in cancer cells. EMT involves transcriptional reprogramming in which epithelial structural proteins are lost and mesenchymal ones acquired. This facilitates invasion of a tumour into surrounding tissues. […] Cancer is a genetic disease but mostly not inherited from the parents. Normal cells evolve to become cancer cells by acquiring successive mutations in cancer-related genes. There are two main classes of cancer genes, the proto-oncogenes and the tumour suppressor genes. The proto-oncogenes code for protein products that promote cell proliferation. […] A mutation in a proto-oncogene changes it to an ‘oncogene’ […] One gene above all others is associated with cancer suppression and that is TP53. […] approximately half of all human cancers carry a mutated TP53 and in many more, p53 is deregulated. […] p53 plays a key role in eliminating cells that have either acquired activating oncogenes or excessive genomic damage. Thus mutations in the TP53 gene allows cancer cells to survive and divide further by escaping cell death […] A mutant p53 not only lacks the tumour suppressor functions of the normal or wild type protein but in many cases it also takes on the role of an oncogene. […] Overall 5-10 per cent of cancers occur due to inherited or germ line mutations that are passed from parents to offspring. Many of these genes code for DNA repair enzymes […] The vast majority of cancer mutations are not inherited; instead they are sporadic with mutations arising in somatic cells. […] At least 15 per cent of cancers are attributable to infectious agents, examples being HPV and cervical cancer, H. pylori and gastric cancer, and also hepatitis B or C and liver cancer.”

“There are about 10 million different sites at which people can vary in their DNA sequence within the 3 billion bases in our DNA. […] A few, but highly variable sequences or minisatellites are chosen for DNA profiling. These give a highly sensitive procedure suitable for use with small amounts of body fluids […] even shorter sequences called microsatellite repeats [are also] used. Each marker or microsatellite is a short tandem repeat (STR) of two to five base pairs of DNA sequence. A single STR will be shared by up to 20 per cent of the population but by using a dozen or so identification markers in profile, the error is miniscule. […] Microsatellites are extremely useful for analysing low-quality or degraded DNA left at a crime scene as their short sequences are usually preserved. However, DNA in specimens that have not been optimally preserved persists in exceedingly small amounts and is also highly fragmented. It is probably also riddled by contamination and chemical damage. Such sources of DNA are too degraded to obtain a profile using genomic STRs and in these cases mitochondrial DNA, being more abundant, is more useful than nuclear DNA for DNA profiling. […] Mitochondrial DNA profiling is the method of choice for determining the identities of missing or unknown people when a maternally linked relative can be found. Molecular biologists can amplify hypervariable regions of mitochondrial DNA by PCR to obtain enough material for analysis. The DNA products are sequenced and single nucleotide differences are sought with a reference DNA from a maternal relative. […] It has now become possible for […] ancient DNA to reveal much more than genotype matches. […] Pigmentation characteristics can now be determined from ancient DNA since skin, hair, and eye colour are some of the easiest characteristics to predict. This is due to the limited number of base differences or SNPs required to explain most of the variability.”
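Why combining a dozen markers makes the error ‘miniscule’ is easy to quantify with a back-of-the-envelope calculation: if each marker is shared by up to ~20% of the population and the markers are treated as independent (a simplification; real casework uses allele frequencies and corrections for population structure), the chance of a coincidental full-profile match shrinks geometrically:

```python
# Back-of-the-envelope random-match probability for an STR profile,
# assuming each marker is shared by ~20% of the population and that
# markers are independent of one another.
per_marker = 0.20
for n_markers in (1, 6, 12, 13):
    print(n_markers, f"{per_marker ** n_markers:.1e}")
# 12 markers: ~4.1e-09, i.e. roughly 1 in 240 million
```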

“A broad range of debilitating and fatal conditions, none of which can be cured, are associated with mitochondrial DNA mutations. […] [M]itochondrial DNA mutates ten to thirty times faster than nuclear DNA […] Mitochondrial DNA mutates at a higher rate than nuclear DNA due to higher numbers of DNA molecules and reduced efficiency in controlling DNA replication errors. […] Over 100,000 copies of mitochondrial DNA are present in the cytoplasm of the human egg or oocyte. After fertilization, only maternal mitochondria survive; the small numbers of the father’s mitochondria in the zygote are targeted for destruction. Thus all mitochondrial DNA for all cell types in the resulting embryo is maternal-derived. […] Patients affected by mitochondrial disease usually have a mixture of wild type (normal) and mutant mitochondrial DNA and the disease severity depends on the ratio of the two. Importantly the actual level of mutant DNA in a mother’s heteroplas[m]y […curiously the authors throughout the coverage insist on spelling this ‘heteroplasty’, which according to google is something quite different – I decided to correct the spelling error (?) here – US] is not inherited and offspring can be better or worse off than the mother. This also causes uncertainty since the ratio of wild type to mutant mitochondria may change during development. […] Over 700 mutations in mitochondrial DNA have been found leading to myopathies, neurodegeneration, diabetes, cancer, and infertility.”

Links:

Dementia. Alzheimer’s disease. Amyloid hypothesis. Tau protein. Proteopathy. Parkinson’s disease. TP53-inducible glycolysis and apoptosis regulator (TIGAR).
Progeria. Progerin. Werner’s syndrome. Xeroderma pigmentosum. Cockayne syndrome.
Shelterin.
Telomerase.
Alternative lengthening of telomeres: models, mechanisms and implications (Nature).
Coats plus syndrome.
Neoplasia. Tumor angiogenesis. Inhibitor protein MDM2.
Li–Fraumeni syndrome.
Non-coding RNA networks in cancer (Nature).
Cancer stem cell. (“The reason why current cancer therapies often fail to eradicate the disease is that the CSCs survive current DNA damaging treatments and repopulate the tumour.” See also this IAS lecture which covers closely related topics – US.)
Imatinib.
Restriction fragment length polymorphism (RFLP).
CODIS.
MC1R.
Archaic human admixture with modern humans.
El Tor strain.
DNA barcoding.
Hybrid breakdown/-inviability.
Trastuzumab.
Digital PCR.
Pearson’s syndrome.
Mitochondrial replacement therapy.
Synthetic biology.
Artemisinin.
Craig Venter.
Genome editing.
Indel.
CRISPR.
Tyrosinemia.

June 3, 2018 Posted by | Biology, Books, Cancer/oncology, Genetics, Medicine, Molecular biology | Leave a comment

A few diabetes papers of interest

i. Reevaluating the Evidence for Blood Pressure Targets in Type 2 Diabetes.

“There is general consensus that treating adults with type 2 diabetes mellitus (T2DM) and hypertension to a target blood pressure (BP) of <140/90 mmHg helps prevent cardiovascular disease (CVD). Whether more intensive BP control should be routinely targeted remains a matter of debate. While the American Diabetes Association (ADA) BP guidelines recommend an individualized assessment to consider different treatment goals, the American College of Cardiology/American Heart Association BP guidelines recommend a BP target of <130/80 mmHg for most individuals with hypertension, including those with T2DM (13).

In large part, these discrepant recommendations reflect the divergent results of the Action to Control Cardiovascular Risk in Diabetes-BP trial (ACCORD-BP) among people with T2DM and the Systolic Blood Pressure Intervention Trial (SPRINT), which excluded people with diabetes (4,5). Both trials evaluated the effect of intensive compared with standard BP treatment targets (<120 vs. <140 mmHg systolic) on a composite CVD end point of nonfatal myocardial infarction or stroke or death from cardiovascular causes. SPRINT also included unstable angina and acute heart failure in its composite end point. While ACCORD-BP did not show a significant benefit from the intervention (hazard ratio [HR] 0.88; 95% CI 0.73–1.06), SPRINT found a significant 25% relative risk reduction on the primary end point favoring intensive therapy (0.75; 0.64–0.89).”
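One standard way to ask whether the ACCORD-BP and SPRINT estimates are statistically distinguishable is a heterogeneity (interaction) test on the log hazard ratio scale, with standard errors recovered from the published confidence intervals. The sketch below is my own illustrative calculation, not an analysis reported in the paper, and it is crude: it ignores the differences in composite endpoints and populations that the paper itself discusses.

```python
import math

def log_hr_and_se(hr, lo, hi, z=1.96):
    """Recover log(HR) and its SE from a point estimate and 95% CI."""
    return math.log(hr), (math.log(hi) - math.log(lo)) / (2 * z)

# Published primary-endpoint estimates quoted above.
accord = log_hr_and_se(0.88, 0.73, 1.06)   # ACCORD-BP (T2DM)
sprint = log_hr_and_se(0.75, 0.64, 0.89)   # SPRINT (no diabetes)

diff = accord[0] - sprint[0]
se_diff = math.sqrt(accord[1]**2 + sprint[1]**2)
z = diff / se_diff
# Two-sided p-value from the standard normal distribution.
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"ratio of HRs = {math.exp(diff):.2f}, z = {z:.2f}, p = {p:.2f}")
```

On this rough comparison the two trial estimates are not clearly statistically distinguishable, which is consistent with the authors' point that chance may account for part of the divergence.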

“To some extent, CVD mechanisms and causes of death differ in T2DM patients compared with the general population. Microvascular disease (particularly kidney disease), accelerated vascular calcification, and diabetic cardiomyopathy are common in T2DM (1315). Moreover, the rate of sudden cardiac arrest is markedly increased in T2DM and related, in part, to diabetes-specific factors other than ischemic heart disease (16). Hypoglycemia is a potential cause of CVD mortality that is specific to diabetes (17). In addition, polypharmacy is common and may increase CVD risk (18). Furthermore, nonvascular causes of death account for approximately 40% of the premature mortality burden experienced by T2DM patients (19). Whether these disease processes may render patients with T2DM less amenable to derive a mortality benefit from intensive BP control, however, is not known and should be the focus of future research.

In conclusion, the divergent results between ACCORD-BP and SPRINT are most readily explained by the apparent lack of benefit of intensive BP control on CVD and all-cause mortality in ACCORD-BP, rather than differences in the design, population characteristics, or interventions between the trials. This difference in effects on mortality may be attributable to differential mechanisms underlying CVD mortality in T2DM, to chance, or to both. These observations suggest that caution should be exercised extrapolating the results of SPRINT to patients with T2DM and support current ADA recommendations to individualize BP targets, targeting a BP of <140/90 mmHg in the majority of patients with T2DM and considering lower BP targets when it is anticipated that individual benefits outweigh risks.”

ii. Modelling incremental benefits on complications rates when targeting lower HbA1c levels in people with Type 2 diabetes and cardiovascular disease.

“Glucose‐lowering interventions in Type 2 diabetes mellitus have demonstrated reductions in microvascular complications and modest reductions in macrovascular complications. However, the degree to which targeting different HbA1c reductions might reduce risk is unclear. […] Participant‐level data for Trial Evaluating Cardiovascular Outcomes with Sitagliptin (TECOS) participants with established cardiovascular disease were used in a Type 2 diabetes‐specific simulation model to quantify the likely impact of different HbA1c decrements on complication rates. […] The use of the TECOS data limits our findings to people with Type 2 diabetes and established cardiovascular disease. […] Ten‐year micro‐ and macrovascular rates were estimated with HbA1c levels fixed at 86, 75, 64, 53 and 42 mmol/mol (10%, 9%, 8%, 7% and 6%) while holding other risk factors constant at their baseline levels. Cumulative relative risk reductions for each outcome were derived for each HbA1c decrement. […] Of 5717 participants studied, 72.0% were men and 74.2% White European, with a mean (sd) age of 66.2 (7.9) years, systolic blood pressure 134 (16.9) mmHg, LDL‐cholesterol 2.3 (0.9) mmol/l, HDL‐cholesterol 1.13 (0.3) mmol/l and median Type 2 diabetes duration 9.6 (5.1–15.6) years. Ten‐year cumulative relative risk reductions for modelled HbA1c values of 75, 64, 53 and 42 mmol/mol, relative to 86 mmol/mol, were 4.6%, 9.3%, 15.1% and 20.2% for myocardial infarction; 6.0%, 12.8%, 19.6% and 25.8% for stroke; 14.4%, 26.6%, 37.1% and 46.4% for diabetes‐related ulcer; 21.5%, 39.0%, 52.3% and 63.1% for amputation; and 13.6%, 25.4%, 36.0% and 44.7% for single‐eye blindness. […] We did not investigate outcomes for renal failure or chronic heart failure as previous research conducted to create the model did not find HbA1c to be a statistically significant independent risk factor for either condition, therefore no clinically meaningful differences would be expected from modelling different HbA1c levels (11).”
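For reference, the paired units in the quote follow the usual IFCC (mmol/mol) to NGSP/DCCT (%) conversion; a minimal sketch using the commonly cited master-equation coefficients (small rounding differences are to be expected):

```python
# IFCC <-> NGSP HbA1c conversion (commonly cited master-equation coefficients).
def ifcc_to_ngsp(mmol_per_mol):
    return 0.09148 * mmol_per_mol + 2.152      # result in %

for ifcc in (86, 75, 64, 53, 42):
    print(f"{ifcc} mmol/mol ~ {ifcc_to_ngsp(ifcc):.1f}%")
# 86 -> 10.0%, 75 -> 9.0%, 64 -> 8.0%, 53 -> 7.0%, 42 -> 6.0%
```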

“For microvascular complications, the absolute median estimates tended to be lower than for macrovascular complications at the same HbA1c level, but cumulative relative risk reductions were greater. For amputation the 10‐year absolute median estimate for a modelled constant HbA1c of 86 mmol/mol (10%) was 3.8% (3.7, 3.9), with successively lower values for each modelled 1% HbA1c decrement. Compared with the 86 mmol/mol (10%) HbA1c level, median relative risk reductions for amputation were 21.5% (21.1, 21.9) at 75 mmol/mol (9%) increasing to 52.3% (52.0, 52.6) at 53 mmol/mol (7%). […] Relative risk reductions in micro‐ and macrovascular complications for each 1% HbA1c reduction were similar for each decrement. The exception was all‐cause mortality, where the relative risk reductions for 1% HbA1c decrements were greater at higher baseline HbA1c levels. These simulated outcomes differ from the Diabetes Control and Complications Trial outcome in people with Type 1 diabetes, where lowering HbA1c from higher baseline levels had a greater impact on microvascular risk reduction (18).”
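Relative risk reductions of this kind translate directly into implied absolute risks; a worked example for amputation using the figures quoted above. Note that the absolute risks at the lower HbA1c levels are back-calculated by me from the stated RRRs, so they are implied values rather than numbers reported directly in the paper.

```python
# Amputation example: modelled 10-year absolute risk 3.8% at HbA1c 86 mmol/mol,
# with stated cumulative relative risk reductions at lower modelled levels.
risk_at_86 = 0.038
stated_rrr = {75: 0.215, 64: 0.390, 53: 0.523, 42: 0.631}

for hba1c, rrr in stated_rrr.items():
    implied_abs_risk = risk_at_86 * (1 - rrr)      # since RRR = 1 - risk / risk_at_86
    print(f"HbA1c {hba1c} mmol/mol: implied 10-year amputation risk "
          f"{implied_abs_risk:.1%} (RRR {rrr:.1%})")
```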

iii. Laser photocoagulation for proliferative diabetic retinopathy (Cochrane review).

“Diabetic retinopathy is a complication of diabetes in which high blood sugar levels damage the blood vessels in the retina. Sometimes new blood vessels grow in the retina, and these can have harmful effects; this is known as proliferative diabetic retinopathy. Laser photocoagulation is an intervention that is commonly used to treat diabetic retinopathy, in which light energy is applied to the retina with the aim of stopping the growth and development of new blood vessels, and thereby preserving vision. […] The aim of laser photocoagulation is to slow down the growth of new blood vessels in the retina and thereby prevent the progression of visual loss (Ockrim 2010). Focal laser photocoagulation uses the heat of light to seal or destroy abnormal blood vessels in the retina. Individual vessels are treated with a small number of laser burns.

PRP [panretinal photocoagulation, US] aims to slow down the growth of new blood vessels in a wider area of the retina. Many hundreds of laser burns are placed on the peripheral parts of the retina to stop blood vessels from growing (RCOphth 2012). It is thought that the anatomic and functional changes that result from photocoagulation may improve the oxygen supply to the retina, and so reduce the stimulus for neovascularisation (Stefansson 2001). Again the exact mechanisms are unclear, but it is possible that the decreased area of retinal tissue leads to improved oxygenation and a reduction in the levels of anti-vascular endothelial growth factor. A reduction in levels of anti-vascular endothelial growth factor may be important in reducing the risk of harmful new vessels forming. […] Laser photocoagulation is a well-established common treatment for DR and there are many different potential strategies for delivery of laser treatment that are likely to have different effects. A systematic review of the evidence for laser photocoagulation will provide important information on benefits and harms to guide treatment choices. […] This is the first in a series of planned reviews on laser photocoagulation. Future reviews will compare different photocoagulation techniques.”

“We identified a large number of trials of laser photocoagulation of diabetic retinopathy (n = 83) but only five of these studies were eligible for inclusion in the review, i.e. they compared laser photocoagulation with currently available lasers to no (or deferred) treatment. Three studies were conducted in the USA, one study in the UK and one study in Japan. A total of 4786 people (9503 eyes) were included in these studies. The majority of participants in four of these trials were people with proliferative diabetic retinopathy; one trial recruited mainly people with non-proliferative retinopathy.”

“At 12 months there was little difference between eyes that received laser photocoagulation and those allocated to no treatment (or deferred treatment), in terms of loss of 15 or more letters of visual acuity (risk ratio (RR) 0.99, 95% confidence interval (CI) 0.89 to 1.11; 8926 eyes; 2 RCTs, low quality evidence). Longer term follow-up did not show a consistent pattern, but one study found a 20% reduction in risk of loss of 15 or more letters of visual acuity at five years with laser treatment. Treatment with laser reduced the risk of severe visual loss by over 50% at 12 months (RR 0.46, 95% CI 0.24 to 0.86; 9276 eyes; 4 RCTs, moderate quality evidence). There was a beneficial effect on progression of diabetic retinopathy with treated eyes experiencing a 50% reduction in risk of progression of diabetic retinopathy (RR 0.49, 95% CI 0.37 to 0.64; 8331 eyes; 4 RCTs, low quality evidence) and a similar reduction in risk of vitreous haemorrhage (RR 0.56, 95% CI 0.37 to 0.85; 224 eyes; 2 RCTs, low quality evidence).”
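For orientation, the relative risk reductions implied by the quoted point estimates are simply one minus the risk ratio (my arithmetic on the point estimates only, ignoring the confidence intervals):

```python
# Relative risk reduction implied by each quoted risk ratio (point estimates only).
quoted_rr = {"loss of >=15 letters at 12 months": 0.99,
             "severe visual loss":                0.46,
             "retinopathy progression":           0.49,
             "vitreous haemorrhage":              0.56}

for outcome, rr in quoted_rr.items():
    print(f"{outcome}: RR {rr:.2f} -> relative risk reduction {1 - rr:.0%}")
# severe visual loss: 54%; progression: 51%; vitreous haemorrhage: 44%
```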

“Overall there is not a large amount of evidence from RCTs on the effects of laser photocoagulation compared to no treatment or deferred treatment. The evidence is dominated by two large studies conducted in the US population (DRS 1978; ETDRS 1991). These two studies were generally judged to be at low or unclear risk of bias, with the exception of inevitable unmasking of patients due to differences between intervention and control. […] In current clinical guidelines, e.g. RCOphth 2012, PRP is recommended in high-risk PDR. The recommendation is that “as retinopathy approaches the proliferative stage, laser scatter treatment (PRP) should be increasingly considered to prevent progression to high risk PDR” based on other factors such as patients’ compliance or planned cataract surgery.

These recommendations need to be interpreted while considering the risk of visual loss associated with different levels of severity of DR, as well as the risk of progression. Since PRP reduces the risk of severe visual loss, but not moderate visual loss that is more related to diabetic maculopathy, most ophthalmologists judge that there is little benefit in treating non-proliferative DR at low risk of severe visual damage, as patients would incur the known adverse effects of PRP, which, although mild, include pain and peripheral visual field loss and transient DMO [diabetic macular oedema, US]. […] This review provides evidence that laser photocoagulation is beneficial in treating diabetic retinopathy. […] based on the baseline risk of progression of the disease, and risk of visual loss, the current approach of caution in treating non-proliferative DR with laser would appear to be justified.

By current standards the quality of the evidence is not high, however, the effects on risk of progression and risk of severe visual loss are reasonably large (50% relative risk reduction).”

iv. Immune Recognition of β-Cells: Neoepitopes as Key Players in the Loss of Tolerance.

I should probably warn beforehand that this one is rather technical. It relates reasonably closely to topics covered in the molecular biology book I recently covered here on the blog, and if I had not read that book quite recently I almost certainly would not have been able to read the paper – so the coverage below is more ‘for me’ than ‘for you’. Anyway, some quotes:

“Prior to the onset of type 1 diabetes, there is progressive loss of immune self-tolerance, evidenced by the accumulation of islet autoantibodies and emergence of autoreactive T cells. Continued autoimmune activity leads to the destruction of pancreatic β-cells and loss of insulin secretion. Studies of samples from patients with type 1 diabetes and of murine disease models have generated important insights about genetic and environmental factors that contribute to susceptibility and immune pathways that are important for pathogenesis. However, important unanswered questions remain regarding the events that surround the initial loss of tolerance and subsequent failure of regulatory mechanisms to arrest autoimmunity and preserve functional β-cells. In this Perspective, we discuss various processes that lead to the generation of neoepitopes in pancreatic β-cells, their recognition by autoreactive T cells and antibodies, and potential roles for such responses in the pathology of disease. Emerging evidence supports the relevance of neoepitopes generated through processes that are mechanistically linked with β-cell stress. Together, these observations support a paradigm in which neoepitope generation leads to the activation of pathogenic immune cells that initiate a feed-forward loop that can amplify the antigenic repertoire toward pancreatic β-cell proteins.”

“Enzymatic posttranslational processes that have been implicated in neoepitope generation include acetylation (10), citrullination (11), glycosylation (12), hydroxylation (13), methylation (either protein or DNA methylation) (14), phosphorylation (15), and transglutamination (16). Among these, citrullination and transglutamination are most clearly implicated as processes that generate neoantigens in human disease, but evidence suggests that others also play a role in neoepitope formation […] Citrulline, which is among the most studied PTMs in the context of autoimmunity, is a diagnostic biomarker of rheumatoid arthritis (RA). […] Anticitrulline antibodies are among the earliest immune responses that are diagnostic of RA and often correlate with disease severity (18). We have recently documented the biological consequences of citrulline modifications and autoimmunity that arise from pancreatic β-cell proteins in the development of T1D (19). In particular, citrullinated GAD65 and glucose-regulated protein (GRP78) elicit antibody and T-cell responses in human T1D and in NOD diabetes, respectively (20,21).”

“Carbonylation is an irreversible, iron-catalyzed oxidative modification of the side chains of lysine, arginine, threonine, or proline. Mitochondrial functions are particularly sensitive to carbonyl modification, which also has detrimental effects on other intracellular enzymatic pathways (30). A number of diseases have been linked with altered carbonylation of self-proteins, including Alzheimer and Parkinson diseases and cancer (27). There is some data to support that carbonyl PTM is a mechanism that directs unstable self-proteins into cellular degradation pathways. It is hypothesized that carbonyl PTM [post-translational modification] self-proteins that fail to be properly degraded in pancreatic β-cells are autoantigens that are targeted in T1D. Recently submitted studies have identified several carbonylated pancreatic β-cell neoantigens in human and murine models of T1D (27). Among these neoantigens are chaperone proteins that are required for the appropriate folding and secretion of insulin. These studies imply that although some PTM self-proteins may be direct targets of autoimmunity, others may alter, interrupt, or disturb downstream metabolic pathways in the β-cell. In particular, these studies indicated that upstream PTMs resulted in misfolding and/or metabolic disruption between proinsulin and insulin production, which provides one explanation for recent observations of increased proinsulin-to-insulin ratios in the progression of T1D (31).”

“Significant hypomethylation of DNA has been linked with several classic autoimmune diseases, such as SLE, multiple sclerosis, RA, Addison disease, Graves disease, and mixed connective tissue disease (36). Therefore, there is rationale to consider the possible influence of epigenetic changes on protein expression and immune recognition in T1D. Relevant to T1D, epigenetic modifications occur in pancreatic β-cells during progression of diabetes in NOD mice (37). […] Consequently, DNMTs [DNA methyltransferases] and protein arginine methyltransferases are likely to play a role in the regulation of β-cell differentiation and insulin gene expression, both of which are pathways that are altered in the presence of inflammatory cytokines. […] Eizirik et al. (38) reported that exposure of human islets to proinflammatory cytokines leads to modulation of transcript levels and increases in alternative splicing for a number of putative candidate genes for T1D. Their findings suggest a mechanism through which alternative splicing may lead to the generation of neoantigens and subsequent presentation of novel β-cell epitopes (39).”

“The phenomenon of neoepitope recognition by autoantibodies has been shown to be relevant in a variety of autoimmune diseases. For example, in RA, antibody responses directed against various citrullinated synovial proteins are remarkably disease-specific and routinely used as a diagnostic test in the clinic (18). Appearance of the first anticitrullinated protein antibodies occurs years prior to disease onset, and accumulation of additional autoantibody specificities correlates closely with the imminent onset of clinical arthritis (44). There is analogous evidence supporting a hierarchical emergence of autoantibody specificities and multiple waves of autoimmune damage in T1D (3,45). Substantial data from longitudinal studies indicate that insulin and GAD65 autoantibodies appear at the earliest time points during progression, followed by additional antibody specificities directed at IA-2 and ZnT8.”

“Multiple autoimmune diseases often cluster within families (or even within one person), implying shared etiology. Consequently, relevant insights can be gleaned from studies of more traditional autoantibody-mediated systemic autoimmune diseases, such as SLE and RA, where inter- and intramolecular epitope spreading are clearly paradigms for disease progression (47). In general, early autoimmunity is marked by restricted B- and T-cell epitopes, followed by an expanded repertoire coinciding with the onset of more significant tissue pathology […] Akin to T1D, other autoimmune syndromes tend to cluster to subcellular tissues or tissue components that share biological or biochemical properties. For example, SLE is marked by autoimmunity to nucleic acid–bearing macromolecules […] Unlike other systemic autoantibody-mediated diseases, such as RA and SLE, there is no clear evidence that T1D-related autoantibodies play a pathogenic role. Autoantibodies against citrulline-containing neoepitopes of proteoglycan are thought to trigger or intensify arthritis by forming immune complexes with this autoantigen in the joints of RA patients with anticitrullinated protein antibodies. In a similar manner, autoantibodies and immune complexes are hallmarks of tissue pathology in SLE. Therefore, it remains likely that autoantibodies or the B cells that produce them contribute to the pathogenesis of T1D.”

“In summation, the existing literature demonstrates that oxidation, citrullination, and deamidation can have a direct impact on T-cell recognition that contributes to loss of tolerance.”

“There is a general consensus that the pathogenesis of T1D is initiated when individuals who possess a high level of genetic risk (e.g., susceptible HLA, insulin VNTR, PTPN22 genotypes) are exposed to environmental factors (e.g., enteroviruses, diet, microbiome) that precipitate a loss of tolerance that manifests through the appearance of insulin and/or GAD autoantibodies. This early autoimmunity is followed by epitope spreading, increasing both the number of antigenic targets and the diversity of epitopes within these targets. These processes create a feed-forward loop of antigen release that induces increasing inflammation and increasing numbers of distinct T-cell specificities (64). The formation and recognition of neoepitopes represents one mechanism through which epitope spreading can occur. […] mechanisms related to neoepitope formation and recognition can be envisioned at multiple stages of T1D pathogenesis. At the level of genetic risk, susceptible individuals may exhibit a genetically driven impairment of their stress response, increasing the likelihood of neoepitope formation. At the level of environmental exposure, many of the insults that are thought to initiate T1D are known to cause neoepitope formation. During the window of β-cell destruction that encompasses early autoimmunity through dysglycemia and diagnosis of T1D it remains unclear when neoepitope responses appear in relation to “classic” responses to insulin and GAD65. However, by the time of onset, neoepitope responses are clearly present and remain as part of the ongoing autoimmunity that is present during established T1D. […] The ultimate product of both direct and indirect generation of neoepitopes is an accumulation of robust and diverse autoimmune B- and T-cell responses, accelerating the pathological destruction of pancreatic islets. Clearly, the emergence of sophisticated methods of tissue and single-cell proteomics will identify novel neoepitopes, including some that occur at or near the earliest stages of disease. A detailed mechanistic understanding of the pathways that lead to specific classes of neoepitopes will certainly suggest targets of therapeutic manipulation and intervention that would be hoped to impede the progression of disease.”

v. Diabetes technology: improving care, improving patient‐reported outcomes and preventing complications in young people with Type 1 diabetes.

“With the evolution of diabetes technology, those living with Type 1 diabetes are given a wider arsenal of tools with which to achieve glycaemic control and improve patient‐reported outcomes. Furthermore, the use of these technologies may help reduce the risk of acute complications, such as severe hypoglycaemia and diabetic ketoacidosis, as well as long‐term macro‐ and microvascular complications. […] Unfortunately, diabetes goals are often unmet and people with Type 1 diabetes too frequently experience acute and long‐term complications of this condition, in addition to often having less than ideal psychosocial outcomes. Increasing realization of the importance of patient‐reported outcomes is leading to diabetes care delivery becoming more patient‐centred. […] Optimal diabetes management requires both the medical and psychosocial needs of people with Type 1 diabetes and their caregivers to be addressed. […] The aim of this paper was to demonstrate how, by incorporating technology into diabetes care, we can increase patient‐centered care, reduce acute and chronic diabetes complications, and improve clinical outcomes and quality of life.”

[The paper’s Table 2 on page 422 of the pdf-version is awesome, it includes a lot of different Hba1c estimates from various patient populations all across the world. The numbers included in the table are slightly less awesome, as most populations only achieve suboptimal metabolic control.]

“The risks of all forms of complications increase with higher HbA1c concentration, increasing diabetes duration, hypertension, presence of other microvascular complications, obesity, insulin resistance, hyperlipidaemia and smoking 6. Furthermore, the Diabetes Research in Children (DirecNet) study has shown that individuals with Type 1 diabetes have white matter differences in the brain and cognitive differences compared with individuals without Type 1 diabetes. These studies showed that the degree of structural differences in the brain were related to the degree of chronic hyperglycaemia, hypoglycaemia and glucose variability 7. […] In addition to long‐term complications, people with Type 1 diabetes are also at risk of acute complications. Severe hypoglycaemia, a hypoglycaemic event resulting in altered/loss of consciousness or seizures, is a serious complication of insulin therapy. If unnoticed and untreated, severe hypoglycaemia can result in death. […] The incidence of diabetic ketoacidosis, a life‐threatening consequence of diabetes, remains unacceptably high in children with established diabetes (Table 5). The annual incidence of ketoacidosis was 5% in the Prospective Diabetes Follow‐Up Registry (DPV) in Germany and Austria, 6.4% in the National Paediatric Diabetes Audit (NPDA), and 7.1% in the Type 1 Diabetes Exchange (T1DX) registry 10. Psychosocial factors including female gender, non‐white race, lower socio‐economic status, and elevated HbA1c all contribute to increased risk of diabetic ketoacidosis 11.”

“Depression is more common in young people with Type 1 diabetes than in young people without a chronic disease […] Depression can make it more difficult to engage in diabetes self‐management behaviours, and as a result, contributes to suboptimal glycaemic control and lower rates of self‐monitoring of blood glucose (SMBG) in young people with Type 1 diabetes 15. […] Unlike depression, diabetes distress is not a clinical diagnosis but rather emotional distress that comes from the burden of living with and managing diabetes 16. A recent systematic review found that roughly one‐third of young people with Type 1 diabetes (age 10–20 years) have some level of diabetes distress and that diabetes distress was consistently associated with higher HbA1c and worse self‐management 17. […] Eating and weight‐related comorbidities also exist for individuals with Type 1 diabetes. There is a higher incidence of obesity in individuals with Type 1 diabetes on intensive insulin therapy. […] Adolescent girls and young adult women with Type 1 diabetes are more likely to omit insulin for weight loss and have disordered eating habits 20.”

“In addition to screening for and treating depression and diabetes distress to improve overall diabetes management, it is equally important to assess quality of life as well as positive coping factors that may also influence self‐management and well‐being. For example, lower scores on the PROMIS® measure of global health, which assesses social relationships as well as physical and mental well‐being, have been linked to higher depression scores and less frequent blood glucose checks 13. Furthermore, coping strategies such as problem‐solving, emotional expression, and acceptance have been linked to lower HbA1c and enhanced quality of life 21.”

“Self‐monitoring of blood glucose via multiple finger sticks for capillary blood samples per day has been the ‘gold standard’ for glucose monitoring, but SMBG only provides glucose measurements as snapshots in time. Still, the majority of young people with Type 1 diabetes use SMBG as their main method to assess glycaemia. Data from the T1DX registry suggest that an increased frequency of SMBG is associated with lower HbA1c levels 23. The development of continuous glucose monitoring (CGM) provides more values, along with the rate and direction of glucose changes. […] With continued use, CGM has been shown to decrease the incidence of hypoglycaemia and HbA1c levels 26. […] Insulin can be administered via multiple daily injections or continuous subcutaneous insulin infusion (insulin pumps). Over the last 30 years, insulin pumps have become smaller with more features, making them a valuable alternative to multiple daily injections. Insulin pump use in various registries ranges from as low as 5.9% among paediatric patients in the New Zealand national register 28 to as high as 74% in the German/Austrian DPV in children aged <6 years (Table 2) 29. Recent data suggest that consistent use of insulin pumps can result in improved HbA1c values and decreased incidence of severe hypoglycaemia 30, 31. Insulin pumps have been associated with improved quality of life 32. The data on insulin pumps and diabetic ketoacidosis are less clear.”

“The majority of Type 1 diabetes management is carried out outside the clinical setting and in individuals’ daily lives. People with Type 1 diabetes must make complex treatment decisions multiple times daily; thus, diabetes self‐management skills are central to optimal diabetes management. Unfortunately, many people with Type 1 diabetes and their caregivers are not sufficiently familiar with the necessary diabetes self‐management skills. […] Parents are often the first who learn these skills. As children become older, they start receiving more independence over their diabetes care; however, the transition of responsibilities from caregiver to child is often unstructured and haphazard. It is important to ensure that both individuals with diabetes and their caregivers have adequate self‐management skills throughout the diabetes journey.”

“In the developed world (nations with the highest gross domestic product), 87% of the population has access to the internet and 68% report using a smartphone 39. Even in developing countries, 54% of people use the internet and 37% own smartphones 39. In many areas, smartphones are the primary source of internet access and are readily available. […] There are >1000 apps for diabetes on the Apple App Store and the Google Play store. Many of these apps have focused on nutrition, blood glucose logging, and insulin dosing. Given the prevalence of smartphones and the interest in having diabetes apps handy, there is the potential for using a smartphone to deliver education and decision support tools. […] The new psychosocial position statement from the ADA recommends routine psychosocial screening in clinic. These recommendations include screening for: 1) depressive symptoms annually, at diagnosis, or with changes in medical status; 2) anxiety and worry about hypoglycaemia, complications and other diabetes‐specific worries; 3) disordered eating and insulin omission for purposes of weight control; 4) and diabetes distress in children as young as 7 or 8 years old 16. Implementation of in‐clinic screening for depression in young people with Type 1 diabetes has already been shown to be feasible, acceptable and able to identify individuals in need of treatment who may otherwise have gone unnoticed for a longer period of time which would have been having a detrimental impact on physical health and quality of life 13, 40. These programmes typically use tablets […] to administer surveys to streamline the screening process and automatically score measures 13, 40. This automation allows psychologists and social workers to focus on care delivery rather than screening. In addition to depression screening, automated tablet‐based screening for parental depression, distress and anxiety; problem‐solving skills; and resilience/positive coping factors can help the care team understand other psychosocial barriers to care. This approach allows the development of patient‐ and caregiver‐centred interventions to improve these barriers, thereby improving clinical outcomes and complication rates.”

“With the advent of electronic health records, registries and downloadable medical devices, people with Type 1 diabetes have troves of data that can be analysed to provide insights on an individual and population level. Big data analytics for diabetes are still in the early stages, but present great potential for improving diabetes care. IBM Watson Health has partnered with Medtronic to deliver personalized insights to individuals with diabetes based on device data 48. Numerous other systems […] allow people with Type 1 diabetes to access their data, share their data with the healthcare team, and share de‐identified data with the research community. Data analysis and insights such as this can form the basis for the delivery of personalized digital health coaching. For example, historical patterns can be analysed to predict activity and lead to pro‐active insulin adjustment to prevent hypoglycaemia. […] Improvements to diabetes care delivery can occur at both the population level and at the individual level using insights from big data analytics.”

vi. Route to improving Type 1 diabetes mellitus glycaemic outcomes: real‐world evidence taken from the National Diabetes Audit.

“While control of blood glucose levels reduces the risk of diabetes complications, it can be very difficult for people to achieve. There has been no significant improvement in average glycaemic control among people with Type 1 diabetes for at least the last 10 years in many European countries 6.

The National Diabetes Audit (NDA) in England and Wales has shown relatively little change in the levels of HbA1c being achieved in people with Type 1 diabetes over the last 10 years, with >70% of HbA1c results each year being >58 mmol/mol (7.5%) 7.

Data for general practices in England are published by the NDA. NHS Digital publishes annual prescribing data, including British National Formulary (BNF) codes 7, 8. Together, these data provide an opportunity to investigate whether there are systematic associations between HbA1c levels in people with Type 1 diabetes and practice‐level population characteristics, diabetes service levels and use of medication.”

“The Quality and Outcomes Framework (a payment system for general practice performance) provided a baseline list of all general practices in England for each year, the practice list size and number of people (both with Type 1 and Type 2 diabetes) on their diabetes register. General practice‐level data of participating practices were taken from the NDA 2013–2014, 2014–2015 and 2015–2016 (5455 practices in the last year). They include Type 1 diabetes population characteristics, routine review checks and the proportions of people achieving target glycaemic control and/or being at higher glycaemic risk.

Diabetes medication data for all people with diabetes were taken from the general practice prescribing in primary care data for 2013–2014, 2014–2015 and 2015–2016, including insulin and blood glucose monitoring (BGM) […] A total of 20 indicators were created that covered the epidemiological, service, medication, technological, costs and outcomes performance for each practice and year. The variance in these indicators over the 4‐year period and among general practices was also considered. […] The values of the indicators found to be in the 90th percentile were used to quantify the potential of highest performing general practices. […] In total 13 085 practice‐years of data were analysed, covering 437 000 patient‐years of management.”

“There was significant variation among the participating general practices (Fig. 3) in the proportion of people achieving the glycaemic control target [percentage of people with HbA1c ≤58 mmol/mol (7.5%)] and in the proportion at high glycaemic risk [percentage of people with HbA1c >86 mmol/mol (10%)]. […] Our analysis showed that, at general practice level, the median target glycaemic control attainment was 30%, while the 10th percentile was 16%, and the 90th percentile was 45%. The corresponding median for the high glycaemic risk percentage was 16%, while the 10th percentile (corresponding to the best performing practices) was 6% and the 90th percentile (greatest proportion of Type 1 diabetes at high glycaemic risk) was 28%. Practices in the deciles for both lowest target glycaemic control and highest high glycaemic risk had 49% of the results in the 58–86 mmol/mol range. […] A very wide variation was found in the percentage of insulin for presumed pump use (deduced from prescriptions of fast‐acting vial insulin), with a median of 3.8% at general practice level. The 10th percentile was 0% and the 90th percentile was 255% of the median inferred pump usage.”

“[O]ur findings suggest that if all practices optimized service and therapies to the levels achieved by the top decile then 16 100 (7%) more people with Type 1 diabetes would achieve the glycaemic control target of 58 mmol/mol (7.5%) and 11 500 (5%) fewer people would have HbA1c >86 mmol/mol (10%). Put another way, if the results for all practices were at the top decile level, 36% vs 29% of people with Type 1 diabetes would achieve the glycaemic control target of HbA1c ≤ 58 mmol/mol (7.5%), and as few as 10% could have HbA1c levels > 86 mmol/mol (10%) compared with 15% currently (Fig. 6). This has significant implications for the potential to improve the longer‐term outcomes of people with Type 1 diabetes, given the close link between glycaemia and complications in such individuals 5, 10, 11.”
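[To make the arithmetic behind these projections explicit, here is a minimal Python sketch. The denominator of roughly 230,000 people with Type 1 diabetes is my own back-calculation from 16 100 being reported as 7%; it is not stated directly in the excerpt above.]

```python
# Back-of-the-envelope check of the quoted 'top decile' projection.
# The population denominator is inferred from 16,100 being described as 7%;
# it is not given directly in the excerpt.
population = 16_100 / 0.07                 # ~230,000 people with Type 1 diabetes

current_at_target = 0.29                   # share with HbA1c <= 58 mmol/mol now
top_decile_at_target = 0.36                # projected if all practices matched the top decile
extra_at_target = (top_decile_at_target - current_at_target) * population
print(f"Additional people reaching target: {extra_at_target:,.0f}")    # ~16,100

current_high_risk = 0.15                   # share with HbA1c > 86 mmol/mol now
top_decile_high_risk = 0.10                # projected
fewer_high_risk = (current_high_risk - top_decile_high_risk) * population
print(f"Fewer people at high glycaemic risk: {fewer_high_risk:,.0f}")  # ~11,500
```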

“We found that the significant variation among the participating general practices (Fig. 2) in terms of the proportion of people with HbA1c ≤58 mmol/mol (7.5%) was only partially related to a lower proportion of people with HbA1c >86 mmol/mol (10%). There was only a weak relationship between level of target glycaemia achieved and avoidance of very suboptimal glycaemia. The overall r2 value was 0.6. This suggests that there is a degree of independence between these outcomes, so that success factors at a general practice level differ for people achieving optimal glycaemia vs those factors affecting avoiding a level of at risk glycaemia.”

May 30, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Immunology, Medicine, Molecular biology, Ophthalmology, Studies

Molecular biology (II)

Below I have added some more quotes and links related to the book’s coverage:

“[P]roteins are the most abundant molecules in the body except for water. […] Proteins make up half the dry weight of a cell whereas DNA and RNA make up only 3 per cent and 20 per cent respectively. […] The approximately 20,000 protein-coding genes in the human genome can, by alternative splicing, multiple translation starts, and post-translational modifications, produce over 1,000,000 different proteins, collectively called ‘the proteome‘. It is the size of the proteome and not the genome that defines the complexity of an organism. […] For simple organisms, such as viruses, all the proteins coded by their genome can be deduced from its sequence and these comprise the viral proteome. However for higher organisms the complete proteome is far larger than the genome […] For these organisms not all the proteins coded by the genome are found in any one tissue at any one time and therefore a partial proteome is usually studied. What are of interest are those proteins that are expressed in specific cell types under defined conditions.”

“Enzymes are proteins that catalyze or alter the rate of chemical reactions […] Enzymes can speed up reactions […] but they can also slow some reactions down. Proteins play a number of other critical roles. They are involved in maintaining cell shape and providing structural support to connective tissues like cartilage and bone. Specialized proteins such as actin and myosin are required [for] muscular movement. Other proteins act as ‘messengers’ relaying signals to regulate and coordinate various cell processes, e.g. the hormone insulin. Yet another class of protein is the antibodies, produced in response to foreign agents such as bacteria, fungi, and viruses.”

“Proteins are composed of amino acids. Amino acids are organic compounds with […] an amino group […] and a carboxyl group […] In addition, amino acids carry various side chains that give them their individual functions. The twenty-two amino acids found in proteins are called proteinogenic […] but other amino acids exist that are non-protein functioning. […] A peptide bond is formed between two amino acids by the removal of a water molecule. […] each individual unit in a peptide or protein is known as an amino acid residue. […] Chains of less than 50-70 amino acid residues are known as peptides or polypeptides and >50-70 as proteins, although many proteins are composed of more than one polypeptide chain. […] Proteins are macromolecules consisting of one or more strings of amino acids folded into highly specific 3D-structures. Each amino acid has a different size and carries a different side group. It is the nature of the different side groups that facilitates the correct folding of a polypeptide chain into a functional tertiary protein structure.”

“Atoms scatter the waves of X-rays mainly through their electrons, thus forming secondary or reflected waves. The pattern of X-rays diffracted by the atoms in the protein can be captured on a photographic plate or an image sensor such as a charge coupled device placed behind the crystal. The pattern and relative intensity of the spots on the diffraction image are then used to calculate the arrangement of atoms in the original protein. Complex data processing is required to convert the series of 2D diffraction or scatter patterns into a 3D image of the protein. […] The continued success and significance of this technique for molecular biology is witnessed by the fact that almost 100,000 structures of biological molecules have been determined this way, of which most are proteins.”

“The number of proteins in higher organisms far exceeds the number of known coding genes. The fact that many proteins carry out multiple functions but in a regulated manner is one way a complex proteome arises without increasing the number of genes. Proteins that performed a single role in the ancestral organism have acquired extra and often disparate functions through evolution. […] The active site of an enzyme employed in catalysis is only a small part of the protein, leaving spare capacity for acquiring a second function. […] The glycolytic pathway is involved in the breakdown of sugars such as glucose to release energy. Many of the highly conserved and ancient enzymes from this pathway have developed secondary or ‘moonlighting’ functions. Proteins often change their location in the cell in order to perform a ‘second job’. […] The limited size of the genome may not be the only evolutionary pressure for proteins to moonlight. Combining two functions in one protein can have the advantage of coordinating multiple activities in a cell, enabling it to respond quickly to changes in the environment without the need for lengthy transcription and translational processes.”

“Post-translational modifications (PTMs) […] is [a] process that can modify the role of a protein by addition of chemical groups to amino acids in the peptide chain after translation. Addition of phosphate groups (phosphorylation), for example, is a common mechanism for activating or deactivating an enzyme. Other common PTMs include addition of acetyl groups (acetylation), glucose (glucosylation), or methyl groups (methylation). […] Some additions are reversible, facilitating the switching between active and inactive states, and others are irreversible such as marking a protein for destruction by ubiquitin. [The difference between reversible and irreversible modifications can be quite important in pharmacology, and if you’re curious to know more about these topics Coleman’s drug metabolism text provides great coverage of related topics – US.] Diseases caused by malfunction of these modifications highlight the importance of PTMs. […] in diabetes [h]igh blood glucose leads to unwanted glucosylation of proteins. At the high glucose concentrations associated with diabetes, an unwanted irreversible chemical reaction binds the glucose to amino acid residues such as lysines exposed on the protein surface. The glucosylated proteins then behave badly, cross-linking themselves to the extracellular matrix. This is particularly dangerous in the kidney where it decreases function and can lead to renal failure.”

“Twenty thousand protein-coding genes make up the human genome but for any given cell only about half of these are expressed. […] Many genes get switched off during differentiation and a major mechanism for this is epigenetics. […] an epigenetic trait […] is ‘a stably heritable phenotype resulting from changes in the chromosome without alterations in the DNA sequence’. Epigenetics involves the chemical alteration of DNA by methyl or other small molecular groups to affect the accessibility of a gene by the transcription machinery […] Epigenetics can […] act on gene expression without affecting the stability of the genetic code by modifying the DNA, the histones in chromatin, or a whole chromosome. […] Epigenetic signatures are not only passed on to somatic daughter cells but they can also be transferred through the germline to the offspring. […] At first the evidence appeared circumstantial but more recent studies have provided direct proof of epigenetic changes involving gene methylation being inherited. Rodent models have provided mechanistic evidence. […] the importance of epigenetics in development is highlighted by the fact that low dietary folate, a nutrient essential for methylation, has been linked to higher risk of birth defects in the offspring.” […on the other hand, well…]

“The cell cycle is divided into phases […] Transition from G1 into S phase commits the cell to division and is therefore a very tightly controlled restriction point. Withdrawal of growth factors, insufficient nucleotides, or energy to complete DNA replication, or even a damaged template DNA, would compromise the process. Problems are therefore detected and the cell cycle halted by cell cycle inhibitors before the cell has committed to DNA duplication. […] The cell cycle inhibitors inactivate the kinases that promote transition through the phases, thus halting the cell cycle. […] The cell cycle can also be paused in S phase to allow time for DNA repairs to be carried out before cell division. The consequences of uncontrolled cell division are so catastrophic that evolution has provided complex checks and balances to maintain fidelity. The price of failure is apoptosis […] 50 to 70 billion cells die every day in a human adult by the controlled molecular process of apoptosis.”

“There are many diseases that arise because a particular protein is either absent or a faulty protein is produced. Administering a correct version of that protein can treat these patients. The first commercially available recombinant protein to be produced for medical use was human insulin to treat diabetes mellitus. […] (FDA) approved the recombinant insulin for clinical use in 1982. Since then over 300 protein-based recombinant pharmaceuticals have been licensed by the FDA and the European Medicines Agency (EMA) […], and many more are undergoing clinical trials. Therapeutic proteins can be produced in bacterial cells but more often mammalian cells such as the Chinese hamster ovary cell line and human fibroblasts are used as these hosts are better able to produce fully functional human protein. However, using mammalian cells is extremely expensive and an alternative is to use live animals or plants. This is called molecular pharming and is an innovative way of producing large amounts of protein relatively cheaply. […] In plant pharming, tobacco, rice, maize, potato, carrots, and tomatoes have all been used to produce therapeutic proteins. […] [One] class of proteins that can be engineered using gene-cloning technology is therapeutic antibodies. […] Therapeutic antibodies are designed to be monoclonal, that is, they are engineered so that they are specific for a particular antigen to which they bind, to block the antigen’s harmful effects. […] Monoclonal antibodies are at the forefront of biological therapeutics as they are highly specific and tend not to induce major side effects.”

“In gene therapy the aim is to restore the function of a faulty gene by introducing a correct version of that gene. […] a cloned gene is transferred into the cells of a patient. Once inside the cell, the protein encoded by the gene is produced and the defect is corrected. […] there are major hurdles to be overcome for gene therapy to be effective. One is that the gene construct has to be delivered to the diseased cells or tissues. This can often be difficult […] Mammalian cells […] have complex mechanisms that have evolved to prevent unwanted material such as foreign DNA getting in. Second, introduction of any genetic construct is likely to trigger the patient’s immune response, which can be fatal […] once delivered, expression of the gene product has to be sustained to be effective. One approach to delivering genes to the cells is to use genetically engineered viruses constructed so that most of the viral genome is deleted […] Once inside the cell, some viral vectors such as the retroviruses integrate into the host genome […]. This is an advantage as it provides long-lasting expression of the gene product. However, it also poses a safety risk, as there is little control over where the viral vector will insert into the patient’s genome. If the insertion occurs within a coding gene, this may inactivate gene function. If it integrates close to transcriptional start sites, where promoters and enhancer sequences are located, inappropriate gene expression can occur. This was observed in early gene therapy trials [where some patients who got this type of treatment developed cancer as a result of it. A few more details here – US] […] Adeno-associated viruses (AAVs) […] are often used in gene therapy applications as they are non-infectious, induce only a minimal immune response, and can be engineered to integrate into the host genome […] However, AAVs can only carry a small gene insert and so are limited to use with genes that are of a small size. […] An alternative delivery system to viruses is to package the DNA into liposomes that are then taken up by the cells. This is safer than using viruses as liposomes do not integrate into the host genome and are not very immunogenic. However, liposome uptake by the cells can be less efficient, resulting in lower expression of the gene.”

Links:

One gene–one enzyme hypothesis.
Molecular chaperone.
Protein turnover.
Isoelectric point.
Gel electrophoresis. Polyacrylamide.
Two-dimensional gel electrophoresis.
Mass spectrometry.
Proteomics.
Peptide mass fingerprinting.
Worldwide Protein Data Bank.
Nuclear magnetic resonance spectroscopy of proteins.
Immunoglobulins. Epitope.
Western blot.
Immunohistochemistry.
Crystallin. β-catenin.
Protein isoform.
Prion.
Gene expression. Transcriptional regulation. Chromatin. Transcription factor. Gene silencing. Histone. NF-κB. Chromatin immunoprecipitation.
The agouti mouse model.
X-inactive specific transcript (Xist).
Cell cycle. Cyclin. Cyclin-dependent kinase.
Retinoblastoma protein pRb.
Cytochrome c. Caspase. Bcl-2 family. Bcl-2-associated X protein.
Hybridoma technology. Muromonab-CD3.
Recombinant vaccines and the development of new vaccine strategies.
Knockout mouse.
Adenovirus Vectors for Gene Therapy, Vaccination and Cancer Gene Therapy.
Genetically modified food. Bacillus thuringiensis. Golden rice.

 

May 29, 2018 Posted by | Biology, Books, Chemistry, Diabetes, Engineering, Genetics, Immunology, Medicine, Molecular biology, Pharmacology

Molecular biology (I?)

“This is a great publication, considering the format. These authors in my opinion managed to get quite close to what I’d consider to be ‘the ideal level of coverage’ for books of this nature.”

The above was what I wrote in my short goodreads review of the book. In this post I’ve added some quotes from the first chapters of the book and some links to topics covered.

Quotes:

“Once the base-pairing double helical structure of DNA was understood it became apparent that by holding and preserving the genetic code DNA is the source of heredity. The heritable material must also be capable of faithful duplication every time a cell divides. The DNA molecule is ideal for this. […] The effort then concentrated on how the instructions held by the DNA were translated into the choice of the twenty different amino acids that make up proteins. […] George Gamow [yes, that George Gamow! – US] made the suggestion that information held in the four bases of DNA (A, T, C, G) must be read as triplets, called codons. Each codon, made up of three nucleotides, codes for one amino acid or a ‘start’ or ‘stop’ signal. This information, which determines an organism’s biochemical makeup, is known as the genetic code. An encryption based on three nucleotides means that there are sixty-four possible three-letter combinations. But there are only twenty amino acids that are universal. […] some amino acids can be coded for by more than one codon.”
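[A trivial Python sketch of the combinatorics mentioned here: four bases read as triplets give 4^3 = 64 possible codons, far more than the roughly twenty amino acids plus start/stop signals that need encoding, which is why several codons map to the same amino acid.]

```python
from itertools import product

bases = "ATCG"                                   # the four DNA bases listed above
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]
print(len(codons))                               # 64 = 4**3 possible three-letter codons
# Only ~20 amino acids plus start/stop signals need encoding, so the code is
# necessarily degenerate: several codons stand for the same amino acid.
```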

“The mechanism of gene expression whereby DNA transfers its information into proteins was determined in the early 1960s by Sydney Brenner, Francois Jacob, and Matthew Meselson. […] Francis Crick proposed in 1958 that information flowed in one direction only: from DNA to RNA to protein. This was called the ‘Central Dogma‘ and describes how DNA is transcribed into RNA, which then acts as a messenger carrying the information to be translated into proteins. Thus the flow of information goes from DNA to RNA to proteins and information can never be transferred back from protein to nucleic acid. DNA can be copied into more DNA (replication) or into RNA (transcription) but only the information in mRNA [messenger RNA] can be translated into protein”.
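[As a toy illustration of the first step of that information flow – DNA transcribed into RNA – here is a short sketch. The example sequence is invented, and everything the quoted passage's sources would care about in practice (promoters, RNA polymerase, splicing) is ignored.]

```python
# Toy transcription: the mRNA is complementary to the DNA template strand,
# with uracil (U) taking the place of thymine (T).
DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(template_strand: str) -> str:
    """Return the mRNA complementary to a DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in template_strand)

# Hypothetical template strand, purely for illustration:
print(transcribe("TACGGTATT"))   # -> AUGCCAUAA
```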

“The genome is the entire DNA contained within the forty-six chromosomes located in the nucleus of each human somatic (body) cell. […] The complete human genome is composed of over 3 billion bases and contains approximately 20,000 genes that code for proteins. This is much lower than earlier estimates of 80,000 to 140,000 and astonished the scientific community when revealed through human genome sequencing. Equally surprising was the finding that genomes of much simpler organisms sequenced at the same time contained a higher number of protein-coding genes than humans. […] It is now clear that the size of the genome does not correspond with the number of protein-coding genes, and these do not determine the complexity of an organism. Protein-coding genes can be viewed as ‘transcription units’. These are made up of sequences called exons that code for amino acids, and are separated by non-coding sequences called introns. Associated with these are additional sequences termed promoters and enhancers that control the expression of that gene.”

“Some sections of the human genome code for RNA molecules that do not have the capacity to produce proteins. […] it is now becoming apparent that many play a role in controlling gene expression. Despite the importance of proteins, less than 1.5 per cent of the genome is made up of exon sequences. A recent estimate is that about 80 per cent of the genome is transcribed or involved in regulatory functions with the rest mainly composed of repetitive sequences. […] Satellite DNA […] is a short sequence repeated many thousands of times in tandem […] A second type of repetitive DNA is the telomere sequence. […] Their role is to prevent chromosomes from shortening during DNA replication […] Repetitive sequences can also be found distributed or interspersed throughout the genome. These repeats have the ability to move around the genome and are referred to as mobile or transposable DNA. […] Such movements can be harmful sometimes as gene sequences can be disrupted causing disease. […] The vast majority of transposable sequences are no longer able to move around and are considered to be ‘silent’. However, these movements have contributed, over evolutionary time, to the organization and evolution of the genome, by creating new or modified genes leading to the production of proteins with novel functions.”

“A very important property of DNA is that it can make an accurate copy of itself. This is necessary since cells die during the normal wear and tear of tissues and need to be replenished. […] DNA replication is a highly accurate process with an error occurring every 10,000 to 1 million bases in human DNA. This low frequency is because the DNA polymerases carry a proofreading function. If an incorrect nucleotide is incorporated during DNA synthesis, the polymerase detects the error and excises the incorrect base. Following excision, the polymerase reinserts the correct base and replication continues. Any errors that are not corrected through proofreading are repaired by an alternative mismatch repair mechanism. In some instances, proofreading and repair mechanisms fail to correct errors. These become permanent mutations after the next cell division cycle as they are no longer recognized as errors and are therefore propagated each time the DNA replicates.”
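[A quick scale check on the quoted figures – roughly 3 billion bases per genome (as quoted earlier) and one replication error per 10,000 to 1 million bases:]

```python
# Scale check: errors per genome copy at the quoted error rates.
genome_size = 3_000_000_000
for bases_per_error in (10_000, 1_000_000):
    errors = genome_size / bases_per_error
    print(f"1 error per {bases_per_error:,} bases -> ~{errors:,.0f} errors per genome copy")
# Even at the lower rate this is thousands of errors per replication, which is
# why the mismatch repair mechanism described above matters; anything that
# escapes both proofreading and repair becomes a permanent mutation.
```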

“DNA sequencing identifies the precise linear order of the nucleotide bases A, C, G, T, in a DNA fragment. It is possible to sequence individual genes, segments of a genome, or whole genomes. Sequencing information is fundamental in helping us understand how our genome is structured and how it functions. […] The Human Genome Project, which used Sanger sequencing, took ten years to sequence and cost 3 billion US dollars. Using high-throughput sequencing, the entire human genome can now be sequenced in a few days at a cost of 3,000 US dollars. These costs are continuing to fall, making it more feasible to sequence whole genomes. The human genome sequence published in 2003 was built from DNA pooled from a number of donors to generate a ‘reference’ or composite genome. However, the genome of each individual is unique and so in 2005 the Personal Genome Project was launched in the USA aiming to sequence and analyse the genomes of 100,000 volunteers across the world. Soon after, similar projects followed in Canada and Korea and, in 2013, in the UK. […] To store and analyze the huge amounts of data, computational systems have developed in parallel. This branch of biology, called bioinformatics, has become an extremely important collaborative research area for molecular biologists drawing on the expertise of computer scientists, mathematicians, and statisticians.”

“[T]he structure of RNA differs from DNA in three fundamental ways. First, the sugar is a ribose, whereas in DNA it is a deoxyribose. Secondly, in RNA the nucleotide bases are A, G, C, and U (uracil) instead of A, G, C, and T. […] Thirdly, RNA is a single-stranded molecule unlike double-stranded DNA. It is not helical in shape but can fold to form a hairpin or stem-loop structure by base-pairing between complementary regions within the same RNA molecule. These two-dimensional secondary structures can further fold to form complex three-dimensional, tertiary structures. An RNA molecule is able to interact not only with itself, but also with other RNAs, with DNA, and with proteins. These interactions, and the variety of conformations that RNAs can adopt, enables them to carry out a wide range of functions. […] RNAs can influence many normal cellular and disease processes by regulating gene expression. RNA interference […] is one of the main ways in which gene expression is regulated.”

“Translation of the mRNA to a protein takes place in the cell cytoplasm on ribosomes. Ribosomes are cellular structures made up primarily of rRNA and proteins. At the ribosomes, the mRNA is decoded to produce a specific protein according to the rules defined by the genetic code. The correct amino acids are brought to the mRNA at the ribosomes by molecules called transfer RNAs (tRNAs). […] At the start of translation, a tRNA binds to the mRNA at the start codon AUG. This is followed by the binding of a second tRNA matching the adjacent mRNA codon. The two neighbouring amino acids linked to the tRNAs are joined together by a chemical bond called the peptide bond. Once the peptide bond forms, the first tRNA detaches leaving its amino acid behind. The ribosome then moves one codon along the mRNA and a third tRNA binds. In this way, tRNAs sequentially bind to the mRNA as the ribosome moves from codon to codon. Each time a tRNA molecule binds, the linked amino acid is transferred to the growing amino acid chain. Thus the mRNA sequence is translated into a chain of amino acids connected by peptide bonds to produce a polypeptide chain. Translation is terminated when the ribosome encounters a stop codon […]. After translation, the chain is folded and very often modified by the addition of sugar or other molecules to produce fully functional proteins.”
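[The codon-by-codon decoding described here is easy to caricature in a few lines of Python. The codon table below is deliberately partial – just enough entries for the invented example mRNA – and not the full genetic code.]

```python
# Minimal sketch of translation: start at the start codon, read the mRNA codon
# by codon, look up each amino acid, and stop at a stop codon.
CODON_TABLE = {
    "AUG": "Met",                            # start codon
    "CCA": "Pro",
    "GCU": "Ala",
    "UUU": "Phe",
    "UAA": None, "UAG": None, "UGA": None,   # stop codons terminate translation
}

def translate(mrna: str) -> list:
    start = mrna.find("AUG")                 # assumes a start codon is present
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid is None:               # stop codon reached
            break
        peptide.append(amino_acid)
    return peptide

print(translate("AUGCCAGCUUUUUAA"))          # ['Met', 'Pro', 'Ala', 'Phe']
```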

“The naturally occurring RNAi pathway is now extensively exploited in the laboratory to study the function of genes. It is possible to design synthetic siRNA molecules with a sequence complementary to the gene under study. These double-stranded RNA molecules are then introduced into the cell by special techniques to temporarily knock down the expression of that gene. By studying the phenotypic effects of this severe reduction of gene expression, the function of that gene can be identified. Synthetic siRNA molecules also have the potential to be used to treat diseases. If a disease is caused or enhanced by a particular gene product, then siRNAs can be designed against that gene to silence its expression. This prevents the protein which drives the disease from being produced. […] One of the major challenges to the use of RNAi as therapy is directing siRNA to the specific cells in which gene silencing is required. If released directly into the bloodstream, enzymes in the bloodstream degrade siRNAs. […] Other problems are that siRNAs can stimulate the body’s immune response and can produce off-target effects by silencing RNA molecules other than those against which they were specifically designed. […] considerable attention is currently focused on designing carrier molecules that can transport siRNA through the bloodstream to the diseased cell.”
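[The design step mentioned at the start of this passage – choosing an siRNA complementary to the gene under study – is, at its simplest, a reverse complement. The sketch below only shows that base-pairing step on an invented target sequence; the delivery, off-target, and immune problems discussed above are exactly what it leaves out.]

```python
# Base-pairing step of siRNA design: the antisense (guide) strand is the
# reverse complement of the chosen region of the target mRNA.
RNA_COMPLEMENT = {"A": "U", "U": "A", "C": "G", "G": "C"}

def sirna_guide_strand(target_mrna_region: str) -> str:
    """Return the strand complementary to a target region (reversed so both
    sequences read 5'->3')."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(target_mrna_region))

# Hypothetical 21-nucleotide target region, purely for illustration:
print(sirna_guide_strand("AUGGCUACGUUCAGGAUCCAA"))
```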

“Both Northern blotting and RT-PCR enable the expression of one or a few genes to be measured simultaneously. In contrast, the technique of microarrays allows gene expression to be measured across the full genome of an organism in a single step. This massive scale genome analysis technique is very useful when comparing gene expression profiles between two samples. […] This can identify gene subsets that are under- or over-expressed in one sample relative to the second sample to which it is compared.”
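[A toy version of the two-sample comparison being described: for each gene, expression in one sample is compared with the other, here via a log2 fold change. The gene names, values, and the 2-fold cut-off are all made up for illustration.]

```python
import math

# Compare expression of each gene in sample B relative to sample A.
sample_a = {"GENE1": 120.0, "GENE2": 35.0, "GENE3": 480.0}   # invented values
sample_b = {"GENE1": 130.0, "GENE2": 140.0, "GENE3": 60.0}

for gene in sample_a:
    log2_fc = math.log2(sample_b[gene] / sample_a[gene])
    if abs(log2_fc) >= 1:                    # arbitrary 2-fold cut-off
        direction = "over" if log2_fc > 0 else "under"
        print(f"{gene}: {direction}-expressed in B (log2 fold change = {log2_fc:.2f})")
```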

Links:

Molecular biology.
Charles Darwin. Alfred Wallace. Gregor Mendel. Wilhelm Johannsen. Heinrich Waldeyer. Theodor Boveri. Walter Sutton. Friedrich Miescher. Phoebus Levene. Oswald Avery. Colin MacLeod. Maclyn McCarty. James Watson. Francis Crick. Rosalind Franklin. Andrew Fire. Craig Mello.
Gene. Genotype. Phenotype. Chromosome. Nucleotide. DNA. RNA. Protein.
Chargaff’s rules.
Photo 51.
Human Genome Project.
Long interspersed nuclear elements (LINEs). Short interspersed nuclear elements (SINEs).
Histone. Nucleosome.
Chromatin. Euchromatin. Heterochromatin.
Mitochondrial DNA.
DNA replication. Helicase. Origin of replication. DNA polymerase. Okazaki fragments. Leading strand and lagging strand. DNA ligase. Semiconservative replication.
Mutation. Point mutation. Indel. Frameshift mutation.
Genetic polymorphism. Single-nucleotide polymorphism (SNP).
Genome-wide association study (GWAS).
Molecular cloning. Restriction endonuclease. Multiple cloning site (MCS). Bacterial artificial chromosome.
Gel electrophoresis. Southern blot. Polymerase chain reaction (PCR). Reverse transcriptase PCR (RT-PCR). Quantitative PCR (qPCR).
GenBank. European Molecular Biology Laboratory (EMBL). Encyclopedia of DNA Elements (ENCODE).
RNA polymerase II. TATA box. Transcription factor IID. Stop codon.
Protein biosynthesis.
snRNA (small nuclear RNA).
Untranslated region (UTR sequences).
Transfer RNA.
Micro RNA (miRNA).
Dicer (enzyme).
RISC (RNA-induced silencing complex).
Argonaute.
Lipid-Based Nanoparticles for siRNA Delivery in Cancer Therapy.
Long non-coding RNA.
Ribozyme/catalytic RNA.
RNA-sequencing (RNA-seq).

May 5, 2018 Posted by | Biology, Books, Chemistry, Genetics, Medicine, Molecular biology

Networks

I actually think this was a really nice book, considering the format – I gave it four stars on goodreads. One of the things I noticed people didn’t like about it in the reviews is that it ‘jumps’ a bit in terms of topic coverage; it covers a wide variety of applications and analytical settings. I mostly don’t consider this a weakness of the book – even if occasionally it does get a bit excessive – and I can definitely understand the authors’ choice of approach; it’s sort of hard to illustrate the potential the analytical techniques described within this book have if you’re not allowed to talk about all the areas in which they have been – or could be gainfully – applied. A related point is that many people who read the book might be familiar with the application of these tools in specific contexts but have perhaps not thought about the fact that similar methods are applied in many other areas (and they might all of them be a bit annoyed the authors don’t talk more about computer science applications, or foodweb analyses, or infectious disease applications, or perhaps sociometry…). Most of the book is about graph-theory-related stuff, but a very decent amount of the coverage deals with applications, in a broad sense of the word at least, not theory. The discussion of theoretical constructs in the book always felt to me driven to a large degree by their usefulness in specific contexts.

I have covered related topics before here on the blog, also quite recently – e.g. there’s at least some overlap between this book and Holland’s book about complexity theory in the same series (I incidentally think these books probably go well together) – and as I found the book slightly difficult to blog as it was I decided against covering it in as much detail as I sometimes do when covering these texts – this means that I decided to leave out the links I usually include in posts like these.

Below some quotes from the book.

“The network approach focuses all the attention on the global structure of the interactions within a system. The detailed properties of each element on its own are simply ignored. Consequently, systems as different as a computer network, an ecosystem, or a social group are all described by the same tool: a graph, that is, a bare architecture of nodes bounded by connections. […] Representing widely different systems with the same tool can only be done by a high level of abstraction. What is lost in the specific description of the details is gained in the form of universality – that is, thinking about very different systems as if they were different realizations of the same theoretical structure. […] This line of reasoning provides many insights. […] The network approach also sheds light on another important feature: the fact that certain systems that grow without external control are still capable of spontaneously developing an internal order. […] Network models are able to describe in a clear and natural way how self-organization arises in many systems. […] In the study of complex, emergent, and self-organized systems (the modern science of complexity), networks are becoming increasingly important as a universal mathematical framework, especially when massive amounts of data are involved. […] networks are crucial instruments to sort out and organize these data, connecting individuals, products, news, etc. to each other. […] While the network approach eliminates many of the individual features of the phenomenon considered, it still maintains some of its specific features. Namely, it does not alter the size of the system — i.e. the number of its elements — or the pattern of interaction — i.e. the specific set of connections between elements. Such a simplified model is nevertheless enough to capture the properties of the system. […] The network approach [lies] somewhere between the description by individual elements and the description by big groups, bridging the two of them. In a certain sense, networks try to explain how a set of isolated elements are transformed, through a pattern of interactions, into groups and communities.”

“[T]he random graph model is very important because it quantifies the properties of a totally random network. Random graphs can be used as a benchmark, or null case, for any real network. This means that a random graph can be used in comparison to a real-world network, to understand how much chance has shaped the latter, and to what extent other criteria have played a role. The simplest recipe for building a random graph is the following. We take all the possible pair of vertices. For each pair, we toss a coin: if the result is heads, we draw a link; otherwise we pass to the next pair, until all the pairs are finished (this means drawing the link with a probability p = ½, but we may use whatever value of p). […] Nowadays [the random graph model] is a benchmark of comparison for all networks, since any deviations from this model suggests the presence of some kind of structure, order, regularity, and non-randomness in many real-world networks.”
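[The 'coin toss per pair' recipe translates almost literally into code. A minimal Python sketch – not how one would generate large random graphs efficiently in practice:]

```python
import random

def random_graph(n, p=0.5, seed=0):
    """Erdős–Rényi recipe: for every possible pair of nodes, draw an edge
    with probability p."""
    rng = random.Random(seed)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):            # each unordered pair exactly once
            if rng.random() < p:
                edges.append((i, j))
    return edges

edges = random_graph(100, p=0.5)
# Expected number of edges is p * n(n-1)/2 = 0.5 * 4950 = 2475.
print(len(edges))
```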

“…in networks, topology is more important than metrics. […] In the network representation, the connections between the elements of a system are much more important than their specific positions in space and their relative distances. The focus on topology is one of the biggest strengths of the network approach, useful whenever topology is more relevant than metrics. […] In social networks, the relevance of topology means that social structure matters. […] Sociology has classified a broad range of possible links between individuals […]. The tendency to have several kinds of relationships in social networks is called multiplexity. But this phenomenon appears in many other networks: for example, two species can be connected by different strategies of predation, two computers by different cables or wireless connections, etc. We can modify a basic graph to take into account this multiplexity, e.g. by attaching specific tags to edges. […] Graph theory [also] allows us to encode in edges more complicated relationships, as when connections are not reciprocal. […] If a direction is attached to the edges, the resulting structure is a directed graph […] In these networks we have both in-degree and out-degree, measuring the number of inbound and outbound links of a node, respectively. […] in most cases, relations display a broad variation or intensity [i.e. they are not binary/dichotomous]. […] Weighted networks may arise, for example, as a result of different frequencies of interactions between individuals or entities.”
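[A small sketch of how these richer edge types – direction, weight, and multiplex tags – can be encoded, and how in-degree and out-degree fall out of the directed representation. Node names and relation labels are invented.]

```python
from collections import defaultdict

# Each directed edge: (source, target, weight, relation tag).
edges = [
    ("alice", "bob",   3, "friendship"),
    ("alice", "bob",   1, "co-worker"),      # multiplexity: a second tie between the same pair
    ("bob",   "carol", 2, "friendship"),
    ("carol", "alice", 5, "kinship"),
]

out_degree = defaultdict(int)                # outbound links per node
in_degree = defaultdict(int)                 # inbound links per node
for source, target, weight, tag in edges:
    out_degree[source] += 1
    in_degree[target] += 1

print(dict(out_degree))   # {'alice': 2, 'bob': 1, 'carol': 1}
print(dict(in_degree))    # {'bob': 2, 'carol': 1, 'alice': 1}
```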

“An organism is […] the outcome of several layered networks and not only the deterministic result of the simple sequence of genes. Genomics has been joined by epigenomics, transcriptomics, proteomics, metabolomics, etc., the disciplines that study these layers, in what is commonly called the omics revolution. Networks are at the heart of this revolution. […] The brain is full of networks where various web-like structures provide the integration between specialized areas. In the cerebellum, neurons form modules that are repeated again and again: the interaction between modules is restricted to neighbours, similarly to what happens in a lattice. In other areas of the brain, we find random connections, with a more or less equal probability of connecting local, intermediate, or distant neurons. Finally, the neocortex — the region involved in many of the higher functions of mammals — combines local structures with more random, long-range connections. […] typically, food chains are not isolated, but interwoven in intricate patterns, where a species belongs to several chains at the same time. For example, a specialized species may predate on only one prey […]. If the prey becomes extinct, the population of the specialized species collapses, giving rise to a set of co-extinctions. An even more complicated case is where an omnivore species predates a certain herbivore, and both eat a certain plant. A decrease in the omnivore’s population does not imply that the plant thrives, because the herbivore would benefit from the decrease and consume even more plants. As more species are taken into account, the population dynamics can become more and more complicated. This is why a more appropriate description than ‘foodchains’ for ecosystems is the term foodwebs […]. These are networks in which nodes are species and links represent relations of predation. Links are usually directed (big fishes eat smaller ones, not the other way round). These networks provide the interchange of food, energy, and matter between species, and thus constitute the circulatory system of the biosphere.”

“In the cell, some groups of chemicals interact only with each other and with nothing else. In ecosystems, certain groups of species establish small foodwebs, without any connection to external species. In social systems, certain human groups may be totally separated from others. However, such disconnected groups, or components, are a strikingly small minority. In all networks, almost all the elements of the systems take part in one large connected structure, called a giant connected component. […] In general, the giant connected component includes not less than 90 to 95 per cent of the system in almost all networks. […] In a directed network, the existence of a path from one node to another does not guarantee that the journey can be made in the opposite direction. Wolves eat sheep, and sheep eat grass, but grass does not eat sheep, nor do sheep eat wolves. This restriction creates a complicated architecture within the giant connected component […] according to an estimate made in 1999, more than 90 per cent of the WWW is composed of pages connected to each other, if the direction of edges is ignored. However, if we take direction into account, the proportion of nodes mutually reachable is only 24 per cent, the giant strongly connected component. […] most networks are sparse, i.e. they tend to be quite frugal in connections. Take, for example, the airport network: the personal experience of every frequent traveller shows that direct flights are not that common, and intermediate stops are necessary to reach several destinations; thousands of airports are active, but each city is connected to less than 20 other cities, on average. The same happens in most networks. A measure of this is given by the mean number of connection of their nodes, that is, their average degree.”
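[Both quantities mentioned here – the share of nodes sitting in the giant connected component and the average degree – are easy to compute from an edge list. A minimal pure-Python sketch, run on a small made-up graph:]

```python
from collections import deque

def largest_component_fraction(n, edges):
    """Fraction of nodes in the largest connected component (edges treated as undirected)."""
    adjacency = {node: [] for node in range(n)}
    for a, b in edges:
        adjacency[a].append(b)
        adjacency[b].append(a)

    seen, largest = set(), 0
    for start in range(n):
        if start in seen:
            continue
        queue, size = deque([start]), 0      # breadth-first search of one component
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for neighbour in adjacency[node]:
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append(neighbour)
        largest = max(largest, size)
    return largest / n

# Toy graph: two triangles joined by one edge, plus one isolated node.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
print(largest_component_fraction(7, edges))  # 6 of 7 nodes -> ~0.86
print(2 * len(edges) / 7)                    # average degree = 2E/N = 2.0
```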

“[A] puzzling contradiction — a sparse network can still be very well connected — […] attracted the attention of the Hungarian mathematicians […] Paul Erdős and Alfréd Rényi. They tackled it by producing different realizations of their random graph. In each of them, they changed the density of edges. They started with a very low density: less than one edge per node. It is natural to expect that, as the density increases, more and more nodes will be connected to each other. But what Erdős and Rényi found instead was a quite abrupt transition: several disconnected components coalesced suddenly into a large one, encompassing almost all the nodes. The sudden change happened at one specific critical density: when the average number of links per node (i.e. the average degree) was greater than one, then the giant connected component suddenly appeared. This result implies that networks display a very special kind of economy, intrinsic to their disordered structure: a small number of edges, even randomly distributed between nodes, is enough to generate a large structure that absorbs almost all the elements. […] Social systems seem to be very tightly connected: in a large enough group of strangers, it is not unlikely to find pairs of people with quite short chains of relations connecting them. […] The small-world property consists of the fact that the average distance between any two nodes (measured as the shortest path that connects them) is very small. Given a node in a network […], few nodes are very close to it […] and few are far from it […]: the majority are at the average — and very short — distance. This holds for all networks: starting from one specific node, almost all the nodes are at very few steps from it; the number of nodes within a certain distance increases exponentially fast with the distance. Another way of explaining the same phenomenon […] is the following: even if we add many nodes to a network, the average distance will not increase much; one has to increase the size of a network by several orders of magnitude to notice that the paths to new nodes are (just a little) longer. The small-world property is crucial to many network phenomena. […] The small-world property is something intrinsic to networks. Even the completely random Erdős-Renyi graphs show this feature. By contrast, regular grids do not display it. If the Internet was a chessboard-like lattice, the average distance between two routers would be of the order of 1,000 jumps, and the Net would be much slower [the authors note elsewhere that “The Internet is composed of hundreds of thousands of routers, but just about ten ‘jumps’ are enough to bring an information packet from one of them to any other.”] […] The key ingredient that transforms a structure of connections into a small world is the presence of a little disorder. No real network is an ordered array of elements. On the contrary, there are always connections ‘out of place’. It is precisely thanks to these connections that networks are small worlds. […] Shortcuts are responsible for the small-world property in many […] situations.”
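
As an aside, the transition described above is easy to reproduce numerically; here is a minimal sketch of the kind of simulation involved, assuming Python with the networkx package (the sizes and seed are arbitrary):

```python
import networkx as nx

# Sweep the average degree of an Erdős–Rényi random graph and measure the
# fraction of nodes sitting in the largest connected component. The jump
# around an average degree of one should be clearly visible.
n = 10_000
for avg_degree in (0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 3.0):
    p = avg_degree / (n - 1)                  # edge probability giving this average degree
    G = nx.gnp_random_graph(n, p, seed=42)
    giant = max(nx.connected_components(G), key=len)
    print(f"<k> = {avg_degree:3.1f}  largest component = {len(giant) / n:6.1%}")
```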

“Body size, IQ, road speed, and other magnitudes have a characteristic scale: that is, an average value that in the large majority of cases is a rough predictor of the actual value that one will find. […] While height is a homogeneous magnitude, the number of social connection[s] is a heterogeneous one. […] A system with this feature is said to be scale-free or scale-invariant, in the sense that it does not have a characteristic scale. This can be rephrased by saying that the individual fluctuations with respect to the average are too large for us to make a correct prediction. […] In general, a network with heterogeneous connectivity has a set of clear hubs. When a graph is small, it is easy to find whether its connectivity is homogeneous or heterogeneous […]. In the first case, all the nodes have more or less the same connectivity, while in the latter it is easy to spot a few hubs. But when the network to be studied is very big […] things are not so easy. […] the distribution of the connectivity of the nodes of the […] network […] is the degree distribution of the graph. […] In homogeneous networks, the degree distribution is a bell curve […] while in heterogeneous networks, it is a power law […]. The power law implies that there are many more hubs (and much more connected) in heterogeneous networks than in homogeneous ones. Moreover, hubs are not isolated exceptions: there is a full hierarchy of nodes, each of them being a hub compared with the less connected ones.”

“Looking at the degree distribution is the best way to check if a network is heterogeneous or not: if the distribution is fat tailed, then the network will have hubs and heterogeneity. A mathematically perfect power law is never found, because this would imply the existence of hubs with an infinite number of connections. […] Nonetheless, a strongly skewed, fat-tailed distribution is a clear signal of heterogeneity, even if it is never a perfect power law. […] While the small-world property is something intrinsic to networked structures, hubs are not present in all kind of networks. For example, power grids usually have very few of them. […] hubs are not present in random networks. A consequence of this is that, while random networks are small worlds, heterogeneous ones are ultra-small worlds. That is, the distance between their vertices is relatively smaller than in their random counterparts. […] Heterogeneity is not equivalent to randomness. On the contrary, it can be the signature of a hidden order, not imposed by a top-down project, but generated by the elements of the system. The presence of this feature in widely different networks suggests that some common underlying mechanism may be at work in many of them. […] the Barabási–Albert model gives an important take-home message. A simple, local behaviour, iterated through many interactions, can give rise to complex structures. This arises without any overall blueprint”.
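
The difference between homogeneous and heterogeneous connectivity is likewise easy to illustrate; a rough sketch, again assuming networkx, comparing a preferential-attachment (Barabási–Albert) graph with a random graph of comparable average degree:

```python
import networkx as nx

n = 10_000
ba = nx.barabasi_albert_graph(n, 3, seed=1)           # preferential attachment -> hubs
er = nx.gnp_random_graph(n, 6 / (n - 1), seed=1)      # random graph, comparable average degree

for name, g in (("Barabási–Albert", ba), ("Erdős–Rényi", er)):
    degrees = [d for _, d in g.degree()]
    print(f"{name:17s} mean degree {sum(degrees) / n:4.1f}, max degree {max(degrees)}")
# The maximum degree of the preferential-attachment graph typically comes out an
# order of magnitude larger than that of the random graph: the signature of a fat tail.
```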

Homogamy, the tendency of like to marry like, is very strong […] Homogamy is a specific instance of homophily: this consists of a general trend of like to link to like, and is a powerful force in shaping social networks […] assortative mixing [is] a special form of homophily, in which nodes tend to connect with others that are similar to them in the number of connections. By contrast [when] high- and low-degree nodes are more connected to each other [it] is called disassortative mixing. Both cases display a form of correlation in the degrees of neighbouring nodes. When the degrees of neighbours are positively correlated, then the mixing is assortative; when negatively, it is disassortative. […] In random graphs, the neighbours of a given node are chosen completely at random: as a result, there is no clear correlation between the degrees of neighbouring nodes […]. On the contrary, correlations are present in most real-world networks. Although there is no general rule, most natural and technological networks tend to be disassortative, while social networks tend to be assortative. […] Degree assortativity and disassortativity are just an example of the broad range of possible correlations that bias how nodes tie to each other.”
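
Degree correlations of the kind described above are usually summarized in a single number, the degree assortativity coefficient (the correlation of degrees across edges); a small sketch of how one might compute it, using a model graph purely for illustration:

```python
import networkx as nx

# Positive values indicate assortative mixing (high-degree nodes tend to link
# to other high-degree nodes); negative values indicate disassortative mixing.
G = nx.barabasi_albert_graph(5_000, 3, seed=7)        # stand-in network, for illustration only
r = nx.degree_assortativity_coefficient(G)
print(f"degree assortativity r = {r:+.3f}")
```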

“[N]etworks (neither ordered lattices nor random graphs), can have both large clustering and small average distance at the same time. […] in almost all networks, the clustering of a node depends on the degree of that node. Often, the larger the degree, the smaller the clustering coefficient. Small-degree nodes tend to belong to well-interconnected local communities. Similarly, hubs connect with many nodes that are not directly interconnected. […] Central nodes usually act as bridges or bottlenecks […]. For this reason, centrality is an estimate of the load handled by a node of a network, assuming that most of the traffic passes through the shortest paths (this is not always the case, but it is a good approximation). For the same reason, damaging central nodes […] can impair radically the flow of a network. Depending on the process one wants to study, other definitions of centrality can be introduced. For example, closeness centrality computes the distance of a node to all others, and reach centrality factors in the portion of all nodes that can be reached in one step, two steps, three steps, and so on.”
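
To make the centrality notions a bit more concrete, here is a short illustrative sketch (networkx again, toy graph) computing betweenness and closeness centrality and picking out the main "bridge" node:

```python
import networkx as nx

G = nx.barabasi_albert_graph(300, 2, seed=3)          # toy heterogeneous network

betweenness = nx.betweenness_centrality(G)   # share of shortest paths passing through a node
closeness = nx.closeness_centrality(G)       # inverse of the average distance to all other nodes

bridge = max(betweenness, key=betweenness.get)
print(f"main bridge node: {bridge} "
      f"(degree {G.degree(bridge)}, closeness {closeness[bridge]:.3f})")
```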

“Domino effects are not uncommon in foodwebs. Networks in general provide the backdrop for large-scale, sudden, and surprising dynamics. […] most of the real-world networks show a double-edged kind of robustness. They are able to function normally even when a large fraction of the network is damaged, but suddenly certain small failures, or targeted attacks, bring them down completely. […] networks are very different from engineered systems. In an airplane, damaging one element is enough to stop the whole machine. In order to make it more resilient, we have to use strategies such as duplicating certain pieces of the plane: this makes it almost 100 per cent safe. In contrast, networks, which are mostly not blueprinted, display a natural resilience to a broad range of errors, but when certain elements fail, they collapse. […] A random graph of the size of most real-world networks is destroyed after the removal of half of the nodes. On the other hand, when the same procedure is performed on a heterogeneous network (either a map of a real network or a scale-free model of a similar size), the giant connected component resists even after removing more than 80 per cent of the nodes, and the distance within it is practically the same as at the beginning. The scene is different when researchers simulate a targeted attack […] In this situation the collapse happens much faster […]. However, now the most vulnerable is the second: while in the homogeneous network it is necessary to remove about one-fifth of its more connected nodes to destroy it, in the heterogeneous one this happens after removing the first few hubs. Highly connected nodes seem to play a crucial role, in both errors and attacks. […] hubs are mainly responsible for the overall cohesion of the graph, and removing a few of them is enough to destroy it.”
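
The asymmetry between random errors and targeted attacks can be reproduced along these lines; a rough simulation sketch with toy sizes (the exact percentages will vary with the model and the seed):

```python
import random
import networkx as nx

def giant_fraction(g):
    """Fraction of the remaining nodes that sit in the largest connected component."""
    return len(max(nx.connected_components(g), key=len)) / g.number_of_nodes()

n = 2_000
G = nx.barabasi_albert_graph(n, 2, seed=0)    # heterogeneous (hub-rich) toy network
random.seed(0)

# Random failures: remove half of the nodes at random.
g_err = G.copy()
g_err.remove_nodes_from(random.sample(list(g_err.nodes()), n // 2))

# Targeted attack: remove the 5% most connected nodes (the hubs).
g_att = G.copy()
hubs = sorted(g_att.degree(), key=lambda pair: pair[1], reverse=True)[: n // 20]
g_att.remove_nodes_from([node for node, _ in hubs])

print(f"after random failures (50% removed): giant component = {giant_fraction(g_err):.0%}")
print(f"after targeted attack (5% removed) : giant component = {giant_fraction(g_att):.0%}")
```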

“Studies of errors and attacks have shown that hubs keep different parts of a network connected. This implies that they also act as bridges for spreading diseases. Their numerous ties put them in contact with both infected and healthy individuals: so hubs become easily infected, and they infect other nodes easily. […] The vulnerability of heterogeneous networks to epidemics is bad news, but understanding it can provide good ideas for containing diseases. […] if we can immunize just a fraction, it is not a good idea to choose people at random. Most of the times, choosing at random implies selecting individuals with a relatively low number of connections. Even if they block the disease from spreading in their surroundings, hubs will always be there to put it back into circulation. A much better strategy would be to target hubs. Immunizing hubs is like deleting them from the network, and the studies on targeted attacks show that eliminating a small fraction of hubs fragments the network: thus, the disease will be confined to a few isolated components. […] in the epidemic spread of sexually transmitted diseases the timing of the links is crucial. Establishing an unprotected link with a person before they establish an unprotected link with another person who is infected is not the same as doing so afterwards.”

April 3, 2018 Posted by | Biology, Books, Ecology, Engineering, Epidemiology, Genetics, Mathematics, Statistics

A few (more) diabetes papers of interest

Earlier this week I covered a couple of papers, but the second paper turned out to include a lot of interesting stuff, so I cut that post short and postponed my coverage of the remaining papers; this post includes some of them.

i. TCF7L2 Genetic Variants Contribute to Phenotypic Heterogeneity of Type 1 Diabetes.

“Although the autoimmune destruction of β-cells has a major role in the development of type 1 diabetes, there is growing evidence that the differences in clinical, metabolic, immunologic, and genetic characteristics among patients (1) likely reflect diverse etiology and pathogenesis (2). Factors that govern this heterogeneity are poorly understood, yet these may have important implications for prognosis, therapy, and prevention.

The transcription factor 7 like 2 (TCF7L2) locus contains the single nucleotide polymorphism (SNP) most strongly associated with type 2 diabetes risk, with an ∼30% increase per risk allele (3). In a U.S. cohort, heterozygous and homozygous carriers of the at-risk alleles comprised 40.6% and 7.9%, respectively, of the control subjects and 44.3% and 18.3%, respectively, of the individuals with type 2 diabetes (3). The locus has no known association with type 1 diabetes overall (4–8), with conflicting reports in latent autoimmune diabetes in adults (8–16). […] Our studies in two separate cohorts have shown that the type 2 diabetes–associated TCF7L2 genetic variant is more frequent among specific subsets of individuals with autoimmune type 1 diabetes, specifically those with fewer markers of islet autoimmunity (22,23). These observations support a role of this genetic variant in the pathogenesis of diabetes at least in a subset of individuals with autoimmune diabetes. However, whether individuals with type 1 diabetes and this genetic variant have distinct metabolic abnormalities has not been investigated. We aimed to study the immunologic and metabolic characteristics of individuals with type 1 diabetes who carry a type 2 diabetes–associated allele of the TCF7L2 locus.”

“We studied 810 TrialNet participants with newly diagnosed type 1 diabetes and found that among individuals 12 years and older, the type 2 diabetes–associated TCF7L2 genetic variant is more frequent in those presenting with a single autoantibody than in participants who had multiple autoantibodies. These TCF7L2 variants were also associated with higher mean C-peptide AUC and lower mean glucose AUC levels at the onset of type 1 diabetes. […] These findings suggest that, besides the well-known link with type 2 diabetes, the TCF7L2 locus may play a role in the development of type 1 diabetes. The type 2 diabetes–associated TCF7L2 genetic variant identifies a subset of individuals with autoimmune type 1 diabetes and fewer markers of islet autoimmunity, lower glucose, and higher C-peptide at diagnosis. […] A possible interpretation of these data is that TCF7L2-encoded diabetogenic mechanisms may contribute to diabetes development in individuals with limited autoimmunity […]. Because the risk of progression to type 1 diabetes is lower in individuals with single compared with multiple autoantibodies, it is possible that in the absence of this type 2 diabetes–associated TCF7L2 variant, these individuals may have not manifested diabetes. If that is the case, we would postulate that disease development in these patients may have a type 2 diabetes–like pathogenesis in which islet autoimmunity is a significant component but not necessarily the primary driver.”

“The association between this genetic variant and single autoantibody positivity was present in individuals 12 years or older but not in children younger than 12 years. […] The results in the current study suggest that the type 2 diabetes–associated TCF7L2 genetic variant plays a larger role in older individuals. There is mounting evidence that the pathogenesis of type 1 diabetes varies by age (31). Younger individuals appear to have a more aggressive form of disease, with faster decline of β-cell function before and after onset of disease, higher frequency and severity of diabetic ketoacidosis, which is a clinical correlate of severe insulin deficiency, and lower C-peptide at presentation (31–35). Furthermore, older patients are less likely to have type 1 diabetes–associated HLA alleles and islet autoantibodies (28). […] Taken together, we have demonstrated that individuals with autoimmune type 1 diabetes who carry the type 2 diabetes–associated TCF7L2 genetic variant have a distinct phenotype characterized by milder immunologic and metabolic characteristics than noncarriers, closer to those of type 2 diabetes, with an important effect of age.”

ii. Heart Failure: The Most Important, Preventable, and Treatable Cardiovascular Complication of Type 2 Diabetes.

“Concerns about cardiovascular disease in type 2 diabetes have traditionally focused on atherosclerotic vasculo-occlusive events, such as myocardial infarction, stroke, and limb ischemia. However, one of the earliest, most common, and most serious cardiovascular disorders in patients with diabetes is heart failure (1). Following its onset, patients experience a striking deterioration in their clinical course, which is marked by frequent hospitalizations and eventually death. Many sudden deaths in diabetes are related to underlying ventricular dysfunction rather than a new ischemic event. […] Heart failure and diabetes are linked pathophysiologically. Type 2 diabetes and heart failure are each characterized by insulin resistance and are accompanied by the activation of neurohormonal systems (norepinephrine, angiotensin II, aldosterone, and neprilysin) (3). The two disorders overlap; diabetes is present in 35–45% of patients with chronic heart failure, whether they have a reduced or preserved ejection fraction.”

“Treatments that lower blood glucose do not exert any consistently favorable effect on the risk of heart failure in patients with diabetes (6). In contrast, treatments that increase insulin signaling are accompanied by an increased risk of heart failure. Insulin use is independently associated with an enhanced likelihood of heart failure (7). Thiazolidinediones promote insulin signaling and have increased the risk of heart failure in controlled clinical trials (6). With respect to incretin-based secretagogues, liraglutide increases the clinical instability of patients with existing heart failure (8,9), and the dipeptidyl peptidase 4 inhibitors saxagliptin and alogliptin are associated with an increased risk of heart failure in diabetes (10). The likelihood of heart failure with the use of sulfonylureas may be comparable to that with thiazolidinediones (11). Interestingly, the only two classes of drugs that ameliorate hyperinsulinemia (metformin and sodium–glucose cotransporter 2 inhibitors) are also the only two classes of antidiabetes drugs that appear to reduce the risk of heart failure and its adverse consequences (12,13). These findings are consistent with experimental evidence that insulin exerts adverse effects on the heart and kidneys that can contribute to heart failure (14). Therefore, physicians can prevent many cases of heart failure in type 2 diabetes by careful consideration of the choice of agents used to achieve glycemic control. Importantly, these decisions have an immediate effect; changes in risk are seen within the first few months of changes in treatment. This immediacy stands in contrast to the years of therapy required to see a benefit of antidiabetes drugs on microvascular risk.”

“As reported by van den Berge et al. (4), the prognosis of patients with heart failure has improved over the past two decades; heart failure with a reduced ejection fraction is a treatable disease. Inhibitors of the renin-angiotensin system are a cornerstone of the management of both disorders; they prevent the onset of heart failure and the progression of nephropathy in patients with diabetes, and they reduce the risk of cardiovascular death and hospitalization in those with established heart failure (3,15). Diabetes does not influence the magnitude of the relative benefit of ACE inhibitors in patients with heart failure, but patients with diabetes experience a greater absolute benefit from treatment (16).”

“The totality of evidence from randomized trials […] demonstrates that in patients with diabetes, heart failure is not only common and clinically important, but it can also be prevented and treated. This conclusion is particularly significant because physicians have long ignored heart failure in their focus on glycemic control and their concerns about the ischemic macrovascular complications of diabetes (1).”

iii. Closely related to the above study: Mortality Reduction Associated With β-Adrenoceptor Inhibition in Chronic Heart Failure Is Greater in Patients With Diabetes.

“Diabetes increases mortality in patients with chronic heart failure (CHF) and reduced left ventricular ejection fraction. Studies have questioned the safety of β-adrenoceptor blockers (β-blockers) in some patients with diabetes and reduced left ventricular ejection fraction. We examined whether β-blockers and ACE inhibitors (ACEIs) are associated with differential effects on mortality in CHF patients with and without diabetes. […] We conducted a prospective cohort study of 1,797 patients with CHF recruited between 2006 and 2014, with mean follow-up of 4 years.”

RESULTS Patients with diabetes were prescribed larger doses of β-blockers and ACEIs than were patients without diabetes. Increasing β-blocker dose was associated with lower mortality in patients with diabetes (8.9% per mg/day; 95% CI 5–12.6) and without diabetes (3.5% per mg/day; 95% CI 0.7–6.3), although the effect was larger in people with diabetes (interaction P = 0.027). Increasing ACEI dose was associated with lower mortality in patients with diabetes (5.9% per mg/day; 95% CI 2.5–9.2) and without diabetes (5.1% per mg/day; 95% CI 2.6–7.6), with similar effect size in these groups (interaction P = 0.76).”

“Our most important findings are:

  • Higher-dose β-blockers are associated with lower mortality in patients with CHF and LVSD, but patients with diabetes may derive more benefit from higher-dose β-blockers.

  • Higher-dose ACEIs were associated with comparable mortality reduction in people with and without diabetes.

  • The association between higher β-blocker dose and reduced mortality is most pronounced in patients with diabetes who have more severely impaired left ventricular function.

  • Among patients with diabetes, the relationship between β-blocker dose and mortality was not associated with glycemic control or insulin therapy.”

“We make the important observation that patients with diabetes may derive more prognostic benefit from higher β-blocker doses than patients without diabetes. These data should provide reassurance to patients and health care providers and encourage careful but determined uptitration of β-blockers in this high-risk group of patients.”

iv. Diabetes, Prediabetes, and Brain Volumes and Subclinical Cerebrovascular Disease on MRI: The Atherosclerosis Risk in Communities Neurocognitive Study (ARIC-NCS).

“Diabetes and prediabetes are associated with accelerated cognitive decline (1), and diabetes is associated with an approximately twofold increased risk of dementia (2). Subclinical brain pathology, as defined by small vessel disease (lacunar infarcts, white matter hyperintensities [WMH], and microhemorrhages), large vessel disease (cortical infarcts), and smaller brain volumes also are associated with an increased risk of cognitive decline and dementia (3–7). The mechanisms by which diabetes contributes to accelerated cognitive decline and dementia are not fully understood, but contributions of hyperglycemia to both cerebrovascular disease and primary neurodegenerative disease have been suggested in the literature, although results are inconsistent (2,8). Given that diabetes is a vascular risk factor, brain atrophy among individuals with diabetes may be driven by increased cerebrovascular disease. Brain magnetic resonance imaging (MRI) provides a noninvasive opportunity to study associations of hyperglycemia with small vessel disease (lacunar infarcts, WMH, microhemorrhages), large vessel disease (cortical infarcts), and brain volumes (9).”

“Overall, the mean age of participants [(n = 1,713)] was 75 years, 60% were women, 27% were black, 30% had prediabetes (HbA1c 5.7 to <6.5%), and 35% had diabetes. Compared with participants without diabetes and HbA1c <5.7%, those with prediabetes (HbA1c 5.7 to <6.5%) were of similar age (75.2 vs. 75.0 years; P = 0.551), were more likely to be black (24% vs. 11%; P < 0.001), have less than a high school education (11% vs. 7%; P = 0.017), and have hypertension (71% vs. 63%; P = 0.012) (Table 1). Among participants with diabetes, those with HbA1c <7.0% versus ≥7.0% were of similar age (75.4 vs. 75.1 years; P = 0.481), but those with diabetes and HbA1c ≥7.0% were more likely to be black (39% vs. 28%; P = 0.020) and to have less than a high school education (23% vs. 16%; P = 0.031) and were more likely to have a longer duration of diabetes (12 vs. 8 years; P < 0.001).”

“Compared with participants without diabetes and HbA1c <5.7%, those with diabetes and HbA1c ≥7.0% had smaller total brain volume (β −0.20 SDs; 95% CI −0.31, −0.09) and smaller regional brain volumes, including frontal, temporal, occipital, and parietal lobes; deep gray matter; Alzheimer disease signature region; and hippocampus (all P < 0.05) […]. Compared with participants with diabetes and HbA1c <7.0%, those with diabetes and HbA1c ≥7.0% had smaller total brain volume (P < 0.001), frontal lobe volume (P = 0.012), temporal lobe volume (P = 0.012), occipital lobe volume (P = 0.008), parietal lobe volume (P = 0.015), deep gray matter volume (P < 0.001), Alzheimer disease signature region volume (0.031), and hippocampal volume (P = 0.016). Both participants with diabetes and HbA1c <7.0% and those with prediabetes (HbA1c 5.7 to <6.5%) had similar total and regional brain volumes compared with participants without diabetes and HbA1c <5.7% (all P > 0.05). […] No differences in the presence of lobar microhemorrhages, subcortical microhemorrhages, cortical infarcts, and lacunar infarcts were observed among the diabetes-HbA1c categories (all P > 0.05) […]. Compared with participants without diabetes and HbA1c <5.7%, those with diabetes and HbA1c ≥7.0% had increased WMH volume (P = 0.016). The WMH volume among participants with diabetes and HbA1c ≥7.0% was also significantly greater than among those with diabetes and HbA1c <7.0% (P = 0.017).”

“Those with diabetes duration ≥10 years were older than those with diabetes duration <10 years (75.9 vs. 75.0 years; P = 0.041) but were similar in terms of race and sex […]. Compared with participants with diabetes duration <10 years, those with diabetes duration ≥10 years had smaller adjusted total brain volume (β −0.13 SDs; 95% CI −0.20, −0.05) and smaller temporal lobe (β −0.14 SDs; 95% CI −0.24, −0.03), parietal lobe (β −0.11 SDs; 95% CI −0.21, −0.01), and hippocampal (β −0.16 SDs; 95% CI −0.30, −0.02) volumes […]. Participants with diabetes duration ≥10 years also had a 2.44 times increased odds (95% CI 1.46, 4.05) of lacunar infarcts compared with those with diabetes duration <10 years”.

Conclusions
In this community-based population, we found that ARIC-NCS participants with diabetes with HbA1c ≥7.0% have smaller total and regional brain volumes and an increased burden of WMH, but those with prediabetes (HbA1c 5.7 to <6.5%) and diabetes with HbA1c <7.0% have brain volumes and markers of subclinical cerebrovascular disease similar to those without diabetes. Furthermore, among participants with diabetes, those with more-severe disease (as measured by higher HbA1c and longer disease duration) had smaller total and regional brain volumes and an increased burden of cerebrovascular disease compared with those with lower HbA1c and shorter disease duration. However, we found no evidence that associations of diabetes with smaller brain volumes are mediated by cerebrovascular disease.

The findings of this study extend the current literature that suggests that diabetes is strongly associated with brain volume loss (11,25–27). Global brain volume loss (11,25–27) has been consistently reported, but associations of diabetes with smaller specific brain regions have been less robust (27,28). Similar to prior studies, the current results show that compared with individuals without diabetes, those with diabetes have smaller total brain volume (11,25–27) and regional brain volumes, including frontal and occipital lobes, deep gray matter, and the hippocampus (25,27). Furthermore, the current study suggests that greater severity of disease (as measured by HbA1c and diabetes duration) is associated with smaller total and regional brain volumes. […] Mechanisms whereby diabetes may contribute to brain volume loss include accelerated amyloid-β and hyperphosphorylated tau deposition as a result of hyperglycemia (29). Another possible mechanism involves pancreatic amyloid (amylin) infiltration of the brain, which then promotes amyloid-β deposition (29). […] Taken together, […] the current results suggest that diabetes is associated with both lower brain volumes and increased cerebrovascular pathology (WMH and lacunes).”

v. Interventions to increase attendance for diabetic retinopathy screening (Cochrane review).

“The primary objective of the review was to assess the effectiveness of quality improvement (QI) interventions that seek to increase attendance for DRS in people with type 1 and type 2 diabetes.

Secondary objectives were:
To use validated taxonomies of QI intervention strategies and behaviour change techniques (BCTs) to code the description of interventions in the included studies and determine whether interventions that include particular QI strategies or component BCTs are more effective in increasing screening attendance;
To explore heterogeneity in effect size within and between studies to identify potential explanatory factors for variability in effect size;
To explore differential effects in subgroups to provide information on how equity of screening attendance could be improved;
To critically appraise and summarise current evidence on the resource use, costs and cost effectiveness.”

“We included 66 RCTs conducted predominantly (62%) in the USA. Overall we judged the trials to be at low or unclear risk of bias. QI strategies were multifaceted and targeted patients, healthcare professionals or healthcare systems. Fifty-six studies (329,164 participants) compared intervention versus usual care (median duration of follow-up 12 months). Overall, DRS [diabetic retinopathy screening] attendance increased by 12% (risk difference (RD) 0.12, 95% confidence interval (CI) 0.10 to 0.14; low-certainty evidence) compared with usual care, with substantial heterogeneity in effect size. Both DRS-targeted (RD 0.17, 95% CI 0.11 to 0.22) and general QI interventions (RD 0.12, 95% CI 0.09 to 0.15) were effective, particularly where baseline DRS attendance was low. All BCT combinations were associated with significant improvements, particularly in those with poor attendance. We found higher effect estimates in subgroup analyses for the BCTs ‘goal setting (outcome)’ (RD 0.26, 95% CI 0.16 to 0.36) and ‘feedback on outcomes of behaviour’ (RD 0.22, 95% CI 0.15 to 0.29) in interventions targeting patients, and ‘restructuring the social environment’ (RD 0.19, 95% CI 0.12 to 0.26) and ‘credible source’ (RD 0.16, 95% CI 0.08 to 0.24) in interventions targeting healthcare professionals.”

“Ten studies (23,715 participants) compared a more intensive (stepped) intervention versus a less intensive intervention. In these studies DRS attendance increased by 5% (RD 0.05, 95% CI 0.02 to 0.09; moderate-certainty evidence).”

“Overall, we found that there is insufficient evidence to draw robust conclusions about the relative cost effectiveness of the interventions compared to each other or against usual care.”

“The results of this review provide evidence that QI interventions targeting patients, healthcare professionals or the healthcare system are associated with meaningful improvements in DRS attendance compared to usual care. There was no statistically significant difference between interventions specifically aimed at DRS and those which were part of a general QI strategy for improving diabetes care.”

vi. Diabetes in China: Epidemiology and Genetic Risk Factors and Their Clinical Utility in Personalized Medication.

“The incidence of type 2 diabetes (T2D) has rapidly increased over recent decades, and T2D has become a leading public health challenge in China. Compared with European descents, Chinese patients with T2D are diagnosed at a relatively young age and low BMI. A better understanding of the factors contributing to the diabetes epidemic is crucial for determining future prevention and intervention programs. In addition to environmental factors, genetic factors contribute substantially to the development of T2D. To date, more than 100 susceptibility loci for T2D have been identified. Individually, most T2D genetic variants have a small effect size (10–20% increased risk for T2D per risk allele); however, a genetic risk score that combines multiple T2D loci could be used to predict the risk of T2D and to identify individuals who are at a high risk. […] In this article, we review the epidemiological trends and recent progress in the understanding of T2D genetic etiology and further discuss personalized medicine involved in the treatment of T2D.”

“Over the past three decades, the prevalence of diabetes in China has sharply increased. The prevalence of diabetes was reported to be less than 1% in 1980 (2), 5.5% in 2001 (3), 9.7% in 2008 (4), and 10.9% in 2013, according to the latest published nationwide survey (5) […]. The prevalence of diabetes was higher in the senior population, men, urban residents, individuals living in economically developed areas, and overweight and obese individuals. The estimated prevalence of prediabetes in 2013 was 35.7%, which was much higher than the estimate of 15.5% in the 2008 survey. Similarly, the prevalence of prediabetes was higher in the senior population, men, and overweight and obese individuals. However, prediabetes was more prevalent in rural residents than in urban residents. […] the 2013 survey also compared the prevalence of diabetes among different races. The crude prevalence of diabetes was 14.7% in the majority group, i.e., Chinese Han, which was higher than that in most minority ethnic groups, including Tibetan, Zhuang, Uyghur, and Muslim. The crude prevalence of prediabetes was also higher in the Chinese Han ethnic group. The Tibetan participants had the lowest prevalence of diabetes and prediabetes (4.3% and 31.3%).”

“[T]he prevalence of diabetes in young people is relatively high and increasing. The prevalence of diabetes in the 20- to 39-year age-group was 3.2%, according to the 2008 national survey (4), and was 5.9%, according to the 2013 national survey (5). The prevalence of prediabetes also increased from 9.0% in 2008 to 28.8% in 2013 […]. Young people suffering from diabetes have a higher risk of chronic complications, which are the major cause of mortality and morbidity in diabetes. According to a study conducted in Asia (6), patients with young-onset diabetes had higher mean concentrations of HbA1c and LDL cholesterol and a higher prevalence of retinopathy (20% vs. 18%, P = 0.011) than those with late-onset diabetes. In the Chinese, patients with early-onset diabetes had a higher risk of nonfatal cardiovascular disease (7) than did patients with late-onset diabetes (odds ratio [OR] 1.91, 95% CI 1.81–2.02).”

“As approximately 95% of patients with diabetes in China have T2D, the rapid increase in the prevalence of diabetes in China may be attributed to the increasing rates of overweight and obesity and the reduction in physical activity, which is driven by economic development, lifestyle changes, and diet (3,11). According to a series of nationwide surveys conducted by the China Physical Fitness Surveillance Center (12), the prevalence of overweight (BMI ≥23.0 to <27.5 kg/m2) in Chinese adults aged 20–59 years increased from 37.4% in 2000 to 39.2% in 2005, 40.7% in 2010, and 41.2% in 2014, with an estimated increase of 0.27% per year. The prevalence of obesity (BMI ≥27.5 kg/m2) increased from 8.6% in 2000 to 10.3% in 2005, 12.2% in 2010, and 12.9% in 2014, with an estimated increase of 0.32% per year […]. The prevalence of central obesity increased from 13.9% in 2000 to 18.3% in 2005, 22.1% in 2010, and 24.9% in 2014, with an estimated increase of 0.78% per year. Notably, T2D develops at a considerably lower BMI in the Chinese population than that in European populations. […] The relatively high risk of diabetes at a lower BMI could be partially attributed to the tendency toward visceral adiposity in East Asian populations, including the Chinese population (13). Moreover, East Asian populations have been found to have a higher insulin sensitivity with a much lower insulin response than European descent and African populations, implying a lower compensatory β-cell function, which increases the risk of progressing to overt diabetes (14).”

“Over the past two decades, linkage analyses, candidate gene approaches, and large-scale GWAS have successfully identified more than 100 genes that confer susceptibility to T2D among the world’s major ethnic populations […], most of which were discovered in European populations. However, less than 50% of these European-derived loci have been successfully confirmed in East Asian populations. […] there is a need to identify specific genes that are associated with T2D in other ethnic populations. […] Although many genetic loci have been shown to confer susceptibility to T2D, the mechanism by which these loci participate in the pathogenesis of T2D remains unknown. Most T2D loci are located near genes that are related to β-cell function […] most single nucleotide polymorphisms (SNPs) contributing to the T2D risk are located in introns, but whether these SNPs directly modify gene expression or are involved in linkage disequilibrium with unknown causal variants remains to be investigated. Furthermore, the loci discovered thus far collectively account for less than 15% of the overall estimated genetic heritability.”

“The areas under the receiver operating characteristic curves (AUCs) are usually used to assess the discriminative accuracy of an approach. The AUC values range from 0.5 to 1.0, where an AUC of 0.5 represents a lack of discrimination and an AUC of 1 represents perfect discrimination. An AUC ≥0.75 is considered clinically useful. The dominant conventional risk factors, including age, sex, BMI, waist circumference, blood pressure, family history of diabetes, physical activity level, smoking status, and alcohol consumption, can be combined to construct conventional risk factor–based models (CRM). Several studies have compared the predictive capacities of models with and without genetic information. The addition of genetic markers to a CRM could slightly improve the predictive performance. For example, one European study showed that the addition of an 11-SNP GRS to a CRM marginally improved the risk prediction (AUC was 0.74 without and 0.75 with the genetic markers, P < 0.001) in a prospective cohort of 16,000 individuals (37). A meta-analysis (38) consisting of 23 studies investigating the predictive performance of T2D risk models also reported that the AUCs only slightly increased with the addition of genetic information to the CRM (median AUC was increased from 0.78 to 0.79). […] Despite great advances in genetic studies, the clinical utility of genetic information in the prediction, early identification, and prevention of T2D remains in its preliminary stage.”
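
To illustrate mechanically what a marginal AUC improvement of this kind looks like, here is an entirely synthetic sketch assuming scikit-learn and NumPy; the variables, effect sizes, and sample size below are invented for illustration and are not taken from any of the studies discussed:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Made-up "conventional" risk factors plus a weak hypothetical genetic risk score (GRS).
conventional = rng.normal(size=(n, 4))
grs = rng.normal(size=(n, 1))
logit = conventional @ np.array([0.9, 0.6, 0.4, 0.3]) + 0.2 * grs[:, 0] - 1.5
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

Xc_tr, Xc_te, Xg_tr, Xg_te, y_tr, y_te = train_test_split(
    conventional, np.hstack([conventional, grs]), y, test_size=0.3, random_state=0)

auc_crm = roc_auc_score(y_te, LogisticRegression().fit(Xc_tr, y_tr).predict_proba(Xc_te)[:, 1])
auc_grs = roc_auc_score(y_te, LogisticRegression().fit(Xg_tr, y_tr).predict_proba(Xg_te)[:, 1])
print(f"AUC, conventional risk factors only: {auc_crm:.3f}")
print(f"AUC, conventional + genetic score  : {auc_grs:.3f}")
```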

“An increasing number of studies have highlighted that early nutrition has a persistent effect on the risk of diabetes in later life (40,41). China’s Great Famine of 1959–1962 is considered to be the largest and most severe famine of the 20th century […] Li et al. (43) found that offspring of mothers exposed to the Chinese famine have a 3.9-fold increased risk of diabetes or hyperglycemia as adults. A more recent study (the Survey on Prevalence in East China for Metabolic Diseases and Risk Factors [SPECT-China]) conducted in 2014, among 6,897 adults from Shanghai, Jiangxi, and Zhejiang provinces, had the same conclusion that famine exposure during the fetal period (OR 1.53, 95% CI 1.09–2.14) and childhood (OR 1.82, 95% CI 1.21–2.73) was associated with diabetes (44). These findings indicate that undernutrition during early life increases the risk of hyperglycemia in adulthood and this association is markedly exaggerated when facing overnutrition in later life.”

February 23, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Health Economics, Immunology, Medicine, Neurology, Ophthalmology, Pharmacology, Studies

Systems Biology (III)

Some observations from chapter 4 below:

The need to maintain a steady state ensuring homeostasis is an essential concern in nature while negative feedback loop is the fundamental way to ensure that this goal is met. The regulatory system determines the interdependences between individual cells and the organism, subordinating the former to the latter. In trying to maintain homeostasis, the organism may temporarily upset the steady state conditions of its component cells, forcing them to perform work for the benefit of the organism. […] On a cellular level signals are usually transmitted via changes in concentrations of reaction substrates and products. This simple mechanism is made possible due to limited volume of each cell. Such signaling plays a key role in maintaining homeostasis and ensuring cellular activity. On the level of the organism signal transmission is performed by hormones and the nervous system. […] Most intracellular signal pathways work by altering the concentrations of selected substances inside the cell. Signals are registered by forming reversible complexes consisting of a ligand (reaction product) and an allosteric receptor complex. When coupled to the ligand, the receptor inhibits the activity of its corresponding effector, which in turn shuts down the production of the controlled substance ensuring the steady state of the system. Signals coming from outside the cell are usually treated as commands (covalent modifications), forcing the cell to adjust its internal processes […] Such commands can arrive in the form of hormones, produced by the organism to coordinate specialized cell functions in support of general homeostasis (in the organism). These signals act upon cell receptors and are usually amplified before they reach their final destination (the effector).”

“Each concentration-mediated signal must first be registered by a detector. […] Intracellular detectors are typically based on allosteric proteins. Allosteric proteins exhibit a special property: they have two stable structural conformations and can shift from one form to the other as a result of changes in ligand concentrations. […] The concentration of a product (or substrate) which triggers structural realignment in the allosteric protein (such as a regulatory enzyme) depends on the genetically-determined affinity of the active site to its ligand. Low affinity results in high target concentration of the controlled substance while high affinity translates into lower concentration […]. In other words, high concentration of the product is necessary to trigger a low-affinity receptor (and vice versa). Most intracellular regulatory mechanisms rely on noncovalent interactions. Covalent bonding is usually associated with extracellular signals, generated by the organism and capable of overriding the cell’s own regulatory mechanisms by modifying the sensitivity of receptors […]. Noncovalent interactions may be compared to requests while covalent signals are treated as commands. Signals which do not originate in the receptor’s own feedback loop but modify its affinity are known as steering signals […] Hormones which act upon cells are, by their nature, steering signals […] Noncovalent interactions — dependent on substance concentrations — impose spatial restrictions on regulatory mechanisms. Any increase in cell volume requires synthesis of additional products in order to maintain stable concentrations. The volume of a spherical cell is given as V = 4/3 π r³, where r indicates cell radius. Clearly, even a slight increase in r translates into a significant increase in cell volume, diluting any products dispersed in the cytoplasm. This implies that cells cannot expand without incurring great energy costs. It should also be noted that cell expansion reduces the efficiency of intracellular regulatory mechanisms because signals and substrates need to be transported over longer distances. Thus, cells are universally small, regardless of whether they make up a mouse or an elephant.”
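
The cubic scaling the authors point to is worth seeing in numbers; a trivial sketch:

```python
from math import pi

# Cell volume scales with the cube of the radius: V = 4/3 * pi * r**3.
def volume(r):
    return 4.0 / 3.0 * pi * r ** 3

r0 = 10.0                                   # arbitrary units
for factor in (1.1, 1.5, 2.0):
    ratio = volume(factor * r0) / volume(r0)
    print(f"radius x{factor:.1f} -> volume x{ratio:.2f}")
# A 10% larger radius already means ~33% more volume to keep supplied with
# products at constant concentration; doubling the radius means 8x the volume.
```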

An effector is an element of a regulatory loop which counteracts changes in the regulated quantity […] Synthesis and degradation of biological compounds often involves numerous enzymes acting in sequence. The product of one enzyme is a substrate for another enzyme. With the exception of the initial enzyme, each step of this cascade is controlled by the availability of the supplied substrate […] The effector consists of a chain of enzymes, each of which depends on the activity of the initial regulatory enzyme […] as well as on the activity of its immediate predecessor which supplies it with substrates. The function of all enzymes in the effector chain is indirectly dependent on the initial enzyme […]. This coupling between the receptor and the first link in the effector chain is a universal phenomenon. It can therefore be said that the initial enzyme in the effector chain is, in fact, a regulatory enzyme. […] Most cell functions depend on enzymatic activity. […] It seems that a set of enzymes associated with a specific process which involves a negative feedback loop is the most typical form of an intracellular regulatory effector. Such effectors can be controlled through activation or inhibition of their associated enzymes.”

“The organism is a self-contained unit represented by automatic regulatory loops which ensure homeostasis. […] Effector functions are conducted by cells which are usually grouped and organized into tissues and organs. Signal transmission occurs by way of body fluids, hormones or nerve connections. Cells can be treated as automatic and potentially autonomous elements of regulatory loops, however their specific action is dependent on the commands issued by the organism. This coercive property of organic signals is an integral requirement of coordination, allowing the organism to maintain internal homeostasis. […] Activities of the organism are themselves regulated by their own negative feedback loops. Such regulation differs however from the mechanisms observed in individual cells due to its place in the overall hierarchy and differences in signal properties, including in particular:
• Significantly longer travel distances (compared to intracellular signals);
• The need to maintain hierarchical superiority of the organism;
• The relative autonomy of effector cells. […]
The relatively long distance travelled by organism’s signals and their dilution (compared to intracellular ones) calls for amplification. As a consequence, any errors or random distortions in the original signal may be drastically exacerbated. A solution to this problem comes in the form of encoding, which provides the signal with sufficient specificity while enabling it to be selectively amplified. […] a loudspeaker can […] assist in acoustic communication, but due to the lack of signal encoding it cannot compete with radios in terms of communication distance. The same reasoning applies to organism-originated signals, which is why information regarding blood glucose levels is not conveyed directly by glucose but instead by adrenalin, glucagon or insulin. Information encoding is handled by receptors and hormone-producing cells. Target cells are capable of decoding such signals, thus completing the regulatory loop […] Hormonal signals may be effectively amplified because the hormone itself does not directly participate in the reaction it controls — rather, it serves as an information carrier. […] strong amplification invariably requires encoding in order to render the signal sufficiently specific and unambiguous. […] Unlike organisms, cells usually do not require amplification in their internal regulatory loops — even the somewhat rare instances of intracellular amplification only increase signal levels by a small amount. Without the aid of an amplifier, messengers coming from the organism level would need to be highly concentrated at their source, which would result in decreased efficiency […] Most signals originated on organism’s level travel with body fluids; however if a signal has to reach its destination very rapidly (for instance in muscle control) it is sent via the nervous system”.

“Two types of amplifiers are observed in biological systems:
1. cascade amplifier,
2. positive feedback loop. […]
A cascade amplifier is usually a collection of enzymes which perform their action by activation in strict sequence. This mechanism resembles multistage (sequential) synthesis or degradation processes, however instead of exchanging reaction products, amplifier enzymes communicate by sharing activators or by directly activating one another. Cascade amplifiers are usually contained within cells. They often consist of kinases. […] Amplification effects occurring at each stage of the cascade contribute to its final result. […] While the kinase amplification factor is estimated to be on the order of 10³, the phosphorylase cascade results in 10¹⁰-fold amplification. It is a stunning value, though it should also be noted that the hormones involved in this cascade produce particularly powerful effects. […] A positive feedback loop is somewhat analogous to a negative feedback loop, however in this case the input and output signals work in the same direction — the receptor upregulates the process instead of inhibiting it. Such upregulation persists until the available resources are exhausted.
Positive feedback loops can only work in the presence of a control mechanism which prevents them from spiraling out of control. They cannot be considered self-contained and only play a supportive role in regulation. […] In biological systems positive feedback loops are sometimes encountered in extracellular regulatory processes where there is a need to activate slowly-migrating components and greatly amplify their action in a short amount of time. Examples include blood coagulation and complement factor activation […] Positive feedback loops are often coupled to negative loop-based control mechanisms. Such interplay of loops may impart the signal with desirable properties, for instance by transforming a flat signal into a sharp spike required to overcome the activation threshold for the next stage in a signalling cascade. An example is the ejection of calcium ions from the endoplasmic reticulum in the phospholipase C cascade, itself subject to a negative feedback loop.”
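
The arithmetic behind cascade amplification is just a product of per-stage gains; a toy sketch in which the individual stage gains are placeholders chosen to reproduce an overall 10¹⁰-fold amplification, not measured values:

```python
# Overall gain of a cascade amplifier is the product of its per-stage gains.
# The numbers below are illustrative placeholders, not experimental values.
stage_gains = [100, 1_000, 100, 1_000]      # four hypothetical enzymatic stages
overall = 1
for gain in stage_gains:
    overall *= gain
print(f"overall amplification: {overall:.0e}")   # prints 1e+10 for these numbers
```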

“Strong signal amplification carries an important drawback: it tends to “overshoot” its target activity level, causing wild fluctuations in the process it controls. […] Nature has evolved several means of signal attenuation. The most typical mechanism superimposes two regulatory loops which affect the same parameter but act in opposite directions. An example is the stabilization of blood glucose levels by two contradictory hormones: glucagon and insulin. Similar strategies are exploited in body temperature control and many other biological processes. […] The coercive properties of signals coming from the organism carry risks associated with the possibility of overloading cells. The regulatory loop of an autonomous cell must therefore include an “off switch”, controlled by the cell. An autonomous cell may protect itself against excessive involvement in processes triggered by external signals (which usually incur significant energy expenses). […] The action of such mechanisms is usually timer-based, meaning that they inactivate signals following a set amount of time. […] The ability to interrupt signals protects cells from exhaustion. Uncontrolled hormone-induced activity may have detrimental effects upon the organism as a whole. This is observed e.g. in the case of the vibrio cholerae toxin which causes prolonged activation of intestinal epithelial cells by locking protein G in its active state (resulting in severe diarrhea which can dehydrate the organism).”

“Biological systems in which information transfer is affected by high entropy of the information source and ambiguity of the signal itself must include discriminatory mechanisms. These mechanisms usually work by eliminating weak signals (which are less specific and therefore introduce ambiguities). They create additional obstacles (thresholds) which the signals must overcome. A good example is the mechanism which eliminates the ability of weak, random antigens to activate lymphatic cells. It works by inhibiting blastic transformation of lymphocytes until a so-called receptor cap has accumulated on the surface of the cell […]. Only under such conditions can the activation signal ultimately reach the cell nucleus […] and initiate gene transcription. […] weak, reversible nonspecific interactions do not permit sufficient aggregation to take place. This phenomenon can be described as a form of discrimination against weak signals. […] Discrimination may also be linked to effector activity. […] Cell division is counterbalanced by programmed cell death. The most typical example of this process is apoptosis […] Each cell is prepared to undergo controlled death if required by the organism, however apoptosis is subject to tight control. Cells protect themselves against accidental triggering of the process via IAP proteins. Only strong proapoptotic signals may overcome this threshold and initiate cellular suicide”.

Simply knowing the sequences, structures or even functions of individual proteins does not provide sufficient insight into the biological machinery of living organisms. The complexity of individual cells and entire organisms calls for functional classification of proteins. This task can be accomplished with a proteome — a theoretical construct where individual elements (proteins) are grouped in a way which acknowledges their mutual interactions and interdependencies, characterizing the information pathways in a complex organism.
Most ongoing proteome construction projects focus on individual proteins as the basic building blocks […] [We would instead argue in favour of a model in which] [t]he basic unit of the proteome is one negative feedback loop (rather than a single protein) […]
Due to the relatively large number of proteins (between 25 and 40 thousand in the human organism), presenting them all on a single graph with vertex lengths corresponding to the relative duration of interactions would be unfeasible. This is why proteomes are often subdivided into functional subgroups such as the metabolome (proteins involved in metabolic processes), interactome (complex-forming proteins), kinomes (proteins which belong to the kinase family) etc.”

February 18, 2018 Posted by | Biology, Books, Chemistry, Genetics, Medicine, Molecular biology

Peripheral Neuropathy (I)

The objective of this book is to update health care professionals on recent advances in the pathogenesis, diagnosis and treatment of peripheral neuropathy. This work was written by a group of clinicians and scientists with extensive expertise in the field.

This is not the first book I’ve read on the topic, so a lot of the material was of course review – but it’s a decent text, and I decided to blog it in at least some detail anyway. It’s somewhat technical and probably not a very good introduction if you know next to nothing about neurology – in that case I’m certain Said’s book (see the ‘not’-link above) is a better option.

I have added some observations from the first couple of chapters below. As InTech publications like these explicitly encourage people to share the ideas and observations included in these books, I shall probably cover the book in more detail than I otherwise would have.

“Within the developing world, infectious diseases [2-4] and trauma [5] are the most common sources of neuropathic pain syndromes. The developed world, in contrast, suffers more frequently from diabetic polyneuropathy (DPN) [6, 7], post herpetic neuralgia (PHN) from herpes zoster infections [8], and chemotherapy-induced peripheral neuropathy (CIPN) [9, 10]. There is relatively little epidemiological data regarding the prevalence of neuropathic pain within the general population, but a few estimates suggest it is around 7-8% [11, 12]. Despite the widespread occurrence of neuropathic pain, treatment options are limited and often ineffective […] Neuropathic pain can present as on-going or spontaneous discomfort that occurs in the absence of any observable stimulus or a painful hypersensitivity to temperature and touch. […] people with chronic pain have increased incidence of anxiety and depression and reduced scores in quantitative measures of health related quality of life [15]. Despite significant progress in chronic and neuropathic pain research, which has led to the discovery of several efficacious treatments in rodent models, pain management in humans remains ineffective and insufficient [16]. The lack of translational efficiency may be due to inadequate animal models that do not faithfully recapitulate human disease or from biological differences between rodents and humans […] In an attempt to increase the efficacy of medical treatment for neuropathic pain, clinicians and researchers have been moving away from an etiology based classification towards one that is mechanism based. It is current practice to diagnose a person who presents with neuropathic pain according to the underlying etiology and lesion topography [17]. However, this does not translate to effective patient care as these classification criteria do not suggest efficacious treatment. A more apt diagnosis might include a description of symptoms and the underlying pathophysiology associated with those symptoms.”

Neuropathic pain has been defined […] as “pain arising as the direct consequence of a lesion or disease affecting the somatosensory system” [18]. This is distinct from nociceptive pain – which signals tissue damage through an intact nervous system – in underlying pathophysiology, severity, and associated psychological comorbidities [13]. Individuals who suffer from neuropathic pain syndromes report pain of higher intensity and duration than individuals with non-neuropathic chronic pain and have significantly increased incidence of depression, anxiety, and sleep disorders [13, 19]. […] individuals with seemingly identical diseases who both develop neuropathic pain may experience distinct abnormal sensory phenotypes. This may include a loss of sensory perception in some modalities and increased activity in others. Often a reduction in the perception of vibration and light touch is coupled with positive sensory symptoms such as paresthesia, dysesthesia, and pain[20]. Pain may manifest as either spontaneous, with a burning or shock-like quality, or as a hypersensitivity to mechanical or thermal stimuli [21]. This hypersensitivity takes two forms: allodynia, pain that is evoked from a normally non-painful stimulus, and hyperalgesia, an exaggerated pain response from a moderately painful stimulus. […] Noxious stimuli are perceived by small diameter peripheral neurons whose free nerve endings are distributed throughout the body. These neurons are distinct from, although anatomically proximal to, the low threshold mechanoreceptors responsible for the perception of vibration and light touch.”

“In addition to hypersensitivity, individuals with neuropathic pain frequently experience ongoing spontaneous pain as a major source of discomfort and distress. […] In healthy individuals, a quiescent neuron will only generate an action potential when presented with a stimulus of sufficient magnitude to cause membrane depolarization. Following nerve injury, however, significant changes in ion channel expression, distribution, and kinetics lead to disruption of the homeostatic electric potential of the membrane resulting in oscillations and burst firing. This manifests as spontaneous pain that has a shooting or burning quality […] There is reasonable evidence to suggest that individual ion channels contribute to specific neuropathic pain symptoms […] [this observation] provides an intriguing therapeutic possibility: selective pharmacologic ion channel blockers to relieve individual sensory symptoms with minimal unintended effects allowing pain relief without global numbness. […] Central sensitization leads to painful hypersensitivity […] Functional and structural changes of dorsal horn circuitry lead to pain hypersensitivity that is maintained independent of peripheral sensitization [38]. This central sensitization provides a mechanistic explanation for the sensory abnormalities that occur in both acute and chronic pain states, such as the expansion of hypersensitivity beyond the innervation territory of a lesion site, repeated stimulation of a constant magnitude leading to an increasing pain response, and pain outlasting a peripheral stimulus [39-41]. In healthy individuals, acute pain triggers central sensitization, but homeostatic sensitivity returns following clearance of the initial insult. In some individuals who develop neuropathic pain, genotype and environmental factors contribute to maintenance of central sensitization leading to spontaneous pain, hyperalgesia, and allodynia. […] Similarly, facilitation also results in a lowered activation threshold in second order neurons”.

“Chronic pain conditions are associated with vast functional and structural changes of the brain, when compared to healthy controls, but it is currently unclear which comes first: does chronic pain cause distortions of brain circuitry and anatomy or do cerebral abnormalities trigger and/or maintain the perception of chronic pain? […] Brain abnormalities in chronic pain states include modification of brain activity patterns, localized decreases in gray matter volume, and circuitry rerouting [53]. […] Chronic pain conditions are associated with localized reduction in gray matter volume, and the topography of gray matter volume reduction is dictated, at least in part, by the particular pathology. […] These changes appear to represent a form of plasticity as they are reversible when pain is effectively managed [63, 67, 68].”

“By definition, neuropathic pain indicates direct pathology of the nervous system while nociceptive pain is an indication of real or potential tissue damage. Due to the distinction in pathophysiology, conventional treatments prescribed for nociceptive pain are not very effective in treating neuropathic pain and vice versa [78]. Therefore the first step towards meaningful pain relief is an accurate diagnosis. […] Treating neuropathic pain requires a multifaceted approach that aims to eliminate the underlying etiology, when possible, and manage the associated discomforts and emotional distress. Although in some cases it is possible to directly treat the cause of neuropathic pain, for example surgery to alleviate a constricted nerve, it is more likely that the primary cause is untreatable, as is the case with singular traumatic events such as stroke and spinal cord injury and diseases like diabetes. When this is the case, symptom management and pain reduction become the primary focus. Unfortunately, in most cases complete elimination of pain is not a feasible endpoint; a pain reduction of 30% is considered to be efficacious [21]. Additionally, many pharmacological treatments require careful titration and tapering to prevent adverse effects and toxicity. This process may take several weeks to months, and ultimately the drug may be ineffective, necessitating another trial with a different medication. It is therefore necessary that both doctor and patient begin treatment with realistic expectations and goals.”

First-line medications for the treatment of neuropathic pain are those that have proven efficacy in randomized clinical trials (RCTs) and are consistent with pooled clinical observations [81]. These include antidepressants, calcium channel ligands, and topical lidocaine [15]. Tricyclic antidepressants (TCAs) have demonstrated efficacy in treating neuropathic pain with positive results in RCTs for central post-stroke pain, PHN, painful diabetic and non-diabetic polyneuropathy, and post-mastectomy pain syndrome [82]. However they do not seem to be effective in treating painful HIV-neuropathy or CIPN [82]. Duloxetine and venlafaxine, two selective serotonin norepinephrine reuptake inhibitors (SSNRIs), have been found to be effective in DPN and both DPN and painful polyneuropathies, respectively [81]. […] Gabapentin and pregabalin have also demonstrated efficacy in several neuropathic pain conditions including DPN and PHN […] Topical lidocaine (5% patch or gel) has significantly reduced allodynia associated with PHN and other neuropathic pain syndromes in several RCTs [81, 82]. With no reported systemic adverse effects and mild skin irritation as the only concern, lidocaine is an appropriate choice for treating localized peripheral neuropathic pain. In the event that first line medications, alone or in combination, are not effective at achieving adequate pain relief, second line medications may be considered. These include opioid analgesics and tramadol, pharmaceuticals which have proven efficacy in RCTs but are associated with significant adverse effects that warrant cautious prescription [15]. Although opioid analgesics are effective pain relievers in several types of neuropathic pain [81, 82, 84], they are associated with misuse or abuse, hypogonadism, constipation, nausea, and immunological changes […] Careful consideration should be given when prescribing opiates to patients who have a personal or family history of drug or alcohol abuse […] Deep brain stimulation, a neurosurgical technique by which an implanted electrode delivers controlled electrical impulses to targeted brain regions, has demonstrated some efficacy in treating chronic pain but is not routinely employed due to a high risk-to-benefit ratio [91]. […] A major challenge in treating neuropathic pain is the heterogeneity of disease pathogenesis within an individual etiological classification. Patients with seemingly identical diseases may experience completely different neuropathic pain phenotypes […] One of the biggest barriers to successful management of neuropathic pain has been the lack of understanding in the underlying pathophysiology that produces a pain phenotype. To that end, significant progress has been made in basic science research.”

“In diabetes mellitus, nerves and their supporting cells are subjected to prolonged hyperglycemia and metabolic disturbances and this culminates in reversible/irreversible nervous system dysfunction and damage, namely diabetic peripheral neuropathy (DPN). Due to the varying compositions and extents of neurological involvements, it is difficult to obtain accurate and thorough prevalence estimates of DPN, rendering this microvascular complication vastly underdiagnosed and undertreated [1-4]. According to the American Diabetes Association, DPN occurs in 60-70% of diabetic individuals [5] and represents the leading cause of peripheral neuropathies among all cases [6, 7].”

A quick remark: This number seems really high to me. I won’t rule out that it’s accurate if you go with highly sensitive measures of neuropathy, but the number of patients who will experience significant clinical sequelae as a result of DPN is in my opinion likely to be significantly lower than that. On a related note, it should also be kept in mind that although diabetes-related neurological complications may display some clustering in patient groups – which will necessarily decrease the magnitude of the problem – no single test will ever completely rule out neurological complications in a diabetic; a patient with a negative Semmes-Weinstein monofilament test may still have autonomic neuropathy. So assessing the full disease burden in the context of diabetes-related neurological complications cannot be done using only a single instrument, and the full disease burden is likely to be higher than individual estimates encountered in the literature (unless a full neurological workup was done, which is unlikely to be the case). The authors do go into more detail about subgroups, clinical significance, etc. below, but I thought this observation was important to add early on in this part of the coverage.

“Because diverse anatomic distributions and fiber types may be differentially affected in patients with diabetes, the disease manifestations, courses and pathologies of clinical and subclinical DPN are rather heterogeneous and encompass a broad spectrum […] Current consensus divides diabetes-associated somatic neuropathic syndromes into the focal/multifocal and diffuse/generalized neuropathies [6, 14]. The first category comprises a group of asymmetrical, acute-in-onset and self-limited single lesion(s) of nerve injury or impairment largely resulting from the increased vulnerability of diabetic nerves to mechanical insults (Carpal Tunnel Syndrome) […]. Such mononeuropathies occur idiopathically and only become a clinical problem in association with aging in 5-10% of those affected. Therefore, focal neuropathies are not extensively covered in this chapter [16]. The rest of the patients frequently develop diffuse neuropathies characterized by symmetrical distribution, insidious onset and chronic progression. In particular, a distal symmetrical sensorimotor polyneuropathy accounts for 90% of all DPN diagnoses in type 1 and type 2 diabetics and affects all types of peripheral sensory and motor fibers in a temporally non-uniform manner [6, 17].
Symptoms begin with prickling, tingling, numbness, paresthesia, dysesthesia and various qualities of pain associated with small sensory fibers at the very distal end (toes) of lower extremities [1, 18]. Presence of the above symptoms together with abnormal nociceptive response of epidermal C and A-δ fibers to pain/temperature (as revealed by clinical examination) constitute the diagnosis of small fiber sensory neuropathy, which produces both painful and insensate phenotypes [19]. Painful diabetic neuropathy is a prominent, distressing and chronic experience in at least 10-30% of DPN populations [20, 21]. Its occurrence does not necessarily correlate with impairment in electrophysiological or quantitative sensory testing (QST). […] Large myelinated sensory fibers that innervate the dermis, such as Aβ, also become involved later on, leading to impaired proprioception, vibration and tactile detection, and mechanical hypoalgesia [19]. Following this “stocking-glove”, length-dependent and dying-back evolvement, neurodegeneration gradually proceeds to proximal muscle sensory and motor nerves. Its presence manifests in neurological testings as reduced nerve impulse conductions, diminished ankle tendon reflex, unsteadiness and muscle weakness [1, 24].
The absence of both protective sensory response and motor coordination predisposes the neuropathic foot to impaired wound healing and gangrenous ulceration, often followed by limb amputation in severe and/or advanced cases […]. Although symptomatic motor deficits only appear in later stages of DPN [25], motor denervation and distal atrophy can increase the rate of fractures by causing repetitive minor trauma or falls [24, 28]. Other unusual but highly disabling late sequelae of DPN include limb ischemia and joint deformity [6]; the latter also being termed Charcot’s neuroarthropathy or Charcot’s joints [1]. In addition to significant morbidities, several separate cohort studies provided evidence that DPN [29], diabetic foot ulcers [30] and increased toe vibration perception threshold (VPT) [31] are all independent risk factors for mortality.”

“Unfortunately, current therapy for DPN is far from effective and at best only delays the onset and/or progression of the disease via tight glucose control […] Even with near normoglycemic control, a substantial proportion of patients still suffer the debilitating neurotoxic consequences of diabetes [34]. On the other hand, some with poor glucose control are spared from clinically evident signs and symptoms of neuropathy for a long time after diagnosis [37-39]. Thus, other etiological factors independent of hyperglycemia are likely to be involved in the development of DPN. Data from a number of prospective, observational studies suggested that older age, longer diabetes duration, genetic polymorphism, presence of cardiovascular disease markers, malnutrition, presence of other microvascular complications, alcohol and tobacco consumption, and higher constitutional indexes (e.g. weight and height) interact with diabetes and make for strong predictors of neurological decline [13, 32, 40-42]. Targeting some of these modifiable risk factors in addition to glycemia may improve the management of DPN. […] enormous efforts have been devoted to understanding and intervening with the molecular and biochemical processes linking the metabolic disturbances to sensorimotor deficits by studying diabetic animal models. In return, nearly 2,200 articles were published in PubMed Central and at least 100 clinical trials were reported evaluating the efficacy of a number of pharmacological agents; the majority of them are designed to inhibit specific pathogenic mechanisms identified by these experimental approaches. Candidate agents have included aldose reductase inhibitors, AGE inhibitors, γ-linolenic acid, α-lipoic acid, vasodilators, nerve growth factor, protein kinase Cβ inhibitors, and vascular endothelial growth factor. Notwithstanding a wealth of knowledge and promising results in animals, none has translated into definitive clinical success […] Based on the records published by the National Institute of Neurological Disorders and Stroke (NINDS), a main source of DPN research, about 16,488 projects were funded at a cost of over $8 billion for the fiscal years of 2008 through 2012. Of these projects, an estimated 72,200 animals were used annually to understand basic physiology and disease pathology as well as to evaluate potential drugs [255]. As discussed above, however, the usefulness of these pharmaceutical agents developed through such a pipeline in preventing or reducing neuronal damage has been equivocal and usually halted at human trials due to toxicity, lack of efficacy or both […]. Clearly, the pharmacological translation from our decades of experimental modeling to clinical practice with regard to DPN has thus far not even [been] close to satisfactory.”

“Whereas a majority of the drugs investigated during preclinical testing achieved experimentally desired endpoints without revealing significant toxicity, more than half that entered clinical evaluation for treating DPN were withdrawn as a consequence of moderate to severe adverse events even at a much lower dose. Generally, using other species as surrogates for the human population inherently encumbers the accurate prediction of toxic reactions for several reasons […] First of all, it is easy to dismiss drug-induced non-specific effects in animals – especially for laboratory rodents, which do not share the same size, anatomy and physical activity with humans. […] Second, some physiological and behavioral phenotypes observable in humans are impossible for animals to express. In this aspect, photosensitive skin rash and pain serve as two good examples of non-translatable side effects. Rodent skin differs from that of humans in that it has a thinner and hairier epidermis and distinct DNA repair abilities [260]. Therefore, most rodent strains used in diabetes modeling provide poor estimates for the probability of cutaneous hypersensitivity reactions to pharmacological treatments […] Another predicament is to assess pain in rodents. The reason for this is simple: these animals cannot tell us when, where or even whether they are experiencing pain […]. Since there is no specific type of behavior with which a painful reaction can be unequivocally associated, this often leads to underestimation of painful side effects during preclinical drug screening […] The third problem is that animals and humans have different pharmacokinetic and toxicological responses.”

“Genetic or chemical-induced diabetic rats or mice have been a major tool for preclinical pharmacological evaluation of potential DPN treatments. Yet, they do not faithfully reproduce many neuropathological manifestations in human diabetics. The difficulty of such begins with the fact that it is not possible to obtain in rodents a qualitative and quantitative expression of the clinical symptoms that are frequently presented in neuropathic diabetic patients, including spontaneous pain of different characteristics (e.g. prickling, tingling, burning, squeezing), paresthesia and numbness. As symptomatic changes constitute an important parameter of therapeutic outcome, this may well underlie the failure of some aforementioned drugs in clinical trials despite their good performance in experimental tests […] Development of nerve dysfunction in diabetic rodents also does not follow the common natural history of human DPN. […] Besides the lack of anatomical resemblance, the changes in disease severity are often missing in these models. […] importantly, foot ulcers that occur as a late complication to 15% of all individuals with diabetes [14] do not spontaneously develop in hyperglycemic rodents. Superimposed injury by experimental procedure in the foot pads of diabetic rats or mice may lend certain insight in the impaired wound healing in diabetes [278] but is not reflective of the chronic, accumulating pathological changes in diabetic feet of human counterparts. Another salient feature of human DPN that has not been described in animals is the predominant sensory and autonomic nerve damage versus minimal involvement of motor fibers [279]. This should elicit particular caution as the selective susceptibility is critical to our true understanding of the etiopathogenesis underlying distal sensorimotor polyneuropathy in diabetes. In addition to the lack of specificity, most animal models studied only cover a narrow spectrum of clinical DPN and have not successfully duplicated syndromes including proximal motor neuropathy and focal lesions [279].
Morphologically, fiber atrophy and axonal loss exist in STZ-rats and other diabetic rodents but are much milder compared to the marked degeneration and loss of myelinated and unmyelinated nerves readily observed in human specimens [280]. Of significant note, rodents are notoriously resistant to developing some of the histological hallmarks seen in diabetic patients, such as segmental and paranodal demyelination […] the simultaneous presence of degenerating and regenerating fibers that is characteristic of early DPN has not been clearly demonstrated in these animals [44]. Since such dynamic nerve degeneration/regeneration signifies an active state of nerve repair and is most likely to be amenable to therapeutic intervention, absence of this property makes rodent models a poor tool in both deciphering disease pathogenesis and designing treatment approaches […] With particular respect to neuroanatomy, a peripheral axon in humans can reach as long as one meter [296] whereas the maximal length of the axons innervating the hind limb is five centimeters in mice and twelve centimeters in rats. This short length makes it impossible to study in rodents the prominent length dependency and dying-back feature of peripheral nerve dysfunction that characterizes human DPN. […] For decades the cytoarchitecture of human islets was assumed to be just like those in rodents with a clear anatomical subdivision of β-cells and other cell types. By using confocal microscopy and multi-fluorescent labeling, it was finally uncovered that human islets have not only a substantially lower percentage of β-cell population, but also a mixed — rather than compartmentalized — organization of the different cell types [297]. This cellular arrangement was demonstrated to directly alter the functional performance of human islets as opposed to rodent islets. Although it is not known whether such profound disparities in cell composition and association also exist in the PNS, it might as well be anticipated considering the many sophisticated sensory and motor activities that are unique to humans. Considerable species difference also manifest at a molecular level. […] At least 80% of human genes have a counterpart in the mouse and rat genome. However, temporal and spatial expression of these genes can vary remarkably between humans and rodents, in terms of both extent and isoform specificity.”

“Ultimately, a fundamental problem associated with resorting to rodents in DPN research is to study a human disorder that takes decades to develop and progress in organisms with a maximum lifespan of 2-3 years. […] It is […] fair to say that a full clinical spectrum of the maturity-onset DPN likely requires a length of time exceeding the longevity of rodents to present and diabetic rodent models at best only help illustrate the very early aspects of the entire disease syndrome. Since none of the early pathogenetic pathways revealed in diabetic rodents will contribute to DPN in a quantitatively and temporally uniform fashion throughout the prolonged natural history of this disease, it is not surprising that a handful of inhibitors developed against these processes have not benefited patients with relatively long-standing neuropathy. As a matter of fact, any agents targeting single biochemical insults would be too little too late to treat a chronic neurological disorder with established nerve damage and pathogenetic heterogeneity […] It is important to point out that the present review does not argue against the ability of animal models to shed light on basic molecular, cellular and physiological processes that are shared among species. Undoubtedly, animal models of diabetes have provided abundant insights into the disease biology of DPN. Nevertheless, the lack of any meaningful advance in identifying a promising pharmacological target necessitates a reexamination of the validity of current DPN models as well as to offer a plausible alternative methodology to scientific approaches and disease intervention. […] we conclude that the fundamental species differences have led to misinterpretation of rodent data and overall failure of pharmacological investment. As more is being learned, it is becoming prevailing that DPN is a chronic, heterogeneous disease unlikely to benefit from targeting specific and early pathogenetic components revealed by animal studies.”

February 13, 2018 Posted by | Books, Diabetes, Genetics, Medicine, Neurology, Pharmacology

Systems Biology (II)

Some observations from the book’s chapter 3 below:

“Without regulation biological processes would become progressively more and more chaotic. In living cells the primary source of information is genetic material. Studying the role of information in biology involves signaling (i.e. spatial and temporal transfer of information) and storage (preservation of information). Regarding the role of the genome we can distinguish three specific aspects of biological processes: steady-state genetics, which ensure cell-level and body homeostasis; genetics of development, which controls cell differentiation and genesis of the organism; and evolutionary genetics, which drives speciation. […] The ever growing demand for information, coupled with limited storage capacities, has resulted in a number of strategies for minimizing the quantity of the encoded information that must be preserved by living cells. In addition to combinatorial approaches based on noncontiguous genes structure, self-organization plays an important role in cellular machinery. Nonspecific interactions with the environment give rise to coherent structures despite the lack of any overt information store. These mechanisms, honed by evolution and ubiquitous in living organisms, reduce the need to directly encode large quantities of data by adopting a systemic approach to information management.”

“Information is commonly understood as a transferable description of an event or object. Information transfer can be either spatial (communication, messaging or signaling) or temporal (implying storage). […] The larger the set of choices, the lower the likelihood [of] making the correct choice by accident and — correspondingly — the more information is needed to choose correctly. We can therefore state that an increase in the cardinality of a set (the number of its elements) corresponds to an increase in selection indeterminacy. This indeterminacy can be understood as a measure of “a priori ignorance”. […] Entropy determines the uncertainty inherent in a given system and therefore represents the relative difficulty of making the correct choice. For a set of possible events it reaches its maximum value if the relative probabilities of each event are equal. Any information input reduces entropy — we can therefore say that changes in entropy are a quantitative measure of information. […] Physical entropy is highest in a state of equilibrium, i.e. lack of spontaneity (ΔG = 0) which effectively terminates the given reaction. Regulatory processes which counteract the tendency of physical systems to reach equilibrium must therefore oppose increases in entropy. It can be said that a steady inflow of information is a prerequisite of continued function in any organism. As selections are typically made at the entry point of a regulatory process, the concept of entropy may also be applied to information sources. This approach is useful in explaining the structure of regulatory systems which must be “designed” in a specific way, reducing uncertainty and enabling accurate, error-free decisions.

The fire ant exudes a pheromone which enables it to mark sources of food and trace its own path back to the colony. In this way, the ant conveys pathing information to other ants. The intensity of the chemical signal is proportional to the abundance of the source. Other ants can sense the pheromone from a distance of several (up to a dozen) centimeters and thus locate the source themselves. […] As can be expected, an increase in the entropy of the information source (i.e. the measure of ignorance) results in further development of regulatory systems — in this case, receptors capable of receiving signals and processing them to enable accurate decisions. Over time, the evolution of regulatory mechanisms increases their performance and precision. The purpose of various structures involved in such mechanisms can be explained on the grounds of information theory. The primary goal is to select the correct input signal, preserve its content and avoid or eliminate any errors.”
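A brief aside from me (not from the book): the entropy claims above are easy to make concrete with Shannon’s formula, H = −Σ p·log2(p). A minimal Python sketch, using made-up probability vectors purely for illustration:

import math

def shannon_entropy(probs):
    # Shannon entropy in bits: H = -sum(p * log2(p)) over outcomes with p > 0.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four equally likely outcomes: maximum uncertainty for a set of this cardinality.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits

# After some 'information input' has made one outcome much more likely,
# the remaining uncertainty (entropy) drops.
print(shannon_entropy([0.85, 0.05, 0.05, 0.05]))   # ~0.85 bits

# A fully determined choice carries no residual uncertainty at all.
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))       # 0.0 bits

Increasing the cardinality of the set of equally likely choices increases the entropy (eight equally likely outcomes give 3 bits rather than 2), which is the sense in which a larger set of choices requires more information to choose correctly.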

“Genetic information stored in nucleotide sequences can be expressed and transmitted in two ways:
a. via replication (in cell division);
b. via transcription and translation (also called gene expression […])
Both processes act as effectors and can be triggered by certain biological signals transferred on request.
Gene expression can be defined as a sequence of events which lead to the synthesis of proteins or their products required for a particular function. In cell division, the goal of this process is to generate a copy of the entire genetic code (S phase), whereas in gene expression only selected fragments of DNA (those involved in the requested function) are transcribed and translated. […] Transcription calls for exposing a section of the cell’s genetic code and although its product (RNA) is short-lived, it can be recreated on demand, just like a carbon copy of a printed text. On the other hand, replication affects the entire genetic material contained in the cell and must conform to stringent precision requirements, particularly as the size of the genome increases.”

“The magnitude of effort involved in replication of genetic code can be visualized by comparing the DNA chain to a zipper […]. Assuming that the zipper consists of three pairs of interlocking teeth per centimeter (300 per meter) and that the human genome is made up of 3 billion […] base pairs, the total length of our uncoiled DNA in “zipper form” would be equal to […] 10,000 km […] If we were to unfasten the zipper at a rate of 1 m per second, the entire unzipping process would take approximately 3 months […]. This comparison should impress upon the reader the length of the DNA chain and the precision with which individual nucleotides must be picked to ensure that the resulting code is an exact copy of the source. It should also be noted that for each base pair the polymerase enzyme needs to select an appropriate matching nucleotide from among four types of nucleotides present in the solution, and attach it to the chain (clearly, no such problem occurs in zippers). The reliability of an average enzyme is on the order of 10^-3–10^-4, meaning that one error occurs for every 1,000–10,000 interactions between the enzyme and its substrate. Given this figure, replication of 3*10^9 base pairs would introduce approximately 3 million errors (mutations) per genome, resulting in a highly inaccurate copy. Since the observed reliability of replication is far higher, we may assume that some corrective mechanisms are involved. In reality, the remarkable precision of genetic replication is ensured by DNA repair processes, and in particular by the corrective properties of polymerase itself.

Many mutations are caused by the inherent chemical instability of nucleic acids: for example, cytosine may spontaneously convert to uracil. In the human genome such an event occurs approximately 100 times per day; however uracil is not normally encountered in DNA and its presence alerts defensive mechanisms which correct the error. Another type of mutation is spontaneous depurination, which also triggers its own, dedicated error correction procedure. Cells employ a large number of corrective mechanisms […] DNA repair mechanisms may be treated as an “immune system” which protects the genome from loss or corruption of genetic information. The unavoidable mutations which sometimes occur despite the presence of error correction-mechanisms can be masked due to doubled presentation (alleles) of genetic information. Thus, most mutations are recessive and not expressed in the phenotype. As the length of the DNA chain increases, mutations become more probable. It should be noted that the number of nucleotides in DNA is greater than the relative number of aminoacids participating in polypeptide chains. This is due to the fact that each aminoacid is encoded by exactly three nucleotides — a general principle which applies to all living organisms. […] Fidelity is, of course, fundamentally important in DNA replication as any harmful mutations introduced in its course are automatically passed on to all successive generations of cells. In contrast, transcription and translation processes can be more error-prone as their end products are relatively short-lived. Of note is the fact that faulty transcripts appear in relatively low quantities and usually do not affect cell functions, since regulatory processes ensure continued synthesis of the required substances until a suitable level of activity is reached. Nevertheless, it seems that reliable transcription of genetic material is sufficiently significant for cells to have developed appropriate proofreading mechanisms, similar to those which assist replication. […] the entire information pathway — starting with DNA and ending with active proteins — is protected against errors. We can conclude that fallibility is an inherent property of genetic information channels, and that in order to perform their intended function, these channels require error correction mechanisms.”
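The zipper arithmetic above is easy to verify; here is a quick back-of-the-envelope check of my own (the numbers are the ones quoted in the excerpt, not additional data from the book):

base_pairs = 3e9            # approximate size of the human genome (quoted above)
pairs_per_meter = 300       # 3 interlocking 'teeth' per centimeter in the zipper analogy

length_m = base_pairs / pairs_per_meter
print(length_m / 1000, "km of zipper")          # 10,000 km

unzip_rate = 1.0                                 # meters per second
print(length_m / unzip_rate / 86400, "days")     # ~116 days, i.e. roughly the '3 months' quoted above

# Raw reliability of an average enzyme: one error per 1,000-10,000 interactions.
for error_rate in (1e-3, 1e-4):
    print(error_rate, "->", base_pairs * error_rate, "errors per genome copy")
# Without proofreading and repair this would mean some 300,000-3,000,000 mutations
# per replication, which is why the corrective mechanisms discussed above are needed.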

The discrete nature of genetic material is an important property which distinguishes prokaryotes from eukaryotes. […] The ability to select individual nucleotide fragments and construct sequences from predetermined “building blocks” results in high adaptability to environmental stimuli and is a fundamental aspect of evolution. The discontinuous nature of genes is evidenced by the presence of fragments which do not convey structural information (introns), as opposed to structure-encoding fragments (exons). The initial transcript (pre-mRNA) contains introns as well as exons. In order to provide a template for protein synthesis, it must undergo further processing (also known as splicing): introns must be cleaved and exon fragments attached to one another. […] Recognition of intron-exon boundaries is usually very precise, while the reattachment of adjacent exons is subject to some variability. Under certain conditions, alternative splicing may occur, where the ordering of the final product does not reflect the order in which exon sequences appear in the source chain. This greatly increases the number of potential mRNA combinations and thus the variety of resulting proteins. […] While access to energy sources is not a major problem, sources of information are usually far more difficult to manage — hence the universal tendency to limit the scope of direct (genetic) information storage. Reducing the length of genetic code enables efficient packing and enhances the efficiency of operations while at the same time decreasing the likelihood of errors. […] The number of genes identified in the human genome is lower than the number of distinct proteins by a factor of 4; a difference which can be attributed to alternative splicing. […] This mechanism increases the variety of protein structures without affecting core information storage, i.e. DNA sequences. […] Primitive organisms often possess nearly as many genes as humans, despite the essential differences between both groups. Interspecies diversity is primarily due to the properties of regulatory sequences.”

“The discontinuous nature of genes is evolutionarily advantageous but comes at the expense of having to maintain a nucleus where such splicing processes can be safely conducted, in addition to efficient transport channels allowing transcripts to penetrate the nuclear membrane. While it is believed that at early stages of evolution RNA was the primary repository of genetic information, its present function can best be described as an information carrier. Since unguided proteins cannot ensure sufficient specificity of interaction with nucleic acids, protein-RNA complexes are often used in cases where specific fragments of genetic information need to be read. […] The use of RNA in protein complexes is common across all domains of the living world as it bridges the gap between discrete and continuous storage of genetic information.”

Epigenetic differentiation mechanisms are particularly important in embryonic development. […] Unlike the function of mature organisms, embryonic programming refers to structures which do not yet exist but which need to be created through cell proliferation and differentiation. […] Differentiation of cells results in phenotypic changes. This phenomenon is the primary difference between development genetics and steady-state genetics. Functional differences are not, however, associated with genomic changes: instead they are mediated by the transcriptome where certain genes are preferentially selected for transcription while others are suppressed. […] In a mature, specialized cell only a small portion of the transcribable genome is actually expressed. The remainder of the cell’s genetic material is said to be silenced. Gene silencing is a permanent condition. Under normal circumstances mature cells never alter their function, although such changes may be forced in a laboratory setting […] Cells which make up the embryo at a very early stage of development are pluripotent, meaning that their purpose can be freely determined and that all of their genetic information can potentially be expressed (under certain conditions). […] At each stage of the development process the scope of pluripotency is reduced until, ultimately, the cell becomes monopotent. Monopotency implies that the final function of the cell has already been determined, although the cell itself may still be immature. […] functional dissimilarities between specialized cells are not associated with genetic mutations but rather with selective silencing of genes. […] Most genes which determine biological functions have a biallelic representation (i.e. a representation consisting of two alleles). The remainder (approximately 10 % of genes) is inherited from one specific parent, as a result of partial or complete silencing of their sister alleles (called paternal or maternal imprinting) which occurs during gametogenesis. The suppression of a single copy of the X chromosome is a special case of this phenomenon.”

Evolutionary genetics is subject to two somewhat contradictory criteria. On the one hand, there is clear pressure on accurate and consistent preservation of biological functions and structures while on the other hand it is also important to permit gradual but persistent changes. […] the observable progression of adaptive traits which emerge as a result of evolution suggests a mechanism which promotes constructive changes over destructive ones. Mutational diversity cannot be considered truly random if it is limited to certain structures or functions. […] Approximately 50 % of the human genome consists of mobile segments, capable of migrating to various positions in the genome. These segments are called transposons and retrotransposons […] The mobility of genome fragments not only promotes mutations (by increasing the variability of DNA) but also affects the stability and packing of chromatin strands wherever such mobile sections are reintegrated with the genome. Under normal circumstances the activity of mobile sections is tempered by epigenetic mechanisms […]; however in certain situations gene mobility may be upregulated. In particular, it seems that in “prehistoric” (remote evolutionary) times such events occurred at a much faster pace, accelerating the rate of genetic changes and promoting rapid evolution. Cells can actively promote mutations by way of the so-called AID process (activity-dependent cytosine deamination). It is an enzymatic mechanism which converts cytosine into uracil, thereby triggering repair mechanisms and increasing the likelihood of mutations […] The existence of AID proves that cells themselves may trigger evolutionary changes and that the role of mutations in the emergence of new biological structures is not strictly passive.”

“Regulatory mechanisms which receive signals characterized by high degrees of uncertainty must be able to make informed choices to reduce the overall entropy of the system they control. This property is usually associated with development of information channels. Special structures ought to be exposed within information channels connecting systems of different character, for example those linking transcription to translation or enabling transduction of signals through the cellular membrane. Examples of structures which convey highly entropic information are receptor systems associated with blood coagulation and immune responses. The regulatory mechanism which triggers an immune response relies on relatively simple effectors (complement factor enzymes, phagocytes and killer cells) coupled to a highly evolved receptor system, represented by specific antibodies and an organized set of cells. Compared to such advanced receptors the structures which register the concentration of a given product (e.g. glucose in blood) are rather primitive. Advanced receptors enable the immune system to recognize and verify information characterized by high degrees of uncertainty. […] In sequential processes it is usually the initial stage which poses the most problems and requires the most information to complete successfully. It should come as no surprise that the most advanced control loops are those associated with initial stages of biological pathways.”

February 10, 2018 Posted by | Biology, Books, Chemistry, Evolutionary biology, Genetics, Immunology, Medicine, Molecular biology

Endocrinology (part 4 – reproductive endocrinology)

Some observations from chapter 4 of the book below.

“*♂. The whole process of spermatogenesis takes approximately 74 days, followed by another 12-21 days for sperm transport through the epididymis. This means that events which may affect spermatogenesis may not be apparent for up to three months, and successful induction of spermatogenesis treatment may take 2 years. *♀. From primordial follicle to primary follicle, it takes about 180 days (a continuous process). It is then another 60 days to form a preantral follicle which then proceeds to ovulation three menstrual cycles later. Only the last 2-3 weeks of this process is under gonadotrophin drive, during which time the follicle grows from 2 to 20mm.”

“Hirsutism (not a diagnosis in itself) is the presence of excess hair growth in ♀ as a result of androgen production and skin sensitivity to androgens. […] In ♀, testosterone is secreted primarily by the ovaries and adrenal glands, although a significant amount is produced by the peripheral conversion of androstenedione and DHEA. Ovarian androgen production is regulated by luteinizing hormone, whereas adrenal production is ACTH-dependent. The predominant androgens produced by the ovaries are testosterone and androstenedione, and the adrenal glands are the main source of DHEA. Circulating testosterone is mainly bound to sex hormone-binding globulin (SHBG), and it is the free testosterone which is biologically active. […] Slowly progressive hirsutism following puberty suggests a benign cause, whereas rapidly progressive hirsutism of recent onset requires further immediate investigation to rule out an androgen-secreting neoplasm. [My italics, US] […] Serum testosterone should be measured in all ♀ presenting with hirsutism. If this is <5nmol/L, then the risk of a sinister cause for her hirsutism is low.”

“Polycystic ovary syndrome (PCOS) *A heterogeneous clinical syndrome characterized by hyperandrogenism, mainly of ovarian origin, menstrual irregularity, and hyperinsulinaemia, in which other causes of androgen excess have been excluded […] *A distinction is made between polycystic ovary morphology on ultrasound (PCO, which also occurs in congenital adrenal hyperplasia, acromegaly, Cushing’s syndrome, and testosterone-secreting tumours) and PCOS – the syndrome. […] PCOS is the most common endocrinopathy in ♀ of reproductive age; >95% of ♀ presenting to outpatients with hirsutism have PCOS. *The estimated prevalence of PCOS ranges from 5 to 10% on clinical criteria. Polycystic ovaries on US alone are present in 20-25% of ♀ of reproductive age. […] family history of type 2 diabetes mellitus is […] more common in ♀ with PCOS. […] Approximately 70% of ♀ with PCOS are insulin-resistant, depending on the definition. […] Type 2 diabetes mellitus is 2-4 x more common in ♀ with PCOS. […] Hyperinsulinaemia is exacerbated by obesity but can also be present in lean ♀ with PCOS. […] Insulin […] inhibits SHBG synthesis by the liver, with a consequent rise in free androgen levels. […] Symptoms often begin around puberty, after weight gain, or after stopping the oral contraceptive pill […] Oligo-/amenorrhoea [is present in] 70% […] Hirsutism [is present in] 66% […] Obesity [is present in] 50% […] *Infertility (30%). PCOS accounts for 75% of cases of anovulatory infertility. The risk of spontaneous miscarriage is also thought to be higher than the general population, mainly because of obesity. […] The aims of investigations [of PCOS] are mainly to exclude serious underlying disorders and to screen for complications, as the diagnosis is primarily clinical […] Studies have uniformly shown that weight reduction in obese ♀ with PCOS will improve insulin sensitivity and significantly reduce hyperandrogenaemia. Obese ♀ are less likely to respond to antiandrogens and infertility treatment.”

“Androgen-secreting tumours [are] [r]are tumours of the ovary or adrenal gland which may be benign or malignant, which cause virilization in ♀ through androgen production. […] Virilization […] [i]ndicates severe hyperandrogenism, is associated with clitoromegaly, and is present in 98% of ♀ with androgen-producing tumours. Not usually a feature of PCOS. […] Androgen-secreting ovarian tumours[:] *75% develop before the age of 40 years. *Account for 0.4% of all ovarian tumours; 20% are malignant. *Tumours are 5-25cm in size. The larger they are, the more likely they are to be malignant. They are rarely bilateral. […] Androgen-secreting adrenal tumours[:] *50% develop before the age of 50 years. *Larger tumours […] are more likely to be malignant. *Usually with concomitant cortisol secretion as a variant of Cushing’s syndrome. […] Symptoms and signs of Cushing’s syndrome are present in many of ♀ with adrenal tumours. […] Onset of symptoms. Usually recent onset of rapidly progressive symptoms. […] Malignant ovarian and adrenal androgen-secreting tumours are usually resistant to chemotherapy and radiotherapy. […] *Adrenal tumours. 20% 5-year survival. Most have metastatic disease at the time of surgery. *Ovarian tumours. 30% disease-free survival and 40% overall survival at 5 years. […] Benign tumours. *Prognosis excellent. *Hirsutism improves post-operatively, but clitoromegaly, male pattern balding, and deep voice may persist.”

“*Oligomenorrhoea is defined as the reduction in the frequency of menses to <9 periods a year. *1° amenorrhoea is the failure of menarche by the age of 16 years. Prevalence ~0.3%. *2° amenorrhoea refers to the cessation of menses for >6 months in ♀ who had previously menstruated. Prevalence ~3%. […] Although the list of causes is long […], the majority of cases of secondary amenorrhoea can be accounted for by four conditions: *Polycystic ovary syndrome. *Hypothalamic amenorrhoea. *Hyperprolactinaemia. *Ovarian failure. […] PCOS is the only common endocrine cause of amenorrhoea with normal oestrogenization – all other causes are oestrogen-deficient. Women with PCOS, therefore, are at risk of endometrial hyperplasia, and all others are at risk of osteoporosis. […] Anosmia may indicate Kallmann’s syndrome. […] In routine practice, a common differential diagnosis is between a mild version of PCOS and hypothalamic amenorrhoea. The distinction between these conditions may require repeated testing, as a single snapshot may not discriminate. The reason to be precise is that PCOS is oestrogen-replete and will, therefore, respond to clomiphene citrate (an antioestrogen) for fertility. HA will be oestrogen-deficient and will need HRT and ovulation induction with pulsatile GnRH or hMG [human Menopausal Gonadotropins – US]. […] 75% of ♀ who develop 2° amenorrhoea report hot flushes, night sweats, mood changes, fatigue, or dyspareunia; symptoms may precede the onset of menstrual disturbances.”

“POI [Premature Ovarian Insufficiency] is a disorder characterized by amenorrhoea, oestrogen deficiency, and elevated gonadotrophins, developing in ♀ <40 years, as a result of loss of ovarian follicular function. […] *Incidence – 0.1% of ♀ <30 years and 1% of those <40 years. *Accounts for 10% of all cases of 2° amenorrhoea. […] POI is the result of accelerated depletion of ovarian germ cells. […] POI is usually permanent and progressive, although a remitting course is also experienced and cannot be fully predicted, so all women must know that pregnancy is possible, even though fertility treatments are not effective (often a difficult paradox to describe). Spontaneous pregnancy has been reported in 5%. […] 80% of [women with Turner’s syndrome] have POI. […] All ♀ presenting with hypergonadotrophic amenorrhoea below age 40 should be karyotyped.”

“The menopause is the permanent cessation of menstruation as a result of ovarian failure and is a retrospective diagnosis made after 12 months of amenorrhoea. The average age at the time of the menopause is ~50 years, although smokers reach the menopause ~2 years earlier. […] Cycles gradually become increasingly anovulatory and variable in length (often shorter) from about 4 years prior to the menopause. Oligomenorrhoea often precedes permanent amenorrhoea. In 10% of ♀, menses cease abruptly, with no preceding transitional period. […] During the perimenopausal period, there is an accelerated loss of bone mineral density (BMD), rendering post-menopausal ♀ more susceptible to osteoporotic fractures. […] Post-menopausal ♀ are 2-3 x more likely to develop IHD [ischaemic heart disease] than premenopausal ♀, even after age adjustments. The menopause is associated with an increase in risk factors for atherosclerosis, including less favourable lipid profile, insulin sensitivity, and an ↑ thrombotic tendency. […] ♀ are 2-3 x more likely to develop Alzheimer’s disease than ♂. It is suggested that oestrogen deficiency may play a role in the development of dementia. […] The aim of treatment of perimenopausal ♀ is to alleviate menopausal symptoms and optimize quality of life. The majority of women with mild symptoms require no HRT. […] There is an ↑ risk of breast cancer in HRT users which is related to the duration of use. The risk increases by 35% following 5 years of use (over the age of 50), and falls to never-used risk 5 years after discontinuing HRT. For ♀ aged 50 not using HRT, about 45 in every 1,000 will have cancer diagnosed over the following 20 years. This number increases to 47/1,000 ♀ using HRT for 5 years, 51/1,000 using HRT for 10 years, and 57/1,000 after 15 years of use. The risk is highest in ♀ on combined HRT compared with oestradiol alone. […] Oral HRT increases the risk [of venous thromboembolism] approximately 3-fold, resulting in an extra two cases/10,000 women-years. This risk is markedly ↑ in ♀ who already have risk factors for DVT, including previous DVT, cardiovascular disease, and within 90 days of hospitalization. […] Data from >30 observational studies suggest that HRT may reduce the risk of developing CVD [cardiovascular disease] by up to 50%. However, randomized placebo-controlled trials […] have failed to show that HRT protects against IHD. Currently, HRT should not be prescribed to prevent cardiovascular disease.”
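A quick bit of arithmetic of my own on the absolute risk figures quoted above (nothing below comes from the handbook beyond the quoted numbers):

baseline_per_1000 = 45                # never-users aged 50: cancers diagnosed over the following 20 years
with_hrt = {5: 47, 10: 51, 15: 57}    # years of HRT use -> cases per 1,000 (figures quoted above)

for years, cases in with_hrt.items():
    print(f"{years} years of HRT: ~{cases - baseline_per_1000} extra case(s) per 1,000 women")
# -> roughly 2, 6 and 12 extra cases per 1,000, respectively

# The VTE figure works similarly: if a ~3-fold relative risk corresponds to ~2 extra
# cases per 10,000 women-years, the implied baseline is on the order of
# 2 / (3 - 1) = 1 case per 10,000 women-years.
print(2 / (3 - 1), "implied baseline VTE cases per 10,000 women-years")

So although the relative increases sound substantial, the absolute excess breast cancer risk implied by the quoted figures is on the order of 2-12 extra cases per 1,000 women, depending on duration of use.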

“Any chronic illness may affect testicular function, in particular chronic renal failure, liver cirrhosis, and haemochromatosis. […] 25% of ♂ who develop mumps after puberty have associated orchitis, and 25-50% of these will develop 1° testicular failure. […] Alcohol excess will also cause 1° testicular failure. […] Cytotoxic drugs, particularly alkylating agents, are gonadotoxic. Infertility occurs in 50% of patients following chemotherapy, and a significant number of ♂ require androgen replacement therapy because of low testosterone levels. […] Testosterone has direct anabolic effects on skeletal muscle and has been shown to increase muscle mass and strength when given to hypogonadal men. Lean body mass is also increased, with a reduction in fat mass. […] Hypogonadism is a risk factor for osteoporosis. Testosterone inhibits bone resorption, thereby reducing bone turnover. Its administration to hypogonadal ♂ has been shown to improve bone mineral density and reduce the risk of developing osteoporosis. […] *Androgens stimulate prostatic growth, and testosterone replacement therapy may therefore induce symptoms of bladder outflow obstruction in ♂ with prostatic hypertrophy. *It is unlikely that testosterone increases the risk of developing prostate cancer, but it may promote the growth of an existing cancer. […] Testosterone replacement therapy may cause a fall in both LDL and HDL cholesterol levels, the significance of which remains unclear. The effect of androgen replacement therapy on the risk of developing coronary artery disease is unknown.”

“Erectile dysfunction [is] [t]he consistent inability to achieve or maintain an erect penis sufficient for satisfactory sexual intercourse. Affects approximately 10% of ♂ and >50% of ♂ >70 years. […] Erectile dysfunction may […] occur as a result of several mechanisms: *Neurological damage. *Arterial insufficiency. *Venous incompetence. *Androgen deficiency. *Penile abnormalities. […] *Abrupt onset of erectile dysfunction which is intermittent is often psychogenic in origin. *Progressive and persistent dysfunction indicates an organic cause. […] Absence of morning erections suggests an organic cause of erectile dysfunction.”

“*Infertility, defined as failure of pregnancy after 1 year of unprotected regular (2 x week) sexual intercourse, affects ~10% of all couples. *Couples who fail to conceive after 1 year of regular unprotected sexual intercourse should be investigated. […] Causes[:] *♀ factors (e.g. PCOS, tubal damage) 35%. *♂ factors (idiopathic gonadal failure in 60%) 25%. *Combined factors 25%. *Unexplained infertility 15%. […] [♀] Fertility declines rapidly after the age of 36 years. […] Each episode of acute PID causes infertility in 10-15% of cases. *Chlamydia trachomatis is responsible for half the cases of PID in developed countries. […] Unexplained infertility [is] [i]nfertility despite normal sexual intercourse occurring at least twice weekly, normal semen analysis, documentation of ovulation in several cycles, and normal patent tubes (by laparoscopy). […] 30-50% will become pregnant within 3 years of expectant management. If not pregnant by then, chances that spontaneous pregnancy will occur are greatly reduced, and ART should be considered. In ♀ >34 years of age, expectant management is not an option, and up to six cycles of IUI or IVF should be considered.”

February 9, 2018 Posted by | Books, Cancer/oncology, Cardiology, Diabetes, Genetics, Medicine, Pharmacology