Econstudentlog

A few diabetes papers of interest

i. Identical and Nonidentical Twins: Risk and Factors Involved in Development of Islet Autoimmunity and Type 1 Diabetes.

Some observations from the paper:

“Type 1 diabetes is preceded by the presence of preclinical, persistent islet autoantibodies (1). Autoantibodies against insulin (IAA) (2), GAD (GADA), insulinoma-associated antigen 2 (IA-2A) (3), and/or zinc transporter 8 (ZnT8A) (4) are typically present prior to development of symptomatic hyperglycemia and progression to clinical disease. These autoantibodies may develop many years before onset of type 1 diabetes, and increasing autoantibody number and titers have been associated with increased risk of progression to disease (5–7).

Identical twins have an increased risk of progression of islet autoimmunity and type 1 diabetes after one twin is diagnosed, although reported rates have been highly variable (30–70%) (8–11). This risk is increased if the proband twin develops diabetes at a young age (12). Concordance rates for type 1 diabetes in monozygotic twins with long-term follow-up are >50% (13). Risk for development of islet autoimmunity and type 1 diabetes for nonidentical twins is thought to be similar to non-twin siblings (risk of 6–10% for diabetes) (14). Full siblings who inherit both high-risk HLA (HLA DQA1*05:01 DR3/4*0302) haplotypes identical to their proband sibling with type 1 diabetes have a much higher risk for development of diabetes than those who share only one or zero haplotypes (55% vs. 5% by 12 years of age, respectively; P = 0.03) (15). Despite sharing both HLA haplotypes with their proband, siblings without the HLA DQA1*05:01 DR3/4*0302 genotype had only a 25% risk for type 1 diabetes by 12 years of age (15).”

“The TrialNet Pathway to Prevention Study (previously the TrialNet Natural History Study; 16) has been screening relatives of patients with type 1 diabetes since 2004 and follows these subjects with serial autoantibody testing for the development of islet autoantibodies and type 1 diabetes. The study offers longitudinal monitoring for autoantibody-positive subjects through HbA1c testing and oral glucose tolerance tests (OGTTs).”

“The purpose of this study was to evaluate the prevalence of islet autoantibodies and analyze a logistic regression model to test the effects of genetic factors and common twin environment on the presence or absence of islet autoantibodies in identical twins, nonidentical twins, and full siblings screened in the TrialNet Pathway to Prevention Study. In addition, this study analyzed the presence of islet autoantibodies (GADA, IA-2A, and IAA) and risk of type 1 diabetes over time in identical twins, nonidentical twins, and full siblings followed in the TrialNet Pathway to Prevention Study. […] A total of 48,051 sibling subjects were initially screened (288 identical twins, 630 nonidentical twins, and 47,133 full siblings). Of these, 48,026 had an initial screening visit with GADA, IA2A, and IAA results (287 identical twins, 630 nonidentical twins, and 47,109 full siblings). A total of 17,226 participants (157 identical twins, 283 nonidentical twins, and 16,786 full siblings) were followed for a median of 2.1 years (25th percentile 1.1 years and 75th percentile 4.0 years), with follow-up defined as at least 12 months of follow-up after the initial screening visit.”

“At the initial screening visit, GADA was present in 20.2% of identical twins (58 out of 287), 5.6% of nonidentical twins (35 out of 630), and 4.7% of full siblings (2,205 out of 47,109) (P < 0.0001). Additionally, IA-2A was present primarily in identical twins (9.4%; 27 out of 287) and less so in nonidentical twins (3.3%; 21 out of 630) and full siblings (2.2%; 1,042 out of 47,109) (P = 0.0001). Nearly 12% of identical twins (34 out of 287) were positive for IAA at initial screen, whereas 4.6% of nonidentical twins (29 out of 630) and 2.5% of full siblings (1,152 out of 47,109) were initially IAA positive (P < 0.0001).”
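A quick aside from me: the group differences above are simple enough to sanity-check from the quoted counts alone. The sketch below runs a chi-square test of independence on the GADA numbers (58/287 identical twins, 35/630 nonidentical twins, 2,205/47,109 full siblings) in Python; the choice of test is my own assumption for illustration – the paper may well have used a different procedure.

# Chi-square test of GADA positivity at initial screening, using the counts
# quoted above. The test choice is an assumption; the paper itself may have
# used a different method.
from scipy.stats import chi2_contingency

positive = [58, 35, 2205]
screened = [287, 630, 47109]
table = [[p, n - p] for p, n in zip(positive, screened)]  # [GADA+, GADA-] per group

chi2, p_value, dof, expected = chi2_contingency(table)
for group, p, n in zip(["identical twins", "nonidentical twins", "full siblings"], positive, screened):
    print(f"{group}: {100 * p / n:.1f}% GADA positive")
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.2e}")

This reproduces the 20.2%/5.6%/4.7% figures and a P value far below 0.0001, consistent with the quote.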

“At 3 years of follow-up, the risk for development of GADA was 16% for identical twins, 5% for nonidentical twins, and 4% for full siblings (P < 0.0001) (Fig. 1A). The risk for development of IA-2A by 3 years of follow-up was 7% for identical twins, 4% for nonidentical twins, and 2% for full siblings (P = 0.0005) (Fig. 1B). At 3 years of follow-up, the risk of development of IAA was 10% for identical twins, 5% for nonidentical twins, and 4% for full siblings (P = 0.006) […] In initially autoantibody-negative subjects, 1.5% of identical twins, 0% of nonidentical twins, and 0.5% of full siblings progressed to diabetes at 3 years of follow-up (P = 0.18) […] For initially single autoantibody–positive subjects, at 3 years of follow-up, 69% of identical twins, 13% of nonidentical twins, and 12% of full siblings developed type 1 diabetes (P < 0.0001) […] Subjects who were positive for multiple autoantibodies at screening had a higher risk of developing type 1 diabetes at 3 years of follow-up with 69% of identical twins, 72% of nonidentical twins, and 47% of full siblings developing type 1 diabetes (P = 0.079)”

“Because TrialNet is not a birth cohort and the median age at screening visit was 11 years overall, this study would not capture subjects who had initial seroconversion at a young age and then progressed through the intermediate stage of multiple antibody positivity before developing diabetes.”

“This study of >48,000 siblings of patients with type 1 diabetes shows that at initial screening, identical twins were more likely to have at least one positive autoantibody and be positive for GADA, IA-2A, and IAA than either nonidentical twins or full siblings. […] risk for development of type 1 diabetes at 3 years of follow-up was high for both single and multiple autoantibody–positive identical twins (62–69%) and multiple autoantibody–positive nonidentical twins (72%) compared with 47% for initially multiple autoantibody–positive full siblings and 12–13% for initially single autoantibody–positive nonidentical twins and full siblings. To our knowledge, this is the largest prediagnosis study to evaluate the effects of genetic factors and common twin environment on the presence or absence of islet autoantibodies.

In this study, younger age, male sex, and genetic factors were significantly associated with expression of IA-2A, IAA, more than one autoantibody, and more than two autoantibodies, whereas only genetic factors were significant for GADA. An influence of common twin environment (E) was not seen. […] Previous studies have shown that identical twin siblings of patients with type 1 diabetes have a higher concordance rate for development of type 1 diabetes compared with nonidentical twins, although reported rates for identical twins have been highly variable (30–70%) […]. Studies from various countries (Australia, Denmark, Finland, Great Britain, and U.S.) have reported concordance rates for nonidentical twins ∼5–15% […]. Concordance rates have been higher when the proband was diagnosed at a younger age (8), which may explain the variability in these reported rates. In this study, autoantibody-negative nonidentical and identical twins had a low risk of type 1 diabetes by 3 years of follow-up. In contrast, once twins developed autoantibodies, risk for type 1 diabetes was high for multiple autoantibody nonidentical twins and both single and multiple autoantibody identical twins.”

ii. A Type 1 Diabetes Genetic Risk Score Can Identify Patients With GAD65 Autoantibody–Positive Type 2 Diabetes Who Rapidly Progress to Insulin Therapy.

This is another paper from the February edition of Diabetes Care – multiple other papers on related topics were also included in that edition, so if you’re interested in the genetics of diabetes it may be worth checking out.

Some observations from the paper:

“Type 2 diabetes is a progressive disease due to a gradual reduction in the capacity of the pancreatic islet cells (β-cells) to produce insulin (1). The clinical course of this progression is highly variable, with some patients progressing very rapidly to requiring insulin treatment, whereas others can be successfully treated with lifestyle changes or oral agents for many years (1,2). Being able to identify patients likely to rapidly progress may have clinical utility in prioritizing monitoring and treatment escalation and in choice of therapy.

It has previously been shown that many patients with clinical features of type 2 diabetes have positive GAD65 autoantibodies (GADA) and that the presence of this autoantibody is associated with faster progression to insulin (3,4). This is often termed latent autoimmune diabetes in adults (LADA) (5,6). However, the predictive value of GADA testing is limited in a population with clinical type 2 diabetes, with many GADA-positive patients not requiring insulin treatment for many years (4,7). Previous research has suggested that genetic variants in the HLA region associated with type 1 diabetes are associated with more rapid progression to insulin in patients with clinically defined type 2 diabetes and positive GADA (8).

We have recently developed a type 1 diabetes genetic risk score (T1D GRS), which provides an inexpensive ($70 in our local clinical laboratory and <$20 where DNA has been previously extracted), integrated assessment of a person’s genetic susceptibility to type 1 diabetes (9). The score is composed of 30 type 1 diabetes risk variants weighted for effect size and aids discrimination of type 1 diabetes from type 2 diabetes. […] We aimed to determine if the T1D GRS could predict rapid progression to insulin (within 5 years of diagnosis) over and above GADA testing in patients with a clinical diagnosis of type 2 diabetes treated without insulin at diagnosis.”
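The score itself is conceptually just a weighted sum: for each risk variant, count the risk alleles a person carries and multiply by a weight reflecting that variant’s effect size (typically the log odds ratio), then add everything up. A toy Python sketch of that idea is below – the variant names, weights, and genotype are invented for illustration and are not the 30 variants of the published T1D GRS (which also handles the HLA genotypes in a more sophisticated way).

# Toy weighted genetic risk score: sum over variants of
# (risk-allele count) x (weight, e.g. the log odds ratio for that variant).
# Variants, weights, and the genotype below are made up for illustration;
# they are NOT the actual published 30-variant T1D GRS.
import math

weights = {                        # hypothetical effect sizes
    "HLA_DR3_DR4": math.log(6.8),
    "INS_rs689": math.log(2.4),
    "PTPN22_rs2476601": math.log(1.9),
}
genotype = {"HLA_DR3_DR4": 1, "INS_rs689": 2, "PTPN22_rs2476601": 0}  # risk-allele counts

grs = sum(weights[v] * genotype[v] for v in weights)
print(f"toy T1D GRS (3 variants): {grs:.2f}")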

“We examined the relationship between GADA, T1D GRS, and progression to insulin therapy using survival analysis in 8,608 participants with clinical type 2 diabetes initially treated without insulin therapy. […] In this large study of participants with a clinical diagnosis of type 2 diabetes, we have found that type 1 genetic susceptibility alters the clinical implications of a positive GADA when predicting rapid time to insulin. GADA-positive participants with high T1D GRS were more likely to require insulin within 5 years of diagnosis, with 48% progressing to insulin in this time in contrast to only 18% in participants with low T1D GRS. The T1D GRS was independent of and additive to participants’ age at diagnosis and BMI. However, T1D GRS was not associated with rapid insulin requirement in participants who were GADA negative.”
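For readers who want to run this kind of survival analysis on their own data, the core quantity – the Kaplan-Meier estimate of the probability of remaining insulin-free at a given time – is easy to compute from first principles. A bare-bones sketch is below; the durations and event indicators are invented, and a real analysis would also want confidence intervals plus a log-rank test or a regression model.

# Minimal Kaplan-Meier estimator. 'durations' are years from diagnosis to
# insulin start or to censoring; 'events' marks whether insulin was actually
# started. All numbers are invented for illustration.
import numpy as np

durations = np.array([0.8, 1.5, 2.0, 3.2, 4.0, 4.5, 5.0, 5.0, 2.7, 3.9])
events    = np.array([1,   1,   0,   1,   0,   1,   0,   0,   1,   0  ])

def km_survival(durations, events, t):
    """Kaplan-Meier estimate of the probability of still being event-free at time t."""
    surv = 1.0
    for time in np.sort(np.unique(durations[events == 1])):
        if time > t:
            break
        at_risk = np.sum(durations >= time)
        failed = np.sum((durations == time) & (events == 1))
        surv *= 1 - failed / at_risk
    return surv

p_free = km_survival(durations, events, 5.0)
print(f"estimated probability of remaining insulin-free at 5 years: {p_free:.2f}")
print(f"i.e. {100 * (1 - p_free):.0f}% progressing to insulin within 5 years")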

“Our findings have clear implications for clinical practice. The T1D GRS represents a novel clinical test that can be used to enhance the prognostic value of GADA testing. For predicting future insulin requirement in patients with apparent type 2 diabetes who are GADA positive, T1D GRS may be clinically useful and can be used as an additional test in the screening process. However, in patients with type 2 diabetes who are GADA negative, there is no benefit gained from genetic testing. This is unsurprising, as the prevalence of underlying autoimmunity in patients with a clinical phenotype of type 2 diabetes who are GADA negative is likely to be extremely low; therefore, most GADA-negative participants with high T1D GRS will have nonautoimmune diabetes. The use of this two-step testing approach may facilitate a precision medicine approach to patients with apparent type 2 diabetes; patients who are likely to progress rapidly are identified for targeted management, which may include increased monitoring, early therapy intensification, and/or interventions aimed at slowing progression (36,37).

The costs of analyzing the T1D GRS are relatively modest and may fall further, as genetic testing is rapidly becoming less expensive (38). […] In conclusion, a T1D GRS alters the clinical implications of a positive GADA test in patients with clinical type 2 diabetes and is independent of and additive to clinical features. This therefore represents a novel test for identifying patients with rapid progression in this population.”

iii. Retinopathy and RAAS Activation: Results From the Canadian Study of Longevity in Type 1 Diabetes.

“Diabetic retinopathy is the most common cause of preventable blindness in individuals ages 20–74 years and is the most common vascular complication in type 1 and type 2 diabetes (1–3). On the basis of increasing severity, diabetic retinopathy is classified into nonproliferative diabetic retinopathy (NPDR), defined in early stages by the presence of microaneurysms, retinal vascular closure, and alteration, or proliferative diabetic retinopathy (PDR), defined by the growth of new aberrant blood vessels (neovascularization) susceptible to hemorrhage, leakage, and fibrosis (4). Diabetic macular edema (DME) can be present at any stage of retinopathy and is characterized by increased vascular permeability leading to retinal thickening.

Important risk factors for the development of retinopathy continue to be chronic hyperglycemia, hyperlipidemia, hypertension, and diabetes duration (5,6). Given the systemic nature of these risk factors, cooccurrence of retinopathy with other vascular complications is common in patients with diabetes.”

“A key pathway implicated in diabetes-related small-vessel disease is overactivation of neurohormones. Activation of the neurohormonal renin-angiotensin-aldosterone system (RAAS) pathway predominates in diabetes in response to hyperglycemia and sodium retention. The RAAS plays a pivotal role in regulating systemic BP through vasoconstriction and fluid-electrolyte homeostasis. At the tissue level, angiotensin II (ANGII), the principal mediator of the RAAS, is implicated in fibrosis, oxidative stress, endothelial damage, thrombosis, inflammation, and vascular remodeling. Of note, systemic RAAS blockers reduce the risk of progression of eye disease but not DKD [Diabetic Kidney Disease, US] in adults with type 1 diabetes with normoalbuminuria (12).

Several longitudinal epidemiologic studies of diabetic retinopathy have been completed in type 1 diabetes; however, few have studied the relationships between eye, nerve, and renal complications and the influence of RAAS activation after prolonged duration (≥50 years) in adults with type 1 diabetes. As a result, less is known about mechanisms that persist in diabetes-related microvascular complications after long-standing diabetes. Accordingly, in this cross-sectional analysis from the Canadian Study of Longevity in Type 1 Diabetes involving adults with type 1 diabetes for ≥50 years, our aims were to phenotype retinopathy stage and determine associations between the presence of retinopathy and other vascular complications. In addition, we examined the relationship between retinopathy stage and renal and systemic hemodynamic function, including arterial stiffness, at baseline and dynamically after RAAS activation with an infusion of exogenous ANGII.”

“Of the 75 participants, 12 (16%) had NDR [no diabetic retinopathy], 24 (32%) had NPDR, and 39 (52%) had PDR […]. At baseline, those with NDR had lower mean HbA1c compared with those with NPDR and PDR (7.4 ± 0.7% and 7.5 ± 0.9%, respectively; P for trend = 0.019). Of note, those with more severe eye disease (PDR) had lower systolic and diastolic BP values but a significantly higher urine albumin-to-creatinine ratio (UACR) […] compared with those with less severe eye disease (NPDR) or with NDR despite higher use of RAAS inhibitors among those with PDR compared with NPDR or NDR. A history of cardiovascular and peripheral vascular disease was significantly more common in participants with PDR (33.3%) than in those with NPDR (8.3%) or NDR (0%). Diabetic sensory polyneuropathy was prevalent across all groups irrespective of retinopathy status but was numerically higher in the PDR group (95%) than in the NPDR (86%) or NDR (75%) groups. No significant differences were observed in retinal thickness across the three groups.”

One quick note: This was mainly an eye study, but some of the other figures here are well worth taking note of. 3 out of 4 people in the supposedly low-risk group without eye complications had sensory polyneuropathy after 50 years of diabetes.

Conclusions

Hyperglycemia contributes to the pathogenesis of diabetic retinopathy through multiple interactive pathways, including increased production of advanced glycation end products, IGF-I, vascular endothelial growth factor, endothelin, nitric oxide, oxidative damage, and proinflammatory cytokines (29–33). Overactivation of the RAAS in response to hyperglycemia also is implicated in the pathogenesis of diabetes-related complications in the retina, nerves, and kidney and is an important therapeutic target in type 1 diabetes. Despite what is known about these underlying pathogenic mechanisms in the early development of diabetes-related complications, whether the same mechanisms are active in the setting of long-standing type 1 diabetes is not known. […] In this study, we observed that participants with PDR were more likely to be taking RAAS inhibitors, to have a higher frequency of cardiovascular or peripheral vascular disease, and to have higher UACR levels, likely reflecting the higher overall risk profile of this group. Although it is not possible to determine why some patients in this cohort developed PDR while others did not after similar durations of type 1 diabetes, it seems unlikely that glycemic control alone is sufficient to fully explain the observed between-group differences and differing vascular risk profiles. Whereas the NDR group had significantly lower mean HbA1c levels than the NPDR and PDR groups, differences between participants with NPDR and those with PDR were modest. Accordingly, other factors, such as differences in vascular function, neurohormones, growth factors, genetics, and lifestyle, may play a role in determining retinopathy severity at the individual level.

The association between retinopathy and risk for DKD is well established in diabetes (34). In the setting of type 2 diabetes, patients with high levels of UACR have twice the risk of developing diabetic retinopathy compared with those with normal UACR levels. For example, Rodríguez-Poncelas et al. (35) demonstrated that impaired renal function is linked with increased diabetic retinopathy risk. Consistent with these studies and others, the PDR group in this Canadian Study of Longevity in Type 1 Diabetes demonstrated significantly higher UACR, which is associated with an increased risk of DKD progression, illustrating that the interaction between eye and kidney disease progression also may exist in patients with long-standing type 1 diabetes. […] In conclusion, retinopathy was prevalent after prolonged type 1 diabetes duration, and retinopathy severity was associated with several measures of neuropathy and with higher UACR. Differential exaggerated responses to RAAS activation in the peripheral vasculature of the PDR group highlight that even in the absence of DKD, neurohormonal abnormalities are likely still operant, and perhaps accentuated, in patients with PDR even after long-standing type 1 diabetes duration.”

iv. Clinical and MRI Features of Cerebral Small-Vessel Disease in Type 1 Diabetes.

“Type 1 diabetes is associated with a fivefold increased risk of stroke (1), with cerebral small-vessel disease (SVD) as the most common etiology (2). Cerebral SVD in type 1 diabetes, however, remains scarcely investigated and is challenging to study in vivo per se owing to the size of affected vasculature (3); instead, MRI signs of SVD are studied. In this study, we aimed to assess the prevalence of cerebral SVD in subjects with type 1 diabetes compared with healthy control subjects and to characterize diabetes-related variables associated with SVD in stroke-free people with type 1 diabetes.”

RESEARCH DESIGN AND METHODS This substudy was cross-sectional in design and included 191 participants with type 1 diabetes and median age 40.0 years (interquartile range 33.0–45.1) and 30 healthy age- and sex-matched control subjects. All participants underwent clinical investigation and brain MRIs, assessed for cerebral SVD.

RESULTS Cerebral SVD was more common in participants with type 1 diabetes than in healthy control subjects: any marker 35% vs. 10% (P = 0.005), cerebral microbleeds (CMBs) 24% vs. 3.3% (P = 0.008), white matter hyperintensities (WMHs) 17% vs. 6.7% (P = 0.182), and lacunes 2.1% vs. 0% (P = 1.000). Presence of CMBs was independently associated with systolic blood pressure (odds ratio 1.03 [95% CI 1.00–1.05], P = 0.035).”
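Worth keeping in mind when reading that last odds ratio: 1.03 per unit of systolic blood pressure looks tiny, but (assuming the unit is 1 mmHg, which is how I read it) it compounds multiplicatively over clinically relevant blood pressure differences:

# Compounding a per-unit odds ratio over a larger exposure difference:
# OR over k units = (per-unit OR) ** k. Assumes the 1.03 refers to 1 mmHg of
# systolic blood pressure (my reading of the abstract, not stated explicitly here).
or_per_mmhg = 1.03
for diff in (10, 20, 30):
    print(f"implied OR for a {diff} mmHg higher systolic BP: {or_per_mmhg ** diff:.2f}")

So on this reading a 20 mmHg difference corresponds to roughly 1.8 times higher odds of CMBs.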

Conclusions

Cerebral SVD is more common in participants with type 1 diabetes than in healthy control subjects. CMBs especially are more prevalent and are independently associated with hypertension. Our results indicate that cerebral SVD starts early in type 1 diabetes but is not explained solely by diabetes-related vascular risk factors or the generalized microvascular disease that takes place in diabetes (7).

There are only small-scale studies on cerebral SVD, especially CMBs, in type 1 diabetes. One study with diabetes characteristics similar to the current study (i.e., diabetes duration, glycemic control, and blood pressure levels), but lacking a control population, showed a higher prevalence of WMHs, with more than half of the participants affected, but a similar prevalence of lacunes and a lower prevalence of CMBs (8). In another study, including 67 participants with type 1 diabetes and 33 control subjects, there was no difference in WMH prevalence but a higher prevalence of CMBs in participants with type 1 diabetes and retinopathy compared with control subjects (9). […] In type 1 diabetes, albuminuria and systolic blood pressure independently increase the risk for both ischemic and hemorrhagic stroke (12). […] We conclude that cerebral SVD is more common in subjects with type 1 diabetes than in healthy control subjects. Future studies will focus on longitudinal development of SVD in type 1 diabetes and the associations with brain health and cognition.”

v. The Legacy Effect in Type 2 Diabetes: Impact of Early Glycemic Control on Future Complications (The Diabetes & Aging Study).

“In the U.S., an estimated 1.4 million adults are newly diagnosed with diabetes every year and present an important intervention opportunity for health care systems. In patients newly diagnosed with type 2 diabetes, the benefits of maintaining an HbA1c <7.0% (<53 mmol/mol) are well established. The UK Prospective Diabetes Study (UKPDS) found that a mean HbA1c of 7.0% (53 mmol/mol) lowers the risk of diabetes-related end points by 12–32% compared with a mean HbA1c of 7.9% (63 mmol/mol) (1,2). Long-term observational follow-up of this trial revealed that this early glycemic control has durable effects: Reductions in microvascular events persisted, reductions in cardiovascular events and mortality were observed 10 years after the trial ended, and HbA1c values converged (1). Similar findings were observed in the Diabetes Control and Complications Trial (DCCT) in patients with type 1 diabetes (2–4). These posttrial observations have been called legacy effects (also metabolic memory) (5), and they suggest the importance of early glycemic control for the prevention of future complications of diabetes. Although these clinical trial long-term follow-up studies demonstrated legacy effects, whether legacy effects exist in real-world populations, how soon after diabetes diagnosis legacy effects may begin, or for what level of glycemic control legacy effects may exist are not known.

In a previous retrospective cohort study, we found that patients with newly diagnosed diabetes and an initial 10-year HbA1c trajectory that was unstable (i.e., changed substantially over time) had an increased risk for future microvascular events, even after adjusting for HbA1c exposure (6). In the same cohort population, this study evaluates associations between the duration and intensity of glycemic control immediately after diagnosis and the long-term incidence of future diabetic complications and mortality. We hypothesized that a glycemic legacy effect exists in real-world populations, begins as early as the 1st year after diabetes diagnosis, and depends on the level of glycemic exposure.”

RESEARCH DESIGN AND METHODS This cohort study of managed care patients with newly diagnosed type 2 diabetes and 10 years of survival (1997–2013, average follow-up 13.0 years, N = 34,737) examined associations between HbA1c <6.5% (<48 mmol/mol), 6.5% to <7.0% (48 to <53 mmol/mol), 7.0% to <8.0% (53 to <64 mmol/mol), 8.0% to <9.0% (64 to <75 mmol/mol), or ≥9.0% (≥75 mmol/mol) for various periods of early exposure (0–1, 0–2, 0–3, 0–4, 0–5, 0–6, and 0–7 years) and incident future microvascular (end-stage renal disease, advanced eye disease, amputation) and macrovascular (stroke, heart disease/failure, vascular disease) events and death, adjusting for demographics, risk factors, comorbidities, and later HbA1c.
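To make the modelling a bit more concrete, here is a hedged sketch of the general kind of adjusted time-to-event analysis described above, using the Python lifelines package. The data frame, column names, covariates, and all numbers are hypothetical stand-ins of my own, not the study’s actual variables or model specification.

# Sketch of a Cox proportional hazards model relating early-exposure HbA1c
# categories to time to a complication, adjusting for other covariates.
# Everything below (columns, values) is a hypothetical stand-in for
# illustration; it is not the study's data or exact model.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years_followed": [12.1, 8.4, 13.0, 10.2, 14.5, 11.7, 9.8, 13.6, 7.9, 12.8, 10.5, 11.2],
    "event":          [1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1],   # complication observed?
    "hba1c_65_to_70": [0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1],   # early HbA1c 6.5 to <7.0%
    "hba1c_70_plus":  [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0],   # early HbA1c >=7.0%
    "age_at_dx":      [54, 61, 47, 58, 50, 63, 55, 44, 66, 52, 49, 59],
    "baseline_bmi":   [31.0, 28.5, 34.2, 29.9, 27.1, 33.0, 30.2, 26.4, 35.1, 29.0, 27.8, 31.6],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="event")
cph.print_summary()   # exp(coef) column gives the hazard ratio for each covariate

The reference category here (both HbA1c indicators zero) would correspond to the <6.5% group, mirroring the comparison reported in the RESULTS paragraph below.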

RESULTS Compared with HbA1c <6.5% (<48 mmol/mol) for the 0-to-1-year early exposure period, HbA1c levels ≥6.5% (≥48 mmol/mol) were associated with increased microvascular and macrovascular events (e.g., HbA1c 6.5% to <7.0% [48 to <53 mmol/mol] microvascular: hazard ratio 1.204 [95% CI 1.063–1.365]), and HbA1c levels ≥7.0% (≥53 mmol/mol) were associated with increased mortality (e.g., HbA1c 7.0% to <8.0% [53 to <64 mmol/mol]: 1.290 [1.104–1.507]). Longer periods of exposure to HbA1c levels ≥8.0% (≥64 mmol/mol) were associated with increasing microvascular event and mortality risk.

CONCLUSIONS Among patients with newly diagnosed diabetes and 10 years of survival, HbA1c levels ≥6.5% (≥48 mmol/mol) for the 1st year after diagnosis were associated with worse outcomes. Immediate, intensive treatment for newly diagnosed patients may be necessary to avoid irremediable long-term risk for diabetic complications and mortality.”

Do note that the effect sizes here are very large and this stuff seems really quite important. Judging from the results of this study, if you’re newly diagnosed and only achieve an HbA1c of, say, 7.3% in the first year, that may translate into a close to 30% increase in the risk of death more than 10 years into the future, compared with a scenario in which you had achieved an HbA1c of 6.3%. People who did not get their HbA1c measured within the first 3 months after diagnosis had a more than 20% increased risk of mortality during the study period. This seems like critical stuff to get right.

vi. Event Rates and Risk Factors for the Development of Diabetic Ketoacidosis in Adult Patients With Type 1 Diabetes: Analysis From the DPV Registry Based on 46,966 Patients.

“Diabetic ketoacidosis (DKA) is a life-threatening complication of type 1 diabetes mellitus (T1DM) that results from absolute insulin deficiency and is marked by acidosis, ketosis, and hyperglycemia (1). Therefore, prevention of DKA is one goal in T1DM care, but recent data indicate increased incidence (2).

For adult patients, only limited data are available on rates and risk factors for development of DKA, and this complication remains epidemiologically poorly characterized. The Diabetes Prospective Follow-up Registry (DPV) has followed patients with diabetes from 1995. Data for this study were collected from 2000 to 2016. Inclusion criteria were diagnosis of T1DM, age at diabetes onset ≥6 months, patient age at follow-up ≥18 years, and diabetes duration ≥1 year to exclude DKA at manifestation. […] In total, 46,966 patients were included in this study (average age 38.5 years [median 21.2], 47.6% female). The median HbA1c was 7.7% (61 mmol/mol), median diabetes duration was 13.6 years, and 58.3% of the patients were treated in large diabetes centers.

On average, 2.5 DKA-related hospital admissions per 100 patient-years (PY) were observed (95% CI 2.1–3.0). The rate was highest in patients aged 18–30 years (4.03/100 PY) and gradually declined with increasing age […] No significant differences between males (2.46/100 PY) and females (2.59/100 PY) were found […] Patients with HbA1c levels <7% (53 mmol/mol) had significantly fewer DKA admissions than patients with HbA1c ≥9% (75 mmol/mol) (0.88/100 PY vs. 6.04/100 PY; P < 0.001)”
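A brief note on the units: “2.5 per 100 patient-years” is simply the number of admissions divided by the total follow-up time accumulated across all patients, scaled to 100 years of observation. A minimal sketch of that calculation with an exact Poisson confidence interval is below; the event count and follow-up total are invented, and the paper’s own confidence interval was presumably derived from its regression model rather than this way.

# Event rate per 100 patient-years with an exact (Garwood/Poisson) 95% CI.
# The counts are invented for illustration; the paper's CI likely comes from
# a regression model instead.
from scipy.stats import chi2

events = 250             # hypothetical number of DKA admissions
patient_years = 10000.0  # hypothetical total follow-up time

rate = 100 * events / patient_years
lower = 100 * chi2.ppf(0.025, 2 * events) / (2 * patient_years)
upper = 100 * chi2.ppf(0.975, 2 * (events + 1)) / (2 * patient_years)
print(f"{rate:.2f} DKA admissions per 100 PY (95% CI {lower:.2f}-{upper:.2f})")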

“Regarding therapy, use of an insulin pump (continuous subcutaneous insulin infusion [CSII]) was not associated with higher DKA rates […], while patients aged 31–50 years on CSII showed lower rates than patients using multiple daily injections (2.21 vs. 3.12/100 PY; adjusted P < 0.05) […]. Treatment in a large center was associated with lower DKA-related hospital admissions […] In both adults and children, poor metabolic control was the strongest predictor of hospital admission due to DKA. […] In conclusion, the results of this study identify patients with T1DM at risk for DKA (high HbA1c, diabetes duration 5–10 years, migrants, age 30 years and younger) in real-life diabetes care. These at-risk individuals may need specific attention since structured diabetes education has been demonstrated to specifically reduce and prevent this acute complication.”

August 13, 2019 Posted by | Cardiology, Diabetes, Genetics, Immunology, Medicine, Molecular biology, Nephrology, Neurology, Ophthalmology, Studies

Learning Phylogeny Through Simple Statistical Genetics

From a brief skim I concluded that a lot of the stuff Patterson talks about in this lecture, particularly in the concepts and methods part (…which, as he also notes in his introduction, makes up a substantial proportion of the talk), is covered in this Ancient Admixture in Human History paper he coauthored, so if you’re either curious to know more or just wondering what the talk might be about, that paper is probably worth checking out. In the latter case I would also recommend watching the first few minutes of the video; he provides a very informative outline of the talk in the first four and a half minutes.

A few other links of relevance:

Martingale (probability theory).
GitHub – DReichLab/AdmixTools.
Human Genome Diversity Project.
Jackknife resampling.
Ancient North Eurasian.
Upper Palaeolithic Siberian genome reveals dual ancestry of Native Americans (Raghavan et al., 2014).
General theory for stochastic admixture graphs and F-statistics. This one is only very slightly related to the talk; I came across it while looking for stuff about admixture graphs, a topic he does briefly discuss in the lecture.
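Tying a couple of the links above together: the f-statistics Patterson discusses are, at bottom, averages over SNPs of products of allele-frequency differences, with standard errors obtained by a block jackknife (blocks of neighbouring SNPs are dropped one at a time so that the uncertainty estimate respects linkage between nearby sites). Here is a toy numpy sketch of that idea on simulated allele frequencies – not real data, and a simplification of what AdmixTools actually does:

# Toy f4 statistic with a block-jackknife standard error on simulated allele
# frequencies. f4(A,B;C,D) is the mean over SNPs of (pA - pB) * (pC - pD);
# the jackknife drops one contiguous block of SNPs at a time so that the SE
# allows for local correlation (linkage) between sites. Simulated data only.
import numpy as np

rng = np.random.default_rng(0)
n_snps, block_size = 5000, 100
base = rng.uniform(0.05, 0.95, n_snps)              # shared ancestral frequencies
pA = np.clip(base + rng.normal(0, 0.02, n_snps), 0, 1)
pB = np.clip(base + rng.normal(0, 0.02, n_snps), 0, 1)
pC = np.clip(base + rng.normal(0, 0.02, n_snps), 0, 1)
pD = np.clip(base + rng.normal(0, 0.02, n_snps), 0, 1)

per_snp = (pA - pB) * (pC - pD)
f4 = per_snp.mean()

blocks = np.array_split(np.arange(n_snps), n_snps // block_size)
loo = np.array([np.delete(per_snp, b).mean() for b in blocks])   # leave-one-block-out
m = len(blocks)
se = np.sqrt((m - 1) / m * np.sum((loo - loo.mean()) ** 2))

print(f"f4 = {f4:.5f}, block-jackknife SE = {se:.5f}, Z = {f4 / se:.2f}")

With independent drift in the four simulated populations, f4 should be statistically indistinguishable from zero; a clearly nonzero Z score is what signals shared drift (admixture) in the real analyses.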

July 29, 2019 Posted by | Archaeology, Biology, Genetics, Lectures, Molecular biology, Statistics

Viruses

This book is not great, but it’s also not bad – I ended up giving it three stars on goodreads, though it was much closer to 2 stars than 4. It’s a decent introduction to the field of virology, but not more than that. Below are some quotes and links related to the book’s coverage.

“[I]t was not until the invention of the electron microscope in 1939 that viruses were first visualized and their structure elucidated, showing them to be a unique class of microbes. Viruses are not cells but particles. They consist of a protein coat which surrounds and protects their genetic material, or, as the famous immunologist Sir Peter Medawar (1915–87) termed it, ‘a piece of bad news wrapped up in protein’. The whole structure is called a virion and the outer coat is called the capsid. Capsids come in various shapes and sizes, each characteristic of the virus family to which it belongs. They are built up of protein subunits called capsomeres and it is the arrangement of these around the central genetic material that determines the shape of the virion. For example, pox viruses are brick-shaped, herpes viruses are icosahedral (twenty-sided spheres), the rabies virus is bullet-shaped, and the tobacco mosaic virus is long and thin like a rod […]. Some viruses have an outer layer surrounding the capsid called an envelope. […] Most viruses are too small to be seen under a light microscope. In general, they are around 100 to 500 times smaller than bacteria, varying in size from 20 to 300 nanometres in diameter […] Inside the virus capsid is its genetic material, or genome, which is either RNA or DNA depending on the type of virus […] Viruses usually have between 4 and 200 genes […] Cells of free-living organisms, including bacteria, contain a variety of organelles essential for life such as ribosomes that manufacture proteins, mitochondria, or other structures that generate energy, and complex membranes for transporting molecules within the cell, and also across the cell wall. Viruses, not being cells, have none of these and are therefore inert until they infect a living cell. Then they hijack a cell’s organelles and use what they need, often killing the cell in the process. Thus viruses are obliged to obtain essential components from other living things to complete their life cycle and are therefore called obligate parasites.”

“Plant viruses either enter cells through a break in the cell wall or are injected by a sap-sucking insect vector like aphids. They then spread very efficiently from cell to cell via plasmodesmata, pores that transport molecules between cells. In contrast, animal viruses infect cells by binding to specific cell surface receptor molecules. […] Once a virus has bound to its cellular receptor, the capsid penetrates the cell and its genome (DNA or RNA) is released into the cell cytoplasm. The main ‘aim’ of a virus is to reproduce successfully, and to do this its genetic material must download the information it carries. Mostly, this will take place in the cell’s nucleus where the virus can access the molecules it needs to begin manufacturing its own proteins. Some large viruses, like pox viruses, carry genes for the enzymes they need to make their proteins and so are more self-sufficient and can complete the whole life cycle in the cytoplasm. Once inside a cell, DNA viruses simply masquerade as pieces of cellular DNA, and their genes are transcribed and translated using as much of the cell’s machinery as they require. […] Because viruses have a high mutation rate, significant evolutionary change, estimated at around 1 per cent per year for HIV, can be measured over a short timescale. […] RNA viruses have no proof-reading system so they have a higher mutation rate than DNA viruses. […] By constantly evolving, […] viruses appear to have honed their skills for spreading from one host to another to reach an amazing degree of sophistication. For instance, the common cold virus (rhinovirus), while infecting cells lining the nasal cavities, tickles nerve endings to cause sneezing. During these ‘explosions’, huge clouds of virus-carrying mucus droplets are forcefully ejected, then float in the air until inhaled by other susceptible hosts. Similarly, by wiping out sheets of cells lining the intestine, rotavirus prevents the absorption of fluids from the gut cavity. This causes severe diarrhea and vomiting that effectively extrudes the virus’s offspring back into the environment to reach new hosts. Other highly successful viruses hitch a ride from one host to another with insects. […] As a virus’s generation time is so much shorter than ours, the evolution of genetic resistance to a new human virus is painfully slow, and constantly leaves viruses with the advantage.”

“The phytoplankton is a group of organisms that uses solar energy and carbon dioxide to generate energy by photosynthesis. As a by-product of this reaction, they produce almost half of the world’s oxygen and are therefore of vital importance to the chemical stability of the planet. Phytoplankton forms the base of the whole marine food-web, being grazed upon by zooplankton and young marine animals which in turn fall prey to fish and higher marine carnivores. By infecting and killing plankton microbes, marine viruses control the dynamics of all these essential populations and their interactions. For example, the common and rather beautiful phytoplankton Emiliania huxleyi regularly undergoes blooms that turn the ocean surface an opaque blue over areas so vast that they can be detected from space by satellites. These blooms disappear as quickly as they arise, and this boom-and-bust cycle is orchestrated by the viruses in the community that specifically infect E. huxleyi. Because they can produce thousands of offspring from every infected cell, virus numbers amplify in a matter of hours and so act as a rapid-response team, killing most of the bloom microbes in just a few days. […] Overall, marine viruses kill an estimated 20-40 per cent of marine bacteria every day, and as the major killer of marine microbes, they profoundly affect the carbon cycle by the so-called ‘viral shunt‘.”

“By the end of 2015 WHO reported 36.7 million people living with HIV globally, 70 per cent of whom are in sub-Saharan Africa. Since the first identification of HIV-induced acquired immunodeficiency syndrome (AIDS) approximately 78 million people have been infected with HIV, causing around 35 million deaths […] Antiviral drugs are key in curtailing HIV spread and are being rolled out worldwide, with present coverage of around 46 per cent of those in need. […] The HIVs are most closely related to primate retroviruses called simian immunodeficiency viruses (SIVs) and it is now clear that these HIV-like viruses have jumped from primates to humans in central Africa on several occasions in the past giving rise to human infections with HIV-1 types M, N, O, and P as well as HIV-2. Yet only one of these viruses, HIV-1 type M, has succeeded in spreading globally. The ancestor of this virus has been traced to a subspecies of chimpanzees (Pan troglodytes troglodytes), among whom it can cause an AIDS-like disease. Since these animals are hunted for bush meat, it is most likely that human infection occurred by blood contamination during the killing and butchering process. This event probably took place in south-east Cameroon where chimpanzees carrying an SIV most closely related to HIV-1 type M live.”

“Flu viruses are orthomyxoviruses with an RNA genome with eight genes that are segmented, meaning that instead of being a continuous RNA chain, each gene forms a separate strand. The H (haemagglutinin) and N (neuraminidase) genes are the most important in stimulating protective host immunity. There are sixteen different H and nine different N genes, all of which can be found in all combinations in bird flu viruses. Because these genes are separate RNA strands, on occasions they become mixed up, or recombined. So if two flu A viruses with different H and/or N genes infect a single cell, the offspring will carry varying combinations of genes from the two parent viruses. Most of these viruses will be unable to infect humans, but occasionally a new virus strain is produced that can jump directly to humans and cause a pandemic. […] The emergence of almost all recent novel flu viruses has been traced to China where they circulate freely among animals kept in cramped conditions in farms and live bird markets. […] once established in humans their spread has been much enhanced by travel, particularly air travel that can take a virus inside a traveller across the globe before they even realize they are infected. […] With over a billion people worldwide boarding international flights every year, novel viruses have an efficient mechanism for rapid spread.”

“Once an acute emerging virus such as a new strain of flu is successfully established in a population, it generally settles into a mode of cyclical epidemics during which many susceptible people are infected and become immune to further attack. When most are immune, the virus moves on, only returning when a new susceptible population has emerged, which generally consists of those born since the last epidemic. Before vaccination programmes became widespread, young children suffered from a series of well-recognized infectious diseases called the ‘childhood infections’. These included measles, mumps, rubella, and chickenpox, all caused by viruses […] following the introduction of vaccine programmes these have become a rarity, particularly in the developed world. […] Of the three viruses, measles is the most infectious and produces the severest disease. It killed millions of children each year before vaccination was introduced in the mid-20th century. Even today, this virus kills over 70,000 children annually in countries with low vaccine coverage. […] In developing countries, measles kills 1-5 per cent of those it infects”.

Smallpox virus is in a class of its own as the world’s worst killer virus. It first infected humans at least 5,000 years ago and killed around 300 million in the 20th century alone. The virus killed up to 30 per cent of those it infected, scarring and blinding many of the survivors. […] Worldwide, eradication of smallpox was declared in 1980.”

“Viruses spread between hosts in many different ways, but those that cause acute epidemics generally utilize fast and efficient methods, such as the airborne or faecal-oral routes. […] Broadly speaking, virus infections are distinguished by the organs they affect, with airborne viruses mainly causing respiratory illnesses, […] and those transmitted by faecal-oral contamination causing intestinal upsets, with nausea, vomiting, and diarrhoea. There are literally thousands of viruses capable of causing human epidemics […] worldwide, acute respiratory infections, mostly viral, cause an estimated four million deaths a year in children under 5. […] Most people get two or three colds a year, suggesting that the immune system, which is so good at protecting us against a second attack of measles, mumps, or rubella, is defeated by the common cold virus. But this is not the case. In fact, there are so many viruses that cause the typical symptoms of blocked nose, headache, malaise, sore throat, sneezing, coughing, and sometimes fever, that even if we live for a hundred years, we will not experience them all. The common cold virus, or rhinovirus, alone has over one hundred different types, and there are many other viruses that infect the cells lining the nose and throat and cause similar symptoms, often with subtle variations. […] Viruses that target the gut are just as diverse as respiratory viruses […] Rotaviruses are a major cause of gastroenteritis globally, particularly targeting children under 5. The disease varies in severity […] rotaviruses cause over 600,000 infant deaths a year worldwide […] Noroviruses are the second most common cause of viral gastroenteritis after rotaviruses, producing a milder disease of shorter duration. These viruses account for around 23 million cases of gastroenteritis every year […] Many virus families such as rotaviruses that rely on faecal-oral transmission and cause gastroenteritis in humans produce the same symptoms in animals, resulting in great economic loss to the farming industry. […] over the centuries, Rinderpest virus, the cause of cattle plague, has probably been responsible for more loss and hardship than any other. […] Rinderpest is classically described by the three Ds: discharge, diarrhoea, and death, the latter being caused by fluid loss with rapid dehydration. The disease kills around 90 per cent of animals infected. Rinderpest used to be a major problem in Europe and Asia, and when it was introduced into Africa in the late 19th century it killed over 90 per cent of cattle, with devastating economic loss. The Global Rinderpest Eradication Programme was set up in the 1980s aiming to use the effective vaccine to rid the world of the virus by 2010. This was successful, and in October 2010 the disease was officially declared eradicated, the first animal disease and second infectious disease ever to be eliminated.”

“At present, 1.8 million virus-associated cancers are diagnosed worldwide annually. This accounts for 18 per cent of all cancers, but since these human tumour viruses were only identified fairly recently, it is probable that there are several more out there waiting to be discovered. […] Primary liver cancer is a major global health problem, being one of the ten most common cancers worldwide, with over 250,000 cases diagnosed every year and only 5 per cent of sufferers surviving five years. The tumour is more common in men than women and is most prevalent in sub-Saharan Africa and South East Asia where the incidence reaches over 30 per 100,000 population per year, compared to fewer than 5 per 100,000 in the USA and Europe. Up to 80 per cent of these tumours are caused by a hepatitis virus, the remainder being related to liver damage from toxic agents such as alcohol. […] hepatitis B and C viruses cause liver cancer. […] a large study carried out on 22,000 men in Taiwan in the 1990s showed that those persistently infected with HBV were over 200 times more likely than non-carriers to develop liver cancer, and that over half the deaths in this group were due to liver cancer or cirrhosis. […] A vaccine against HBV is available, and its use has already caused a decline in HBV-related liver cancer in Taiwan, where a vaccination programme was implemented in the 1980s”.

“Most persistent viruses have evolved to cause mild or even asymptomatic infections, since a life-threatening disease would not only be detrimental to the host but also deprive the virus of its home. Indeed, some viruses apparently cause no ill effects at all, and have been discovered only by chance. One example is TTV, a tiny DNA virus found in 1997 during the search for the cause of hepatitis and named after the initials (TT) of the patient from whom it was first isolated. We now know that TTV, and its relative TTV-like mini virus, represent a whole spectrum of similar viruses that are carried by almost all humans, non-human primates, and a variety of other vertebrates, but so far they have not been associated with any disease. With modern, highly sensitive molecular techniques for identifying non-pathogenic viruses, we can expect to find more of these silent passengers in the future. […] Historically, diagnosis and treatment of virus infections have lagged far behind those of bacterial diseases and are only now catching up. […] Diagnostic laboratories are still unable to find a culprit virus in many so-called ‘viral’ meningitis, encephalitis, and respiratory infections. This strongly suggests that there are many pathogenic viruses waiting to be discovered”.

“There is no doubt that although vaccines are expensive to prepare and test, they are the safest, easiest, and most cost-effective way of controlling infectious diseases worldwide.”

Virology. Virus. RNA virus. DNA virus. Retrovirus. Reverse transcriptase. Integrase. Provirus.
Germ theory of disease.
Antonie van Leeuwenhoek. Louis Pasteur. Robert Koch. Adolf Mayer. Dmitri Ivanovsky. Martinus Beijerinck.
Tobacco mosaic virus.
Mimivirus.
Viral evolution – origins.
White spot syndrome.
Fibropapillomatosis.
Acyrthosiphon pisum.
Vibrio_cholerae#Genome (Vibrio cholerae are bacteria, but viruses play a very important role here regarding the toxin-producing genes – “Only cholera bacteria infected with the toxigenic phage are pathogenic to humans”).
Yellow fever.
Dengue fever.
CCR5.
Immune system. Cytokine. Interferon. Macrophage. Lymphocyte. Antigen. CD4+ T cells. CD8+ T cells. Antibody. Regulatory T cell. Autoimmunity.
Zoonoses.
Arbovirus. Coronavirus. SARS-CoV. MERS-CoV. Ebolavirus. Henipavirus. Influenza virus. H5N1. HPAI. H7N9. Foot-and-mouth disease. Monkeypox virus. Chikungunya virus. Schmallenberg virus. Zika virus. Rift Valley fever. Bluetongue disease. Arthrogryposis. West Nile fever. Chickenpox. Polio. Bocavirus.
Sylvatic cycle.
Nosocomial infections.
Subacute sclerosing panencephalitis.
Herpesviridae. CMV. Herpes simplex virus. Epstein–Barr virus. Human herpesvirus 6. Human betaherpesvirus 7. Kaposi’s sarcoma-associated herpesvirus (KSHV). Varicella-zoster virus (VZV). Infectious mononucleosis. Hepatitis. Rous sarcoma virus. Human T-lymphotropic virus. Adult T-cell leukemia. HPV. Cervical cancer.
Oncovirus. Myc.
Variolation. Edward Jenner. Mary Wortley Montagu. Benjamin Jesty. James Phipps. Joseph Meister. Jonas Salk. Albert Sabin.
Marek’s disease. Rabies. Post-exposure prophylaxis.
Vaccine.
Aciclovir. Oseltamivir.
PCR.

 

June 10, 2019 Posted by | Biology, Books, Cancer/oncology, Immunology, Infectious disease, Medicine, Microbiology, Molecular biology

Circadian Rhythms (II)

Below I have added some more observations from the book, as well as some links of interest.

“Most circadian clocks make use of a sun-based mechanism as the primary synchronizing (entraining) signal to lock the internal day to the astronomical day. For the better part of four billion years, dawn and dusk has been the main zeitgeber that allows entrainment. Circadian clocks are not exactly 24 hours. So to prevent daily patterns of activity and rest from drifting (freerunning) over time, light acts rather like the winder on a mechanical watch. If the clock is a few minutes fast or slow, turning the winder sets the clock back to the correct time. Although light is the critical zeitgeber for much behaviour, and provides the overarching time signal for the circadian system of most organisms, it is important to stress that many, if not all cells within an organism possess the capacity to generate a circadian rhythm, and that these independent oscillators are regulated by a variety of different signals which, in turn, drive countless outputs […]. Colin Pittendrigh was one of the first to study entrainment, and what he found in Drosophila has been shown to be true across all organisms, including us. For example, if you keep Drosophila, or a mouse or bird, in constant darkness it will freerun. If you then expose the animal to a short pulse of light at different times the shifting (phase shifting) effects on the freerunning rhythm vary. Light pulses given when the clock ‘thinks’ it is daytime (subjective day) will have little effect on the clock. However, light falling during the first half of the subjective night causes the animal to delay the start of its activity the following day, while light exposure during the second half of the subjective night advances activity onset. Pittendrigh called this the ‘phase response curve’ […] Remarkably, the PRC of all organisms looks very similar, with light exposure around dusk and during the first half of the night causing a delay in activity the next day, while light during the second half of the night and around dawn generates an advance. The precise shape of the PRC varies between species. Some have large delays and small advances (typical of nocturnal species) while others have small delays and big advances (typical of diurnal species). Light at dawn and dusk pushes and pulls the freerunning rhythm towards an exactly 24-hour cycle. […] Light can act directly to modify behaviour. In nocturnal rodents such as mice, light encourages these animals to seek shelter, reduce activity, and even sleep, while in diurnal species light promotes alertness and vigilance. So circadian patterns of activity are not only entrained by dawn and dusk but also driven directly by light itself. This direct effect of light on activity has been called ‘masking’, and combines with the predictive action of the circadian system to restrict activity to that period of the light/dark cycle to which the organism has evolved and is optimally adapted.”
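A crude way to see how a phase response curve pulls a non-24-hour clock into step with dawn is to iterate a very simple daily map: each day the clock drifts by its freerunning excess, and light at dawn then shifts it by an amount depending on where in the subjective cycle the light falls. The sinusoidal PRC and all the numbers below are my own toy choices, not anything taken from the book:

# Toy entrainment model. 'lag' is how many hours the clock's subjective dawn
# trails real dawn. Each day the clock drifts later by (tau - 24) hours, and
# dawn light then shifts it according to a phase response curve (PRC).
# The PRC shape and all parameters are toy choices for illustration.
import math

tau = 24.5   # freerunning period: this clock runs half an hour slow

def prc_shift(lag):
    """Daily light-induced correction in hours (negative = advance, positive = delay)."""
    return -math.sin(2 * math.pi * lag / 24)

lag = 6.0    # start badly out of step, e.g. after a long flight
for day in range(15):
    print(f"day {day:2d}: subjective dawn trails real dawn by {lag:+.2f} h")
    lag += (tau - 24)            # drift of the slow-running clock
    lag += prc_shift(lag)        # corrective shift from light at dawn
    lag = (lag + 12) % 24 - 12   # keep the lag within (-12, +12] hours

Run as-is, the lag shrinks over successive days and settles at a stable offset (about two hours with these toy numbers) instead of drifting half an hour later every day; delete the prc_shift line and the clock freeruns. That, in miniature, is the push-and-pull entrainment the passage describes.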

“[B]irds, reptiles, amphibians, and fish (but not mammals) have ‘extra-ocular’ photoreceptors located within the pineal complex, hypothalamus, and other areas of the brain, and like the invertebrates, eye loss in many cases has little impact upon the ability of these animals to entrain. […] Mammals are strikingly different from all other vertebrates as they possess photoreceptor cells only within their eyes. Eye loss in all groups of mammals […] abolishes the capacity of these animals to entrain their circadian rhythms to the light/dark cycle. But astonishingly, the visual cells of the retina – the rods and cones – are not required for the detection of the dawn/dusk signal. There exists a third class of photoreceptors within the eye […] Studies in the late 1990s by Russell Foster and his colleagues showed that mice lacking all their rod and cone photoreceptors could still regulate their circadian rhythms to light perfectly normally. But when their eyes were covered the ability to entrain was lost […] work on the rodless/coneless mouse, along with [other] studies […], clearly demonstrated that the mammalian retina contains a small population of photosensitive retinal ganglion cells or pRGCs, which comprise approximately 1-2 per cent of all retinal ganglion cells […] Ophthalmologists now appreciate that eye loss deprives us of both vision and a proper sense of time. Furthermore, genetic diseases that result in the loss of the rods and cones and cause visual blindness, often spare the pRGCs. Under these circumstances, individuals who have their eyes but are visually blind, yet possess functional pRGCs, need to be advised to seek out sufficient light to entrain their circadian system. The realization that the eye provides us with both our sense of space and our sense of time has redefined the diagnosis, treatment, and appreciation of human blindness.”

“But where is ‘the’ circadian clock of mammals? […] [Robert] Moore and [Irving] Zucker’s work pinpointed the SCN as the likely neural locus of the light-entrainable circadian pacemaker in mammals […] and a decade later this was confirmed by definitive experiments from Michael Menaker’s laboratory undertaken at the University of Virginia. […] These experiments established the SCN as the ‘master circadian pacemaker’ of mammals. […] There are around 20,000 or so neurons in the mouse SCN, but they are not identical. Some receive light information from the pRGCs and pass this information on to other SCN neurons, while others project to the thalamus and other regions of the brain, and collectively these neurons secrete more than one hundred different neurotransmitters, neuropeptides, cytokines, and growth factors. The SCN itself is composed of several regions or clusters of neurons, which have different jobs. Furthermore, there is considerable variability in the oscillations of the individual cells, ranging from 21.25 to 26.25 hours. Although the individual cells in the SCN have their own clockwork mechanisms with varying periods, the cell autonomous oscillations in neural activity are synchronized at the system level within the SCN, providing a coherent near 24-hour signal to the rest of the mammal. […] SCN neurons exhibit a circadian rhythm of spontaneous action potentials (SAPs), with higher frequency during the daytime than at night, which in turn drives many rhythmic changes by alternating stimulatory and inhibitory inputs to the appropriate target neurons in the brain and neuroendocrine systems. […] The SCN projects directly to thirty-five brain regions, mostly located in the hypothalamus, and particularly those regions of the hypothalamus that regulate hormone release. Indeed, many pituitary hormones, such as cortisol, are under tight circadian control. Furthermore, the SCN regulates the activity of the autonomic nervous system, which in turn places multiple aspects of physiology, including the sensitivity of target tissues to hormonal signals, under circadian control. In addition to these direct neuronal connections, the SCN communicates to the rest of the body using diffusible chemical signals.”

“The SCN is the master clock in mammals but it is not the only clock. There are liver clocks, muscle clocks, pancreas clocks, adipose tissue clocks, and clocks of some sort in every organ and tissue examined to date. While lesioning of the SCN disrupts global behavioural rhythms such as locomotor activity, the disruption of clock function within just the liver or lung leads to circadian disorder that is confined to the target organ. In tissue culture, liver, heart, lung, skeletal muscle, and other organ tissues such as mammary glands express circadian rhythms, but these rhythms dampen and disappear after only a few cycles. This occurs because some individual clock cells lose rhythmicity, but more commonly because the individual cellular clocks become uncoupled from each other. The cells continue to tick, but all at different phases so that an overall 24-hour rhythm within the tissue or organ is lost. The discovery that virtually all cells of the body have clocks was one of the big surprises in circadian rhythms research. […] the SCN, entrained by pRGCs, acts as a pacemaker to coordinate, but not drive, the circadian activity of billions of individual peripheral circadian oscillators throughout the tissues and organs of the body. The signalling pathways used by the SCN to phase-entrain peripheral clocks are still uncertain, but we know that the SCN does not send out trillions of separate signals around the body that target specific cellular clocks. Rather there seems to be a limited number of neuronal and humoral signals which entrain peripheral clocks that in turn time their local physiology and gene expression.”
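The point about tissue-level rhythms damping because the individual cellular clocks drift out of phase (rather than stopping) is easy to reproduce numerically. A minimal sketch with made-up numbers – two hundred simulated cells whose periods are scattered around 24 hours and which receive no coordinating signal:

# Uncoupled cellular oscillators with slightly different periods: each cell
# keeps oscillating at full amplitude, but the population average (the
# 'tissue-level' rhythm) damps away as the cells drift out of phase.
# Periods and counts are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
periods = rng.normal(24.0, 1.0, size=200)     # hours; the cells are not identical
t = np.arange(0, 240.0, 1.0)                  # ten days, sampled hourly

cells = np.cos(2 * np.pi * t[None, :] / periods[:, None])   # one row per cell
tissue = cells.mean(axis=0)                                  # tissue-level signal

for day in range(10):
    window = tissue[day * 24:(day + 1) * 24]
    print(f"day {day}: tissue-level peak-to-trough amplitude ~ {window.max() - window.min():.2f}")

The individual rows never lose amplitude; only their average does – which is exactly the distinction drawn in the passage above.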

“As in Drosophila […], the mouse clockwork also comprises three transcriptional-translational feedback loops with multiple interacting components. […] [T]he generation of a robust circadian rhythm that can be entrained by the environment is achieved via multiple elements, including the rate of transcription, translation, protein complex assembly, phosphorylation, other post-translational modification events, movement into the nucleus, transcriptional inhibition, and protein degradation. […] [A] complex arrangement is needed because from the moment a gene is switched on, transcription and translation usually take two hours at most. As a result, substantial delays must be imposed at different stages to produce a near 24-hour oscillation. […] Although the molecular players may differ from Drosophila and mice, and indeed even between different insects, the underlying principles apply across the spectrum of animal life. […] In fungi, plants, and cyanobacteria the clock genes are all different from each other and different again from the animal clock genes, suggesting that clocks evolved independently in the great evolutionary lineages of life on earth. Despite these differences, all these clocks are based upon a fundamental TTFL.”

“Circadian entrainment is surprisingly slow, taking several days to adjust to an advanced or delayed light/dark cycle. In most mammals, including jet-lagged humans, behavioural shifts are limited to approximately one hour (one time zone) per day. […] Changed levels of PER1 and PER2 act to shift the molecular clockwork, advancing the clock at dawn and delaying the clock at dusk. However, per mRNA and PER protein levels fall rapidly even if the animal remains exposed to light. As a result, the effects of light on the molecular clock are limited and entrainment is a gradual process requiring repeated shifting stimuli over multiple days. This phenomenon explains why we get jet lag: the clock cannot move immediately to a new dawn/dusk cycle because there is a ‘brake’ on the effects of light on the clock. […] The mechanism that provides this molecular brake is the production of SIK1 protein. […] Experiments on mice in which SIK1 has been suppressed show very rapid entrainment to simulated jet-lag.”

“We spend approximately 36 per cent of our entire lives asleep, and while asleep we do not eat, drink, or knowingly pass on our genes. This suggests that this aspect of our 24-hour behaviour provides us with something of huge value. If we are deprived of sleep, the sleep drive becomes so powerful that it can only be satisfied by sleep. […] Almost all life shows a 24-hour pattern of activity and rest, as we live on a planet that revolves once every 24 hours causing profound changes in light, temperature, and food availability. […] Life seems to have made an evolutionary ‘decision’ to be active at a specific part of the day/night cycle, and a species specialized to be active during the day will be far less effective at night. Conversely, nocturnal animals that are beautifully adapted to move around and hunt under dim or no light fail miserably during the day. […] no species can operate with the same effectiveness across the 24-hour light/dark environment. Species are adapted to a particular temporal niche just as they are to a physical niche. Activity at the wrong time often means death. […] Sleep may be the suspension of most physical activity, but a huge amount of essential physiology occurs during this time. Many diverse processes associated with the restoration and rebuilding of metabolic pathways are known to be up-regulated during sleep […] During sleep the body performs a broad range of essential ‘housekeeping’ functions without which performance and health during the active phase deteriorates rapidly. But these housekeeping functions would not be why sleep evolved in the first place. […] Evolution has allocated these key activities to the most appropriate time of day. […] In short, sleep has probably evolved as a species-specific response to a 24-hour world in which light, temperature, and food availability change dramatically. Sleep is a period of physical inactivity when individuals avoid movement within an environment to which they are poorly adapted, while using this time to undertake essential housekeeping functions demanded by their biology.”

“Sleep propensity in humans is closely correlated with the melatonin profile but this may be correlation and not causation. Indeed, individuals who do not produce melatonin (e.g. tetraplegic individuals, people on beta-blockers, or pinealectomized patients) still exhibit circadian sleep/wake rhythms with only very minor detectable changes. Another correlation between melatonin and sleep relates to levels of alertness. When melatonin is suppressed by light at night alertness levels increase, suggesting that melatonin and sleep propensity are directly connected. However, increases in alertness occur before a significant drop in blood melatonin. Furthermore, increased light during the day will also improve alertness when melatonin levels are already low. These findings suggest that melatonin is not a direct mediator of alertness and hence sleepiness. Taking synthetic melatonin or synthetic analogues of melatonin produces a mild sleepiness in about 70 per cent of people, especially when no natural melatonin is being released. The mechanism whereby melatonin produces mild sedation remains unclear.”

Links:

Teleost multiple tissue (tmt) opsin.
Melanopsin.
Suprachiasmatic nucleus.
Neuromedin S.
Food-entrainable circadian oscillators in the brain.
John Harrison. Seymour Benzer. Ronald Konopka. Jeffrey C. Hall. Michael Rosbash. Michael W. Young.
Circadian Oscillators: Around the Transcription-Translation Feedback Loop and on to Output.
Period (gene). Timeless (gene). CLOCK. Cycle (gene). Doubletime (gene). Cryptochrome. Vrille Gene.
Basic helix-loop-helix.
The clockwork orange Drosophila protein functions as both an activator and a repressor of clock gene expression.
RAR-related orphan receptor. RAR-related orphan receptor alpha.
BHLHE41.
The two-process model of sleep regulation: a reappraisal.

September 30, 2018 Posted by | Books, Genetics, Medicine, Molecular biology, Neurology, Ophthalmology | Leave a comment

A few diabetes papers of interest

i. Islet Long Noncoding RNAs: A Playbook for Discovery and Characterization.

“This review will 1) highlight what is known about lncRNAs in the context of diabetes, 2) summarize the strategies used in lncRNA discovery pipelines, and 3) discuss future directions and the potential impact of studying the role of lncRNAs in diabetes.”

“Decades of mouse research and advances in genome-wide association studies have identified several genetic drivers of monogenic syndromes of β-cell dysfunction, as well as 113 distinct type 2 diabetes (T2D) susceptibility loci (1) and ∼60 loci associated with an increased risk of developing type 1 diabetes (T1D) (2). Interestingly, these studies discovered that most T1D and T2D susceptibility loci fall outside of coding regions, which suggests a role for noncoding elements in the development of disease (3,4). Several studies have demonstrated that many causal variants of diabetes are significantly enriched in regions containing islet enhancers, promoters, and transcription factor binding sites (5,6); however, not all diabetes susceptibility loci can be explained by associations with these regulatory regions. […] Advances in RNA sequencing (RNA-seq) technologies have revealed that mammalian genomes encode tens of thousands of RNA transcripts that have similar features to mRNAs, yet are not translated into proteins (7). […] detailed characterization of many of these transcripts has challenged the idea that the central role for RNA in a cell is to give rise to proteins. Instead, these RNA transcripts make up a class of molecules called noncoding RNAs (ncRNAs) that function either as “housekeeping” ncRNAs, such as transfer RNAs (tRNAs) and ribosomal RNAs (rRNAs), that are expressed ubiquitously and are required for protein synthesis or as “regulatory” ncRNAs that control gene expression. While the functional mechanisms of short regulatory ncRNAs, such as microRNAs (miRNAs), small interfering RNAs (siRNAs), and Piwi-interacting RNAs (piRNAs), have been described in detail (8–10), the most abundant and functionally enigmatic regulatory ncRNAs are called long noncoding RNAs (lncRNAs) that are loosely defined as RNAs larger than 200 nucleotides (nt) that do not encode for protein (11–13). Although using a definition based strictly on size is somewhat arbitrary, this definition is useful both bioinformatically […] and technically […]. While the 200-nt size cutoff has simplified identification of lncRNAs, this rather broad classification means several features of lncRNAs, including abundance, cellular localization, stability, conservation, and function, are inherently heterogeneous (15–17). Although this represents one of the major challenges of lncRNA biology, it also highlights the untapped potential of lncRNAs to provide a novel layer of gene regulation that influences islet physiology and pathophysiology.”
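As an aside, the 200-nt size cutoff mentioned above is easy to operationalize in a discovery pipeline. Here is a minimal sketch of the idea in Python (my own illustration, not from the paper; the transcript records and the coding-potential flag are hypothetical stand-ins for output from a transcript assembler and a coding-potential classifier):

```python
# Illustrative only: keep assembled transcripts that pass the conventional
# >200 nt size cutoff and are flagged as non-coding.
from typing import NamedTuple

class Transcript(NamedTuple):
    transcript_id: str
    length_nt: int
    is_coding: bool  # e.g. as judged by a coding-potential tool (hypothetical flag)

transcripts = [
    Transcript("TX0001", 2300, True),   # protein-coding mRNA
    Transcript("TX0002", 950, False),   # candidate lncRNA
    Transcript("TX0003", 85, False),    # short ncRNA, below the cutoff
    Transcript("TX0004", 410, False),   # candidate lncRNA
]

def candidate_lncrnas(records, min_length=200):
    """Return non-coding transcripts longer than the size cutoff."""
    return [t for t in records if t.length_nt > min_length and not t.is_coding]

for t in candidate_lncrnas(transcripts):
    print(t.transcript_id, t.length_nt)
```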

“Although the role of miRNAs in diabetes has been well established (9), analyses of lncRNAs in islets have lagged behind their short ncRNA counterparts. However, several recent studies provide evidence that lncRNAs are crucial components of the islet regulome and may have a role in diabetes (27). […] misexpression of several lncRNAs has been correlated with diabetes complications, such as diabetic nephropathy and retinopathy (29–31). There are also preliminary studies suggesting that circulating lncRNAs, such as Gas5, MIAT1, and SENCR, may represent effective molecular biomarkers of diabetes and diabetes-related complications (32,33). Finally, several recent studies have explored the role of lncRNAs in the peripheral metabolic tissues that contribute to energy homeostasis […]. In addition to their potential as genetic drivers and/or biomarkers of diabetes and diabetes complications, lncRNAs can be exploited for the treatment of diabetes. For example, although tremendous efforts have been dedicated to generating replacement β-cells for individuals with diabetes (35,36), human pluripotent stem cell–based β-cell differentiation protocols remain inefficient, and the end product is still functionally and transcriptionally immature compared with primary human β-cells […]. This is largely due to our incomplete knowledge of in vivo differentiation regulatory pathways, which likely include a role for lncRNAs. […] Inherent characteristics of lncRNAs have also made them attractive candidates for drug targeting, which could be exploited for developing new diabetes therapies.”

“With the advancement of high-throughput sequencing techniques, the list of islet-specific lncRNAs is growing exponentially; however, functional characterization is missing for the majority of these lncRNAs. […] Tens of thousands of lncRNAs have been identified in different cell types and model organisms; however, their functions largely remain unknown. Although the tools for determining lncRNA function are technically restrictive, uncovering novel regulatory mechanisms will have the greatest impact on understanding islet function and identifying novel therapeutics for diabetes. To date, no biochemical assay has been used to directly determine the molecular mechanisms by which islet lncRNAs function, which highlights both the infancy of the field and the difficulty in implementing these techniques. […] Due to the infancy of the lncRNA field, most of the biochemical and genetic tools used to interrogate lncRNA function have only recently been developed or are adapted from techniques used to study protein-coding genes and we are only beginning to appreciate the limits and challenges of borrowing strategies from the protein-coding world.”

“The discovery of lncRNAs as a novel class of tissue-specific regulatory molecules has spawned an exciting new field of biology that will significantly impact our understanding of pancreas physiology and pathophysiology. As the field continues to grow, there is growing appreciation that lncRNAs will provide many of the missing components to existing molecular pathways that regulate islet biology and contribute to diabetes when they become dysfunctional. However, to date, most of the experimental emphasis on lncRNAs has focused on large-scale discovery using genome-wide approaches, and there remains a paucity of functional analysis.”

ii. Diabetes and Trajectories of Estimated Glomerular Filtration Rate: A Prospective Cohort Analysis of the Atherosclerosis Risk in Communities Study.

“Diabetes is among the strongest common risk factors for end-stage renal disease, and in industrialized countries, diabetes contributes to ∼50% of cases (3). Less is known about the pattern of kidney function decline associated with diabetes that precedes end-stage renal disease. Identifying patterns of estimated glomerular filtration rate (eGFR) decline could inform monitoring practices for people at high risk of chronic kidney disease (CKD) progression. A better understanding of when and in whom eGFR decline occurs would be useful for the design of clinical trials because eGFR decline >30% is now often used as a surrogate end point for CKD progression (4). Trajectories among persons with diabetes are of particular interest because of the possibility for early intervention and the prevention of CKD development. However, eGFR trajectories among persons with new diabetes may be complex due to the hypothesized period of hyperfiltration by which GFR increases, followed by progressive, rapid decline (5). Using data from the Atherosclerosis Risk in Communities (ARIC) study, an ongoing prospective community-based cohort of >15,000 participants initiated in 1987 with serial measurements of creatinine over 26 years, our aim was to characterize patterns of eGFR decline associated with diabetes, identify demographic, genetic, and modifiable risk factors within the population with diabetes that were associated with steeper eGFR decline, and assess for evidence of early hyperfiltration.”

“We categorized people into groups of no diabetes, undiagnosed diabetes, and diagnosed diabetes at baseline (visit 1) and compared baseline clinical characteristics using ANOVA for continuous variables and Pearson χ2 tests for categorical variables. […] To estimate individual eGFR slopes over time, we used linear mixed-effects models with random intercepts and random slopes. These models were fit on diabetes status at baseline as a nominal variable to adjust the baseline level of eGFR and included an interaction term between diabetes status at baseline and time to estimate annual decline in eGFR by diabetes categories. Linear mixed models were run unadjusted and adjusted, with the latter model including the following diabetes and kidney disease–related risk factors: age, sex, race–center, BMI, systolic blood pressure, hypertension medication use, HDL, prevalent coronary heart disease, annual family income, education status, and smoking status, as well as each variable interacted with time. Continuous covariates were centered at the analytic population mean. We tested model assumptions and considered different covariance structures, comparing nested models using Akaike information criteria. We identified the unstructured covariance model as the most optimal and conservative approach. From the mixed models, we described the overall mean annual decline by diabetes status at baseline and used the random effects to estimate best linear unbiased predictions to describe the distributions of yearly slopes in eGFR by diabetes status at baseline and displayed them using kernel density plots.”
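To make the modelling strategy a little more concrete, here is a minimal sketch (my own, not the authors' code) of a random-intercept, random-slope model of the kind described above, using statsmodels on a small synthetic dataset; the variable names (egfr, years, dm_status, id) are made up for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a tiny long-format dataset: repeated eGFR measurements per person,
# with a group-specific mean annual slope plus person-level noise.
rng = np.random.default_rng(0)
rows = []
for pid in range(200):
    dm = rng.choice(["none", "undiagnosed", "diagnosed"], p=[0.88, 0.04, 0.08])
    slope = {"none": -1.5, "undiagnosed": -2.0, "diagnosed": -2.9}[dm] + rng.normal(0, 0.5)
    intercept = 100 + rng.normal(0, 10)
    for years in (0, 3, 6, 9, 24):   # rough visit spacing, purely illustrative
        rows.append({"id": pid, "dm_status": dm, "years": years,
                     "egfr": intercept + slope * years + rng.normal(0, 3)})
df = pd.DataFrame(rows)

# Baseline diabetes status enters as a nominal variable; its interaction with
# time yields group-specific annual eGFR slopes. re_formula gives each person
# a random intercept and a random slope.
model = smf.mixedlm("egfr ~ C(dm_status) * years", data=df,
                    groups=df["id"], re_formula="~years")
fit = model.fit(reml=True)
print(fit.summary())

# Person-level random effects (analogous to the best linear unbiased
# predictions used to describe the spread of individual slopes).
blups = pd.DataFrame(fit.random_effects).T
```

The group-by-time interaction terms give the group-specific annual slopes, and the per-person random effects play the role of the BLUPs whose distributions the authors summarize with kernel density plots.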

“Because of substantial variation in annual eGFR slope among people with diagnosed diabetes, we sought to identify risk factors that were associated with faster decline. Among those with diagnosed diabetes, we compared unadjusted and adjusted mean annual decline in eGFR by race–APOL1 risk status (white, black– APOL1 low risk, and black–APOL1 high risk) [here’s a relevant link, US], systolic blood pressure […], smoking status […], prevalent coronary heart disease […], diabetes medication use […], HbA1c […], and 1,5-anhydroglucitol (≥10 and <10 μg/mL) [relevant link, US]. Because some of these variables were only available at visit 2, we required that participants included in this subgroup analysis attend both visits 1 and 2 and not be missing information on APOL1 or the variables assessed at visit 2 to ensure a consistent sample size. In addition to diabetes and kidney disease–related risk factors in the adjusted model, we also included diabetes medication use and HbA1c to account for diabetes severity in these analyses. […] to explore potential hyperfiltration, we used a linear spline model to allow the slope to change for each diabetes category between the first 3 years of follow-up (visit 1 to visit 2) and the subsequent time period (visit 2 to visit 5).”
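The linear-spline idea in the last sentence amounts to adding a second time variable that only starts accumulating after the 3-year knot, so the slope is allowed to differ before and after that point. Continuing the synthetic example from the sketch above (again purely illustrative, and assuming the `df` and `smf` import defined there):

```python
# Second time term: zero during the first 3 years, then increases thereafter.
df["years_after3"] = (df["years"] - 3).clip(lower=0)

spline_model = smf.mixedlm(
    "egfr ~ C(dm_status) * (years + years_after3)",
    data=df, groups=df["id"], re_formula="~years",
)
spline_fit = spline_model.fit(reml=True)
# Slope in years 0-3 for a group: the relevant `years` coefficients;
# slope after year 3: those plus the corresponding `years_after3` terms.
```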

“There were 15,517 participants included in the analysis: 13,698 (88%) without diabetes, 634 (4%) with undiagnosed diabetes, and 1,185 (8%) with diagnosed diabetes at baseline. […] At baseline, participants with undiagnosed and diagnosed diabetes were older, more likely to be black or have hypertension and coronary heart disease, and had higher mean BMI and lower mean HDL compared with those without diabetes […]. Income and education levels were also lower among those with undiagnosed and diagnosed diabetes compared with those without diabetes. […] Overall, there was a nearly linear association between eGFR and age over time, regardless of diabetes status […]. The crude mean annual decline in eGFR was slowest among those without diabetes at baseline (decline of −1.6 mL/min/1.73 m2/year [95% CI −1.6 to −1.5]), faster among those with undiagnosed diabetes compared with those without diabetes (decline of −2.1 mL/min/1.73 m2/year [95% CI −2.2 to −2.0][…]), and nearly twice as rapid among those with diagnosed diabetes compared with those without diabetes (decline of −2.9 mL/min/1.73 m2/year [95% CI −3.0 to −2.8][…]). Adjustment for diabetes and kidney disease–related risk factors attenuated the results slightly, but those with undiagnosed and diagnosed diabetes still had statistically significantly steeper declines than those without diabetes (decline among no diabetes −1.4 mL/min/1.73 m2/year [95% CI −1.5 to −1.4] and decline among undiagnosed diabetes −1.8 mL/min/1.73 m2/year [95% CI −2.0 to −1.7], difference vs. no diabetes of −0.4 mL/min/1.73 m2/year [95% CI −0.5 to −0.3; P < 0.001]; decline among diagnosed diabetes −2.5 mL/min/1.73 m2/year [95% CI −2.6 to −2.4], difference vs. no diabetes of −1.1 mL/min/1.73 m2/ year [95% CI −1.2 to −1.0; P < 0.001]). […] The decline in eGFR per year varied greatly across individuals, particularly among those with diabetes at baseline […] Among participants with diagnosed diabetes at baseline, those who were black, had systolic blood pressure ≥140 mmHg, used diabetes medications, had an HbA1c ≥7% [≥53 mmol/mol], or had 1,5-anhydroglucitol <10 μg/mL were at risk for steeper annual declines than their counterparts […]. Smoking status and prevalent coronary heart disease were not associated with significantly steeper eGFR decline in unadjusted analyses. Adjustment for risk factors, diabetes medication use, and HbA1c attenuated the differences in decline for all subgroups with the exception of smoking status, leaving black race along with APOL1-susceptible genotype, systolic blood pressure ≥140 mmHg, current smoking, insulin use, and HbA1c ≥9% [≥75 mmol/mol] as the risk factors indicative of steeper decline.”

CONCLUSIONS Diabetes is an important risk factor for kidney function decline. Those with diagnosed diabetes declined almost twice as rapidly as those without diabetes. Among people with diagnosed diabetes, steeper declines were seen in those with modifiable risk factors, including hypertension and glycemic control, suggesting areas for continued targeting in kidney disease prevention. […] Few other community-based studies have evaluated differences in kidney function decline by diabetes status over a long period through mid- and late life. One study of 10,184 Canadians aged ≥66 years with creatinine measured during outpatient visits showed results largely consistent with our findings but with much shorter follow-up (median of 2 years) (19). Other studies of eGFR change in a general population have found smaller declines than our results (20,21). A study conducted in Japanese participants aged 40–79 years found a decline of only −0.4 mL/min/1.73 m2/year over the course of two assessments 10 years apart (compared with our estimate among those without diabetes: −1.6 mL/min/1.73 m2/year). This is particularly interesting, as Japan is known to have a higher prevalence of CKD and end-stage renal disease than the U.S. (20). However, this study evaluated participants over a shorter time frame and required attendance at both assessments, which may have decreased the likelihood of capturing severe cases and resulted in underestimation of decline.”

“The Baltimore Longitudinal Study of Aging also assessed kidney function over time in a general population of 446 men, ranging in age from 22 to 97 years at baseline, each with up to 14 measurements of creatinine clearance assessed between 1958 and 1981 (21). They also found a smaller decline than we did (−0.8 mL/min/year), although this study also had notable differences. Their main analysis excluded participants with hypertension and history of renal disease or urinary tract infection and those treated with diuretics and/or antihypertensive medications. Without those exclusions, their overall estimate was −1.1 mL/min/year, which better reflects a community-based population and our results. […] In our evaluation of risk factors that might explain the variation in decline seen among those with diagnosed diabetes, we observed that black race, systolic blood pressure ≥140 mmHg, insulin use, and HbA1c ≥9% (≥75 mmol/mol) were particularly important. Although the APOL1 high-risk genotype is a known risk factor for eGFR decline, African Americans with low-risk APOL1 status continued to be at higher risk than whites even after adjustment for traditional risk factors, diabetes medication use, and HbA1c.”

“Our results are relevant to the design and conduct of clinical trials. Hard clinical outcomes like end-stage renal disease are relatively rare, and a 30–40% decline in eGFR is now accepted as a surrogate end point for CKD progression (4). We provide data on patient subgroups that may experience accelerated trajectories of kidney function decline, which has implications for estimating sample size and ensuring adequate power in future clinical trials. Our results also suggest that end points of eGFR decline might not be appropriate for patients with new-onset diabetes, in whom declines may actually be slower than among persons without diabetes. Slower eGFR decline among those with undiagnosed diabetes, who are likely early in the course of diabetes, is consistent with the hypothesis of hyperfiltration. Similar to other studies, we found that persons with undiagnosed diabetes had higher GFR at the outset, but this was a transient phenomenon, as they ultimately experienced larger declines in kidney function than those without diabetes over the course of follow-up (23–25). Whether hyperfiltration is a universal aspect of early disease and, if not, whether it portends worse long-term outcomes is uncertain. Existing studies investigating hyperfiltration as a precursor to adverse kidney outcomes are inconsistent (24,26,27) and often confounded by diabetes severity factors like duration (27). We extended this literature by separating undiagnosed and diagnosed diabetes to help address that confounding.”

iii. Saturated Fat Is More Metabolically Harmful for the Human Liver Than Unsaturated Fat or Simple Sugars.

OBJECTIVE Nonalcoholic fatty liver disease (i.e., increased intrahepatic triglyceride [IHTG] content), predisposes to type 2 diabetes and cardiovascular disease. Adipose tissue lipolysis and hepatic de novo lipogenesis (DNL) are the main pathways contributing to IHTG. We hypothesized that dietary macronutrient composition influences the pathways, mediators, and magnitude of weight gain-induced changes in IHTG.

RESEARCH DESIGN AND METHODS We overfed 38 overweight subjects (age 48 ± 2 years, BMI 31 ± 1 kg/m2, liver fat 4.7 ± 0.9%) 1,000 extra kcal/day of saturated (SAT) or unsaturated (UNSAT) fat or simple sugars (CARB) for 3 weeks. We measured IHTG (1H-MRS), pathways contributing to IHTG (lipolysis ([2H5]glycerol) and DNL (2H2O) basally and during euglycemic hyperinsulinemia), insulin resistance, endotoxemia, plasma ceramides, and adipose tissue gene expression at 0 and 3 weeks.

RESULTS Overfeeding SAT increased IHTG more (+55%) than UNSAT (+15%, P < 0.05). CARB increased IHTG (+33%) by stimulating DNL (+98%). SAT significantly increased while UNSAT decreased lipolysis. SAT induced insulin resistance and endotoxemia and significantly increased multiple plasma ceramides. The diets had distinct effects on adipose tissue gene expression.”

“CONCLUSIONS NAFLD has been shown to predict type 2 diabetes and cardiovascular disease in multiple studies, even independent of obesity (1), and also to increase the risk of progressive liver disease (17). It is therefore interesting to compare effects of different diets on liver fat content and understand the underlying mechanisms. We examined whether provision of excess calories as saturated (SAT) or unsaturated (UNSAT) fats or simple sugars (CARB) influences the metabolic response to overfeeding in overweight subjects. All overfeeding diets increased IHTGs. The SAT diet induced a greater increase in IHTGs than the UNSAT diet. The composition of the diet altered sources of excess IHTGs. The SAT diet increased lipolysis, whereas the CARB diet stimulated DNL. The SAT but not the other diets increased multiple plasma ceramides, which increase the risk of cardiovascular disease independent of LDL cholesterol (18). […] Consistent with current dietary recommendations (36–38), the current study shows that saturated fat is the most harmful dietary constituent regarding IHTG accumulation.”

iv. Primum Non Nocere: Refocusing Our Attention on Severe Hypoglycemia Prevention.

“Severe hypoglycemia, defined as low blood glucose requiring assistance for recovery, is arguably the most dangerous complication of type 1 diabetes as it can result in permanent cognitive impairment, seizure, coma, accidents, and death (1,2). Since the Diabetes Control and Complications Trial (DCCT) demonstrated that intensive intervention to normalize glucose prevents long-term complications but at the price of a threefold increase in the rate of severe hypoglycemia (3), hypoglycemia has been recognized as the major limitation to achieving tight glycemic control. Severe hypoglycemia remains prevalent among adults with type 1 diabetes, ranging from ∼1.4% per year in the DCCT/EDIC (Epidemiology of Diabetes Interventions and Complications) follow-up cohort (4) to ∼8% in the T1D Exchange clinic registry (5).

One of the greatest risk factors for severe hypoglycemia is impaired awareness of hypoglycemia (6), which increases risk up to sixfold (7,8). Hypoglycemia unawareness results from deficient counterregulation (9), where falling glucose fails to activate the autonomic nervous system to produce the warning symptoms that normally help patients identify and respond to episodes (i.e., sweating, palpitations, hunger) (2). An estimated 20–25% of adults with type 1 diabetes have impaired hypoglycemia awareness (8), which increases to more than 50% after 25 years of disease duration (10).

Screening for hypoglycemia unawareness to identify patients at increased risk of severe hypoglycemic events should be part of routine diabetes care. Self-identified impairment in awareness tends to agree with clinical evaluation (11). Therefore, hypoglycemia unawareness can be easily and effectively screened […] Interventions for hypoglycemia unawareness include a range of behavioral and medical options. Avoiding hypoglycemia for at least several weeks may partially reverse hypoglycemia unawareness and reduce risk of future episodes (1). Therefore, patients with hypoglycemia and unawareness may be advised to raise their glycemic and HbA1c targets (1,2). Diabetes technology can play a role, including continuous subcutaneous insulin infusion (CSII) to optimize insulin delivery, continuous glucose monitoring (CGM) to give technological awareness in the absence of symptoms (14), or the combination of the two […] Aside from medical management, structured or hypoglycemia-specific education programs that aim to prevent hypoglycemia are recommended for all patients with severe hypoglycemia or hypoglycemia unawareness (14). In randomized trials, psychoeducational programs that incorporate increased education, identification of personal risk factors, and behavior change support have improved hypoglycemia unawareness and reduced the incidence of both nonsevere and severe hypoglycemia over short periods of follow-up (17,18) and extending up to 1 year (19).”

“Given that the presence of hypoglycemia unawareness increases the risk of severe hypoglycemia, which is the strongest predictor of a future episode (2,4), the implication that intervention can break the life-threatening and traumatizing cycle of hypoglycemia unawareness and severe hypoglycemia cannot be overstated. […] new evidence of durability of effect across treatment regimen without increasing the risk for long-term complications creates an imperative for action. In combination with existing screening tools and a body of literature investigating novel interventions for hypoglycemia unawareness, these results make the approach of screening, recognition, and intervention very compelling as not only a best practice but something that should be incorporated in universal guidelines on diabetes care, particularly for individuals with type 1 diabetes […] Hyperglycemia is […] only part of the puzzle in diabetes management. Long-term complications are decreasing across the population with improved interventions and their implementation (24). […] it is essential to shift our historical obsession with hyperglycemia and its long-term complications to equally emphasize the disabling, distressing, and potentially fatal near-term complication of our treatments, namely severe hypoglycemia. […] The health care providers’ first dictum is primum non nocere — above all, do no harm. ADA must refocus our attention on severe hypoglycemia as an iatrogenic and preventable complication of our interventions.”

v. Anti‐vascular endothelial growth factor combined with intravitreal steroids for diabetic macular oedema.

“Background

The combination of steroid and anti‐vascular endothelial growth factor (VEGF) intravitreal therapeutic agents could potentially have synergistic effects for treating diabetic macular oedema (DMO). On the one hand, if combined treatment is more effective than monotherapy, there would be significant implications for improving patient outcomes. Conversely, if there is no added benefit of combination therapy, then people could be potentially exposed to unnecessary local or systemic side effects.

Objectives

To assess the effects of intravitreal agents that block vascular endothelial growth factor activity (anti‐VEGF agents) plus intravitreal steroids versus monotherapy with macular laser, intravitreal steroids or intravitreal anti‐VEGF agents for managing DMO.”

“There were eight RCTs (703 participants, 817 eyes) that met our inclusion criteria with only three studies reporting outcomes at one year. The studies took place in Iran (3), USA (2), Brazil (1), Czech Republic (1) and South Korea (1). […] When comparing anti‐VEGF/steroid with anti‐VEGF monotherapy as primary therapy for DMO, we found no meaningful clinical difference in change in BCVA [best corrected visual acuity] […] or change in CMT [central macular thickness] […] at one year. […] There was very low‐certainty evidence on intraocular inflammation from 8 studies, with one event in the anti‐VEGF/steroid group (313 eyes) and two events in the anti‐VEGF group (322 eyes). There was a greater risk of raised IOP (Peto odds ratio (OR) 8.13, 95% CI 4.67 to 14.16; 635 eyes; 8 RCTs; moderate‐certainty evidence) and development of cataract (Peto OR 7.49, 95% CI 2.87 to 19.60; 635 eyes; 8 RCTs; moderate‐certainty evidence) in eyes receiving anti‐VEGF/steroid compared with anti‐VEGF monotherapy. There was low‐certainty evidence from one study of an increased risk of systemic adverse events in the anti‐VEGF/steroid group compared with the anti‐VEGF alone group (Peto OR 1.32, 95% CI 0.61 to 2.86; 103 eyes).”

“One study compared anti‐VEGF/steroid versus macular laser therapy. At one year investigators did not report a meaningful difference between the groups in change in BCVA […] or change in CMT […]. There was very low‐certainty evidence suggesting an increased risk of cataract in the anti‐VEGF/steroid group compared with the macular laser group (Peto OR 4.58, 95% CI 0.99 to 21.10; 100 eyes) and an increased risk of elevated IOP in the anti‐VEGF/steroid group compared with the macular laser group (Peto OR 9.49, 95% CI 2.86 to 31.51; 100 eyes).”
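For reference, the Peto odds ratios quoted throughout are one-step estimates of the form exp[(O − E)/V], which behave well for the sparse adverse-event counts typical of these trials. A small sketch of the single-table calculation (the counts below are made up and are not taken from the review):

```python
import math

def peto_odds_ratio(events_trt, n_trt, events_ctl, n_ctl):
    """One-step Peto odds ratio for a single 2x2 table.

    ln(OR) = (O - E) / V and SE(ln OR) = 1 / sqrt(V), where O is the observed
    event count in the treatment arm, E its expectation under the null, and V
    the hypergeometric variance. Pooling across studies sums (O - E) and V.
    """
    n = n_trt + n_ctl
    events = events_trt + events_ctl
    non_events = n - events
    O = events_trt
    E = n_trt * events / n
    V = (n_trt * n_ctl * events * non_events) / (n ** 2 * (n - 1))
    log_or = (O - E) / V
    se = 1 / math.sqrt(V)
    ci = (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))
    return math.exp(log_or), ci

# Made-up counts purely to show the calculation (eyes with raised IOP):
or_hat, (lo, hi) = peto_odds_ratio(events_trt=30, n_trt=313, events_ctl=5, n_ctl=322)
print(f"Peto OR {or_hat:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```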

“Authors’ conclusions

Combination of intravitreal anti‐VEGF plus intravitreal steroids does not appear to offer additional visual benefit compared with monotherapy for DMO; at present the evidence for this is of low‐certainty. There was an increased rate of cataract development and raised intraocular pressure in eyes treated with anti‐VEGF plus steroid versus anti‐VEGF alone. Patients were exposed to potential side effects of both these agents without reported additional benefit.”

vi. Association between diabetic foot ulcer and diabetic retinopathy.

“More than 25 million people in the United States are estimated to have diabetes mellitus (DM), and 15–25% will develop a diabetic foot ulcer (DFU) during their lifetime [1]. DFU is one of the most serious and disabling complications of DM, resulting in significantly elevated morbidity and mortality. Vascular insufficiency and associated neuropathy are important predisposing factors for DFU, and DFU is the most common cause of non-traumatic foot amputation worldwide. Up to 70% of all lower leg amputations are performed on patients with DM, and up to 85% of all amputations are preceded by a DFU [2, 3]. Every year, approximately 2–3% of all diabetic patients develop a foot ulcer, and many require prolonged hospitalization for the treatment of ensuing complications such as infection and gangrene [4, 5].

Meanwhile, a number of studies have noted that diabetic retinopathy (DR) is associated with diabetic neuropathy and microvascular complications [6–10]. Despite the magnitude of the impact of DFUs and their consequences, little research has been performed to investigate the characteristics of patients with a DFU and DR. […] the aim of this study was to investigate the prevalence of DR in patients with a DFU and to elucidate the potential association between DR and DFUs.”

“A retrospective review was conducted on DFU patients who underwent ophthalmic and vascular examinations within 6 months; 100 type 2 diabetic patients with DFU were included. The medical records of 2496 type 2 diabetic patients without DFU served as control data. DR prevalence and severity were assessed in DFU patients. DFU patients were compared with the control group regarding each clinical variable. Additionally, DFU patients were divided into two groups according to DR severity and compared. […] Out of 100 DFU patients, 90 patients (90%) had DR and 55 (55%) had proliferative DR (PDR). There was no significant association between DR and DFU severities (R = 0.034, p = 0.734). A multivariable analysis comparing type 2 diabetic patients with and without DFUs showed that the presence of DR [OR, 226.12; 95% confidence interval (CI), 58.07–880.49; p < 0.001] and proliferative DR [OR, 306.27; 95% CI, 64.35–1457.80; p < 0.001), higher HbA1c (%, OR, 1.97, 95% CI, 1.46–2.67; p < 0.001), higher serum creatinine (mg/dL, OR, 1.62, 95% CI, 1.06–2.50; p = 0.027), older age (years, OR, 1.12; 95% CI, 1.06–1.17; p < 0.001), higher pulse pressure (mmHg, OR, 1.03; 95% CI, 1.00–1.06; p = 0.025), lower cholesterol (mg/dL, OR, 0.94; 95% CI, 0.92–0.97; p < 0.001), lower BMI (kg/m2, OR, 0.87, 95% CI, 0.75–1.00; p = 0.044) and lower hematocrit (%, OR, 0.80, 95% CI, 0.74–0.87; p < 0.001) were associated with DFUs. In a subgroup analysis of DFU patients, the PDR group had a longer duration of diabetes mellitus, higher serum BUN, and higher serum creatinine than the non-PDR group. In the multivariable analysis, only higher serum creatinine was associated with PDR in DFU patients (OR, 1.37; 95% CI, 1.05–1.78; p = 0.021).

Conclusions

Diabetic retinopathy is prevalent in patients with DFU and about half of DFU patients had PDR. No significant association was found in terms of the severity of these two diabetic complications. To prevent blindness, patients with DFU, and especially those with high serum creatinine, should undergo retinal examinations for timely PDR diagnosis and management.”
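The adjusted odds ratios in the multivariable analysis above are simply the exponentiated coefficients of a logistic regression model. A generic sketch of how such ORs and confidence intervals are produced (synthetic data and hypothetical variable names, not the study's dataset):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic example only: model presence of a foot ulcer (dfu, 0/1) on
# retinopathy status and a couple of covariates, then report odds ratios.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "dr": rng.integers(0, 2, n),          # diabetic retinopathy yes/no
    "hba1c": rng.normal(8, 1.5, n),
    "age": rng.normal(60, 10, n),
})
logit_p = -9 + 2.0 * df["dr"] + 0.5 * df["hba1c"] + 0.04 * df["age"]
df["dfu"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("dfu ~ dr + hba1c + age", data=df).fit(disp=0)
odds_ratios = pd.DataFrame({
    "OR": np.exp(fit.params),
    "2.5%": np.exp(fit.conf_int()[0]),
    "97.5%": np.exp(fit.conf_int()[1]),
})
print(odds_ratios)
```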

August 29, 2018 Posted by | Diabetes, Epidemiology, Genetics, Medicine, Molecular biology, Nephrology, Ophthalmology, Statistics, Studies | Leave a comment

Developmental Biology (II)

Below I have included some quotes from the middle chapters of the book and some links related to the topic coverage. As I already pointed out earlier, this is an excellent book on these topics.

“Germ cells have three key functions: the preservation of the genetic integrity of the germline; the generation of genetic diversity; and the transmission of genetic information to the next generation. In all but the simplest animals, the cells of the germline are the only cells that can give rise to a new organism. So, unlike body cells, which eventually all die, germ cells in a sense outlive the bodies that produced them. They are, therefore, very special cells […] In order that the number of chromosomes is kept constant from generation to generation, germ cells are produced by a specialized type of cell division, called meiosis, which halves the chromosome number. Unless this reduction by meiosis occurred, the number of chromosomes would double each time the egg was fertilized. Germ cells thus contain a single copy of each chromosome and are called haploid, whereas germ-cell precursor cells and the other somatic cells of the body contain two copies and are called diploid. The halving of chromosome number at meiosis means that when egg and sperm come together at fertilization, the diploid number of chromosomes is restored. […] An important property of germ cells is that they remain pluripotent—able to give rise to all the different types of cells in the body. Nevertheless, eggs and sperm in mammals have certain genes differentially switched off during germ-cell development by a process known as genomic imprinting […] Certain genes in eggs and sperm are imprinted, so that the activity of the same gene is different depending on whether it is of maternal or paternal origin. Improper imprinting can lead to developmental abnormalities in humans. At least 80 imprinted genes have been identified in mammals, and some are involved in growth control. […] A number of developmental disorders in humans are associated with imprinted genes. Infants with Prader-Willi syndrome fail to thrive and later can become extremely obese; they also show mental retardation and mental disturbances […] Angelman syndrome results in severe motor and mental retardation. Beckwith-Wiedemann syndrome is due to a generalized disruption of imprinting on a region of chromosome 11 and leads to excessive foetal overgrowth and an increased predisposition to cancer.”

“Sperm are motile cells, typically designed for activating the egg and delivering their nucleus into the egg cytoplasm. They essentially consist of a nucleus, mitochondria to provide an energy source, and a flagellum for movement. The sperm contributes virtually nothing to the organism other than its chromosomes. In mammals, sperm mitochondria are destroyed following fertilization, and so all mitochondria in the animal are of maternal origin. […] Different organisms have different ways of ensuring fertilization by only one sperm. […] Early development is similar in both male and female mammalian embryos, with sexual differences only appearing at later stages. The development of the individual as either male or female is genetically fixed at fertilization by the chromosomal content of the egg and sperm that fuse to form the fertilized egg. […] Each sperm carries either an X or Y chromosome, while the egg has an X. The genetic sex of a mammal is thus established at the moment of conception, when the sperm introduces either an X or a Y chromosome into the egg. […] In the absence of a Y chromosome, the default development of tissues is along the female pathway. […] Unlike animals, plants do not set aside germ cells in the embryo and germ cells are only specified when a flower develops. Any meristem cell can, in principle, give rise to a germ cell of either sex, and there are no sex chromosomes. The great majority of flowering plants give rise to flowers that contain both male and female sexual organs, in which meiosis occurs. The male sexual organs are the stamens; these produce pollen, which contains the male gamete nuclei corresponding to the sperm of animals. At the centre of the flower are the female sex organs, which consist of an ovary of two carpels, which contain the ovules. Each ovule contains an egg cell.”

“The character of specialized cells such as nerve, muscle, or skin is the result of a particular pattern of gene activity that determines which proteins are synthesized. There are more than 200 clearly recognizable differentiated cell types in mammals. How these particular patterns of gene activity develop is a central question in cell differentiation. Gene expression is under a complex set of controls that include the actions of transcription factors, and chemical modification of DNA. External signals play a key role in differentiation by triggering intracellular signalling pathways that affect gene expression. […] the central feature of cell differentiation is a change in gene expression, which brings about a change in the proteins in the cells. The genes expressed in a differentiated cell include not only those for a wide range of ‘housekeeping’ proteins, such as the enzymes involved in energy metabolism, but also genes encoding cell-specific proteins that characterize a fully differentiated cell: hemoglobin in red blood cells, keratin in skin epidermal cells, and muscle-specific actin and myosin protein filaments in muscle. […] several thousand different genes are active in any given cell in the embryo at any one time, though only a small number of these may be involved in specifying cell fate or differentiation. […] Cell differentiation is known to be controlled by a wide range of external signals but it is important to remember that, while these external signals are often referred to as being ‘instructive’, they are ‘selective’, in the sense that the number of developmental options open to a cell at any given time is limited. These options are set by the cell’s internal state which, in turn, reflects its developmental history. External signals cannot, for example, convert an endodermal cell into a muscle or nerve cell. Most of the molecules that act as developmentally important signals between cells during development are proteins or peptides, and their effect is usually to induce a change in gene expression. […] The same external signals can be used again and again with different effects because the cells’ histories are different. […] At least 1,000 different transcription factors are encoded in the genomes of the fly and the nematode, and as many as 3,000 in the human genome. On average, around five different transcription factors act together at a control region […] In general, it can be assumed that activation of each gene involves a unique combination of transcription factors.”

“Stem cells involve some special features in relation to differentiation. A single stem cell can divide to produce two daughter cells, one of which remains a stem cell while the other gives rise to a lineage of differentiating cells. This occurs in our skin and gut all the time and also in the production of blood cells. It also occurs in the embryo. […] Embryonic stem (ES) cells from the inner cell mass of the early mammalian embryo when the primitive streak forms, can, in culture, differentiate into a wide variety of cell types, and have potential uses in regenerative medicine. […] it is now possible to make adult body cells into stem cells, which has important implications for regenerative medicine. […] The goal of regenerative medicine is to restore the structure and function of damaged or diseased tissues. As stem cells can proliferate and differentiate into a wide range of cell types, they are strong candidates for use in cell-replacement therapy, the restoration of tissue function by the introduction of new healthy cells. […] The generation of insulin-producing pancreatic β cells from ES cells to replace those destroyed in type 1 diabetes is a prime medical target. Treatments that direct the differentiation of ES cells towards making endoderm derivatives such as pancreatic cells have been particularly difficult to find. […] The neurodegenerative Parkinson disease is another medical target. […] To generate […] stem cells of the patient’s own tissue type would be a great advantage, and the recent development of induced pluripotent stem cells (iPS cells) offers […] exciting new opportunities. […] There is [however] risk of tumour induction in patients undergoing cell-replacement therapy with ES cells or iPS cells; undifferentiated pluripotent cells introduced into the patient could cause tumours. Only stringent selection procedures that ensure no undifferentiated cells are present in the transplanted cell population will overcome this problem. And it is not yet clear how stable differentiated ES cells and iPS cells will be in the long term.”

“In general, the success rate of cloning by body-cell nuclear transfer in mammals is low, and the reasons for this are not yet well understood. […] Most cloned mammals derived from nuclear transplantation are usually abnormal in some way. The cause of failure is incomplete reprogramming of the donor nucleus to remove all the earlier modifications. A related cause of abnormality may be that the reprogrammed genes have not gone through the normal imprinting process that occurs during germ-cell development, where different genes are silenced in the male and female parents. The abnormalities in adults that do develop from cloned embryos include early death, limb deformities and hypertension in cattle, and immune impairment in mice. All these defects are thought to be due to abnormalities of gene expression that arise from the cloning process. Studies have shown that some 5% of the genes in cloned mice are not correctly expressed and that almost half of the imprinted genes are incorrectly expressed.”

“Organ development involves large numbers of genes and, because of this complexity, general principles can be quite difficult to distinguish. Nevertheless, many of the mechanisms used in organogenesis are similar to those of earlier development, and certain signals are used again and again. Pattern formation in development in a variety of organs can be specified by position information, which is specified by a gradient in some property. […] Not surprisingly, the vascular system, including blood vessels and blood cells, is among the first organ systems to develop in vertebrate embryos, so that oxygen and nutrients can be delivered to the rapidly developing tissues. The defining cell type of the vascular system is the endothelial cell, which forms the lining of the entire circulatory system, including the heart, veins, and arteries. Blood vessels are formed by endothelial cells and these vessels are then covered by connective tissue and smooth muscle cells. Arteries and veins are defined by the direction of blood flow as well as by structural and functional differences; the cells are specified as arterial or venous before they form blood vessels but they can switch identity. […] Differentiation of the vascular cells requires the growth factor VEGF (vascular endothelial growth factor) and its receptors, and VEGF stimulates their proliferation. Expression of the Vegf gene is induced by lack of oxygen and thus an active organ using up oxygen promotes its own vascularization. New blood capillaries are formed by sprouting from pre-existing blood vessels and proliferation of cells at the tip of the sprout. […] During their development, blood vessels navigate along specific paths towards their targets […]. Many solid tumours produce VEGF and other growth factors that stimulate vascular development and so promote the tumour’s growth, and blocking new vessel formation is thus a means of reducing tumour growth. […] In humans, about 1 in 100 live-born infants has some congenital heart malformation, while in utero, heart malformation leading to death of the embryo occurs in between 5 and 10% of conceptions.”

“Separation of the digits […] is due to the programmed cell death of the cells between these digits’ cartilaginous elements. The webbed feet of ducks and other waterfowl are simply the result of less cell death between the digits. […] the death of cells between the digits is essential for separating the digits. The development of the vertebrate nervous system also involves the death of large numbers of neurons.”

Links:

Budding.
Gonad.
Down Syndrome.
Fertilization. In vitro fertilisation. Preimplantation genetic diagnosis.
SRY gene.
X-inactivation. Dosage compensation.
Cellular differentiation.
MyoD.
Signal transduction. Enhancer (genetics).
Epigenetics.
Hematopoiesis. Hematopoietic stem cell transplantation. Hemoglobin. Sickle cell anemia.
Skin. Dermis. Fibroblast. Epidermis.
Skeletal muscle. Myogenesis. Myoblast.
Cloning. Dolly.
Organogenesis.
Limb development. Limb bud. Progress zone model. Apical ectodermal ridge. Polarizing region/Zone of polarizing activity. Sonic hedgehog.
Imaginal disc. Pax6. Aniridia. Neural tube.
Branching morphogenesis.
Pistil.
ABC model of flower development.

July 16, 2018 Posted by | Biology, Books, Botany, Cancer/oncology, Diabetes, Genetics, Medicine, Molecular biology, Ophthalmology | Leave a comment

A few diabetes papers of interest

i. Clinical Inertia in Type 2 Diabetes Management: Evidence From a Large, Real-World Data Set.

Despite clinical practice guidelines that recommend frequent monitoring of HbA1c (every 3 months) and aggressive escalation of antihyperglycemic therapies until glycemic targets are reached (1,2), the intensification of therapy in patients with uncontrolled type 2 diabetes (T2D) is often inappropriately delayed. The failure of clinicians to intensify therapy when clinically indicated has been termed “clinical inertia.” A recently published systematic review found that the median time to treatment intensification after an HbA1c measurement above target was longer than 1 year (range 0.3 to >7.2 years) (3). We have previously reported a rather high rate of clinical inertia in patients uncontrolled on metformin monotherapy (4). Treatment was not intensified early (within 6 months of metformin monotherapy failure) in 38%, 31%, and 28% of patients when poor glycemic control was defined as an HbA1c >7% (>53 mmol/mol), >7.5% (>58 mmol/mol), and >8% (>64 mmol/mol), respectively.

Using the electronic health record system at Cleveland Clinic (2005–2016), we identified a cohort of 7,389 patients with T2D who had an HbA1c value ≥7% (≥53 mmol/mol) (“index HbA1c”) despite having been on a stable regimen of two oral antihyperglycemic drugs (OADs) for at least 6 months prior to the index HbA1c. This HbA1c threshold would generally be expected to trigger treatment intensification based on current guidelines. Patient records were reviewed for the 6-month period following the index HbA1c, and changes in diabetes therapy were evaluated for evidence of “intensification” […] almost two-thirds of patients had no evidence of intensification in their antihyperglycemic therapy during the 6 months following the index HbA1c ≥7% (≥53 mmol/mol), suggestive of poor glycemic control. Most alarming was the finding that even among patients in the highest index HbA1c category (≥9% [≥75 mmol/mol]), therapy was not intensified in 44% of patients, and slightly more than half (53%) of those with an HbA1c between 8 and 8.9% (64 and 74 mmol/mol) did not have their therapy intensified.”
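The record review described here boils down to checking, for each patient, whether any regimen change falls inside the 6-month window after the index HbA1c. A toy sketch of that logic (entirely hypothetical table layout and values, not the Cleveland Clinic data model):

```python
import pandas as pd

# Hypothetical extracts: one index HbA1c row per patient, plus a table of
# medication-change events. Real EHR data would of course be far messier.
index_hba1c = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "index_date": pd.to_datetime(["2015-01-10", "2015-03-02", "2015-06-20"]),
    "index_hba1c": [7.4, 9.2, 8.1],
})
med_changes = pd.DataFrame({
    "patient_id": [1, 3],
    "change_date": pd.to_datetime(["2015-03-01", "2016-02-01"]),
    "change_type": ["added third OAD", "started insulin"],
})

merged = index_hba1c.merge(med_changes, on="patient_id", how="left")
window = pd.Timedelta(days=183)   # roughly 6 months
merged["intensified_6mo"] = (
    (merged["change_date"] >= merged["index_date"])
    & (merged["change_date"] <= merged["index_date"] + window)
)
# Collapse back to one row per patient: any intensification within the window?
flags = merged.groupby("patient_id")["intensified_6mo"].any()
print(flags)             # patient 3's change falls outside the 6-month window
print(1 - flags.mean())  # crude proportion with no timely intensification
```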

“Unfortunately, these real-world findings confirm a high prevalence of clinical inertia with regard to T2D management. The unavoidable conclusion from these data […] is that physicians are not responding quickly enough to evidence of poor glycemic control in a high percentage of patients, even in those with HbA1c levels far exceeding typical treatment targets.”

ii. Gestational Diabetes Mellitus and Diet: A Systematic Review and Meta-analysis of Randomized Controlled Trials Examining the Impact of Modified Dietary Interventions on Maternal Glucose Control and Neonatal Birth Weight.

“Medical nutrition therapy is a mainstay of gestational diabetes mellitus (GDM) treatment. However, data are limited regarding the optimal diet for achieving euglycemia and improved perinatal outcomes. This study aims to investigate whether modified dietary interventions are associated with improved glycemia and/or improved birth weight outcomes in women with GDM when compared with control dietary interventions. […]

From 2,269 records screened, 18 randomized controlled trials involving 1,151 women were included. Pooled analysis demonstrated that for modified dietary interventions when compared with control subjects, there was a larger decrease in fasting and postprandial glucose (−4.07 mg/dL [95% CI −7.58, −0.57]; P = 0.02 and −7.78 mg/dL [95% CI −12.27, −3.29]; P = 0.0007, respectively) and a lower need for medication treatment (relative risk 0.65 [95% CI 0.47, 0.88]; P = 0.006). For neonatal outcomes, analysis of 16 randomized controlled trials including 841 participants showed that modified dietary interventions were associated with lower infant birth weight (−170.62 g [95% CI −333.64, −7.60]; P = 0.04) and less macrosomia (relative risk 0.49 [95% CI 0.27, 0.88]; P = 0.02). The quality of evidence for these outcomes was low to very low. Baseline differences between groups in postprandial glucose may have influenced glucose-related outcomes. […] we were unable to resolve queries regarding potential concerns for sources of bias because of lack of author response to our queries. We have addressed this by excluding these studies in the sensitivity analysis. […] after removal of the studies with the most substantial methodological concerns in the sensitivity analysis, differences in the change in fasting plasma glucose were no longer significant. Although differences in the change in postprandial glucose and birth weight persisted, they were attenuated.”
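The pooled mean differences quoted above come from a random-effects meta-analysis; the DerSimonian–Laird estimator is the standard choice for this kind of pooling, though I am assuming rather than asserting that it is exactly what the authors' software used. A compact sketch with made-up study-level numbers:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method.

    effects   : per-study mean differences
    variances : per-study variances of those mean differences
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q
    dof = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - dof) / c)                # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# Made-up mean differences in fasting glucose (mg/dL) from three small trials:
pooled, ci, tau2 = dersimonian_laird(effects=[-5.0, -2.5, -6.0],
                                     variances=[4.0, 6.2, 3.5])
print(f"Pooled MD {pooled:.2f} mg/dL (95% CI {ci[0]:.2f}, {ci[1]:.2f}); tau^2 = {tau2:.2f}")
```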

“This review highlights limitations of the current literature examining dietary interventions in GDM. Most studies are too small to demonstrate significant differences in our primary outcomes. Seven studies had fewer than 50 participants and only two had more than 100 participants (n = 125 and 150). The short duration of many dietary interventions and the late gestational age at which they were started (38) may also have limited their impact on glycemic and birth weight outcomes. Furthermore, we cannot conclude if the improvements in maternal glycemia and infant birth weight are due to reduced energy intake, improved nutrient quality, or specific changes in types of carbohydrate and/or protein. […] These data suggest that dietary interventions modified above and beyond usual dietary advice for GDM have the potential to offer better maternal glycemic control and infant birth weight outcomes. However, the quality of evidence was judged as low to very low due to the limitations in the design of included studies, the inconsistency between their results, and the imprecision in their effect estimates.”

iii. Lifetime Prevalence and Prognosis of Prediabetes Without Progression to Diabetes.

“Impaired fasting glucose, also termed prediabetes, is increasingly prevalent and is associated with adverse cardiovascular risk (1). The cardiovascular risks attributed to prediabetes may be driven primarily by the conversion from prediabetes to overt diabetes (2). Given limited data on outcomes among nonconverters in the community, the extent to which some individuals with prediabetes never go on to develop diabetes and yet still experience adverse cardiovascular risk remains unclear. We therefore investigated the frequency of cardiovascular versus noncardiovascular deaths in people who developed early- and late-onset prediabetes without ever progressing to diabetes.”

“We used data from the Framingham Heart Study collected on the Offspring Cohort participants aged 18–77 years at the time of initial fasting plasma glucose (FPG) assessment (1983–1987) who had serial FPG testing over subsequent examinations with continuous surveillance for outcomes including cause-specific mortality (3). As applied in prior epidemiological investigations (4), we used a case-control design focusing on the cause-specific outcome of cardiovascular death to minimize the competing risk issues that would be encountered in time-to-event analyses. To focus on outcomes associated with a given chronic glycemic state maintained over the entire lifetime, we restricted our analyses to only those participants for whom data were available over the life course and until death. […] We excluded individuals with unknown age of onset of glycemic impairment (i.e., age ≥50 years with prediabetes or diabetes at enrollment). […] We analyzed cause-specific mortality, allowing for relating time-varying exposures with lifetime risk for an event (4). We related glycemic phenotypes to cardiovascular versus noncardiovascular cause of death using a case-control design, where cases were defined as individuals who died of cardiovascular disease (death from stroke, heart failure, or other vascular event) or coronary heart disease (CHD) and controls were those who died of other causes.”
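
The case-control comparison described here boils down to asking whether the odds of a given glycemic phenotype differ between cases (cardiovascular deaths) and controls (non-cardiovascular deaths). A minimal sketch of that core calculation with made-up counts – the study's actual analysis was of course more involved than a single 2×2 table:

```python
import math

# Hypothetical 2x2 table (NOT the Framingham data):
#                              CVD death (cases)   non-CVD death (controls)
# early-onset prediabetes             a = 40               b = 60
# lifelong normal glucose             c = 30               d = 90
a, b, c, d = 40, 60, 30, 90

odds_ratio = (a * d) / (b * c)
# Woolf's approximation for a 95% CI on the log odds ratio.
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"Odds ratio = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```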

“The mean age of participants at enrollment was 42 ± 7 years (43% women). The mean age at death was 73 ± 10 years. […] In our study, approximately half of the individuals presented with glycemic impairment in their lifetime, of whom two-thirds developed prediabetes but never diabetes. In our study, these individuals had lower cardiovascular-related mortality compared with those who later developed diabetes, even if the prediabetes onset was early in life. However, individuals with early-onset prediabetes, despite lifelong avoidance of overt diabetes, had greater propensity for death due to cardiovascular or coronary versus noncardiovascular disease compared with those who maintained lifelong normal glucose status. […] Prediabetes is a heterogeneous entity. Whereas some forms of prediabetes are precursors to diabetes, other types of prediabetes never progress to diabetes but still confer increased propensity for death from a cardiovascular cause.”

iv. Learning From Past Failures of Oral Insulin Trials.

“Very recently one of the largest type 1 diabetes prevention trials using daily administration of oral insulin or placebo was completed. After 9 years of study enrollment and follow-up, the randomized controlled trial failed to delay the onset of clinical type 1 diabetes, which was the primary end point. The unfortunate outcome follows the previous large-scale trial, the Diabetes Prevention Trial–Type 1 (DPT-1), which again failed to delay diabetes onset with oral insulin or low-dose subcutaneous insulin injections in a randomized controlled trial with relatives at risk for type 1 diabetes. These sobering results raise the important question, “Where does the type 1 diabetes prevention field move next?” In this Perspective, we advocate for a paradigm shift in which smaller mechanistic trials are conducted to define immune mechanisms and potentially identify treatment responders. […] Mechanistic trials will allow for better trial design and patient selection based upon molecular markers prior to large randomized controlled trials, moving toward a personalized medicine approach for the prevention of type 1 diabetes.”

“Before a disease can be prevented, it must be predicted. The ability to assess risk for developing type 1 diabetes (T1D) has been well documented over the last two decades (1). Using genetic markers, human leukocyte antigen (HLA) DQ and DR typing (2), islet autoantibodies (1), and assessments of glucose tolerance (intravenous or oral glucose tolerance tests) has led to accurate prediction models for T1D development (3). Prospective birth cohort studies Diabetes Autoimmunity Study in the Young (DAISY) in Colorado (4), Type 1 Diabetes Prediction and Prevention (DIPP) study in Finland (5), and BABYDIAB studies in Germany have followed genetically at-risk children for the development of islet autoimmunity and T1D disease onset (6). These studies have been instrumental in understanding the natural history of T1D and making T1D a predictable disease with the measurement of antibodies in the peripheral blood directed against insulin and proteins within β-cells […]. Having two or more islet autoantibodies confers an ∼85% risk of developing T1D within 15 years and nearly 100% over time (7). […] T1D can be predicted by measuring islet autoantibodies, and thousands of individuals including young children are being identified through screening efforts, necessitating the need for treatments to delay and prevent disease onset.”

“Antigen-specific immunotherapies hold the promise of potentially inducing tolerance by inhibiting effector T cells and inducing regulatory T cells, which can act locally at tissue-specific sites of inflammation (12). Additionally, side effects are minimal with these therapies. As such, insulin and GAD have both been used as antigen-based approaches in T1D (13). Oral insulin has been evaluated in two large randomized double-blinded placebo-controlled trials over the last two decades. First in the Diabetes Prevention Trial–Type 1 (DPT-1) and then in the TrialNet clinical trials network […] The DPT-1 enrolled relatives at increased risk for T1D having islet autoantibodies […] After 6 years of treatment, there was no delay in T1D onset. […] The TrialNet study screened, enrolled, and followed 560 at-risk relatives over 9 years from 2007 to 2016, and results have been recently published (16). Unfortunately, this trial failed to meet the primary end point of delaying or preventing diabetes onset.”

“Many factors influence the potency and efficacy of antigen-specific therapy such as dose, frequency of dosing, route of administration, and, importantly, timing in the disease process. […] Over the last two decades, most T1D clinical trial designs have randomized participants 1:1 or 2:1, drug to placebo, in a double-blind two-arm design, especially those intervention trials in new-onset T1D (18). Primary end points have been delay in T1D onset for prevention trials or stimulated C-peptide area under the curve at 12 months with new-onset trials. These designs have served the field well and provided reliable human data for efficacy. However, there are limitations including the speed at which these trials can be completed, the number of interventions evaluated, dose optimization, and evaluation of mechanistic hypotheses. Alternative clinical trial designs, such as adaptive trial designs using Bayesian statistics, can overcome some of these issues. Adaptive designs use accumulating data from the trial to modify certain aspects of the study, such as enrollment and treatment group assignments. This “learn as we go” approach relies on biomarkers to drive decisions on planned trial modifications. […] One of the significant limitations for adaptive trial designs in the T1D field, at the present time, is the lack of validated biomarkers for short-term readouts to inform trial adaptations. However, large-scale collaborative efforts are ongoing to define biomarkers of T1D-specific immune dysfunction and β-cell stress and death (9,22).”
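
To make the "learn as we go" idea concrete, here is a toy Beta-Binomial sketch of an interim look at a hypothetical short-term binary biomarker readout. Everything in it – arms, counts, priors, the decision rule hinted at in the closing comment – is invented for illustration and has nothing to do with the actual oral insulin trials:

```python
import random

# Interim data on a hypothetical binary biomarker response (all numbers invented).
responders = {"treatment": 14, "placebo": 7}
enrolled   = {"treatment": 30, "placebo": 30}

def posterior(arm):
    # Beta(1, 1) prior + Binomial likelihood -> Beta(1 + successes, 1 + failures).
    return 1 + responders[arm], 1 + enrolled[arm] - responders[arm]

# Monte Carlo estimate of Pr(treatment response rate > placebo response rate).
a_t, b_t = posterior("treatment")
a_p, b_p = posterior("placebo")
wins = sum(random.betavariate(a_t, b_t) > random.betavariate(a_p, b_p)
           for _ in range(100_000))
print(f"Pr(treatment > placebo) ≈ {wins / 100_000:.2f}")

# An adaptive design could use a quantity like this at an interim analysis to,
# say, shift the randomization ratio toward the better-performing arm or drop
# an arm that is very unlikely to be superior.
```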

“T1D prevention has proven much more difficult than originally thought, challenging the paradigm that T1D is a single disease. T1D is indeed a heterogeneous disease in terms of age of diagnosis, islet autoantibody profiles, and the rate of loss of residual β-cell function after clinical onset. Children have a much more rapid loss of residual insulin production (measured as C-peptide area under the curve following a mixed-meal tolerance test) after diagnosis than older adolescents and adults (23,24), indicating that childhood and adult-onset T1D are not identical. Further evidence for subtypes of T1D come from studies of human pancreata of T1D organ donors in which children (0–14 years of age) within 1 year of diagnosis had many more inflamed islets compared with older adolescents and adults aged 15–39 years old (25). Additionally, a younger age of T1D onset (<7 years) has been associated with higher numbers of CD20+ B cells within islets and fewer insulin-containing islets compared with an age of onset ≥13 years associated with fewer CD20+ islet infiltrating cells and more insulin-containing islets (26,27). This suggests a much more aggressive autoimmune process in younger children and distinct endotypes (a subtype of a condition defined by a distinct pathophysiologic mechanism), which has recently been proposed for T1D (27).”

“Safe and specific therapies capable of being used in children are needed for T1D prevention. The vast majority of drug development involves small biotechnology companies, specialty pharmaceutical firms, and large pharmaceutical companies, more so than traditional academia. A large amount of preclinical and clinical research (phase 1, 2, and 3 studies) are needed to advance a drug candidate through the development pipeline to achieve U.S. Food and Drug Administration (FDA) approval for a given disease. A recent analysis of over 4,000 drugs from 835 companies in development during 2003–2011 revealed that only 10.4% of drugs that enter clinical development at phase 1 (safety studies) advance to FDA approval (32). However, the success rate increases 50% for the lead indication of a drug, i.e., a drug specifically developed for one given disease (32). Reasons for this include strong scientific rationale and early efficacy signals such as correlating pharmacokinetic (drug levels) to pharmacodynamic (drug target effects) tests for the lead indication. Lead indications also tend to have smaller, better-defined “homogenous” patient populations than nonlead indications for the same drug. This would imply that the T1D field needs more companies developing drugs specifically for T1D, not type 2 diabetes or other autoimmune diseases with later testing to broaden a drug’s indication. […] In a similar but separate analysis, selection biomarkers were found to substantially increase the success rate of drug approvals across all phases of drug development. Using a selection biomarker as part of study inclusion criteria increased drug approval threefold from 8.4% to 25.9% when used in phase 1 trials, 28% to 46% when transitioning from a phase 2 to phase 3 efficacy trial, and 55% to 76% for a phase 3 trial to likelihood of approval (33). These striking data support the concept that enrichment of patient enrollment at the molecular level is a more successful strategy than heterogeneous enrollment in clinical intervention trials. […] Taken together, new drugs designed specifically for children at risk for T1D and a biomarker selecting patients for a treatment response may increase the likelihood for a successful prevention trial; however, experimental confirmation in clinical trials is needed.”
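
The 10.4% figure is a compound probability – a drug has to survive each phase transition in turn – and it helps to see the arithmetic spelled out. The per-phase probabilities below are my own illustrative guesses, not numbers from the cited analyses:

```python
# Illustrative only: how per-phase transition probabilities compound into an
# overall likelihood of approval.
phases = {
    "phase 1 -> phase 2": 0.60,
    "phase 2 -> phase 3": 0.35,
    "phase 3 -> approval": 0.55,
}

overall = 1.0
for step, p in phases.items():
    overall *= p
    print(f"{step}: {p:.0%}")
print(f"phase 1 -> approval overall: {overall:.1%}")

# With these made-up numbers the overall figure comes out near the ~10% reported
# in the text; improving any single per-phase probability (e.g. via a selection
# biomarker) raises the compound probability multiplicatively.
```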

v. Metabolic Karma — The Atherogenic Legacy of Diabetes: The 2017 Edwin Bierman Award Lecture.

“Cardiovascular (CV) disease remains the major cause of mortality and is associated with significant morbidity in both type 1 and type 2 diabetes (14). Despite major improvements in the management of traditional risk factors, including hypertension, dyslipidemia, and glycemic control, prevention, retardation and reversal of atherosclerosis, as manifested clinically by myocardial infarction, stroke, and peripheral vascular disease, remain a major unmet need in the population with diabetes. For example, in the Steno-2 study and in its most recent report of the follow-up phase, at least a decade after cessation of the active treatment phase, there remained a high risk of death, primarily from CV disease despite aggressive control of the traditional risk factors, in this originally microalbuminuric population with type 2 diabetes (5,6). In a meta-analysis of major CV trials where aggressive glucose lowering was instituted […] the beneficial effect of intensive glycemic control on CV disease was modest, at best (7). […] recent trials with two sodium–glucose cotransporter 2 inhibitors, empagliflozin and canagliflozin (11,12), and two long-acting glucagon-like peptide 1 agonists, liraglutide and semaglutide (13,14), have reported CV benefits that have led in some of these trials to a decrease in CV and all-cause mortality. However, even with these recent positive CV outcomes, CV disease remains the major burden in the population with diabetes (15).”

“This unmet need of residual CV disease in the population with diabetes remains unexplained but may occur as a result of a range of nontraditional risk factors, including low-grade inflammation and enhanced thrombogenicity as a result of the diabetic milieu (16). Furthermore, a range of injurious pathways as a result of chronic hyperglycemia previously studied in vitro in endothelial cells (17) or in models of microvascular complications may also be relevant and are a focus of this review […] [One] major factor that is likely to promote atherosclerosis in the diabetes setting is increased oxidative stress. There is not only increased generation of ROS from diverse sources but also reduced antioxidant defense in diabetes (40). […] [findings that] vascular ROS accumulation is closely linked to atherosclerosis and vascular inflammation provide the impetus to consider specific antioxidant strategies as a novel therapeutic approach to decrease CV disease, particularly in the setting of diabetes.”

“One of the most important findings from numerous trials performed in subjects with type 1 and type 2 diabetes has been the identification that previous episodes of hyperglycemia can have a long-standing impact on the subsequent development of CV disease. This phenomenon known as “metabolic memory” or the “legacy effect” has been reported in numerous trials […] The underlying explanation at a molecular and/or cellular level for this phenomenon remains to be determined. Our group, as well as others, has postulated that epigenetic mechanisms may participate in conferring metabolic memory (5153). In in vitro studies initially performed in aortic endothelial cells, transient incubation of these cells in high glucose followed by subsequent return of these cells to a normoglycemic environment was associated with increased gene expression of the p65 subunit of NF-κB, NF-κB activation, and expression of NF-κB–dependent proteins, including MCP-1 and VCAM-1 (54).

In further defining a potential epigenetic mechanism that could explain the glucose-induced upregulation of genes implicated in vascular inflammation, a specific histone methylation mark was identified in the promoter region of the p65 gene (54). This histone 3 lysine 4 monomethylation (H3K4m1) occurred as a result of mobilization of the histone methyl transferase, Set7. Furthermore, knockdown of Set7 attenuated glucose-induced p65 upregulation and prevented the persistent upregulation of this gene despite these endothelial cells returning to a normoglycemic milieu (55). These findings, confirmed in animal models exposed to transient hyperglycemia (54), provide the rationale to consider Set7 as an appropriate target for end-organ protective therapies in diabetes. Although specific Set7 inhibitors are currently unavailable for clinical development, the current interest in drugs that block various enzymes, such as Set7, that influence histone methylation in diseases, such as cancer (56), could lead to agents that warrant testing in diabetes. Studies addressing other sites of histone methylation as well as other epigenetic pathways including DNA methylation and acetylation have been reported or are currently in progress (55,57,58), particularly in the context of diabetes complications. […] As in vitro and preclinical studies increase our knowledge and understanding of the pathogenesis of diabetes complications, it is likely that we will identify new molecular targets leading to better treatments to reduce the burden of macrovascular disease. Nevertheless, these new treatments will need to be considered in the context of improved management of traditional risk factors.”

vi. Perceived risk of diabetes seriously underestimates actual diabetes risk: The KORA FF4 study.

“According to the International Diabetes Federation (IDF), almost half of the people with diabetes worldwide are unaware of having the disease, and even in high-income countries, about one in three diabetes cases is not diagnosed [1,2]. In the USA, 28% of diabetes cases are undiagnosed [3]. In DEGS1, a recent population-based German survey, 22% of persons with HbA1c ≥ 6.5% were unaware of their disease [4]. Persons with undiagnosed diabetes mellitus (UDM) have a more than twofold risk of mortality compared to persons with normal glucose tolerance (NGT) [5,6]; many of them also have undiagnosed diabetes complications like retinopathy and chronic kidney disease [7,8]. […] early detection of diabetes and prediabetes is beneficial for patients, but may be delayed by patients’ being overly optimistic about their own health. Therefore, it is important to address how persons with UDM or prediabetes perceive their diabetes risk.”

“The proportion of persons who perceived their risk of having UDM at the time of the interview as “negligible”, “very low” or “low” was 87.1% (95% CI: 85.0–89.0) in NGT [normal glucose tolerance individuals], 83.9% (81.2–86.4) in prediabetes, and 74.2% (64.5–82.0) in UDM […]. The proportion of persons who perceived themselves at risk of developing diabetes in the following years ranged from 14.6% (95% CI: 12.6–16.8) in NGT to 20.6% (17.9–23.6) in prediabetes to 28.7% (20.5–38.6) in UDM […] In univariate regression models, perceiving oneself at risk of developing diabetes was associated with younger age, female sex, higher school education, obesity, self-rated poor general health, and parental diabetes […] the proportion of better educated younger persons (age ≤ 60 years) with prediabetes, who perceived themselves at risk of developing diabetes was 35%, whereas this figure was only 13% in less well educated older persons (age > 60 years).”
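
The bracketed intervals are ordinary binomial confidence intervals for proportions. The excerpt does not say which method the authors used, so purely as an illustration here is a Wilson score interval with invented counts chosen to land near the 74.2% figure for UDM:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical counts (not the KORA FF4 data): 72 of 97 participants with UDM
# rating their risk of having undetected diabetes as low or lower.
lo, hi = wilson_ci(72, 97)
print(f"Proportion: {72/97:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```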

“The present study shows that three out of four persons with UDM [undiagnosed diabetes mellitus] believed that the probability of having undetected diabetes was low or very low. In persons with prediabetes, more than 70% believed that they were not at risk of developing diabetes in the next years. People with prediabetes were more inclined to perceive themselves at risk of diabetes if their self-rated general health was poor, their mother or father had diabetes, they were obese, they were female, their educational level was high, and if they were younger. […] People with undiagnosed diabetes or prediabetes considerably underestimate their probability of having or developing diabetes. […] perceived diabetes risk was lower in men, lower educated and older persons. […] Our results showed that people with low and intermediate education strongly underestimate their risk of diabetes and may qualify as target groups for detection of UDM and prediabetes.”

“The present results were in line with results from the Dutch Hoorn Study [18,19]. Adriaanse et al. reported that among persons with UDM, only 28.3% perceived their likeliness of having diabetes to be at least 10% [18], and among persons with high risk of diabetes (predicted from a symptom risk questionnaire), the median perceived likeliness of having diabetes was 10.8% [19]. Again, perceived risk did not fully reflect the actual risk profiles. For BMI, there was barely any association with perceived risk of diabetes in the Dutch study [19].”

July 2, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Immunology, Medicine, Molecular biology, Pharmacology, Studies | Leave a comment

Developmental Biology (I)

On goodreads I called the book “[a]n excellent introduction to the field of developmental biology” and I gave it five stars.

Below I have included some sample observations from the first third of the book or so, as well as some supplementary links.

“The major processes involved in development are: pattern formation; morphogenesis or change in form; cell differentiation by which different types of cell develop; and growth. These processes involve cell activities, which are determined by the proteins present in the cells. Genes control cell behaviour by controlling where and when proteins are synthesized, and cell behaviour provides the link between gene action and developmental processes. What a cell does is determined very largely by the proteins it contains. The hemoglobin in red blood cells enables them to transport oxygen; the cells lining the vertebrate gut secrete specialized digestive enzymes. These activities require specialized proteins […] In development we are concerned primarily with those proteins that make cells different from one another and make them carry out the activities required for development of the embryo. Developmental genes typically code for proteins involved in the regulation of cell behaviour. […] An intriguing question is how many genes out of the total genome are developmental genes – that is, genes specifically required for embryonic development. This is not easy to estimate. […] Some studies suggest that in an organism with 20,000 genes, about 10% of the genes may be directly involved in development.”

“The fate of a group of cells in the early embryo can be determined by signals from other cells. Few signals actually enter the cells. Most signals are transmitted through the space outside of cells (the extracellular space) in the form of proteins secreted by one cell and detected by another. Cells may interact directly with each other by means of molecules located on their surfaces. In both these cases, the signal is generally received by receptor proteins in the cell membrane and is subsequently relayed through other signalling proteins inside the cell to produce the cellular response, usually by turning genes on or off. This process is known as signal transduction. These pathways can be very complex. […] The complexity of the signal transduction pathway means that it can be altered as the cell develops so the same signal can have a different effect on different cells. How a cell responds to a particular signal depends on its internal state and this state can reflect the cell’s developmental history — cells have good memories. Thus, different cells can respond to the same signal in very different ways. So the same signal can be used again and again in the developing embryo. There are thus rather few signalling proteins.”

“All vertebrates, despite their many outward differences, have a similar basic body plan — the segmented backbone or vertebral column surrounding the spinal cord, with the brain at the head end enclosed in a bony or cartilaginous skull. These prominent structures mark the antero-posterior axis with the head at the anterior end. The vertebrate body also has a distinct dorso-ventral axis running from the back to the belly, with the spinal cord running along the dorsal side and the mouth defining the ventral side. The antero-posterior and dorso-ventral axes together define the left and right sides of the animal. Vertebrates have a general bilateral symmetry around the dorsal midline so that outwardly the right and left sides are mirror images of each other though some internal organs such as the heart and liver are arranged asymmetrically. How these axes are specified in the embryo is a key issue. All vertebrate embryos pass through a broadly similar set of developmental stages and the differences are partly related to how and when the axes are set up, and how the embryo is nourished. […] A quite rare but nevertheless important event before gastrulation in mammalian embryos, including humans, is the splitting of the embryo into two, and identical twins can then develop. This shows the remarkable ability of the early embryo to regulate [in this context, regulation refers to ‘the ability of an embryo to restore normal development even if some portions are removed or rearranged very early in development’ – US] and develop normally when half the normal size […] In mammals, there is no sign of axes or polarity in the fertilized egg or during early development, and it only occurs later by an as yet unknown mechanism.”

“How is left–right established? Vertebrates are bilaterally symmetric about the midline of the body for many structures, such as eyes, ears, and limbs, but most internal organs are asymmetric. In mice and humans, for example, the heart is on the left side, the right lung has more lobes than the left, the stomach and spleen lie towards the left, and the bulk of the liver is towards the right. This handedness of organs is remarkably consistent […] Specification of left and right is fundamentally different from specifying the other axes of the embryo, as left and right have meaning only after the antero-posterior and dorso-ventral axes have been established. If one of these axes were reversed, then so too would be the left–right axis and this is the reason that handedness is reversed when you look in a mirror—your dorsoventral axis is reversed, and so left becomes right and vice versa. The mechanisms by which left–right symmetry is initially broken are still not fully understood, but the subsequent cascade of events that leads to organ asymmetry is better understood. The ‘leftward’ flow of extracellular fluid across the embryonic midline by a population of ciliated cells has been shown to be critical in mouse embryos in inducing asymmetric expression of genes involved in establishing left versus right. The antero-posterior patterning of the mesoderm is most clearly seen in the differences in the somites that form vertebrae: each individual vertebra has well defined anatomical characteristics depending on its location along the axis. Patterning of the skeleton along the body axis is based on the somite cells acquiring a positional value that reflects their position along the axis and so determines their subsequent development. […] It is the Hox genes that define positional identity along the antero-posterior axis […]. The Hox genes are members of the large family of homeobox genes that are involved in many aspects of development and are the most striking example of a widespread conservation of developmental genes in animals. The name homeobox comes from their ability to bring about a homeotic transformation, converting one region into another. Most vertebrates have clusters of Hox genes on four different chromosomes. A very special feature of Hox gene expression in both insects and vertebrates is that the genes in the clusters are expressed in the developing embryo in a temporal and spatial order that reflects their order on the chromosome. Genes at one end of the cluster are expressed in the head region, while those at the other end are expressed in the tail region. This is a unique feature in development, as it is the only known case where a spatial arrangement of genes on a chromosome corresponds to a spatial pattern in the embryo. The Hox genes provide the somites and adjacent mesoderm with positional values that determine their subsequent development.”

“Many of the genes that control the development of flies are similar to those controlling development in vertebrates, and indeed in many other animals. It seems that once evolution finds a satisfactory way of developing animal bodies, it tends to use the same mechanisms and molecules over and over again with, of course, some important modifications. […] The insect body is bilaterally symmetrical and has two distinct and largely independent axes: the antero-posterior and dorso-ventral axes, which are at right angles to each other. These axes are already partly set up in the fly egg, and become fully established and patterned in the very early embryo. Along the antero-posterior axis the embryo becomes divided into a number of segments, which will become the head, thorax, and abdomen of the larva. A series of evenly spaced grooves forms more or less simultaneously and these demarcate parasegments, which later give rise to the segments of the larva and adult. Of the fourteen larval parasegments, three contribute to mouthparts of the head, three to the thoracic region, and eight to the abdomen. […] Development is initiated by a gradient of the protein Bicoid, along the axis running from anterior to posterior in the egg; this provides the positional information required for further patterning along this axis. Bicoid is a transcription factor and acts as a morphogen—a graded concentration of a molecule that switches on particular genes at different threshold concentrations, thereby initiating a new pattern of gene expression along the axis. Bicoid activates anterior expression of the gene hunchback […]. The hunchback gene is switched on only when Bicoid is present above a certain threshold concentration. The protein of the hunchback gene, in turn, is instrumental in switching on the expression of the other genes, along the antero-posterior axis. […] The dorso-ventral axis is specified by a different set of maternal genes from those that specify the anterior-posterior axis, but by a similar mechanism. […] Once each parasegment is delimited, it behaves as an independent developmental unit, under the control of a particular set of genes. The parasegments are initially similar but each will soon acquire its own unique identity mainly due to Hox genes.”
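
The morphogen-plus-threshold logic is simple enough to capture in a few lines of code. The following is a toy sketch of the general principle (an exponential gradient read out against a single threshold), not a quantitative model of Bicoid; every number in it is an assumption made for illustration:

```python
import math

# Toy morphogen gradient: concentration decays exponentially from the anterior
# pole, and a hunchback-like target gene is ON wherever it exceeds a threshold.
length = 100          # egg length, arbitrary units (0 = anterior pole)
decay_length = 20.0   # decay length scale of the gradient (assumed)
c0 = 1.0              # anterior concentration, arbitrary units (assumed)
threshold = 0.2       # activation threshold of the target gene (assumed)

for x in range(0, length + 1, 10):
    conc = c0 * math.exp(-x / decay_length)
    state = "ON" if conc > threshold else "off"
    print(f"position {x:3d}: morphogen = {conc:.3f} -> target gene {state}")

# With these numbers the target gene is expressed only in roughly the anterior
# third of the egg: a sharp spatial boundary emerges from a smooth gradient
# combined with a threshold response.
```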

“Because plant cells have rigid cell walls and, unlike animal cells, cannot move, a plant’s development is very much the result of patterns of oriented cell divisions and increase in cell size. Despite this difference, cell fate in plant development is largely determined by similar means as in animals – by a combination of positional signals and intercellular communication. […] The logic behind the spatial layouts of gene expression that pattern a developing flower is similar to that of Hox gene action in patterning the body axis in animals, but the genes involved are completely different. One general difference between plant and animal development is that most of the development occurs not in the embryo but in the growing plant. Unlike an animal embryo, the mature plant embryo inside a seed is not simply a smaller version of the organism it will become. All the ‘adult’ structures of the plant – shoots, roots, stalks, leaves, and flowers – are produced in the adult plant from localized groups of undifferentiated cells known as meristems. […] Another important difference between plant and animal cells is that a complete, fertile plant can develop from a single differentiated somatic cell and not just from a fertilized egg. This suggests that, unlike the differentiated cells of adult animals, some differentiated cells of the adult plant may retain totipotency and so behave like animal embryonic stem cells. […] The small organic molecule auxin is one of the most important and ubiquitous chemical signals in plant development and plant growth.”

“All animal embryos undergo a dramatic change in shape during their early development. This occurs primarily during gastrulation, the process that transforms a two-dimensional sheet of cells into the complex three-dimensional animal body, and involves extensive rearrangements of cell layers and the directed movement of cells from one location to another. […] Change in form is largely a problem in cell mechanics and requires forces to bring about changes in cell shape and cell migration. Two key cellular properties involved in changes in animal embryonic form are cell contraction and cell adhesiveness. Contraction in one part of a cell can change the cell’s shape. Changes in cell shape are generated by forces produced by the cytoskeleton, an internal protein framework of filaments. Animal cells stick to one another, and to the external support tissue that surrounds them (the extracellular matrix), through interactions involving cell-surface proteins. Changes in the adhesion proteins at the cell surface can therefore determine the strength of cell–cell adhesion and its specificity. These adhesive interactions affect the surface tension at the cell membrane, a property that contributes to the mechanics of the cell behaviour. Cells can also migrate, with contraction again playing a key role. An additional force that operates during morphogenesis, particularly in plants but also in a few aspects of animal embryogenesis, is hydrostatic pressure, which causes cells to expand. In plants there is no cell movement or change in shape, and changes in form are generated by oriented cell division and cell expansion. […] Localized contraction can change the shape of the cells as well as the sheet they are in. For example, folding of a cell sheet—a very common feature in embryonic development—is caused by localized changes in cell shape […]. Contraction on one side of a cell results in it acquiring a wedge-like form; when this occurs among a few cells locally in a sheet, a bend occurs at the site, deforming the sheet.”

“The integrity of tissues in the embryo is maintained by adhesive interactions between cells and between cells and the extracellular matrix; differences in cell adhesiveness also help maintain the boundaries between different tissues and structures. Cells stick to each other by means of cell adhesion molecules, such as cadherins, which are proteins on the cell surface that can bind strongly to proteins on other cell surfaces. About 30 different types of cadherins have been identified in vertebrates. […] Adhesion of a cell to the extracellular matrix, which contains proteins such as collagen, is by the binding of integrins in the cell membrane to these matrix molecules. […] Convergent extension plays a key role in gastrulation of [some] animals and […] morphogenetic processes. It is a mechanism for elongating a sheet of cells in one direction while narrowing its width, and occurs by rearrangement of cells within the sheet, rather than by cell migration or cell division. […] For convergent extension to take place, the axes along which the cells will intercalate and extend must already have been defined. […] Gastrulation in vertebrates involves a much more dramatic and complex rearrangement of tissues than in sea urchins […] But the outcome is the same: the transformation of a two-dimensional sheet of cells into a three-dimensional embryo, with ectoderm, mesoderm, and endoderm in the correct positions for further development of body structure. […] Directed dilation is an important force in plants, and results from an increase in hydrostatic pressure inside a cell. Cell enlargement is a major process in plant growth and morphogenesis, providing up to a fiftyfold increase in the volume of a tissue. The driving force for expansion is the hydrostatic pressure exerted on the cell wall as a result of the entry of water into cell vacuoles by osmosis. Plant-cell expansion involves synthesis and deposition of new cell-wall material, and is an example of directed dilation. The direction of cell growth is determined by the orientation of the cellulose fibrils in the cell wall.”

Links:

Developmental biology.
August Weismann. Hans Driesch. Hans Spemann. Hilde Mangold. Spemann-Mangold organizer.
Induction. Cleavage.
Developmental model organisms.
Blastula. Embryo. Ectoderm. Mesoderm. Endoderm.
Gastrulation.
Xenopus laevis.
Notochord.
Neurulation.
Organogenesis.
DNA. Gene. Protein. Transcription factor. RNA polymerase.
Epiblast. Trophoblast/trophectoderm. Inner cell mass.
Pluripotency.
Polarity in embryogenesis/animal-vegetal axis.
Primitive streak.
Hensen’s node.
Neural tube. Neural fold. Neural crest cells.
Situs inversus.
Gene silencing. Morpholino.
Drosophila embryogenesis.
Pair-rule gene.
Cell polarity.
Mosaic vs regulative development.
Caenorhabditis elegans.
Fate mapping.
Plasmodesmata.
Arabidopsis thaliana.
Apical-basal axis.
Hypocotyl.
Phyllotaxis.
Primordium.
Quiescent centre.
Filopodia.
Radial cleavage. Spiral cleavage.

June 11, 2018 Posted by | Biology, Books, Botany, Genetics, Molecular biology | Leave a comment

Blood (II)

Below I have added some quotes from the chapters of the book I did not cover in my first post, as well as some supplementary links.

“Haemoglobin is of crucial biological importance; it is also easy to obtain safely in large quantities from donated blood. These properties have resulted in its becoming the most studied protein in human history. Haemoglobin played a key role in the history of our understanding of all proteins, and indeed the science of biochemistry itself. […] Oxygen transport defines the primary biological function of blood. […] Oxygen gas consists of two atoms of oxygen bound together to form a symmetrical molecule. However, oxygen cannot be transported in the plasma alone. This is because water is very poor at dissolving oxygen. Haemoglobin’s primary function is to increase this solubility; it does this by binding the oxygen gas on to the iron in its haem group. Every haem can bind one oxygen molecule, increasing the amount of oxygen able to dissolve in the blood.”

“An iron atom can exist in a number of different forms depending on how many electrons it has in its atomic orbitals. In its ferrous (iron II) state iron can bind oxygen readily. The haemoglobin protein has therefore evolved to stabilize its haem iron cofactor in this ferrous state. The result is that over fifty times as much oxygen is stored inside the confines of the red blood cell compared to outside in the watery plasma. However, using iron to bind oxygen comes at a cost. Iron (II) can readily lose one of its electrons to the bound oxygen, a process called ‘oxidation’. So the same form of iron that can bind oxygen avidly (ferrous) also readily reacts with that same oxygen forming an unreactive iron III state, called ‘ferric’. […] The complex structure of the protein haemoglobin is required to protect the ferrous iron from oxidizing. The haem iron is held in a precise configuration within the protein. Specific amino acids are ideally positioned to stabilize the iron–oxygen bond and prevent it from oxidizing. […] the iron stays ferrous despite the presence of the nearby oxygen. Having evolved over many hundreds of millions of years, this stability is very difficult for chemists to mimic in the laboratory. This is one reason why, desirable as it might be in terms of cost and convenience, it is not currently possible to replace blood transfusions with a simple small chemical iron oxygen carrier.”

“Given the success of the haem iron and globin combination in haemoglobin, it is no surprise that organisms have used this basic biochemical architecture for a variety of purposes throughout evolution, not just oxygen transport in blood. One example is the protein myoglobin. This protein resides inside animal cells; in the human it is found in the heart and skeletal muscle. […] Myoglobin has multiple functions. Its primary role is as an aid to oxygen diffusion. Whereas haemoglobin transports oxygen from the lung to the cell, myoglobin transports it once it is inside the cell. As oxygen is so poorly soluble in water, having a chain of molecules inside the cell that can bind and release oxygen rapidly significantly decreases the time it takes the gas to get from the blood capillary to the part of the cell—the mitochondria—where it is needed. […] Myoglobin can also act as an emergency oxygen backup store. In humans this is trivial and of questionable importance. Not so in diving mammals such as whales and dolphins that have as much as thirty times the myoglobin content of the terrestrial equivalent; indeed those mammals that dive for the longest duration have the most myoglobin. […] The third known function of myoglobin is to protect the muscle cells from damage by nitric oxide gas.”

“The heart is the organ that pumps blood around the body. If the heart stops functioning, blood does not flow. The driving force for this flow is the pressure difference between the arterial blood leaving the heart and the returning venous blood. The decreasing pressure in the venous side explains the need for unidirectional valves within veins to prevent the blood flowing in the wrong direction. Without them the return of the blood through the veins to the heart would be too slow, especially when standing up, when the venous pressure struggles to overcome gravity. […] normal [blood pressure] ranges rise slowly with age. […] high resistance in the arterial circulation at higher blood pressures [places] additional strain on the left ventricle. If the heart is weak, it may fail to achieve the extra force required to pump against this resistance, resulting in heart failure. […] in everyday life, a low blood pressure is rarely of concern. Indeed, it can be a sign of fitness as elite athletes have a much lower resting blood pressure than the rest of the population. […] the effect of exercise training is to thicken the muscles in the walls of the heart and enlarge the chambers. This enables more blood to be pumped per beat during intense exercise. The consequence of this extra efficiency is that when an athlete is resting—and therefore needs no more oxygen than a more sedentary person—the heart rate and blood pressure are lower than average. Most people’s experience of hypotension will be reflected by dizzy spells and lack of balance, especially when moving quickly to an upright position. This is because more blood pools in the legs when you stand up, meaning there is less blood for the heart to pump. The immediate effect should be for the heart to beat faster to restore the pressure. If there is a delay, the decrease in pressure can decrease the blood flow to the brain and cause dizziness; in extreme cases this can lead to fainting.”
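
A back-of-the-envelope sketch of the pressure-difference-drives-flow point, using the standard textbook approximations and round resting values (the numbers here are generic illustrations of my own, not taken from the book):

```python
# Flow behaves roughly like current in Ohm's law: flow = pressure difference / resistance.
systolic, diastolic = 120.0, 80.0   # mmHg, an unremarkable resting reading
mean_arterial = diastolic + (systolic - diastolic) / 3   # common MAP approximation
venous_pressure = 4.0               # mmHg, ballpark central venous pressure
cardiac_output = 5.0                # L/min, typical resting value

# Rearranging flow = ΔP / R gives the total peripheral resistance.
resistance = (mean_arterial - venous_pressure) / cardiac_output
print(f"MAP ≈ {mean_arterial:.0f} mmHg, "
      f"total peripheral resistance ≈ {resistance:.1f} mmHg per L/min")

# If resistance rises (e.g. chronically constricted arterioles) and flow is to be
# maintained, the pressure difference -- and with it the work demanded of the left
# ventricle -- has to rise too, which is the strain referred to above.
```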

“If hypertension is persistent, patients are most likely to be treated with drugs that target specific pathways that the body uses to control blood pressure. For example angiotensin is a protein that can trigger secretion of the hormone aldosterone from the adrenal gland. In its active form angiotensin can directly constrict blood vessels, while aldosterone enhances salt and water retention, so raising blood volume. Both these effects increase blood pressure. Angiotensin is converted into its active form by an enzyme called ‘Angiotensin Converting Enzyme’ (ACE). An ACE inhibitor drug prevents this activity, keeping angiotensin in its inactive form; this will therefore drop the patient’s blood pressure. […] The metal calcium controls many processes in the body. Its entry into muscle cells triggers muscle contraction. Preventing this entry can therefore reduce the force of contraction of the heart and the ability of arteries to constrict. Both of these will have the effect of decreasing blood pressure. Calcium enters muscle cells via specific protein-based channels. Drugs that block these channels (calcium channel blockers) are therefore highly effective at treating hypertension.”

“Autoregulation is a homeostatic process designed to ensure that blood flow remains constant [in settings where constancy is desirable]. However, there are many occasions when an organism actively requires a change in blood flow. It is relatively easy to imagine what these are. In the short term, blood supplies oxygen and nutrients. When these are used up rapidly, or their supply becomes limited, the response will be to increase blood flow. The most obvious example is the twenty-fold increase in oxygen and glucose consumption that occurs in skeletal muscle during exercise when compared to rest. If there were no accompanying increase in blood flow to the muscle the oxygen supply would soon run out. […] There are hundreds of molecules known that have the ability to increase or decrease blood flow […] The surface of all blood vessels is lined by a thin layer of cells, the ‘endothelium’. Endothelial cells form a barrier between the blood and the surrounding tissue, controlling access of materials into and out of the blood. For example white blood cells can enter or leave the circulation via interacting with the endothelium; this is the route by which neutrophils migrate from the blood to the site of tissue damage or bacterial/viral attack as part of the innate immune response. However, the endothelium is not just a selective barrier. It also plays an active role in blood physiology and biochemistry.”

“Two major issues [related to blood transfusions] remained at the end of the 19th century: the problem of clotting, which all were aware of; and the problem of blood group incompatibility, which no one had the slightest idea even existed. […] For blood transfusions to ever make a recovery the key issues of blood clotting and adverse side effects needed to be resolved. In 1875 the Swedish biochemist Olof Hammarsten showed that adding calcium accelerated the rate of blood clotting (we now know the mechanism for this is that key enzymes in blood platelets that catalyse fibrin formation require calcium for their function). It therefore made sense to use chemicals that bind calcium to try to prevent clotting. Calcium ions are positively charged; adding negatively charged ions such as oxalate and citrate neutralized the calcium, preventing its clot-promoting action. […] At the same time as anticoagulants were being discovered, the reason why some blood transfusions failed even when there were no clots was becoming clear. It had been shown that animal blood given to humans tended to clump together or agglutinate, eventually bursting and releasing free haemoglobin and causing kidney damage. In the early 1900s, working in Vienna, Karl Landsteiner showed the same effect could occur with human-to-human transfusion. The trick was the ability to separate blood cells from serum. This enabled mixing blood cells from a variety of donors with plasma from a variety of participants. Using his laboratory staff as subjects, Landsteiner showed that only some combinations caused the agglutination reaction. Some donor cells (now known as type O) never clumped. Others clumped depending on the nature of the plasma in a reproducible manner. A careful study of Landsteiner’s results revealed the ABO blood type distinctions […]. Versions of these agglutination tests still form the basis of checking transfused blood today.”

“No blood product can be made completely sterile, no matter how carefully it is processed. The best that can be done is to ensure that no new bacteria or viruses are added during the purification, storage, and transportation processes. Nothing can be done to inactivate any viruses that are already present in the donor’s blood, for the harsh treatments necessary to do this would inevitably damage the viability of the product or be prohibitively expensive to implement on the industrial scale that the blood market has become. […] In the 1980s over half the US haemophiliac population was HIV positive.”

“Three fundamentally different ways have been attempted to replace red blood cell transfusions. The first uses a completely chemical approach and makes use of perfluorocarbons, inert chemicals that, in liquid form, can dissolve gasses without reacting with them. […] Perfluorocarbons can dissolve oxygen much more effectively than water. […] The problem with their use as a blood substitute is that the amount of oxygen dissolved in these solutions is linear with increasing pressure. This means that the solution lacks the advantages of the sigmoidal binding curve of haemoglobin, which has evolved to maximize the amount of oxygen captured from the limited fraction found in air (20 per cent oxygen). However, to deliver the same amount of oxygen as haemoglobin, patients using the less efficient perfluorocarbons in their blood need to breathe gas that is almost 100 per cent pure oxygen […]; this restricts the use of these compounds. […] The second type of blood substitute makes use of haemoglobin biology. Initial attempts used purified haemoglobin itself. […] there is no haemoglobin-based blood substitute in general use today […] The problem for the lack of uptake is not that blood substitutes cannot replace red blood cell function. A variety of products have been shown to stay in the vasculature for several days, provide volume support, and deliver oxygen. However, they have suffered due to adverse side effects, most notably cardiac complications. […] In nature the plasma proteins haptoglobin and haemopexin bind and detoxify any free haemoglobin and haem released from red blood cells. The challenge for blood substitute research is to mimic these effects in a product that can still deliver oxygen. […] Despite ongoing research, these problems may prove to be insurmountable. There is therefore interest in a third approach. This is to grow artificial red blood cells using stem cell technology.”
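
The contrast between a linear, Henry's-law-like carrier and haemoglobin's sigmoidal curve is easy to put numbers on. In the sketch below the Hill-equation parameters are standard textbook ballpark values, while the perfluorocarbon solubility is simply an assumed figure for a hypothetical emulsion, so treat the output as qualitative only:

```python
def hb_saturation(po2, p50=26.0, n=2.7):
    """Hill-equation approximation to the haemoglobin oxygen dissociation curve."""
    return po2**n / (p50**n + po2**n)

def blood_o2_content(po2, hb_g_per_dl=15.0):
    """mL O2 per dL of blood: haemoglobin-bound plus physically dissolved."""
    bound = 1.34 * hb_g_per_dl * hb_saturation(po2)   # ~1.34 mL O2 per g Hb
    dissolved = 0.003 * po2                           # plasma solubility approximation
    return bound + dissolved

def pfc_o2_content(po2, solubility=0.05):
    """mL O2 per dL of a hypothetical perfluorocarbon emulsion; linear in pO2."""
    return solubility * po2

for po2, label in [(100, "breathing air (arterial pO2 ~100 mmHg)"),
                   (600, "breathing nearly pure oxygen")]:
    print(f"{label}: blood ≈ {blood_o2_content(po2):.1f} mL O2/dL, "
          f"PFC ≈ {pfc_o2_content(po2):.1f} mL O2/dL")

# Because haemoglobin is already nearly saturated at air-breathing pO2, the PFC
# only carries a comparable amount of oxygen when the patient breathes almost
# pure oxygen -- the restriction mentioned in the quote.
```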

Links:

Porphyrin. Globin.
Felix Hoppe-Seyler. Jacques Monod. Jeffries Wyman. Jean-Pierre Changeux.
Allosteric regulation. Monod-Wyman-Changeux model.
Structural Biochemistry/Hemoglobin (wikibooks). (Many of the topics covered in this link – e.g. comments on affinity, T/R-states, oxygen binding curves, the Bohr effect, etc. – are also covered in the book, so although I do link to some of the other topics also covered in this link below it should be noted that I did in fact leave out quite a few potentially relevant links on account of those topics being covered in the above link).
1,3-Bisphosphoglycerate.
Erythrocruorin.
Haemerythrin.
Hemocyanin.
Cytoglobin.
Neuroglobin.
Sickle cell anemia. Thalassaemia. Hemoglobinopathy. Porphyria.
Pulse oximetry.
Daniel Bernoulli. Hydrodynamica. Stephen Hales. Karl von Vierordt.
Arterial line.
Sphygmomanometer. Korotkoff sounds. Systole. Diastole. Blood pressure. Mean arterial pressure. Hypertension. Antihypertensive drugs. Atherosclerosis Pathology. Beta blocker. Diuretic.
Autoregulation.
Guanylate cyclase. Glyceryl trinitrate.
Blood transfusion. Richard Lower. Jean-Baptiste Denys. James Blundell.
Parabiosis.
Penrose Inquiry.
ABLE (Age of Transfused Blood in Critically Ill Adults) trial.
RECESS trial.

June 7, 2018 Posted by | Biology, Books, Cardiology, Chemistry, History, Medicine, Molecular biology, Pharmacology, Studies | Leave a comment

Molecular biology (III)

Below I have added a few quotes and links related to the last few chapters of the book’s coverage.

“Normal ageing results in part from exhaustion of stem cells, the cells that reside in most organs to replenish damaged tissue. As we age DNA damage accumulates and this eventually causes the cells to enter a permanent non-dividing state called senescence. This protective ploy however has its downside as it limits our lifespan. When too many stem cells are senescent the body is compromised in its capacity to renew worn-out tissue, causing the effects of ageing. This has a knock-on effect of poor intercellular communication, mitochondrial dysfunction, and loss of protein balance (proteostasis). Low levels of chronic inflammation also increase with ageing and could be the trigger for changes associated with many age-related disorders.”

“There has been a dramatic increase in ageing research using yeast and invertebrates, leading to the discovery of more ‘ageing genes’ and their pathways. These findings can be extrapolated to humans since longevity pathways are conserved between species. The major pathways known to influence ageing have a common theme, that of sensing and metabolizing nutrients. […] The field was advanced by identification of the mammalian Target Of Rapamycin, aptly named mTOR. mTOR acts as a molecular sensor that integrates growth stimuli with nutrient and oxygen availability. Small molecules such as rapamycin that reduce mTOR signalling act in a similar way to severe dietary restriction in slowing the ageing process in organisms such as yeast and worms. […] Rapamycin and its derivatives (rapalogs) have been involved in clinical trials on reducing age-related pathologies […] Another major ageing pathway is telomere maintenance. […] Telomere attrition is a hallmark of ageing and studies have established an association between shorter telomere length (TL) and the risk of various common age-related ailments […] Telomere loss is accelerated by known determinants of ill health […] The relationship between TL and cancer appears complex.”

“Cancer is not a single disease but a range of diseases caused by abnormal growth and survival of cells that have the capacity to spread. […] One of the early stages in the acquisition of an invasive phenotype is epithelial-mesenchymal transition (EMT). Epithelial cells form skin and membranes and for this they have a strict polarity (a top and a bottom) and are bound in position by close connections with adjacent cells. Mesenchymal cells on the other hand are loosely associated, have motility, and lack polarization. The transition between epithelial and mesenchymal cells is a normal process during embryogenesis and wound healing but is deregulated in cancer cells. EMT involves transcriptional reprogramming in which epithelial structural proteins are lost and mesenchymal ones acquired. This facilitates invasion of a tumour into surrounding tissues. […] Cancer is a genetic disease but mostly not inherited from the parents. Normal cells evolve to become cancer cells by acquiring successive mutations in cancer-related genes. There are two main classes of cancer genes, the proto-oncogenes and the tumour suppressor genes. The proto-oncogenes code for protein products that promote cell proliferation. […] A mutation in a proto-oncogene changes it to an ‘oncogene’ […] One gene above all others is associated with cancer suppression and that is TP53. […] approximately half of all human cancers carry a mutated TP53 and in many more, p53 is deregulated. […] p53 plays a key role in eliminating cells that have either acquired activating oncogenes or excessive genomic damage. Thus mutations in the TP53 gene allows cancer cells to survive and divide further by escaping cell death […] A mutant p53 not only lacks the tumour suppressor functions of the normal or wild type protein but in many cases it also takes on the role of an oncogene. […] Overall 5-10 per cent of cancers occur due to inherited or germ line mutations that are passed from parents to offspring. Many of these genes code for DNA repair enzymes […] The vast majority of cancer mutations are not inherited; instead they are sporadic with mutations arising in somatic cells. […] At least 15 per cent of cancers are attributable to infectious agents, examples being HPV and cervical cancer, H. pylori and gastric cancer, and also hepatitis B or C and liver cancer.”

“There are about 10 million different sites at which people can vary in their DNA sequence within the 3 billion bases in our DNA. […] A few, but highly variable sequences or minisatellites are chosen for DNA profiling. These give a highly sensitive procedure suitable for use with small amounts of body fluids […] even shorter sequences called microsatellite repeats [are also] used. Each marker or microsatellite is a short tandem repeat (STR) of two to five base pairs of DNA sequence. A single STR will be shared by up to 20 per cent of the population but by using a dozen or so identification markers in profile, the error is miniscule. […] Microsatellites are extremely useful for analysing low-quality or degraded DNA left at a crime scene as their short sequences are usually preserved. However, DNA in specimens that have not been optimally preserved persists in exceedingly small amounts and is also highly fragmented. It is probably also riddled by contamination and chemical damage. Such sources of DNA are too degraded to obtain a profile using genomic STRs and in these cases mitochondrial DNA, being more abundant, is more useful than nuclear DNA for DNA profiling. […] Mitochondrial DNA profiling is the method of choice for determining the identities of missing or unknown people when a maternally linked relative can be found. Molecular biologists can amplify hypervariable regions of mitochondrial DNA by PCR to obtain enough material for analysis. The DNA products are sequenced and single nucleotide differences are sought with a reference DNA from a maternal relative. […] It has now become possible for […] ancient DNA to reveal much more than genotype matches. […] Pigmentation characteristics can now be determined from ancient DNA since skin, hair, and eye colour are some of the easiest characteristics to predict. This is due to the limited number of base differences or SNPs required to explain most of the variability.”
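
The 'error is miniscule' claim is just the compounding of per-marker frequencies. A minimal sketch of that arithmetic, under the simplifying assumption that the markers are statistically independent (real forensic calculations are more careful about population structure):

```python
# If a single STR profile component is shared by up to ~20% of the population,
# the chance of a coincidental full-profile match shrinks geometrically with
# the number of independent markers used.
per_marker_match = 0.20
for n_markers in (1, 6, 12, 16):
    random_match_probability = per_marker_match ** n_markers
    print(f"{n_markers:2d} markers: about 1 in {1 / random_match_probability:,.0f}")
```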

“A broad range of debilitating and fatal conditions, none of which can be cured, are associated with mitochondrial DNA mutations. […] [M]itochondrial DNA mutates ten to thirty times faster than nuclear DNA […] Mitochondrial DNA mutates at a higher rate than nuclear DNA due to higher numbers of DNA molecules and reduced efficiency in controlling DNA replication errors. […] Over 100,000 copies of mitochondrial DNA are present in the cytoplasm of the human egg or oocyte. After fertilization, only maternal mitochondria survive; the small numbers of the father’s mitochondria in the zygote are targeted for destruction. Thus all mitochondrial DNA for all cell types in the resulting embryo is maternal-derived. […] Patients affected by mitochondrial disease usually have a mixture of wild type (normal) and mutant mitochondrial DNA and the disease severity depends on the ratio of the two. Importantly the actual level of mutant DNA in a mother’s heteroplas[m]y […curiously the authors throughout the coverage insist on spelling this ‘heteroplasty’, which according to google is something quite different – I decided to correct the spelling error (?) here – US] is not inherited and offspring can be better or worse off than the mother. This also causes uncertainty since the ratio of wild type to mutant mitochondria may change during development. […] Over 700 mutations in mitochondrial DNA have been found leading to myopathies, neurodegeneration, diabetes, cancer, and infertility.”

Links:

Dementia. Alzheimer’s disease. Amyloid hypothesis. Tau protein. Proteopathy. Parkinson’s disease. TP53-inducible glycolysis and apoptosis regulator (TIGAR).
Progeria. Progerin. Werner’s syndrome. Xeroderma pigmentosum. Cockayne syndrome.
Shelterin.
Telomerase.
Alternative lengthening of telomeres: models, mechanisms and implications (Nature).
Coats plus syndrome.
Neoplasia. Tumor angiogenesis. Inhibitor protein MDM2.
Li–Fraumeni syndrome.
Non-coding RNA networks in cancer (Nature).
Cancer stem cell. (“The reason why current cancer therapies often fail to eradicate the disease is that the CSCs survive current DNA damaging treatments and repopulate the tumour.” See also this IAS lecture which covers closely related topics – US.)
Imatinib.
Restriction fragment length polymorphism (RFLP).
CODIS.
MC1R.
Archaic human admixture with modern humans.
El Tor strain.
DNA barcoding.
Hybrid breakdown/-inviability.
Trastuzumab.
Digital PCR.
Pearson’s syndrome.
Mitochondrial replacement therapy.
Synthetic biology.
Artemisinin.
Craig Venter.
Genome editing.
Indel.
CRISPR.
Tyrosinemia.

June 3, 2018 Posted by | Biology, Books, Cancer/oncology, Genetics, Medicine, Molecular biology | Leave a comment

Blood (I)

As I also mentioned on goodreads I was far from impressed with the first few pages of this book – but I read on, and the book actually turned out to include a decent amount of very reasonable coverage. Taking into consideration the way the author started out, the three-star rating should be considered a high rating, and in some parts of the book the author covers very complicated stuff in a really quite decent manner, considering the format of the book and its target group.

Below I have added some quotes and some links to topics/people/ideas/etc. covered in the first half of the book.

“[Clotting] makes it difficult to study the components of blood. It also [made] it impossible to store blood for transfusion [in the past]. So there was a need to find a way to prevent clotting. Fortunately the discovery that the metal calcium accelerated the rate of clotting enabled the development of a range of compounds that bound calcium and therefore prevented this process. One of them, citrate, is still in common use today [here’s a relevant link, US] when blood is being prepared for storage, or to stop blood from clotting while it is being pumped through kidney dialysis machines and other extracorporeal circuits. Adding citrate to blood, and leaving it alone, will result in gravity gradually separating the blood into three layers; the process can be accelerated by rapid spinning in a centrifuge […]. The top layer is clear and pale yellow or straw-coloured in appearance. This is the plasma, and it contains no cells. The bottom layer is bright red and contains the dense pellet of red cells that have sunk to the bottom of the tube. In-between these two layers is a very narrow layer, called the ‘buffy coat’ because of its pale yellow-brown appearance. This contains white blood cells and platelets. […] red cells, white cells, and platelets […] define the primary functions of blood: oxygen transport, immune defence, and coagulation.”

“The average human has about five trillion red blood cells per litre of blood or thirty trillion […] in total, making up a quarter of the total number of cells in the body. […] It is clear that the red cell has primarily evolved to perform a single function, oxygen transportation. Lacking a nucleus, and the requisite machinery to control the synthesis of new proteins, there is a limited ability for reprogramming or repair. […] each cell [makes] a complete traverse of the body’s circulation about once a minute. In its three- to four-month lifetime, this means every cell will do the equivalent of 150,000 laps around the body. […] Red cells lack mitochondria; they get their energy by fermenting glucose. […] A prosaic explanation for their lack of mitochondria is that it prevents the loss of any oxygen picked up from the lungs on the cells’ journey to the tissues that need it. The shape of the red cell is both deformable and elastic. In the bloodstream each cell is exposed to large shear forces. Yet, due to the properties of the membrane, they are able to constrict to enter blood vessels smaller in diameter than their normal size, bouncing back to their original shape on exiting the vessel the other side. This ability to safely enter very small openings allows capillaries to be very small. This in turn enables every cell in the body to be close to a capillary. Oxygen consequently only needs to diffuse a short distance from the blood to the surrounding tissue; this is vital as oxygen diffusion outside the bloodstream is very slow. Various pathologies, such as diabetes, peripheral vascular disease, and septic shock disturb this deformability of red blood cells, with deleterious consequences.”
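
The ‘150,000 laps’ figure is easy to verify: one circuit per minute over a lifespan of roughly three and a half months. A quick back-of-the-envelope check (my arithmetic, not the author's):

```python
# One lap per minute over a ~105-day (midpoint of the quoted 3-4 month) red cell lifespan.
laps_per_day = 60 * 24
lifespan_days = 105
print(laps_per_day * lifespan_days)  # 151,200, close to the quoted 150,000
```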

“Over thirty different substances, proteins and carbohydrates, contribute to an individual’s blood group. By far the best known are the ABO and Rhesus systems. This is not because the proteins and carbohydrates that comprise these particular blood group types are vitally important for red cell function, but rather because a failure to account for these types during a blood transfusion can have catastrophic consequences. The ABO blood group is sugar-based […] blood from an O person can be safely given to anyone (with no sugar antigens this person is a ‘universal’ donor). […] As all that is needed to convert A and B to O is to remove a sugar, there is commercial and medical interest in devising ways to do this […] the Rh system […] is protein-based rather than sugar based. […] Rh proteins sit in the lipid membrane of the cell and control the transport of molecules into and out of the cell, most probably carbon dioxide and ammonia. The situation is complex, with over thirty different subgroups relating to subtle differences in the protein structure.”

“Unlike the red cells, all white cell subtypes contain nuclei. Some also contain on their surface a set of molecules called the ‘major histocompatibility complex’ (MHC). In humans, these receptors are also called ‘human leucocyte antigens’ (HLA). Their role is to recognize fragments of protein from pathogens and trigger the immune response that will ultimately destroy the invaders. Crudely, white blood cells can be divided into those that attack ‘on sight’ any foreign material — whether it be a fragment of inanimate material such as a splinter or an invading microorganism — and those that form part of a defence mechanism that recognizes specific biomolecules and marshals a slower, but equally devastating response. […] cells of the non-specific (or innate) immune system […] are divided into those that have nuclei with multiple lobed shapes (polymorphonuclear leukocytes or PMN) and those that have a single lobe nucleus ([…] ‘mononuclear leucocytes‘ or ‘MN’). PMN contain granules inside them and so are sometimes called ‘granulocytes‘.”

“Neutrophils are by far the most abundant PMN, making up over half of the total white blood cell count. The primary role of a neutrophil is to engulf a foreign object such as an invading microorganism. […] Eosinophils and basophils are the least abundant PMN cell type, each making up less than 2 per cent of white blood cells. The role of basophils is to respond to tissue injury by triggering an inflammatory response. […] When activated, basophils and mast cells degranulate, releasing molecules such as histamine, leukotrienes, and cytokines. Some of these molecules trigger an increase in blood flow causing redness and heat in the damaged site, others sensitize the area to pain. Greater permeability of the blood vessels results in plasma leaking out of the vessels and into the surrounding tissue at an increased rate, causing swelling. […] This is probably an evolutionary adaption to prevent overuse of a damaged part of the body but also helps to bring white cells and proteins to the damaged, inflamed area. […] The main function of eosinophils is to tackle invaders too large to be engulfed by neutrophils, such as the multicellular parasitic tapeworms and nematodes. […] Monocytes are a type of mononuclear leucocyte (MN) making up about 5 per cent of white blood cells. They spend even less time in the circulation than neutrophils, generally less than ten hours, but their time in the blood circulation does not end in death. Instead, they are converted into a cell called a ‘macrophage‘ […] Their role is similar to the neutrophil, […] the ultimate fate of both the red blood cell and the neutrophil is to be engulfed by a macrophage. An excess of monocytes in a blood count (monocytosis) is an indicator of chronic inflammation”.

“Blood has to flow freely. Therefore, the red cells, white cells, and platelets are all suspended in a watery solution called ‘plasma’. But plasma is more than just water. In fact if it were only water all the cells would burst. Plasma has to have a very similar concentration of molecules and ions as the cells. This is because cells are permeable to water. So if the concentration of dissolved substances in the plasma was significantly higher than that in the cells, water would flow from the cells to the plasma in an attempt to equalize this gradient by diluting the plasma; this would result in cell shrinkage. Even worse, if the concentration in the plasma was lower than in the cells, water would flow into the cells from the plasma, and the resulting pressure increase would burst the cells, releasing all their contents into the plasma in the process. […] Plasma contains much more than just the ions required to prevent cells bursting or shrinking. It also contains key components designed to assist in cellular function. The protein clotting factors that are part of the coagulation cascade are always present in low concentrations […] Low levels of antibodies, produced by the lymphocytes, circulate […] In addition to antibodies, the plasma contains C-reactive proteins, Mannose-binding lectin and complement proteins that function as ‘opsonins‘ […] A host of other proteins perform roles independent of oxygen delivery or immune defence. By far the most abundant protein in serum is albumin. […] Blood is the transport infrastructure for any molecule that needs to be moved around the body. Some, such as the water-soluble fuel glucose, and small hormones like insulin, dissolve freely in the plasma. Others that are less soluble hitch a ride on proteins [….] Dangerous reactive molecules, such as iron, are also bound to proteins, in this case transferrin.”

“Immunoglobulins are produced by B lymphocytes and either remain bound on the surface of the cell (as part of the B cell receptor) or circulate freely in the plasma (as antibodies). Whatever their location, their purpose is the same – to bind to and capture foreign molecules (antigens). […] To perform the twin role of binding the antigen and the phagocytosing cell, immunoglobulins need to have two distinct parts to their structure — one that recognizes the foreign antigen and one that can be recognized — and destroyed — by the host defence system. The host defence system does not vary; a specific type of immunoglobulin will be recognized by one of the relatively few types of immune cells or proteins. Therefore this part of the immunoglobulin structure is not variable. But the nature of the foreign antigen will vary greatly; so the antigen-recognizing part of the structure must be highly variable. It is this that leads to the great variety of immunoglobulins. […] within the blood there is an army of potential binding sites that can recognize and bind to almost any conceivable chemical structure. Such variety is why the body is able to adapt and kill even organisms it has never encountered before. Indeed the ability to make an immunoglobulin recognize almost any structure has resulted in antibody binding assays being used historically in diagnostic tests ranging from pregnancy to drugs testing.”

“[I]mmunoglobulins consist of two different proteins — a heavy chain and a light chain. In the human heavy chain there are about forty different V (variable) segments, twenty-five different D (Diversity) segments, and six J (Joining) segments. The light chain also contains variable V and J segments. A completed immunoglobulin has a heavy chain with only one V, D, and J segment, and a light chain with only one V and J segment. It is the shuffling of these segments during development of the mature B lymphocyte that creates the diversity required […] the hypervariable regions are particularly susceptible to mutation during development. […] A separate class of immunoglobulin-like molecules also provide the key to cell-to-cell communication in the immune system. In humans, with the exception of the egg and sperm cells, all cells that possess a nucleus also have a protein on their surface called ‘Human Leucocyte Antigen (HLA) Class I’. The function of HLA Class I is to display fragments (antigens) of all the proteins currently being made inside the cell. It therefore acts like a billboard displaying the current highlights of cellular activity. Any proteins recognized as non-self by cytotoxic T cell lymphocytes will result in the whole cell being targeted for destruction […]. Another form of HLA, Class II, is only present on the surface of specialized cells of the immune system termed antigen presenting cells. In contrast to HLA Class I, the surface of HLA Class II cells displays antigens that originate from outside of the cell.”
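
The combinatorics of the segment shuffling described above are worth spelling out. Using the heavy-chain segment counts from the quote, and assuming for illustration a light-chain repertoire of roughly 40 V and 5 J segments (my numbers, not the book's), shuffling alone already gives a repertoire in the millions, before junctional imprecision and somatic hypermutation multiply it further:

```python
# Combinatorial diversity from segment shuffling. Heavy-chain counts are those
# quoted in the text; the light-chain counts are assumed for illustration only.
heavy_chain_combinations = 40 * 25 * 6   # V x D x J = 6,000
light_chain_combinations = 40 * 5        # assumed V x J = 200
print(heavy_chain_combinations * light_chain_combinations)  # 1,200,000 heavy/light pairings
```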

Galen.
Bloodletting.
Marcello Malpighi.
William Harvey. De Motu Cordis.
Andreas Vesalius. De humani corporis fabrica.
Ibn al-Nafis. Michael Servetus. Realdo Colombo. Andrea Cesalpino.
Pulmonary circulation.
Hematopoietic stem cell. Bone marrow. Erythropoietin.
Hemoglobin.
Anemia.
Peroxidase.
Lymphocytes. NK cells. Granzyme. B lymphocytes. T lymphocytes. Antibody/Immunoglobulin. Lymphoblast.
Platelet. Coagulation cascade. Fibrinogen. Fibrin. Thrombin. Haemophilia. Hirudin. Von Willebrand disease. Haemophilia A. Haemophilia B.
Tonicity. Colloid osmotic pressure.
Adaptive immune system. Vaccination. Variolation. Antiserum. Agostino Bassi. Muscardine. Louis Pasteur. Élie Metchnikoff. Paul Ehrlich.
Humoral immunity. Membrane attack complex.
Niels Kaj Jerne. David Talmage. Frank Burnet. Clonal selection theory. Peter Medawar.
Susumu Tonegawa.

June 2, 2018 Posted by | Biology, Books, Immunology, Medicine, Molecular biology | Leave a comment

A few diabetes papers of interest

i. Reevaluating the Evidence for Blood Pressure Targets in Type 2 Diabetes.

“There is general consensus that treating adults with type 2 diabetes mellitus (T2DM) and hypertension to a target blood pressure (BP) of <140/90 mmHg helps prevent cardiovascular disease (CVD). Whether more intensive BP control should be routinely targeted remains a matter of debate. While the American Diabetes Association (ADA) BP guidelines recommend an individualized assessment to consider different treatment goals, the American College of Cardiology/American Heart Association BP guidelines recommend a BP target of <130/80 mmHg for most individuals with hypertension, including those with T2DM (13).

In large part, these discrepant recommendations reflect the divergent results of the Action to Control Cardiovascular Risk in Diabetes-BP trial (ACCORD-BP) among people with T2DM and the Systolic Blood Pressure Intervention Trial (SPRINT), which excluded people with diabetes (4,5). Both trials evaluated the effect of intensive compared with standard BP treatment targets (<120 vs. <140 mmHg systolic) on a composite CVD end point of nonfatal myocardial infarction or stroke or death from cardiovascular causes. SPRINT also included unstable angina and acute heart failure in its composite end point. While ACCORD-BP did not show a significant benefit from the intervention (hazard ratio [HR] 0.88; 95% CI 0.73–1.06), SPRINT found a significant 25% relative risk reduction on the primary end point favoring intensive therapy (0.75; 0.64–0.89).”

“To some extent, CVD mechanisms and causes of death differ in T2DM patients compared with the general population. Microvascular disease (particularly kidney disease), accelerated vascular calcification, and diabetic cardiomyopathy are common in T2DM (1315). Moreover, the rate of sudden cardiac arrest is markedly increased in T2DM and related, in part, to diabetes-specific factors other than ischemic heart disease (16). Hypoglycemia is a potential cause of CVD mortality that is specific to diabetes (17). In addition, polypharmacy is common and may increase CVD risk (18). Furthermore, nonvascular causes of death account for approximately 40% of the premature mortality burden experienced by T2DM patients (19). Whether these disease processes may render patients with T2DM less amenable to derive a mortality benefit from intensive BP control, however, is not known and should be the focus of future research.

In conclusion, the divergent results between ACCORD-BP and SPRINT are most readily explained by the apparent lack of benefit of intensive BP control on CVD and all-cause mortality in ACCORD-BP, rather than differences in the design, population characteristics, or interventions between the trials. This difference in effects on mortality may be attributable to differential mechanisms underlying CVD mortality in T2DM, to chance, or to both. These observations suggest that caution should be exercised extrapolating the results of SPRINT to patients with T2DM and support current ADA recommendations to individualize BP targets, targeting a BP of <140/90 mmHg in the majority of patients with T2DM and considering lower BP targets when it is anticipated that individual benefits outweigh risks.”

ii. Modelling incremental benefits on complications rates when targeting lower HbA1c levels in people with Type 2 diabetes and cardiovascular disease.

“Glucose‐lowering interventions in Type 2 diabetes mellitus have demonstrated reductions in microvascular complications and modest reductions in macrovascular complications. However, the degree to which targeting different HbA1c reductions might reduce risk is unclear. […] Participant‐level data for Trial Evaluating Cardiovascular Outcomes with Sitagliptin (TECOS) participants with established cardiovascular disease were used in a Type 2 diabetes‐specific simulation model to quantify the likely impact of different HbA1c decrements on complication rates. […] The use of the TECOS data limits our findings to people with Type 2 diabetes and established cardiovascular disease. […] Ten‐year micro‐ and macrovascular rates were estimated with HbA1c levels fixed at 86, 75, 64, 53 and 42 mmol/mol (10%, 9%, 8%, 7% and 6%) while holding other risk factors constant at their baseline levels. Cumulative relative risk reductions for each outcome were derived for each HbA1c decrement. […] Of 5717 participants studied, 72.0% were men and 74.2% White European, with a mean (sd) age of 66.2 (7.9) years, systolic blood pressure 134 (16.9) mmHg, LDL‐cholesterol 2.3 (0.9) mmol/l, HDL‐cholesterol 1.13 (0.3) mmol/l and median Type 2 diabetes duration 9.6 (5.1–15.6) years. Ten‐year cumulative relative risk reductions for modelled HbA1c values of 75, 64, 53 and 42 mmol/mol, relative to 86 mmol/mol, were 4.6%, 9.3%, 15.1% and 20.2% for myocardial infarction; 6.0%, 12.8%, 19.6% and 25.8% for stroke; 14.4%, 26.6%, 37.1% and 46.4% for diabetes‐related ulcer; 21.5%, 39.0%, 52.3% and 63.1% for amputation; and 13.6%, 25.4%, 36.0% and 44.7% for single‐eye blindness. […] We did not investigate outcomes for renal failure or chronic heart failure as previous research conducted to create the model did not find HbA1c to be a statistically significant independent risk factor for either condition, therefore no clinically meaningful differences would be expected from modelling different HbA1c levels 11.”

“For microvascular complications, the absolute median estimates tended to be lower than for macrovascular complications at the same HbA1c level, but cumulative relative risk reductions were greater. For amputation the 10‐year absolute median estimate for a modelled constant HbA1c of 86 mmol/mol (10%) was 3.8% (3.7, 3.9), with successively lower values for each modelled 1% HbA1c decrement. Compared with the 86 mmol/mol (10%) HbA1c level, median relative risk reductions for amputation were 21.5% (21.1, 21.9) at 75 mmol/mol (9%) increasing to 52.3% (52.0, 52.6) at 53 mmol/mol (7%). […] Relative risk reductions in micro‐ and macrovascular complications for each 1% HbA1c reduction were similar for each decrement. The exception was all‐cause mortality, where the relative risk reductions for 1% HbA1c decrements were greater at higher baseline HbA1c levels. These simulated outcomes differ from the Diabetes Control and Complications Trial outcome in people with Type 1 diabetes, where lowering HbA1c from higher baseline levels had a greater impact on microvascular risk reduction 18.”
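
Two small pieces of arithmetic sit behind the numbers quoted above: the conversion between the old percentage units and mmol/mol (the standard IFCC/NGSP ‘master equation’), and the definition of a cumulative relative risk reduction against the 86 mmol/mol reference level. A minimal sketch (the conversion formula is standard; the absolute-risk example is back-calculated from the quoted amputation figures, so treat it as approximate):

```python
def percent_to_mmol_per_mol(hba1c_percent: float) -> float:
    """IFCC/NGSP master equation: HbA1c in mmol/mol from HbA1c in %."""
    return 10.929 * (hba1c_percent - 2.15)

def relative_risk_reduction(risk: float, reference_risk: float) -> float:
    return 1 - risk / reference_risk

print(round(percent_to_mmol_per_mol(7.0)))   # 53, matching the quoted mapping
# Quoted 10-year amputation estimate at 86 mmol/mol is 3.8%; a 52.3% relative
# risk reduction at 53 mmol/mol implies an absolute estimate of roughly 1.8%.
print(relative_risk_reduction(0.018, 0.038)) # ~0.53
```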

iii. Laser photocoagulation for proliferative diabetic retinopathy (Cochrane review).

“Diabetic retinopathy is a complication of diabetes in which high blood sugar levels damage the blood vessels in the retina. Sometimes new blood vessels grow in the retina, and these can have harmful effects; this is known as proliferative diabetic retinopathy. Laser photocoagulation is an intervention that is commonly used to treat diabetic retinopathy, in which light energy is applied to the retina with the aim of stopping the growth and development of new blood vessels, and thereby preserving vision. […] The aim of laser photocoagulation is to slow down the growth of new blood vessels in the retina and thereby prevent the progression of visual loss (Ockrim 2010). Focal laser photocoagulation uses the heat of light to seal or destroy abnormal blood vessels in the retina. Individual vessels are treated with a small number of laser burns.

PRP [panretinal photocoagulation, US] aims to slow down the growth of new blood vessels in a wider area of the retina. Many hundreds of laser burns are placed on the peripheral parts of the retina to stop blood vessels from growing (RCOphth 2012). It is thought that the anatomic and functional changes that result from photocoagulation may improve the oxygen supply to the retina, and so reduce the stimulus for neovascularisation (Stefansson 2001). Again the exact mechanisms are unclear, but it is possible that the decreased area of retinal tissue leads to improved oxygenation and a reduction in the levels of vascular endothelial growth factor. A reduction in levels of vascular endothelial growth factor may be important in reducing the risk of harmful new vessels forming. […] Laser photocoagulation is a well-established common treatment for DR and there are many different potential strategies for delivery of laser treatment that are likely to have different effects. A systematic review of the evidence for laser photocoagulation will provide important information on benefits and harms to guide treatment choices. […] This is the first in a series of planned reviews on laser photocoagulation. Future reviews will compare different photocoagulation techniques.”

“We identified a large number of trials of laser photocoagulation of diabetic retinopathy (n = 83) but only five of these studies were eligible for inclusion in the review, i.e. they compared laser photocoagulation with currently available lasers to no (or deferred) treatment. Three studies were conducted in the USA, one study in the UK and one study in Japan. A total of 4786 people (9503 eyes) were included in these studies. The majority of participants in four of these trials were people with proliferative diabetic retinopathy; one trial recruited mainly people with non-proliferative retinopathy.”

“At 12 months there was little difference between eyes that received laser photocoagulation and those allocated to no treatment (or deferred treatment), in terms of loss of 15 or more letters of visual acuity (risk ratio (RR) 0.99, 95% confidence interval (CI) 0.89 to 1.11; 8926 eyes; 2 RCTs, low quality evidence). Longer term follow-up did not show a consistent pattern, but one study found a 20% reduction in risk of loss of 15 or more letters of visual acuity at five years with laser treatment. Treatment with laser reduced the risk of severe visual loss by over 50% at 12 months (RR 0.46, 95% CI 0.24 to 0.86; 9276 eyes; 4 RCTs, moderate quality evidence). There was a beneficial effect on progression of diabetic retinopathy with treated eyes experiencing a 50% reduction in risk of progression of diabetic retinopathy (RR 0.49, 95% CI 0.37 to 0.64; 8331 eyes; 4 RCTs, low quality evidence) and a similar reduction in risk of vitreous haemorrhage (RR 0.56, 95% CI 0.37 to 0.85; 224 eyes; 2 RCTs, low quality evidence).”
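
For readers unfamiliar with how pooled risk ratios and their confidence intervals are computed, here is the standard calculation on a log scale. The event counts in the example are made up purely to illustrate the formula; the review only reports the pooled estimates:

```python
import math

def risk_ratio_ci(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Risk ratio with a 95% CI computed on the log scale (standard formula)."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    se_log_rr = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctrl - 1/n_ctrl)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Hypothetical counts chosen only to give a ratio in the same ballpark as the
# quoted RR 0.46 for severe visual loss:
print(risk_ratio_ci(events_tx=30, n_tx=4600, events_ctrl=65, n_ctrl=4600))
```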

“Overall there is not a large amount of evidence from RCTs on the effects of laser photocoagulation compared to no treatment or deferred treatment. The evidence is dominated by two large studies conducted in the US population (DRS 1978; ETDRS 1991). These two studies were generally judged to be at low or unclear risk of bias, with the exception of inevitable unmasking of patients due to differences between intervention and control. […] In current clinical guidelines, e.g. RCOphth 2012, PRP is recommended in high-risk PDR. The recommendation is that “as retinopathy approaches the proliferative stage, laser scatter treatment (PRP) should be increasingly considered to prevent progression to high risk PDR” based on other factors such as patients’ compliance or planned cataract surgery.

These recommendations need to be interpreted while considering the risk of visual loss associated with different levels of severity of DR, as well as the risk of progression. Since PRP reduces the risk of severe visual loss, but not moderate visual loss that is more related to diabetic maculopathy, most ophthalmologists judge that there is little benefit in treating non-proliferative DR at low risk of severe visual damage, as patients would incur the known adverse effects of PRP, which, although mild, include pain and peripheral visual field loss and transient DMO [diabetic macular oedema, US]. […] This review provides evidence that laser photocoagulation is beneficial in treating diabetic retinopathy. […] based on the baseline risk of progression of the disease, and risk of visual loss, the current approach of caution in treating non-proliferative DR with laser would appear to be justified.

By current standards the quality of the evidence is not high, however, the effects on risk of progression and risk of severe visual loss are reasonably large (50% relative risk reduction).”

iv. Immune Recognition of β-Cells: Neoepitopes as Key Players in the Loss of Tolerance.

I should probably warn beforehand that this one is rather technical. It relates reasonably closely to topics covered in the molecular biology book I recently covered here on the blog, and if I had not read that book quite recently I almost certainly would not have been able to read the paper – so the coverage below is more ‘for me’ than ‘for you’. Anyway, some quotes:

“Prior to the onset of type 1 diabetes, there is progressive loss of immune self-tolerance, evidenced by the accumulation of islet autoantibodies and emergence of autoreactive T cells. Continued autoimmune activity leads to the destruction of pancreatic β-cells and loss of insulin secretion. Studies of samples from patients with type 1 diabetes and of murine disease models have generated important insights about genetic and environmental factors that contribute to susceptibility and immune pathways that are important for pathogenesis. However, important unanswered questions remain regarding the events that surround the initial loss of tolerance and subsequent failure of regulatory mechanisms to arrest autoimmunity and preserve functional β-cells. In this Perspective, we discuss various processes that lead to the generation of neoepitopes in pancreatic β-cells, their recognition by autoreactive T cells and antibodies, and potential roles for such responses in the pathology of disease. Emerging evidence supports the relevance of neoepitopes generated through processes that are mechanistically linked with β-cell stress. Together, these observations support a paradigm in which neoepitope generation leads to the activation of pathogenic immune cells that initiate a feed-forward loop that can amplify the antigenic repertoire toward pancreatic β-cell proteins.”

“Enzymatic posttranslational processes that have been implicated in neoepitope generation include acetylation (10), citrullination (11), glycosylation (12), hydroxylation (13), methylation (either protein or DNA methylation) (14), phosphorylation (15), and transglutamination (16). Among these, citrullination and transglutamination are most clearly implicated as processes that generate neoantigens in human disease, but evidence suggests that others also play a role in neoepitope formation […] Citrulline, which is among the most studied PTMs in the context of autoimmunity, is a diagnostic biomarker of rheumatoid arthritis (RA). […] Anticitrulline antibodies are among the earliest immune responses that are diagnostic of RA and often correlate with disease severity (18). We have recently documented the biological consequences of citrulline modifications and autoimmunity that arise from pancreatic β-cell proteins in the development of T1D (19). In particular, citrullinated GAD65 and glucose-regulated protein (GRP78) elicit antibody and T-cell responses in human T1D and in NOD diabetes, respectively (20,21).”

Carbonylation is an irreversible, iron-catalyzed oxidative modification of the side chains of lysine, arginine, threonine, or proline. Mitochondrial functions are particularly sensitive to carbonyl modification, which also has detrimental effects on other intracellular enzymatic pathways (30). A number of diseases have been linked with altered carbonylation of self-proteins, including Alzheimer and Parkinson diseases and cancer (27). There is some data to support that carbonyl PTM is a mechanism that directs unstable self-proteins into cellular degradation pathways. It is hypothesized that carbonyl PTM [post-translational modification] self-proteins that fail to be properly degraded in pancreatic β-cells are autoantigens that are targeted in T1D. Recently submitted studies have identified several carbonylated pancreatic β-cell neoantigens in human and murine models of T1D (27). Among these neoantigens are chaperone proteins that are required for the appropriate folding and secretion of insulin. These studies imply that although some PTM self-proteins may be direct targets of autoimmunity, others may alter, interrupt, or disturb downstream metabolic pathways in the β-cell. In particular, these studies indicated that upstream PTMs resulted in misfolding and/or metabolic disruption between proinsulin and insulin production, which provides one explanation for recent observations of increased proinsulin-to-insulin ratios in the progression of T1D (31).”

“Significant hypomethylation of DNA has been linked with several classic autoimmune diseases, such as SLE, multiple sclerosis, RA, Addison disease, Graves disease, and mixed connective tissue disease (36). Therefore, there is rationale to consider the possible influence of epigenetic changes on protein expression and immune recognition in T1D. Relevant to T1D, epigenetic modifications occur in pancreatic β-cells during progression of diabetes in NOD mice (37). […] Consequently, DNMTs [DNA methyltransferases] and protein arginine methyltransferases are likely to play a role in the regulation of β-cell differentiation and insulin gene expression, both of which are pathways that are altered in the presence of inflammatory cytokines. […] Eizirik et al. (38) reported that exposure of human islets to proinflammatory cytokines leads to modulation of transcript levels and increases in alternative splicing for a number of putative candidate genes for T1D. Their findings suggest a mechanism through which alternative splicing may lead to the generation of neoantigens and subsequent presentation of novel β-cell epitopes (39).”

“The phenomenon of neoepitope recognition by autoantibodies has been shown to be relevant in a variety of autoimmune diseases. For example, in RA, antibody responses directed against various citrullinated synovial proteins are remarkably disease-specific and routinely used as a diagnostic test in the clinic (18). Appearance of the first anticitrullinated protein antibodies occurs years prior to disease onset, and accumulation of additional autoantibody specificities correlates closely with the imminent onset of clinical arthritis (44). There is analogous evidence supporting a hierarchical emergence of autoantibody specificities and multiple waves of autoimmune damage in T1D (3,45). Substantial data from longitudinal studies indicate that insulin and GAD65 autoantibodies appear at the earliest time points during progression, followed by additional antibody specificities directed at IA-2 and ZnT8.”

“Multiple autoimmune diseases often cluster within families (or even within one person), implying shared etiology. Consequently, relevant insights can be gleaned from studies of more traditional autoantibody-mediated systemic autoimmune diseases, such as SLE and RA, where inter- and intramolecular epitope spreading are clearly paradigms for disease progression (47). In general, early autoimmunity is marked by restricted B- and T-cell epitopes, followed by an expanded repertoire coinciding with the onset of more significant tissue pathology […] Akin to T1D, other autoimmune syndromes tend to cluster to subcellular tissues or tissue components that share biological or biochemical properties. For example, SLE is marked by autoimmunity to nucleic acid–bearing macromolecules […] Unlike other systemic autoantibody-mediated diseases, such as RA and SLE, there is no clear evidence that T1D-related autoantibodies play a pathogenic role. Autoantibodies against citrulline-containing neoepitopes of proteoglycan are thought to trigger or intensify arthritis by forming immune complexes with this autoantigen in the joints of RA patients with anticitrullinated protein antibodies. In a similar manner, autoantibodies and immune complexes are hallmarks of tissue pathology in SLE. Therefore, it remains likely that autoantibodies or the B cells that produce them contribute to the pathogenesis of T1D.”

“In summation, the existing literature demonstrates that oxidation, citrullination, and deamidation can have a direct impact on T-cell recognition that contributes to loss of tolerance.”

“There is a general consensus that the pathogenesis of T1D is initiated when individuals who possess a high level of genetic risk (e.g., susceptible HLA, insulin VNTR, PTPN22 genotypes) are exposed to environmental factors (e.g., enteroviruses, diet, microbiome) that precipitate a loss of tolerance that manifests through the appearance of insulin and/or GAD autoantibodies. This early autoimmunity is followed by epitope spreading, increasing both the number of antigenic targets and the diversity of epitopes within these targets. These processes create a feed-forward loop antigen release that induces increasing inflammation and increasing numbers of distinct T-cell specificities (64). The formation and recognition of neoepitopes represents one mechanism through which epitope spreading can occur. […] mechanisms related to neoepitope formation and recognition can be envisioned at multiple stages of T1D pathogenesis. At the level of genetic risk, susceptible individuals may exhibit a genetically driven impairment of their stress response, increasing the likelihood of neoepitope formation. At the level of environmental exposure, many of the insults that are thought to initiate T1D are known to cause neoepitope formation. During the window of β-cell destruction that encompasses early autoimmunity through dysglycemia and diagnosis of T1D it remains unclear when neoepitope responses appear in relation to “classic” responses to insulin and GAD65. However, by the time of onset, neoepitope responses are clearly present and remain as part of the ongoing autoimmunity that is present during established T1D. […] The ultimate product of both direct and indirect generation of neoepitopes is an accumulation of robust and diverse autoimmune B- and T-cell responses, accelerating the pathological destruction of pancreatic islets. Clearly, the emergence of sophisticated methods of tissue and single-cell proteomics will identify novel neoepitopes, including some that occur at near the earliest stages of disease. A detailed mechanistic understanding of the pathways that lead to specific classes of neoepitopes will certainly suggest targets of therapeutic manipulation and intervention that would be hoped to impede the progression of disease.”

v. Diabetes technology: improving care, improving patient‐reported outcomes and preventing complications in young people with Type 1 diabetes.

“With the evolution of diabetes technology, those living with Type 1 diabetes are given a wider arsenal of tools with which to achieve glycaemic control and improve patient‐reported outcomes. Furthermore, the use of these technologies may help reduce the risk of acute complications, such as severe hypoglycaemia and diabetic ketoacidosis, as well as long‐term macro‐ and microvascular complications. […] Unfortunately, diabetes goals are often unmet and people with Type 1 diabetes too frequently experience acute and long‐term complications of this condition, in addition to often having less than ideal psychosocial outcomes. Increasing realization of the importance of patient‐reported outcomes is leading to diabetes care delivery becoming more patient‐centred. […] Optimal diabetes management requires both the medical and psychosocial needs of people with Type 1 diabetes and their caregivers to be addressed. […] The aim of this paper was to demonstrate how, by incorporating technology into diabetes care, we can increase patient‐centered care, reduce acute and chronic diabetes complications, and improve clinical outcomes and quality of life.”

[The paper’s Table 2 on page 422 of the pdf-version is awesome, it includes a lot of different Hba1c estimates from various patient populations all across the world. The numbers included in the table are slightly less awesome, as most populations only achieve suboptimal metabolic control.]

“The risks of all forms of complications increase with higher HbA1c concentration, increasing diabetes duration, hypertension, presence of other microvascular complications, obesity, insulin resistance, hyperlipidaemia and smoking 6. Furthermore, the Diabetes Research in Children (DirecNet) study has shown that individuals with Type 1 diabetes have white matter differences in the brain and cognitive differences compared with individuals without Type 1 diabetes. These studies showed that the degree of structural differences in the brain were related to the degree of chronic hyperglycaemia, hypoglycaemia and glucose variability 7. […] In addition to long‐term complications, people with Type 1 diabetes are also at risk of acute complications. Severe hypoglycaemia, a hypoglycaemic event resulting in altered/loss of consciousness or seizures, is a serious complication of insulin therapy. If unnoticed and untreated, severe hypoglycaemia can result in death. […] The incidence of diabetic ketoacidosis, a life‐threatening consequence of diabetes, remains unacceptably high in children with established diabetes (Table 5). The annual incidence of ketoacidosis was 5% in the Prospective Diabetes Follow‐Up Registry (DPV) in Germany and Austria, 6.4% in the National Paediatric Diabetes Audit (NPDA), and 7.1% in the Type 1 Diabetes Exchange (T1DX) registry 10. Psychosocial factors including female gender, non‐white race, lower socio‐economic status, and elevated HbA1c all contribute to increased risk of diabetic ketoacidosis 11.”

“Depression is more common in young people with Type 1 diabetes than in young people without a chronic disease […] Depression can make it more difficult to engage in diabetes self‐management behaviours, and as a result, contributes to suboptimal glycaemic control and lower rates of self‐monitoring of blood glucose (SMBG) in young people with Type 1 diabetes 15. […] Unlike depression, diabetes distress is not a clinical diagnosis but rather emotional distress that comes from the burden of living with and managing diabetes 16. A recent systematic review found that roughly one‐third of young people with Type 1 diabetes (age 10–20 years) have some level of diabetes distress and that diabetes distress was consistently associated with higher HbA1c and worse self‐management 17. […] Eating and weight‐related comorbidities also exist for individuals with Type 1 diabetes. There is a higher incidence of obesity in individuals with Type 1 diabetes on intensive insulin therapy. […] Adolescent girls and young adult women with Type 1 diabetes are more likely to omit insulin for weight loss and have disordered eating habits 20.”

“In addition to screening for and treating depression and diabetes distress to improve overall diabetes management, it is equally important to assess quality of life as well as positive coping factors that may also influence self‐management and well‐being. For example, lower scores on the PROMIS® measure of global health, which assesses social relationships as well as physical and mental well‐being, have been linked to higher depression scores and less frequent blood glucose checks 13. Furthermore, coping strategies such as problem‐solving, emotional expression, and acceptance have been linked to lower HbA1c and enhanced quality of life 21.”

“Self‐monitoring of blood glucose via multiple finger sticks for capillary blood samples per day has been the ‘gold standard’ for glucose monitoring, but SMBG only provides glucose measurements as snapshots in time. Still, the majority of young people with Type 1 diabetes use SMBG as their main method to assess glycaemia. Data from the T1DX registry suggest that an increased frequency of SMBG is associated with lower HbA1c levels 23. The development of continuous glucose monitoring (CGM) provides more values, along with the rate and direction of glucose changes. […] With continued use, CGM has been shown to decrease the incidence of hypoglycaemia and HbA1c levels 26. […] Insulin can be administered via multiple daily injections or continuous subcutaneous insulin infusion (insulin pumps). Over the last 30 years, insulin pumps have become smaller with more features, making them a valuable alternative to multiple daily injections. Insulin pump use in various registries ranges from as low as 5.9% among paediatric patients in the New Zealand national register 28 to as high as 74% in the German/Austrian DPV in children aged <6 years (Table 2) 29. Recent data suggest that consistent use of insulin pumps can result in improved HbA1c values and decreased incidence of severe hypoglycaemia 30, 31. Insulin pumps have been associated with improved quality of life 32. The data on insulin pumps and diabetic ketoacidosis are less clear.”

“The majority of Type 1 diabetes management is carried out outside the clinical setting and in individuals’ daily lives. People with Type 1 diabetes must make complex treatment decisions multiple times daily; thus, diabetes self‐management skills are central to optimal diabetes management. Unfortunately, many people with Type 1 diabetes and their caregivers are not sufficiently familiar with the necessary diabetes self‐management skills. […] Parents are often the first who learn these skills. As children become older, they start receiving more independence over their diabetes care; however, the transition of responsibilities from caregiver to child is often unstructured and haphazard. It is important to ensure that both individuals with diabetes and their caregivers have adequate self‐management skills throughout the diabetes journey.”

“In the developed world (nations with the highest gross domestic product), 87% of the population has access to the internet and 68% report using a smartphone 39. Even in developing countries, 54% of people use the internet and 37% own smartphones 39. In many areas, smartphones are the primary source of internet access and are readily available. […] There are >1000 apps for diabetes on the Apple App Store and the Google Play store. Many of these apps have focused on nutrition, blood glucose logging, and insulin dosing. Given the prevalence of smartphones and the interest in having diabetes apps handy, there is the potential for using a smartphone to deliver education and decision support tools. […] The new psychosocial position statement from the ADA recommends routine psychosocial screening in clinic. These recommendations include screening for: 1) depressive symptoms annually, at diagnosis, or with changes in medical status; 2) anxiety and worry about hypoglycaemia, complications and other diabetes‐specific worries; 3) disordered eating and insulin omission for purposes of weight control; 4) and diabetes distress in children as young as 7 or 8 years old 16. Implementation of in‐clinic screening for depression in young people with Type 1 diabetes has already been shown to be feasible, acceptable and able to identify individuals in need of treatment who may otherwise have gone unnoticed for a longer period of time which would have been having a detrimental impact on physical health and quality of life 13, 40. These programmes typically use tablets […] to administer surveys to streamline the screening process and automatically score measures 13, 40. This automation allows psychologists and social workers to focus on care delivery rather than screening. In addition to depression screening, automated tablet‐based screening for parental depression, distress and anxiety; problem‐solving skills; and resilience/positive coping factors can help the care team understand other psychosocial barriers to care. This approach allows the development of patient‐ and caregiver‐centred interventions to improve these barriers, thereby improving clinical outcomes and complication rates.”

“With the advent of electronic health records, registries and downloadable medical devices, people with Type 1 diabetes have troves of data that can be analysed to provide insights on an individual and population level. Big data analytics for diabetes are still in the early stages, but present great potential for improving diabetes care. IBM Watson Health has partnered with Medtronic to deliver personalized insights to individuals with diabetes based on device data 48. Numerous other systems […] allow people with Type 1 diabetes to access their data, share their data with the healthcare team, and share de‐identified data with the research community. Data analysis and insights such as this can form the basis for the delivery of personalized digital health coaching. For example, historical patterns can be analysed to predict activity and lead to pro‐active insulin adjustment to prevent hypoglycaemia. […] Improvements to diabetes care delivery can occur at both the population level and at the individual level using insights from big data analytics.”

vi. Route to improving Type 1 diabetes mellitus glycaemic outcomes: real‐world evidence taken from the National Diabetes Audit.

“While control of blood glucose levels reduces the risk of diabetes complications, it can be very difficult for people to achieve. There has been no significant improvement in average glycaemic control among people with Type 1 diabetes for at least the last 10 years in many European countries 6.

The National Diabetes Audit (NDA) in England and Wales has shown relatively little change in the levels of HbA1c being achieved in people with Type 1 diabetes over the last 10 years, with >70% of HbA1c results each year being >58 mmol/mol (7.5%) 7.

Data for general practices in England are published by the NDA. NHS Digital publishes annual prescribing data, including British National Formulary (BNF) codes 7, 8. Together, these data provide an opportunity to investigate whether there are systematic associations between HbA1c levels in people with Type 1 diabetes and practice‐level population characteristics, diabetes service levels and use of medication.”

“The Quality and Outcomes Framework (a payment system for general practice performance) provided a baseline list of all general practices in England for each year, the practice list size and number of people (both with Type 1 and Type 2 diabetes) on their diabetes register. General practice‐level data of participating practices were taken from the NDA 2013–2014, 2014–2015 and 2015–2016 (5455 practices in the last year). They include Type 1 diabetes population characteristics, routine review checks and the proportions of people achieving target glycaemic control and/or being at higher glycaemic risk.

Diabetes medication data for all people with diabetes were taken from the general practice prescribing in primary care data for 2013–2014, 2014–2015 and 2015–2016, including insulin and blood glucose monitoring (BGM) […] A total of 20 indicators were created that covered the epidemiological, service, medication, technological, costs and outcomes performance for each practice and year. The variance in these indicators over the 4‐year period and among general practices was also considered. […] The values of the indicators found to be in the 90th percentile were used to quantify the potential of highest performing general practices. […] In total 13 085 practice‐years of data were analysed, covering 437 000 patient‐years of management.”

“There was significant variation among the participating general practices (Fig. 3) in the proportion of people achieving target glycaemic control target [percentage of people with HbA1c ≤58 mmol/mol (7.5%)] and in the proportion at high glycaemic risk [percentage of people with HbA1c >86 mmol/mol (10%)]. […] Our analysis showed that, at general practice level, the median target glycaemic control attainment was 30%, while the 10th percentile was 16%, and the 90th percentile was 45%. The corresponding median for the high glycaemic risk percentage was 16%, while the 10th percentile (corresponding to the best performing practices) was 6% and the 90th percentile (greatest proportion of Type 1 diabetes at high glycaemic risk) was 28%. Practices in the deciles for both lowest target glycaemic control and highest high glycaemic risk had 49% of the results in the 58–86 mmol/mol range. […] A very wide variation was found in the percentage of insulin for presumed pump use (deduced from prescriptions of fast‐acting vial insulin), with a median of 3.8% at general practice level. The 10th percentile was 0% and the 90th percentile was 255% of the median inferred pump usage.”

“[O]ur findings suggest that if all practices optimized service and therapies to the levels achieved by the top decile then 16 100 (7%) more people with Type 1 diabetes would achieve the glycaemic control target of 58 mmol/mol (7.5%) and 11 500 (5%) fewer people would have HbA1c >86 mmol/mol (10%). Put another way, if the results for all practices were at the top decile level, 36% vs 29% of people with Type 1 diabetes would achieve the glycaemic control target of HbA1c ≤ 58 mmol/mol (7.5%), and as few as 10% could have HbA1c levels > 86 mmol/mol (10%) compared with 15% currently (Fig. 6). This has significant implications for the potential to improve the longer‐term outcomes of people with Type 1 diabetes, given the close link between glycaemia and complications in such individuals 5, 10, 11.”
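
The ‘16,100 more people’ figure is straightforward to reproduce from the quoted proportions; the implied total Type 1 population (roughly 230,000) is back-calculated from the ‘16,100 (7%)’ statement and is therefore approximate:

```python
# Counterfactual: extra people reaching the <=58 mmol/mol target if every
# practice performed at the top-decile level. Population size is inferred
# from the quoted figures (16,100 / 0.07), so treat it as approximate.
population = 230_000
current_attainment = 0.29      # quoted current proportion at target
top_decile_attainment = 0.36   # quoted proportion if all practices matched the top decile
print(round(population * (top_decile_attainment - current_attainment)))  # ~16,100
```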

“We found that the significant variation among the participating general practices (Fig. 2) in terms of the proportion of people with HbA1c ≤58 mmol/mol (7.5%) was only partially related to a lower proportion of people with HbA1c >86 mmol/mol (10%). There was only a weak relationship between level of target glycaemia achieved and avoidance of very suboptimal glycaemia. The overall r2 value was 0.6. This suggests that there is a degree of independence between these outcomes, so that success factors at a general practice level differ for people achieving optimal glycaemia vs those factors affecting avoiding a level of at risk glycaemia.”

May 30, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Immunology, Medicine, Molecular biology, Ophthalmology, Studies | Leave a comment

Molecular biology (II)

Below I have added some more quotes and links related to the book’s coverage:

“[P]roteins are the most abundant molecules in the body except for water. […] Proteins make up half the dry weight of a cell whereas DNA and RNA make up only 3 per cent and 20 per cent respectively. […] The approximately 20,000 protein-coding genes in the human genome can, by alternative splicing, multiple translation starts, and post-translational modifications, produce over 1,000,000 different proteins, collectively called ‘the proteome‘. It is the size of the proteome and not the genome that defines the complexity of an organism. […] For simple organisms, such as viruses, all the proteins coded by their genome can be deduced from its sequence and these comprise the viral proteome. However for higher organisms the complete proteome is far larger than the genome […] For these organisms not all the proteins coded by the genome are found in any one tissue at any one time and therefore a partial proteome is usually studied. What are of interest are those proteins that are expressed in specific cell types under defined conditions.”

“Enzymes are proteins that catalyze or alter the rate of chemical reactions […] Enzymes can speed up reactions […] but they can also slow some reactions down. Proteins play a number of other critical roles. They are involved in maintaining cell shape and providing structural support to connective tissues like cartilage and bone. Specialized proteins such as actin and myosin are required [for] muscular movement. Other proteins act as ‘messengers’ relaying signals to regulate and coordinate various cell processes, e.g. the hormone insulin. Yet another class of protein is the antibodies, produced in response to foreign agents such as bacteria, fungi, and viruses.”

“Proteins are composed of amino acids. Amino acids are organic compounds with […] an amino group […] and a carboxyl group […] In addition, amino acids carry various side chains that give them their individual functions. The twenty-two amino acids found in proteins are called proteinogenic […] but other amino acids exist that are non-protein functioning. […] A peptide bond is formed between two amino acids by the removal of a water molecule. […] each individual unit in a peptide or protein is known as an amino acid residue. […] Chains of less than 50-70 amino acid residues are known as peptides or polypeptides and >50-70 as proteins, although many proteins are composed of more than one polypeptide chain. […] Proteins are macromolecules consisting of one or more strings of amino acids folded into highly specific 3D-structures. Each amino acid has a different size and carries a different side group. It is the nature of the different side groups that facilitates the correct folding of a polypeptide chain into a functional tertiary protein structure.”
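
The ‘removal of a water molecule’ bookkeeping is easy to illustrate with masses: a peptide's mass is the sum of its amino acid masses minus one water per peptide bond formed. A toy example with glycine and alanine (my own illustration, using approximate average masses, not something taken from the book):

```python
# Mass of a peptide = sum of free amino acid masses minus one water (18.02 Da)
# per peptide bond formed. Average masses, approximate values.
GLYCINE, ALANINE, WATER = 75.07, 89.09, 18.02

def peptide_mass(residue_masses):
    n_bonds = len(residue_masses) - 1
    return sum(residue_masses) - n_bonds * WATER

print(round(peptide_mass([GLYCINE, ALANINE]), 2))  # ~146.14 Da for glycylalanine
```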

“Atoms scatter the waves of X-rays mainly through their electrons, thus forming secondary or reflected waves. The pattern of X-rays diffracted by the atoms in the protein can be captured on a photographic plate or an image sensor such as a charge coupled device placed behind the crystal. The pattern and relative intensity of the spots on the diffraction image are then used to calculate the arrangement of atoms in the original protein. Complex data processing is required to convert the series of 2D diffraction or scatter patterns into a 3D image of the protein. […] The continued success and significance of this technique for molecular biology is witnessed by the fact that almost 100,000 structures of biological molecules have been determined this way, of which most are proteins.”

“The number of proteins in higher organisms far exceeds the number of known coding genes. The fact that many proteins carry out multiple functions but in a regulated manner is one way a complex proteome arises without increasing the number of genes. Proteins that performed a single role in the ancestral organism have acquired extra and often disparate functions through evolution. […] The active site of an enzyme employed in catalysis is only a small part of the protein, leaving spare capacity for acquiring a second function. […] The glycolytic pathway is involved in the breakdown of sugars such as glucose to release energy. Many of the highly conserved and ancient enzymes from this pathway have developed secondary or ‘moonlighting’ functions. Proteins often change their location in the cell in order to perform a ‘second job’. […] The limited size of the genome may not be the only evolutionary pressure for proteins to moonlight. Combining two functions in one protein can have the advantage of coordinating multiple activities in a cell, enabling it to respond quickly to changes in the environment without the need for lengthy transcription and translational processes.”

“Post-translational modifications (PTMs) […] is [a] process that can modify the role of a protein by addition of chemical groups to amino acids in the peptide chain after translation. Addition of phosphate groups (phosphorylation), for example, is a common mechanism for activating or deactivating an enzyme. Other common PTMs include addition of acetyl groups (acetylation), glucose (glucosylation), or methyl groups (methylation). […] Some additions are reversible, facilitating the switching between active and inactive states, and others are irreversible such as marking a protein for destruction by ubiquitin. [The difference between reversible and irreversible modifications can be quite important in pharmacology, and if you’re curious to know more about these topics Coleman’s drug metabolism text provides great coverage of related topics – US.] Diseases caused by malfunction of these modifications highlight the importance of PTMs. […] in diabetes [h]igh blood glucose leads to unwanted glucosylation of proteins. At the high glucose concentrations associated with diabetes, an unwanted irreversible chemical reaction binds the glucose to amino acid residues such as lysines exposed on the protein surface. The glucosylated proteins then behave badly, cross-linking themselves to the extracellular matrix. This is particularly dangerous in the kidney where it decreases function and can lead to renal failure.”

“Twenty thousand protein-coding genes make up the human genome but for any given cell only about half of these are expressed. […] Many genes get switched off during differentiation and a major mechanism for this is epigenetics. […] an epigenetic trait […] is ‘a stably heritable phenotype resulting from changes in the chromosome without alterations in the DNA sequence’. Epigenetics involves the chemical alteration of DNA by methyl or other small molecular groups to affect the accessibility of a gene by the transcription machinery […] Epigenetics can […] act on gene expression without affecting the stability of the genetic code by modifying the DNA, the histones in chromatin, or a whole chromosome. […] Epigenetic signatures are not only passed on to somatic daughter cells but they can also be transferred through the germline to the offspring. […] At first the evidence appeared circumstantial but more recent studies have provided direct proof of epigenetic changes involving gene methylation being inherited. Rodent models have provided mechanistic evidence. […] the importance of epigenetics in development is highlighted by the fact that low dietary folate, a nutrient essential for methylation, has been linked to higher risk of birth defects in the offspring.” […on the other hand, well…]

“The cell cycle is divided into phases […] Transition from G1 into S phase commits the cell to division and is therefore a very tightly controlled restriction point. Withdrawal of growth factors, insufficient nucleotides, or energy to complete DNA replication, or even a damaged template DNA, would compromise the process. Problems are therefore detected and the cell cycle halted by cell cycle inhibitors before the cell has committed to DNA duplication. […] The cell cycle inhibitors inactivate the kinases that promote transition through the phases, thus halting the cell cycle. […] The cell cycle can also be paused in S phase to allow time for DNA repairs to be carried out before cell division. The consequences of uncontrolled cell division are so catastrophic that evolution has provided complex checks and balances to maintain fidelity. The price of failure is apoptosis […] 50 to 70 billion cells die every day in a human adult by the controlled molecular process of apoptosis.”

“There are many diseases that arise because a particular protein is either absent or a faulty protein is produced. Administering a correct version of that protein can treat these patients. The first commercially available recombinant protein to be produced for medical use was human insulin to treat diabetes mellitus. […] (FDA) approved the recombinant insulin for clinical use in 1982. Since then over 300 protein-based recombinant pharmaceuticals have been licensed by the FDA and the European Medicines Agency (EMA) […], and many more are undergoing clinical trials. Therapeutic proteins can be produced in bacterial cells but more often mammalian cells such as the Chinese hamster ovary cell line and human fibroblasts are used as these hosts are better able to produce fully functional human protein. However, using mammalian cells is extremely expensive and an alternative is to use live animals or plants. This is called molecular pharming and is an innovative way of producing large amounts of protein relatively cheaply. […] In plant pharming, tobacco, rice, maize, potato, carrots, and tomatoes have all been used to produce therapeutic proteins. […] [One] class of proteins that can be engineered using gene-cloning technology is therapeutic antibodies. […] Therapeutic antibodies are designed to be monoclonal, that is, they are engineered so that they are specific for a particular antigen to which they bind, to block the antigen’s harmful effects. […] Monoclonal antibodies are at the forefront of biological therapeutics as they are highly specific and tend not to induce major side effects.”

“In gene therapy the aim is to restore the function of a faulty gene by introducing a correct version of that gene. […] a cloned gene is transferred into the cells of a patient. Once inside the cell, the protein encoded by the gene is produced and the defect is corrected. […] there are major hurdles to be overcome for gene therapy to be effective. One is that the gene construct has to be delivered to the diseased cells or tissues. This can often be difficult […] Mammalian cells […] have complex mechanisms that have evolved to prevent unwanted material such as foreign DNA getting in. Second, introduction of any genetic construct is likely to trigger the patient’s immune response, which can be fatal […] once delivered, expression of the gene product has to be sustained to be effective. One approach to delivering genes to the cells is to use genetically engineered viruses constructed so that most of the viral genome is deleted […] Once inside the cell, some viral vectors such as the retroviruses integrate into the host genome […]. This is an advantage as it provides long-lasting expression of the gene product. However, it also poses a safety risk, as there is little control over where the viral vector will insert into the patient’s genome. If the insertion occurs within a coding gene, this may inactivate gene function. If it integrates close to transcriptional start sites, where promoters and enhancer sequences are located, inappropriate gene expression can occur. This was observed in early gene therapy trials [where some patients who got this type of treatment developed cancer as a result of it. A few more details here – US] […] Adeno-associated viruses (AAVs) […] are often used in gene therapy applications as they are non-infectious, induce only a minimal immune response, and can be engineered to integrate into the host genome […] However, AAVs can only carry a small gene insert and so are limited to use with genes that are of a small size. […] An alternative delivery system to viruses is to package the DNA into liposomes that are then taken up by the cells. This is safer than using viruses as liposomes do not integrate into the host genome and are not very immunogenic. However, liposome uptake by the cells can be less efficient, resulting in lower expression of the gene.”

Links:

One gene–one enzyme hypothesis.
Molecular chaperone.
Protein turnover.
Isoelectric point.
Gel electrophoresis. Polyacrylamide.
Two-dimensional gel electrophoresis.
Mass spectrometry.
Proteomics.
Peptide mass fingerprinting.
Worldwide Protein Data Bank.
Nuclear magnetic resonance spectroscopy of proteins.
Immunoglobulins. Epitope.
Western blot.
Immunohistochemistry.
Crystallin. β-catenin.
Protein isoform.
Prion.
Gene expression. Transcriptional regulation. Chromatin. Transcription factor. Gene silencing. Histone. NF-κB. Chromatin immunoprecipitation.
The agouti mouse model.
X-inactive specific transcript (Xist).
Cell cycle. Cyclin. Cyclin-dependent kinase.
Retinoblastoma protein pRb.
Cytochrome c. Caspase. Bcl-2 family. Bcl-2-associated X protein.
Hybridoma technology. Muromonab-CD3.
Recombinant vaccines and the development of new vaccine strategies.
Knockout mouse.
Adenovirus Vectors for Gene Therapy, Vaccination and Cancer Gene Therapy.
Genetically modified food. Bacillus thuringiensis. Golden rice.

 

May 29, 2018 Posted by | Biology, Books, Chemistry, Diabetes, Engineering, Genetics, Immunology, Medicine, Molecular biology, Pharmacology | Leave a comment

Molecular biology (I?)

“This is a great publication, considering the format. These authors in my opinion managed to get quite close to what I’d consider to be ‘the ideal level of coverage’ for books of this nature.”

The above was what I wrote in my short goodreads review of the book. In this post I’ve added some quotes from the first chapters of the book and some links to topics covered.

Quotes:

“Once the base-pairing double helical structure of DNA was understood it became apparent that by holding and preserving the genetic code DNA is the source of heredity. The heritable material must also be capable of faithful duplication every time a cell divides. The DNA molecule is ideal for this. […] The effort then concentrated on how the instructions held by the DNA were translated into the choice of the twenty different amino acids that make up proteins. […] George Gamow [yes, that George Gamow! – US] made the suggestion that information held in the four bases of DNA (A, T, C, G) must be read as triplets, called codons. Each codon, made up of three nucleotides, codes for one amino acid or a ‘start’ or ‘stop’ signal. This information, which determines an organism’s biochemical makeup, is known as the genetic code. An encryption based on three nucleotides means that there are sixty-four possible three-letter combinations. But there are only twenty amino acids that are universal. […] some amino acids can be coded for by more than one codon.”
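
The codon arithmetic above is easy to check. Below is a minimal Python sketch (standard library only) that enumerates the 4³ = 64 possible codons; the three stop codons are hardcoded and everything else is just counting, so treat it purely as an illustration of the degeneracy of the code:

```python
from itertools import product

bases = "ACGU"                                 # the four RNA bases
codons = ["".join(c) for c in product(bases, repeat=3)]
print(len(codons))                             # 64 possible three-letter combinations

stop_codons = {"UAA", "UAG", "UGA"}            # the three 'stop' signals
sense_codons = len(codons) - len(stop_codons)  # 61 codons left to encode 20 amino acids
print(sense_codons, sense_codons / 20)         # ~3 codons per amino acid on average
```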

“The mechanism of gene expression whereby DNA transfers its information into proteins was determined in the early 1960s by Sydney Brenner, Francois Jacob, and Matthew Meselson. […] Francis Crick proposed in 1958 that information flowed in one direction only: from DNA to RNA to protein. This was called the ‘Central Dogma‘ and describes how DNA is transcribed into RNA, which then acts as a messenger carrying the information to be translated into proteins. Thus the flow of information goes from DNA to RNA to proteins and information can never be transferred back from protein to nucleic acid. DNA can be copied into more DNA (replication) or into RNA (transcription) but only the information in mRNA [messenger RNA] can be translated into protein”.

“The genome is the entire DNA contained within the forty-six chromosomes located in the nucleus of each human somatic (body) cell. […] The complete human genome is composed of over 3 billion bases and contains approximately 20,000 genes that code for proteins. This is much lower than earlier estimates of 80,000 to 140,000 and astonished the scientific community when revealed through human genome sequencing. Equally surprising was the finding that genomes of much simpler organisms sequenced at the same time contained a higher number of protein-coding genes than humans. […] It is now clear that the size of the genome does not correspond with the number of protein-coding genes, and these do not determine the complexity of an organism. Protein-coding genes can be viewed as ‘transcription units’. These are made up of sequences called exons that code for amino acids, and are separated by non-coding sequences called introns. Associated with these are additional sequences termed promoters and enhancers that control the expression of that gene.”

“Some sections of the human genome code for RNA molecules that do not have the capacity to produce proteins. […] it is now becoming apparent that many play a role in controlling gene expression. Despite the importance of proteins, less than 1.5 per cent of the genome is made up of exon sequences. A recent estimate is that about 80 per cent of the genome is transcribed or involved in regulatory functions with the rest mainly composed of repetitive sequences. […] Satellite DNA […] is a short sequence repeated many thousands of times in tandem […] A second type of repetitive DNA is the telomere sequence. […] Their role is to prevent chromosomes from shortening during DNA replication […] Repetitive sequences can also be found distributed or interspersed throughout the genome. These repeats have the ability to move around the genome and are referred to as mobile or transposable DNA. […] Such movements can be harmful sometimes as gene sequences can be disrupted causing disease. […] The vast majority of transposable sequences are no longer able to move around and are considered to be ‘silent’. However, these movements have contributed, over evolutionary time, to the organization and evolution of the genome, by creating new or modified genes leading to the production of proteins with novel functions.”

“A very important property of DNA is that it can make an accurate copy of itself. This is necessary since cells die during the normal wear and tear of tissues and need to be replenished. […] DNA replication is a highly accurate process with an error occurring every 10,000 to 1 million bases in human DNA. This low frequency is because the DNA polymerases carry a proofreading function. If an incorrect nucleotide is incorporated during DNA synthesis, the polymerase detects the error and excises the incorrect base. Following excision, the polymerase reinserts the correct base and replication continues. Any errors that are not corrected through proofreading are repaired by an alternative mismatch repair mechanism. In some instances, proofreading and repair mechanisms fail to correct errors. These become permanent mutations after the next cell division cycle as they are no longer recognized as errors and are therefore propagated each time the DNA replicates.”
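
A quick back-of-the-envelope calculation shows why the raw polymerase error rates quoted above would be unworkable for a 3-billion-base genome without proofreading and mismatch repair. The ~10⁻⁹ “effective” figure at the end is a common textbook estimate, not a number from the quoted passage:

```python
genome_size = 3e9                       # base pairs in the human genome (from the text)

# Raw error rates quoted above: one error per 10,000 to 1,000,000 bases copied.
for error_rate in (1e-4, 1e-6):
    print(f"errors per copy at {error_rate:g}: {genome_size * error_rate:,.0f}")

# With proofreading and mismatch repair, the effective rate is commonly cited as
# roughly one error per billion bases (textbook estimate, not from the quote).
print(f"errors per copy at 1e-9: {genome_size * 1e-9:,.0f}")
```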

DNA sequencing identifies the precise linear order of the nucleotide bases A, C, G, T, in a DNA fragment. It is possible to sequence individual genes, segments of a genome, or whole genomes. Sequencing information is fundamental in helping us understand how our genome is structured and how it functions. […] The Human Genome Project, which used Sanger sequencing, took ten years to sequence and cost 3 billion US dollars. Using high-throughput sequencing, the entire human genome can now be sequenced in a few days at a cost of 3,000 US dollars. These costs are continuing to fall, making it more feasible to sequence whole genomes. The human genome sequence published in 2003 was built from DNA pooled from a number of donors to generate a ‘reference’ or composite genome. However, the genome of each individual is unique and so in 2005 the Personal Genome Project was launched in the USA aiming to sequence and analyse the genomes of 100,000 volunteers across the world. Soon after, similar projects followed in Canada and Korea and, in 2013, in the UK. […] To store and analyze the huge amounts of data, computational systems have developed in parallel. This branch of biology, called bioinformatics, has become an extremely important collaborative research area for molecular biologists drawing on the expertise of computer scientists, mathematicians, and statisticians.”

“[T]he structure of RNA differs from DNA in three fundamental ways. First, the sugar is a ribose, whereas in DNA it is a deoxyribose. Secondly, in RNA the nucleotide bases are A, G, C, and U (uracil) instead of A, G, C, and T. […] Thirdly, RNA is a single-stranded molecule unlike double-stranded DNA. It is not helical in shape but can fold to form a hairpin or stem-loop structure by base-pairing between complementary regions within the same RNA molecule. These two-dimensional secondary structures can further fold to form complex three-dimensional, tertiary structures. An RNA molecule is able to interact not only with itself, but also with other RNAs, with DNA, and with proteins. These interactions, and the variety of conformations that RNAs can adopt, enables them to carry out a wide range of functions. […] RNAs can influence many normal cellular and disease processes by regulating gene expression. RNA interference […] is one of the main ways in which gene expression is regulated.”

“Translation of the mRNA to a protein takes place in the cell cytoplasm on ribosomes. Ribosomes are cellular structures made up primarily of rRNA and proteins. At the ribosomes, the mRNA is decoded to produce a specific protein according to the rules defined by the genetic code. The correct amino acids are brought to the mRNA at the ribosomes by molecules called transfer RNAs (tRNAs). […] At the start of translation, a tRNA binds to the mRNA at the start codon AUG. This is followed by the binding of a second tRNA matching the adjacent mRNA codon. The two neighbouring amino acids linked to the tRNAs are joined together by a chemical bond called the peptide bond. Once the peptide bond forms, the first tRNA detaches leaving its amino acid behind. The ribosome then moves one codon along the mRNA and a third tRNA binds. In this way, tRNAs sequentially bind to the mRNA as the ribosome moves from codon to codon. Each time a tRNA molecule binds, the linked amino acid is transferred to the growing amino acid chain. Thus the mRNA sequence is translated into a chain of amino acids connected by peptide bonds to produce a polypeptide chain. Translation is terminated when the ribosome encounters a stop codon […]. After translation, the chain is folded and very often modified by the addition of sugar or other molecules to produce fully functional proteins.”
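
The codon-by-codon walk described above is essentially a lookup loop, which a few lines of Python can mimic. The codon table below is deliberately tiny and purely illustrative; real translation of course involves tRNAs, ribosomal proofreading, and much else:

```python
# Minimal illustrative codon table -- only a handful of entries, not the full genetic code.
CODON_TABLE = {
    "AUG": "Met",  # start codon, also codes for methionine
    "UUU": "Phe", "GGC": "Gly", "AAA": "Lys", "GCU": "Ala",
    "UAA": None, "UAG": None, "UGA": None,   # stop codons
}

def translate(mrna):
    """Walk the mRNA three bases at a time from the first AUG until a stop codon."""
    start = mrna.find("AUG")
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3])
        if residue is None:          # stop codon (or a codon missing from this toy table)
            break
        peptide.append(residue)
    return peptide

print(translate("GGAUGUUUGGCAAAGCUUAAGG"))   # ['Met', 'Phe', 'Gly', 'Lys', 'Ala']
```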

“The naturally occurring RNAi pathway is now extensively exploited in the laboratory to study the function of genes. It is possible to design synthetic siRNA molecules with a sequence complementary to the gene under study. These double-stranded RNA molecules are then introduced into the cell by special techniques to temporarily knock down the expression of that gene. By studying the phenotypic effects of this severe reduction of gene expression, the function of that gene can be identified. Synthetic siRNA molecules also have the potential to be used to treat diseases. If a disease is caused or enhanced by a particular gene product, then siRNAs can be designed against that gene to silence its expression. This prevents the protein which drives the disease from being produced. […] One of the major challenges to the use of RNAi as therapy is directing siRNA to the specific cells in which gene silencing is required. If released directly into the bloodstream, enzymes in the bloodstream degrade siRNAs. […] Other problems are that siRNAs can stimulate the body’s immune response and can produce off-target effects by silencing RNA molecules other than those against which they were specifically designed. […] considerable attention is currently focused on designing carrier molecules that can transport siRNA through the bloodstream to the diseased cell.”
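
Designing an siRNA “with a sequence complementary to the gene under study” boils down, at its simplest, to taking the reverse complement of a stretch of the target mRNA. The sketch below does only that; the target sequence is hypothetical, and all of the real design considerations (GC content, off-target screening, overhangs, chemical modifications) are ignored:

```python
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}   # RNA base pairing

def sirna_guide(target_mrna):
    """Reverse complement of a target mRNA stretch -- the 'guide strand' idea only."""
    return "".join(COMPLEMENT[base] for base in reversed(target_mrna))

target = "AUGGCUAACUUCGGAUACCAG"   # hypothetical 21-nt stretch of the target transcript
print(sirna_guide(target))
```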

“Both Northern blotting and RT-PCR enable the expression of one or a few genes to be measured simultaneously. In contrast, the technique of microarrays allows gene expression to be measured across the full genome of an organism in a single step. This massive scale genome analysis technique is very useful when comparing gene expression profiles between two samples. […] This can identify gene subsets that are under- or over-expressed in one sample relative to the second sample to which it is compared.”
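
The under-/over-expression comparison described above is usually summarized as a log2 fold change per gene. A minimal sketch with made-up intensities, just to show the bookkeeping:

```python
import math

# Made-up expression intensities for a few genes in two samples (arbitrary units).
sample_a = {"GENE1": 120.0, "GENE2": 30.0, "GENE3": 500.0}
sample_b = {"GENE1": 60.0, "GENE2": 240.0, "GENE3": 480.0}

for gene in sample_a:
    log2_fc = math.log2(sample_b[gene] / sample_a[gene])
    if log2_fc >= 1:
        status = "over-expressed in sample B"
    elif log2_fc <= -1:
        status = "under-expressed in sample B"
    else:
        status = "roughly unchanged"
    print(f"{gene}: log2 fold change = {log2_fc:+.2f} -> {status}")
```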

Links:

Molecular biology.
Charles Darwin. Alfred Wallace. Gregor Mendel. Wilhelm Johannsen. Heinrich Waldeyer. Theodor Boveri. Walter Sutton. Friedrich Miescher. Phoebus Levene. Oswald Avery. Colin MacLeod. Maclyn McCarty. James Watson. Francis Crick. Rosalind Franklin. Andrew Fire. Craig Mello.
Gene. Genotype. Phenotype. Chromosome. Nucleotide. DNA. RNA. Protein.
Chargaff’s rules.
Photo 51.
Human Genome Project.
Long interspersed nuclear elements (LINEs). Short interspersed nuclear elements (SINEs).
Histone. Nucleosome.
Chromatin. Euchromatin. Heterochromatin.
Mitochondrial DNA.
DNA replication. Helicase. Origin of replication. DNA polymerase. Okazaki fragments. Leading strand and lagging strand. DNA ligase. Semiconservative replication.
Mutation. Point mutation. Indel. Frameshift mutation.
Genetic polymorphism. Single-nucleotide polymorphism (SNP).
Genome-wide association study (GWAS).
Molecular cloning. Restriction endonuclease. Multiple cloning site (MCS). Bacterial artificial chromosome.
Gel electrophoresis. Southern blot. Polymerase chain reaction (PCR). Reverse transcriptase PCR (RT-PCR). Quantitative PCR (qPCR).
GenBank. European Molecular Biology Laboratory (EMBL). Encyclopedia of DNA Elements (ENCODE).
RNA polymerase II. TATA box. Transcription factor IID. Stop codon.
Protein biosynthesis.
snRNA (small nuclear RNA).
Untranslated region (UTR sequences).
Transfer RNA.
Micro RNA (miRNA).
Dicer (enzyme).
RISC (RNA-induced silencing complex).
Argonaute.
Lipid-Based Nanoparticles for siRNA Delivery in Cancer Therapy.
Long non-coding RNA.
Ribozyme/catalytic RNA.
RNA-sequencing (RNA-seq).

May 5, 2018 Posted by | Biology, Books, Chemistry, Genetics, Medicine, Molecular biology | Leave a comment

Systems Biology (III)

Some observations from chapter 4 below:

“The need to maintain a steady state ensuring homeostasis is an essential concern in nature while the negative feedback loop is the fundamental way to ensure that this goal is met. The regulatory system determines the interdependences between individual cells and the organism, subordinating the former to the latter. In trying to maintain homeostasis, the organism may temporarily upset the steady state conditions of its component cells, forcing them to perform work for the benefit of the organism. […] On a cellular level signals are usually transmitted via changes in concentrations of reaction substrates and products. This simple mechanism is made possible due to the limited volume of each cell. Such signaling plays a key role in maintaining homeostasis and ensuring cellular activity. On the level of the organism signal transmission is performed by hormones and the nervous system. […] Most intracellular signal pathways work by altering the concentrations of selected substances inside the cell. Signals are registered by forming reversible complexes consisting of a ligand (reaction product) and an allosteric receptor complex. When coupled to the ligand, the receptor inhibits the activity of its corresponding effector, which in turn shuts down the production of the controlled substance ensuring the steady state of the system. Signals coming from outside the cell are usually treated as commands (covalent modifications), forcing the cell to adjust its internal processes […] Such commands can arrive in the form of hormones, produced by the organism to coordinate specialized cell functions in support of general homeostasis (in the organism). These signals act upon cell receptors and are usually amplified before they reach their final destination (the effector).”
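
A toy discrete-time simulation makes the concentration-mediated negative feedback described above concrete: production of a product is repressed as its concentration rises, and the level settles at a steady state instead of growing without bound. All parameter values are arbitrary; this is only a sketch of the principle:

```python
def simulate(steps=200, k_max=10.0, K=5.0, k_deg=0.5):
    """Toy negative feedback: production of P is repressed by P itself.
    dP/dt = k_max / (1 + P/K) - k_deg * P, integrated with a crude Euler step."""
    P, dt = 0.0, 0.1
    for _ in range(steps):
        production = k_max / (1.0 + P / K)   # the more product there is, the less is made
        P += dt * (production - k_deg * P)   # degradation pulls the level back down
    return P

print(f"steady-state level ~ {simulate():.2f}")   # settles at a finite value instead of diverging
```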

“Each concentration-mediated signal must first be registered by a detector. […] Intracellular detectors are typically based on allosteric proteins. Allosteric proteins exhibit a special property: they have two stable structural conformations and can shift from one form to the other as a result of changes in ligand concentrations. […] The concentration of a product (or substrate) which triggers structural realignment in the allosteric protein (such as a regulatory enzyme) depends on the genetically-determined affinity of the active site to its ligand. Low affinity results in high target concentration of the controlled substance while high affinity translates into lower concentration […]. In other words, high concentration of the product is necessary to trigger a low-affinity receptor (and vice versa). Most intracellular regulatory mechanisms rely on noncovalent interactions. Covalent bonding is usually associated with extracellular signals, generated by the organism and capable of overriding the cell’s own regulatory mechanisms by modifying the sensitivity of receptors […]. Noncovalent interactions may be compared to requests while covalent signals are treated as commands. Signals which do not originate in the receptor’s own feedback loop but modify its affinity are known as steering signals […] Hormones which act upon cells are, by their nature, steering signals […] Noncovalent interactions — dependent on substance concentrations — impose spatial restrictions on regulatory mechanisms. Any increase in cell volume requires synthesis of additional products in order to maintain stable concentrations. The volume of a spherical cell is given as V = 4/3 π r³, where r indicates cell radius. Clearly, even a slight increase in r translates into a significant increase in cell volume, diluting any products dispersed in the cytoplasm. This implies that cells cannot expand without incurring great energy costs. It should also be noted that cell expansion reduces the efficiency of intracellular regulatory mechanisms because signals and substrates need to be transported over longer distances. Thus, cells are universally small, regardless of whether they make up a mouse or an elephant.”
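
The V = 4/3 π r³ point is easy to make concrete: a modest increase in radius dilutes a fixed number of product molecules substantially. The radii below are hypothetical:

```python
import math

def volume(r_um):
    """Volume of a spherical cell of radius r (micrometres), in cubic micrometres."""
    return 4.0 / 3.0 * math.pi * r_um ** 3

for r in (5.0, 6.0, 10.0):          # hypothetical cell radii in micrometres
    v = volume(r)
    print(f"r = {r:4.1f} um  ->  V = {v:8.1f} um^3  (x{v / volume(5.0):.2f} vs r = 5 um)")

# A 20% increase in radius (5 -> 6 um) already increases volume ~1.7-fold,
# so the same number of product molecules is diluted ~1.7-fold.
```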

An effector is an element of a regulatory loop which counteracts changes in the regulated quantity […] Synthesis and degradation of biological compounds often involves numerous enzymes acting in sequence. The product of one enzyme is a substrate for another enzyme. With the exception of the initial enzyme, each step of this cascade is controlled by the availability of the supplied substrate […] The effector consists of a chain of enzymes, each of which depends on the activity of the initial regulatory enzyme […] as well as on the activity of its immediate predecessor which supplies it with substrates. The function of all enzymes in the effector chain is indirectly dependent on the initial enzyme […]. This coupling between the receptor and the first link in the effector chain is a universal phenomenon. It can therefore be said that the initial enzyme in the effector chain is, in fact, a regulatory enzyme. […] Most cell functions depend on enzymatic activity. […] It seems that a set of enzymes associated with a specific process which involves a negative feedback loop is the most typical form of an intracellular regulatory effector. Such effectors can be controlled through activation or inhibition of their associated enzymes.”

“The organism is a self-contained unit represented by automatic regulatory loops which ensure homeostasis. […] Effector functions are conducted by cells which are usually grouped and organized into tissues and organs. Signal transmission occurs by way of body fluids, hormones or nerve connections. Cells can be treated as automatic and potentially autonomous elements of regulatory loops, however their specific action is dependent on the commands issued by the organism. This coercive property of organic signals is an integral requirement of coordination, allowing the organism to maintain internal homeostasis. […] Activities of the organism are themselves regulated by their own negative feedback loops. Such regulation differs however from the mechanisms observed in individual cells due to its place in the overall hierarchy and differences in signal properties, including in particular:
• Significantly longer travel distances (compared to intracellular signals);
• The need to maintain hierarchical superiority of the organism;
• The relative autonomy of effector cells. […]
The relatively long distance travelled by the organism’s signals and their dilution (compared to intracellular ones) call for amplification. As a consequence, any errors or random distortions in the original signal may be drastically exacerbated. A solution to this problem comes in the form of encoding, which provides the signal with sufficient specificity while enabling it to be selectively amplified. […] a loudspeaker can […] assist in acoustic communication, but due to the lack of signal encoding it cannot compete with radios in terms of communication distance. The same reasoning applies to organism-originated signals, which is why information regarding blood glucose levels is not conveyed directly by glucose but instead by adrenalin, glucagon or insulin. Information encoding is handled by receptors and hormone-producing cells. Target cells are capable of decoding such signals, thus completing the regulatory loop […] Hormonal signals may be effectively amplified because the hormone itself does not directly participate in the reaction it controls — rather, it serves as an information carrier. […] strong amplification invariably requires encoding in order to render the signal sufficiently specific and unambiguous. […] Unlike organisms, cells usually do not require amplification in their internal regulatory loops — even the somewhat rare instances of intracellular amplification only increase signal levels by a small amount. Without the aid of an amplifier, messengers coming from the organism level would need to be highly concentrated at their source, which would result in decreased efficiency […] Most signals originating at the organism’s level travel with body fluids; however, if a signal has to reach its destination very rapidly (for instance in muscle control) it is sent via the nervous system”.

“Two types of amplifiers are observed in biological systems:
1. cascade amplifier,
2. positive feedback loop. […]
A cascade amplifier is usually a collection of enzymes which perform their action by activation in strict sequence. This mechanism resembles multistage (sequential) synthesis or degradation processes, however instead of exchanging reaction products, amplifier enzymes communicate by sharing activators or by directly activating one another. Cascade amplifiers are usually contained within cells. They often consist of kinases. […] Amplification effects occurring at each stage of the cascade contribute to its final result. […] While the kinase amplification factor is estimated to be on the order of 10³, the phosphorylase cascade results in 10¹⁰-fold amplification. It is a stunning value, though it should also be noted that the hormones involved in this cascade produce particularly powerful effects. […] A positive feedback loop is somewhat analogous to a negative feedback loop, however in this case the input and output signals work in the same direction — the receptor upregulates the process instead of inhibiting it. Such upregulation persists until the available resources are exhausted.
Positive feedback loops can only work in the presence of a control mechanism which prevents them from spiraling out of control. They cannot be considered self-contained and only play a supportive role in regulation. […] In biological systems positive feedback loops are sometimes encountered in extracellular regulatory processes where there is a need to activate slowly-migrating components and greatly amplify their action in a short amount of time. Examples include blood coagulation and complement factor activation […] Positive feedback loops are often coupled to negative loop-based control mechanisms. Such interplay of loops may impart the signal with desirable properties, for instance by transforming a flat signal into a sharp spike required to overcome the activation threshold for the next stage in a signalling cascade. An example is the ejection of calcium ions from the endoplasmic reticulum in the phospholipase C cascade, itself subject to a negative feedback loop.”
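
The quoted amplification factors multiply across the stages of a cascade, which is all the arithmetic below does. The per-stage gain of ~10³ is the kinase figure quoted above; the number of stages (and hence the overall product) is purely illustrative:

```python
# Per-stage gains in a hypothetical three-stage kinase cascade.
stage_gains = [1e3, 1e3, 1e3]

total = 1.0
for i, gain in enumerate(stage_gains, start=1):
    total *= gain
    print(f"after stage {i}: {total:.0e}-fold amplification")

# Gains multiply, which is how a handful of sequential enzymatic steps can reach
# very large overall factors such as the ~1e10 cited for the phosphorylase cascade.
```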

“Strong signal amplification carries an important drawback: it tends to “overshoot” its target activity level, causing wild fluctuations in the process it controls. […] Nature has evolved several means of signal attenuation. The most typical mechanism superimposes two regulatory loops which affect the same parameter but act in opposite directions. An example is the stabilization of blood glucose levels by two contradictory hormones: glucagon and insulin. Similar strategies are exploited in body temperature control and many other biological processes. […] The coercive properties of signals coming from the organism carry risks associated with the possibility of overloading cells. The regulatory loop of an autonomous cell must therefore include an “off switch”, controlled by the cell. An autonomous cell may protect itself against excessive involvement in processes triggered by external signals (which usually incur significant energy expenses). […] The action of such mechanisms is usually timer-based, meaning that they inactivate signals following a set amount of time. […] The ability to interrupt signals protects cells from exhaustion. Uncontrolled hormone-induced activity may have detrimental effects upon the organism as a whole. This is observed e.g. in the case of the Vibrio cholerae toxin, which causes prolonged activation of intestinal epithelial cells by locking the G protein in its active state (resulting in severe diarrhea which can dehydrate the organism).”

“Biological systems in which information transfer is affected by high entropy of the information source and ambiguity of the signal itself must include discriminatory mechanisms. These mechanisms usually work by eliminating weak signals (which are less specific and therefore introduce ambiguities). They create additional obstacles (thresholds) which the signals must overcome. A good example is the mechanism which eliminates the ability of weak, random antigens to activate lymphatic cells. It works by inhibiting blastic transformation of lymphocytes until a so-called receptor cap has accumulated on the surface of the cell […]. Only under such conditions can the activation signal ultimately reach the cell nucleus […] and initiate gene transcription. […] weak, reversible nonspecific interactions do not permit sufficient aggregation to take place. This phenomenon can be described as a form of discrimination against weak signals. […] Discrimination may also be linked to effector activity. […] Cell division is counterbalanced by programmed cell death. The most typical example of this process is apoptosis […] Each cell is prepared to undergo controlled death if required by the organism, however apoptosis is subject to tight control. Cells protect themselves against accidental triggering of the process via IAP proteins. Only strong proapoptotic signals may overcome this threshold and initiate cellular suicide”.

“Simply knowing the sequences, structures or even functions of individual proteins does not provide sufficient insight into the biological machinery of living organisms. The complexity of individual cells and entire organisms calls for functional classification of proteins. This task can be accomplished with a proteome — a theoretical construct where individual elements (proteins) are grouped in a way which acknowledges their mutual interactions and interdependencies, characterizing the information pathways in a complex organism.
Most ongoing proteome construction projects focus on individual proteins as the basic building blocks […] [We would instead argue in favour of a model in which] [t]he basic unit of the proteome is one negative feedback loop (rather than a single protein) […]
Due to the relatively large number of proteins (between 25 and 40 thousand in the human organism), presenting them all on a single graph with vertex lengths corresponding to the relative duration of interactions would be unfeasible. This is why proteomes are often subdivided into functional subgroups such as the metabolome (proteins involved in metabolic processes), the interactome (complex-forming proteins), the kinome (proteins which belong to the kinase family), etc.”

February 18, 2018 Posted by | Biology, Books, Chemistry, Genetics, Medicine, Molecular biology | Leave a comment

Systems Biology (II)

Some observations from the book’s chapter 3 below:

“Without regulation biological processes would become progressively more and more chaotic. In living cells the primary source of information is genetic material. Studying the role of information in biology involves signaling (i.e. spatial and temporal transfer of information) and storage (preservation of information). Regarding the role of the genome we can distinguish three specific aspects of biological processes: steady-state genetics, which ensures cell-level and body homeostasis; genetics of development, which controls cell differentiation and genesis of the organism; and evolutionary genetics, which drives speciation. […] The ever growing demand for information, coupled with limited storage capacities, has resulted in a number of strategies for minimizing the quantity of the encoded information that must be preserved by living cells. In addition to combinatorial approaches based on noncontiguous gene structure, self-organization plays an important role in cellular machinery. Nonspecific interactions with the environment give rise to coherent structures despite the lack of any overt information store. These mechanisms, honed by evolution and ubiquitous in living organisms, reduce the need to directly encode large quantities of data by adopting a systemic approach to information management.”

“Information is commonly understood as a transferable description of an event or object. Information transfer can be either spatial (communication, messaging or signaling) or temporal (implying storage). […] The larger the set of choices, the lower the likelihood [of] making the correct choice by accident and — correspondingly — the more information is needed to choose correctly. We can therefore state that an increase in the cardinality of a set (the number of its elements) corresponds to an increase in selection indeterminacy. This indeterminacy can be understood as a measure of “a priori ignorance”. […] Entropy determines the uncertainty inherent in a given system and therefore represents the relative difficulty of making the correct choice. For a set of possible events it reaches its maximum value if the relative probabilities of each event are equal. Any information input reduces entropy — we can therefore say that changes in entropy are a quantitative measure of information. […] Physical entropy is highest in a state of equilibrium, i.e. lack of spontaneity (ΔG = 0.0), which effectively terminates the given reaction. Regulatory processes which counteract the tendency of physical systems to reach equilibrium must therefore oppose increases in entropy. It can be said that a steady inflow of information is a prerequisite of continued function in any organism. As selections are typically made at the entry point of a regulatory process, the concept of entropy may also be applied to information sources. This approach is useful in explaining the structure of regulatory systems which must be “designed” in a specific way, reducing uncertainty and enabling accurate, error-free decisions.

The fire ant exudes a pheromone which enables it to mark sources of food and trace its own path back to the colony. In this way, the ant conveys pathing information to other ants. The intensity of the chemical signal is proportional to the abundance of the source. Other ants can sense the pheromone from a distance of several (up to a dozen) centimeters and thus locate the source themselves. […] As can be expected, an increase in the entropy of the information source (i.e. the measure of ignorance) results in further development of regulatory systems — in this case, receptors capable of receiving signals and processing them to enable accurate decisions. Over time, the evolution of regulatory mechanisms increases their performance and precision. The purpose of various structures involved in such mechanisms can be explained on the grounds of information theory. The primary goal is to select the correct input signal, preserve its content and avoid or eliminate any errors.”
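
The entropy discussed above — maximal when all events are equally probable, reduced by any information input — is Shannon’s H = −Σ pᵢ log₂ pᵢ. A minimal sketch:

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2 p), in bits; zero-probability events contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits -- maximal for 4 equiprobable events
print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits -- an almost-certain outcome
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0 bits -- no uncertainty left
```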

“Genetic information stored in nucleotide sequences can be expressed and transmitted in two ways:
a. via replication (in cell division);
b. via transcription and translation (also called gene expression […])
Both processes act as effectors and can be triggered by certain biological signals transferred on request.
Gene expression can be defined as a sequence of events which lead to the synthesis of proteins or their products required for a particular function. In cell division, the goal of this process is to generate a copy of the entire genetic code (S phase), whereas in gene expression only selected fragments of DNA (those involved in the requested function) are transcribed and translated. […] Transcription calls for exposing a section of the cell’s genetic code and although its product (RNA) is short-lived, it can be recreated on demand, just like a carbon copy of a printed text. On the other hand, replication affects the entire genetic material contained in the cell and must conform to stringent precision requirements, particularly as the size of the genome increases.”

“The magnitude of effort involved in replication of genetic code can be visualized by comparing the DNA chain to a zipper […]. Assuming that the zipper consists of three pairs of interlocking teeth per centimeter (300 per meter) and that the human genome is made up of 3 billion […] base pairs, the total length of our uncoiled DNA in “zipper form” would be equal to […] 10,000 km […] If we were to unfasten the zipper at a rate of 1 m per second, the entire unzipping process would take approximately 3 months […]. This comparison should impress upon the reader the length of the DNA chain and the precision with which individual nucleotides must be picked to ensure that the resulting code is an exact copy of the source. It should also be noted that for each base pair the polymerase enzyme needs to select an appropriate matching nucleotide from among four types of nucleotides present in the solution, and attach it to the chain (clearly, no such problem occurs in zippers). The reliability of an average enzyme is on the order of 10⁻³–10⁻⁴, meaning that one error occurs for every 1,000–10,000 interactions between the enzyme and its substrate. Given this figure, replication of 3×10⁹ base pairs would introduce approximately 3 million errors (mutations) per genome, resulting in a highly inaccurate copy. Since the observed reliability of replication is far higher, we may assume that some corrective mechanisms are involved. In reality, the remarkable precision of genetic replication is ensured by DNA repair processes, and in particular by the corrective properties of polymerase itself.

Many mutations are caused by the inherent chemical instability of nucleic acids: for example, cytosine may spontaneously convert to uracil. In the human genome such an event occurs approximately 100 times per day; however uracil is not normally encountered in DNA and its presence alerts defensive mechanisms which correct the error. Another type of mutation is spontaneous depurination, which also triggers its own, dedicated error correction procedure. Cells employ a large number of corrective mechanisms […] DNA repair mechanisms may be treated as an “immune system” which protects the genome from loss or corruption of genetic information. The unavoidable mutations which sometimes occur despite the presence of error-correction mechanisms can be masked due to doubled presentation (alleles) of genetic information. Thus, most mutations are recessive and not expressed in the phenotype. As the length of the DNA chain increases, mutations become more probable. It should be noted that the number of nucleotides in DNA is greater than the relative number of amino acids participating in polypeptide chains. This is due to the fact that each amino acid is encoded by exactly three nucleotides — a general principle which applies to all living organisms. […] Fidelity is, of course, fundamentally important in DNA replication as any harmful mutations introduced in its course are automatically passed on to all successive generations of cells. In contrast, transcription and translation processes can be more error-prone as their end products are relatively short-lived. Of note is the fact that faulty transcripts appear in relatively low quantities and usually do not affect cell functions, since regulatory processes ensure continued synthesis of the required substances until a suitable level of activity is reached. Nevertheless, it seems that reliable transcription of genetic material is sufficiently significant for cells to have developed appropriate proofreading mechanisms, similar to those which assist replication. […] the entire information pathway — starting with DNA and ending with active proteins — is protected against errors. We can conclude that fallibility is an inherent property of genetic information channels, and that in order to perform their intended function, these channels require error correction mechanisms.”
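
The zipper comparison a couple of paragraphs above is just unit arithmetic; the snippet below reproduces the quoted figures (10,000 km, roughly three to four months of unzipping, and about 3 million uncorrected errors at the 10⁻³ reliability figure):

```python
base_pairs = 3e9                  # base pairs in the human genome (from the text)
pairs_per_metre = 300             # three interlocking pairs per centimetre, as in the analogy

length_m = base_pairs / pairs_per_metre
print(f"zipper length: {length_m / 1000:,.0f} km")            # 10,000 km

seconds = length_m / 1.0          # unzipping at 1 metre per second
print(f"unzipping time: {seconds / 86400 / 30:.1f} months")   # ~3.9 months (30-day months)

for reliability in (1e-3, 1e-4):  # quoted per-interaction error rates of an average enzyme
    print(f"uncorrected errors at {reliability:g}: {base_pairs * reliability:,.0f}")
```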

The discrete nature of genetic material is an important property which distinguishes prokaryotes from eukaryotes. […] The ability to select individual nucleotide fragments and construct sequences from predetermined “building blocks” results in high adaptability to environmental stimuli and is a fundamental aspect of evolution. The discontinuous nature of genes is evidenced by the presence of fragments which do not convey structural information (introns), as opposed to structure-encoding fragments (exons). The initial transcript (pre-mRNA) contains introns as well as exons. In order to provide a template for protein synthesis, it must undergo further processing (also known as splicing): introns must be cleaved and exon fragments attached to one another. […] Recognition of intron-exon boundaries is usually very precise, while the reattachment of adjacent exons is subject to some variability. Under certain conditions, alternative splicing may occur, where the ordering of the final product does not reflect the order in which exon sequences appear in the source chain. This greatly increases the number of potential mRNA combinations and thus the variety of resulting proteins. […] While access to energy sources is not a major problem, sources of information are usually far more difficult to manage — hence the universal tendency to limit the scope of direct (genetic) information storage. Reducing the length of genetic code enables efficient packing and enhances the efficiency of operations while at the same time decreasing the likelihood of errors. […] The number of genes identified in the human genome is lower than the number of distinct proteins by a factor of 4; a difference which can be attributed to alternative splicing. […] This mechanism increases the variety of protein structures without affecting core information storage, i.e. DNA sequences. […] Primitive organisms often possess nearly as many genes as humans, despite the essential differences between both groups. Interspecies diversity is primarily due to the properties of regulatory sequences.”
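
A toy enumeration shows how quickly alternative splicing multiplies the number of possible mature transcripts even under a very restrictive model (exon skipping only, with the first and last exon always kept). The exon names are hypothetical:

```python
from itertools import combinations

exons = ["E1", "E2", "E3", "E4", "E5"]     # hypothetical exons of a single pre-mRNA

# Toy model of exon skipping: always keep the first and last exon,
# include or skip each internal exon independently.
internal = exons[1:-1]
isoforms = []
for k in range(len(internal) + 1):
    for kept in combinations(internal, k):
        isoforms.append([exons[0], *kept, exons[-1]])

print(len(isoforms))                       # 2**3 = 8 distinct transcripts from 5 exons
for iso in isoforms:
    print("-".join(iso))
```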

The discontinuous nature of genes is evolutionarily advantageous but comes at the expense of having to maintain a nucleus where such splicing processes can be safely conducted, in addition to efficient transport channels allowing transcripts to penetrate the nuclear membrane. While it is believed that at early stages of evolution RNA was the primary repository of genetic information, its present function can best be described as an information carrier. Since unguided proteins cannot ensure sufficient specificity of interaction with nucleic acids, protein-RNA complexes are used often in cases where specific fragments of genetic information need to be read. […] The use of RNA in protein complexes is common across all domains of the living world as it bridges the gap between discrete and continuous storage of genetic information.”

Epigenetic differentiation mechanisms are particularly important in embryonic development. […] Unlike the function of mature organisms, embryonic programming refers to structures which do not yet exist but which need to be created through cell proliferation and differentiation. […] Differentiation of cells results in phenotypic changes. This phenomenon is the primary difference between development genetics and steady-state genetics. Functional differences are not, however, associated with genomic changes: instead they are mediated by the transcriptome where certain genes are preferentially selected for transcription while others are suppressed. […] In a mature, specialized cell only a small portion of the transcribable genome is actually expressed. The remainder of the cell’s genetic material is said to be silenced. Gene silencing is a permanent condition. Under normal circumstances mature cells never alter their function, although such changes may be forced in a laboratory setting […] Cells which make up the embryo at a very early stage of development are pluripotent, meaning that their purpose can be freely determined and that all of their genetic information can potentially be expressed (under certain conditions). […] At each stage of the development process the scope of pluripotency is reduced until, ultimately, the cell becomes monopotent. Monopotency implies that the final function of the cell has already been determined, although the cell itself may still be immature. […] functional dissimilarities between specialized cells are not associated with genetic mutations but rather with selective silencing of genes. […] Most genes which determine biological functions have a biallelic representation (i.e. a representation consisting of two alleles). The remainder (approximately 10 % of genes) is inherited from one specific parent, as a result of partial or complete silencing of their sister alleles (called paternal or maternal imprinting) which occurs during gametogenesis. The suppression of a single copy of the X chromosome is a special case of this phenomenon.”

“Evolutionary genetics is subject to two somewhat contradictory criteria. On the one hand, there is clear pressure on accurate and consistent preservation of biological functions and structures while on the other hand it is also important to permit gradual but persistent changes. […] the observable progression of adaptive traits which emerge as a result of evolution suggests a mechanism which promotes constructive changes over destructive ones. Mutational diversity cannot be considered truly random if it is limited to certain structures or functions. […] Approximately 50 % of the human genome consists of mobile segments, capable of migrating to various positions in the genome. These segments are called transposons and retrotransposons […] The mobility of genome fragments not only promotes mutations (by increasing the variability of DNA) but also affects the stability and packing of chromatin strands wherever such mobile sections are reintegrated with the genome. Under normal circumstances the activity of mobile sections is tempered by epigenetic mechanisms […]; however in certain situations gene mobility may be upregulated. In particular, it seems that in “prehistoric” (remote evolutionary) times such events occurred at a much faster pace, accelerating the rate of genetic changes and promoting rapid evolution. Cells can actively promote mutations by way of the so-called AID process (activation-induced cytidine deamination). It is an enzymatic mechanism which converts cytosine into uracil, thereby triggering repair mechanisms and increasing the likelihood of mutations […] The existence of AID proves that cells themselves may trigger evolutionary changes and that the role of mutations in the emergence of new biological structures is not strictly passive.”

“Regulatory mechanisms which receive signals characterized by high degrees of uncertainty must be able to make informed choices to reduce the overall entropy of the system they control. This property is usually associated with development of information channels. Special structures ought to be exposed within information channels connecting systems of a different character, for example linking transcription to translation or enabling transduction of signals through the cellular membrane. Examples of structures which convey highly entropic information are receptor systems associated with blood coagulation and immune responses. The regulatory mechanism which triggers an immune response relies on relatively simple effectors (complement factor enzymes, phagocytes and killer cells) coupled to a highly evolved receptor system, represented by specific antibodies and an organized set of cells. Compared to such advanced receptors the structures which register the concentration of a given product (e.g. glucose in blood) are rather primitive. Advanced receptors enable the immune system to recognize and verify information characterized by high degrees of uncertainty. […] In sequential processes it is usually the initial stage which poses the most problems and requires the most information to complete successfully. It should come as no surprise that the most advanced control loops are those associated with initial stages of biological pathways.”

February 10, 2018 Posted by | Biology, Books, Chemistry, Evolutionary biology, Genetics, Immunology, Medicine, Molecular biology | Leave a comment

Systems Biology (I)

This book is really dense and is somewhat tough for me to blog. One significant problem is that: “The authors assume that the reader is already familiar with the material covered in a classic biochemistry course.” I know enough biochem to follow most of the stuff in this book, and I was definitely quite happy to have recently read John Finney’s book on the biochemical properties of water and Christopher Hall’s introduction to materials science, as both of those books’ coverage turned out to be highly relevant (these are far from the only relevant books I’ve read semi-recently – Atkins introduction to thermodynamics is another book that springs to mind) – but even so, what do you leave out when writing a post like this? I decided to leave out a lot. Posts covering books like this one are hard to write because it’s so easy for them to blow up in your face because you have to include so many details for the material included in the post to even start to make sense to people who didn’t read the original text. And if you leave out all the details, what’s really left? It’s difficult..

Anyway, some observations from the first chapters of the book below.

“[T]he biological world consists of self-managing and self-organizing systems which owe their existence to a steady supply of energy and information. Thermodynamics introduces a distinction between open and closed systems. Reversible processes occurring in closed systems (i.e. independent of their environment) automatically gravitate toward a state of equilibrium which is reached once the velocity of a given reaction in both directions becomes equal. When this balance is achieved, we can say that the reaction has effectively ceased. In a living cell, a similar condition occurs upon death. Life relies on certain spontaneous processes acting to unbalance the equilibrium. Such processes can only take place when substrates and products of reactions are traded with the environment, i.e. they are only possible in open systems. In turn, achieving a stable level of activity in an open system calls for regulatory mechanisms. When the reaction consumes or produces resources that are exchanged with the outside world at an uneven rate, the stability criterion can only be satisfied via a negative feedback loop […] cells and living organisms are thermodynamically open systems […] all structures which play a role in balanced biological activity may be treated as components of a feedback loop. This observation enables us to link and integrate seemingly unrelated biological processes. […] the biological structures most directly involved in the functions and mechanisms of life can be divided into receptors, effectors, information conduits and elements subject to regulation (reaction products and action results). Exchanging these elements with the environment requires an inflow of energy. Thus, living cells are — by their nature — open systems, requiring an energy source […] A thermodynamically open system lacking equilibrium due to a steady inflow of energy in the presence of automatic regulation is […] a good theoretical model of a living organism. […] Pursuing growth and adapting to changing environmental conditions calls for specialization which comes at the expense of reduced universality. A specialized cell is no longer self-sufficient. As a consequence, a need for higher forms of intercellular organization emerges. The structure which provides cells with suitable protection and ensures continued homeostasis is called an organism.”
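
The feedback-loop framing above is fairly abstract, so here is a minimal toy sketch of the idea in Python (my own illustration, not anything from the book; the set point, gain, and drain values are all made up). An “open system” loses product to the environment at an unevenly fluctuating rate, and production responds to deviations from a set point:

import random

# Toy negative feedback loop in a thermodynamically open system: the environment
# drains the product at an unevenly fluctuating rate, and the production rate
# responds to deviations of the product level from a set point.
set_point = 10.0   # desired product level (arbitrary units)
gain = 0.5         # strength of the feedback response
basal_rate = 1.0   # production rate when the level sits exactly at the set point
level = 10.0

for step in range(50):
    drain = random.uniform(0.5, 1.5)   # uneven exchange with the environment
    production = max(0.0, basal_rate + gain * (set_point - level))
    level += production - drain
    if step % 10 == 0:
        print(f"step {step:2d}: level = {level:5.2f}, production = {production:4.2f}")

Without the feedback term the level would simply drift as the uneven drain accumulated; the gain term is what keeps the open system at a steady level, which is the authors’ point about regulation being a prerequisite for stable activity in open systems.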

“In biology, structure and function are tightly interwoven. This phenomenon is closely associated with the principles of evolution. Evolutionary development has produced structures which enable organisms to develop and maintain their architecture, perform actions and store the resources needed to survive. For this reason we introduce a distinction between support structures (which are akin to construction materials), function-related structures (fulfilling the role of tools and machines), and storage structures (needed to store important substances, achieving a compromise between tight packing and ease of access). […] Biology makes extensive use of small-molecule structures and polymers. The physical properties of polymer chains make them a key building block in biological structures. There are several reasons as to why polymers are indispensable in nature […] Sequestration of resources is subject to two seemingly contradictory criteria: 1. Maximize storage density; 2. Perform sequestration in such a way as to allow easy access to resources. […] In most biological systems, storage applies to energy and information. Other types of resources are only occasionally stored […]. Energy is stored primarily in the form of saccharides and lipids. Saccharides are derivatives of glucose, rendered insoluble (and thus easy to store) via polymerization. Their polymerized forms, stabilized with α-glycosidic bonds, include glycogen (in animals) and starch (in plant life). […] It should be noted that the somewhat loose packing of polysaccharides […] makes them unsuitable for storing large amounts of energy. In a typical human organism only ca. 600 kcal of energy is stored in the form of glycogen, while (under normal conditions) more than 100,000 kcal exists as lipids. Lipid deposits usually assume the form of triglycerides (triacylglycerols). Their properties can be traced to the similarities between fatty acids and hydrocarbons. Storage efficiency (i.e. the amount of energy stored per unit of mass) is twice that of polysaccharides, while access remains adequate owing to the relatively large surface area and high volume of lipids in the organism.”
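
A quick back-of-the-envelope check of those storage figures (my own arithmetic, using the standard textbook energy densities of roughly 4 kcal/g for carbohydrate and 9 kcal/g for fat, which are not stated in the quote):

# Rough mass implied by the quoted energy stores (standard calorie densities assumed).
kcal_per_g_carbohydrate = 4.0
kcal_per_g_fat = 9.0

glycogen_store_kcal = 600.0      # quoted glycogen store
lipid_store_kcal = 100_000.0     # quoted lipid store

print(glycogen_store_kcal / kcal_per_g_carbohydrate)  # ~150 g of glycogen
print(lipid_store_kcal / kcal_per_g_fat)              # ~11,000 g (~11 kg) of triglyceride

So the quoted stores correspond to very roughly 150 g of glycogen versus something like 11 kg of triglyceride. The roughly twofold difference in energy per unit of dry mass is the “storage efficiency” point, and in practice the gap is larger still because glycogen is stored hydrated (a commonly quoted figure is around 3 g of bound water per gram of glycogen).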

“Most living organisms store information in the form of tightly-packed DNA strands. […] It should be noted that only a small percentage of DNA (a few percent) conveys biologically relevant information. The purpose of the remaining ballast is to enable suitable packing and exposure of these important fragments. If all of DNA were to consist of useful code, it would be nearly impossible to devise a packing strategy guaranteeing access to all of the stored information.”

“The seemingly endless diversity of biological functions frustrates all but the most persistent attempts at classification. For the purpose of this handbook we assume that each function can be associated either with a single cell or with a living organism. In both cases, biological functions are strictly subordinate to automatic regulation, based — in a stable state — on negative feedback loops, and in processes associated with change (for instance in embryonic development) — on automatic execution of predetermined biological programs. Individual components of a cell cannot perform regulatory functions on their own […]. Thus, each element involved in the biological activity of a cell or organism must necessarily participate in a regulatory loop based on processing information.”

“Proteins are among the most basic active biological structures. Most of the well-known proteins studied thus far perform effector functions: this group includes enzymes, transport proteins, certain immune system components (complement factors) and myofibrils. Their purpose is to maintain biological systems in a steady state. Our knowledge of receptor structures is somewhat poorer […] Simple structures, including individual enzymes and components of multienzyme systems, can be treated as “tools” available to the cell, while advanced systems, consisting of many mechanically-linked tools, resemble machines. […] Machinelike mechanisms are readily encountered in living cells. A classic example is fatty acid synthesis, performed by dedicated machines called synthases. […] Multiunit structures acting as machines can be encountered wherever complex biochemical processes need to be performed in an efficient manner. […] If the purpose of a machine is to generate motion then a thermally powered machine can accurately be called a motor. This type of action is observed e.g. in myocytes, where transmission involves reordering of protein structures using the energy generated by hydrolysis of high-energy bonds.”

“In biology, function is generally understood as specific physicochemical action, almost universally mediated by proteins. Most such actions are reversible which means that a single protein molecule may perform its function many times. […] Since spontaneous noncovalent surface interactions are very infrequent, the shape and structure of active sites — with high concentrations of hydrophobic residues — make them the preferred area of interaction between functional proteins and their ligands. They alone provide the appropriate conditions for the formation of hydrogen bonds; moreover, their structure may determine the specific nature of interaction. The functional bond between a protein and a ligand is usually noncovalent and therefore reversible.”

“In general terms, we can state that enzymes accelerate reactions by lowering activation energies for processes which would otherwise occur very slowly or not at all. […] The activity of enzymes goes beyond synthesizing a specific protein-ligand complex (as in the case of antibodies or receptors) and involves an independent catalytic attack on a selected bond within the ligand, precipitating its conversion into the final product. The relative independence of both processes (binding of the ligand in the active site and catalysis) is evidenced by the phenomenon of noncompetitive inhibition […] Kinetic studies of enzymes have provided valuable insight into the properties of enzymatic inhibitors — an important field of study in medicine and drug research. Some inhibitors, particularly competitive ones (i.e. inhibitors which outcompete substrates for access to the enzyme), are now commonly used as drugs. […] Physical and chemical processes may only occur spontaneously if they generate energy, or non-spontaneously if they consume it. However, all processes occurring in a cell must have a spontaneous character because only these processes may be catalyzed by enzymes. Enzymes merely accelerate reactions; they do not provide energy. […] The change in enthalpy associated with a chemical process may be calculated as a net difference in the sum of molecular binding energies prior to and following the reaction. Entropy is a measure of the likelihood that a physical system will enter a given state. Since chaotic distribution of elements is considered the most probable, physical systems exhibit a general tendency to gravitate towards chaos. Any form of ordering is thermodynamically disadvantageous.”
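
To get a feel for what “lowering activation energies” buys you, here is a small worked example of my own using the Arrhenius relation k ∝ exp(-Ea/RT); the 30 kJ/mol reduction is an arbitrary illustrative number, not a figure from the book:

import math

R = 8.314          # gas constant, J/(mol*K)
T = 310.0          # roughly body temperature, K
delta_Ea = 30_000  # assumed reduction in activation energy, J/mol (illustrative)

# Ratio of rate constants before and after the barrier is lowered,
# assuming the pre-exponential factor is unchanged.
rate_enhancement = math.exp(delta_Ea / (R * T))
print(f"{rate_enhancement:.2e}")   # roughly a 1e5-fold acceleration

Real enzymes routinely manage far larger accelerations than this, but the exponential dependence is the point: a modest reduction of the barrier buys orders of magnitude in rate, without the enzyme contributing any energy to the reaction itself.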

“The chemical reactions which power biological processes are characterized by varying degrees of efficiency. In general, they tend to be on the lower end of the efficiency spectrum, compared to energy sources which drive matter transformation processes in our universe. In search of a common criterion to describe the efficiency of various energy sources, we can refer to the net loss of mass associated with a release of energy, according to Einstein’s formula:
E = mc²
The ΔM/M coefficient (relative loss of mass, given e.g. in %) allows us to compare the efficiency of energy sources. The most efficient processes are those involved in the gravitational collapse of stars. Their efficiency may reach 40 %, which means that 40 % of the stationary mass of the system is converted into energy. In comparison, nuclear reactions have an approximate efficiency of 0.8 %. The efficiency of chemical energy sources available to biological systems is incomparably lower and amounts to approximately 10^(-7) % […]. Among chemical reactions, the most potent sources of energy are found in oxidation processes, commonly exploited by biological systems. Oxidation tends to result in the largest net release of energy per unit of mass, although the efficiency of specific types of oxidation varies. […] given unrestricted access to atmospheric oxygen and to hydrogen atoms derived from hydrocarbons — the combustion of hydrogen (i.e. the synthesis of water; H2 + 1/2O2 = H2O) has become a principal source of energy in nature, next to photosynthesis, which exploits the energy of solar radiation. […] The basic process associated with the release of hydrogen and its subsequent oxidation (called the Krebs cycle) is carried out by processes which transfer electrons onto oxygen atoms […]. Oxidation occurs in stages, enabling optimal use of the released energy. An important byproduct of water synthesis is the universal energy carrier known as ATP (synthesized separately). As water synthesis is a highly spontaneous process, it can be exploited to cover the energy debt incurred by the endergonic synthesis of ATP, as long as both processes are thermodynamically coupled, enabling spontaneous catalysis of anhydride bonds in ATP. Water synthesis is a universal source of energy in heterotrophic systems. In contrast, autotrophic organisms rely on the energy of light, which is exploited in the process of photosynthesis. Both processes yield ATP […] Preparing nutrients (hydrogen carriers) for participation in water synthesis follows different paths for sugars, lipids and proteins. This is perhaps obvious given their relative structural differences; however, in all cases the final form, which acts as a substrate for dehydrogenases, is acetyl-CoA”.
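
The 10^(-7) % figure can be sanity-checked with the same formula. Below is my own rough calculation for the combustion of hydrogen (roughly 286 kJ released per mole of water formed, and counting only the 2 g of hydrogen fuel in the denominator; both choices are mine, not the authors’):

# Relative mass loss (delta M / M) for the combustion of hydrogen, via E = m c^2.
c = 3.0e8                  # speed of light, m/s
energy_per_mol = 2.86e5    # ~286 kJ released per mole of H2O formed, J
fuel_mass_per_mol = 0.002  # 2 g of H2 per mole of water formed, kg

delta_m = energy_per_mol / c**2          # mass equivalent of the released energy
relative_loss_percent = 100 * delta_m / fuel_mass_per_mol
print(f"{relative_loss_percent:.1e} %")  # ~1.6e-7 %

Counting the oxygen in the denominator as well drops the figure by roughly another order of magnitude, but either way chemical energy sources sit many orders of magnitude below the nuclear and gravitational efficiencies quoted above.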

“Photosynthesis is a process which — from the point of view of electron transfer — can be treated as a counterpart of the respiratory chain. In heterotrophic organisms, mitochondria transport electrons from hydrogenated compounds (sugars, lipids, proteins) onto oxygen molecules, synthesizing water in the process, whereas in the course of photosynthesis electrons released by breaking down water molecules are used as a means of reducing oxidised carbon compounds […]. In heterotrophic organisms the respiratory chain has a spontaneous quality (owing to its oxidative properties); however, any reverse process requires energy to occur. In the case of photosynthesis this energy is provided by sunlight […] Hydrogen combustion and photosynthesis are the basic sources of energy in the living world. […] For an energy source to become useful, non-spontaneous reactions must be coupled to its operation, resulting in a thermodynamically unified system. Such coupling can be achieved by creating a coherent framework in which the spontaneous and non-spontaneous processes are linked, either physically or chemically, using a bridging component which affects them both. If the properties of both reactions are different, the bridging component must also enable suitable adaptation and mediation. […] Direct exploitation of the energy released via the hydrolysis of ATP is usually possible by introducing an active binding carrier mediating the energy transfer. […] Carriers are considered active as long as their concentration ensures a sufficient release of energy to synthesize a new chemical bond by way of a non-spontaneous process. Active carriers are relatively short-lived […] Any active carrier which performs its function outside of the active site must be sufficiently stable to avoid breaking up prior to participating in the synthesis reaction. Such mobile carriers are usually produced when the required synthesis consists of several stages or cannot be conducted in the active site of the enzyme for steric reasons. Unlike ATP, active energy carriers are usually reaction-specific. […] Mobile energy carriers are usually formed as a result of hydrolysis of two high-energy ATP bonds. In many cases this is the minimum amount of energy required to power a reaction which synthesizes a single chemical bond. […] Expelling a mobile or unstable reaction component in order to increase the spontaneity of active energy carrier synthesis is a process which occurs in many biological mechanisms […] The action of active energy carriers may be compared to a ball rolling down a hill. The descending ball gains sufficient energy to traverse another, smaller mound, adjacent to its starting point. In our case, the smaller hill represents the final synthesis reaction […] Understanding the role of active carriers is essential for the study of metabolic processes.”

“A second category of processes, directly dependent on energy sources, involves structural reconfiguration of proteins, which can be further differentiated into low and high-energy reconfiguration. Low-energy reconfiguration occurs in proteins which form weak, easily reversible bonds with ligands. In such cases, structural changes are powered by the energy released in the creation of the complex. […] Important low-energy reconfiguration processes may occur in proteins which consist of subunits. Structural changes resulting from relative motion of subunits typically do not involve significant expenditures of energy. Of particular note are the so-called allosteric proteins […] whose rearrangement is driven by a weak and reversible bond between the protein and an oxygen molecule. Allosteric proteins are genetically conditioned to possess two stable structural configurations, easily swapped as a result of binding or releasing ligands. Thus, they tend to have two comparable energy minima (separated by a low threshold), each of which may be treated as a global minimum corresponding to the native form of the protein. Given such properties, even a weakly interacting ligand may trigger significant structural reconfiguration. This phenomenon is of critical importance to a variety of regulatory proteins. In many cases, however, the second potential minimum in which the protein may achieve relative stability is separated from the global minimum by a high threshold requiring a significant expenditure of energy to overcome. […] Contrary to low-energy reconfigurations, the relative difference in ligand concentrations is insufficient to cover the cost of a difficult structural change. Such processes are therefore coupled to highly exergonic reactions such as ATP hydrolysis. […]  The link between a biological process and an energy source does not have to be immediate. Indirect coupling occurs when the process is driven by relative changes in the concentration of reaction components. […] In general, high-energy reconfigurations exploit direct coupling mechanisms while indirect coupling is more typical of low-energy processes”.

“Muscle action requires a major expenditure of energy. There is a nonlinear dependence between the degree of physical exertion and the corresponding energy requirements. […] Training may improve the power and endurance of muscle tissue. Muscle fibers subjected to regular exertion may improve their glycogen storage capacity, ATP production rate, oxidative metabolism and the use of fatty acids as fuel.”

February 4, 2018 Posted by | Biology, Books, Chemistry, Genetics, Molecular biology, Pharmacology, Physics | Leave a comment

A few diabetes papers of interest

i. Mechanisms and Management of Diabetic Painful Distal Symmetrical Polyneuropathy.

“Although a number of the diabetic neuropathies may result in painful symptomatology, this review focuses on the most common: chronic sensorimotor distal symmetrical polyneuropathy (DSPN). It is estimated that 15–20% of diabetic patients may have painful DSPN, but not all of these will require therapy. […] Although the exact pathophysiological processes that result in diabetic neuropathic pain remain enigmatic, both peripheral and central mechanisms have been implicated, and extend from altered channel function in peripheral nerve through enhanced spinal processing and changes in many higher centers. A number of pharmacological agents have proven efficacy in painful DSPN, but all are prone to side effects, and none impact the underlying pathophysiological abnormalities because they are only symptomatic therapy. The two first-line therapies approved by regulatory authorities for painful neuropathy are duloxetine and pregabalin. […] All patients with DSPN are at increased risk of foot ulceration and require foot care, education, and if possible, regular podiatry assessment.”

“The neuropathies are the most common long-term microvascular complications of diabetes and affect those with both type 1 and type 2 diabetes, with up to 50% of older type 2 diabetic patients having evidence of a distal neuropathy (1). These neuropathies are characterized by a progressive loss of nerve fibers affecting both the autonomic and somatic divisions of the nervous system. The clinical features of the diabetic neuropathies vary immensely, and only a minority are associated with pain. The major portion of this review will be dedicated to the most common painful neuropathy, chronic sensorimotor distal symmetrical polyneuropathy (DSPN). This neuropathy has major detrimental effects on its sufferers, conferring an increased risk of foot ulceration and Charcot neuroarthropathy as well as being associated with increased mortality (1).

In addition to DSPN, other rarer neuropathies may also be associated with painful symptoms including acute painful neuropathy that often follows periods of unstable glycemic control, mononeuropathies (e.g., cranial nerve palsies), radiculopathies, and entrapment neuropathies (e.g., carpal tunnel syndrome). By far the most common presentation of diabetic polyneuropathy (over 90%) is typical DSPN or chronic DSPN. […] DSPN results in insensitivity of the feet that predisposes to foot ulceration (1) and/or neuropathic pain (painful DSPN), which can be disabling. […] The onset of DSPN is usually gradual or insidious and is heralded by sensory symptoms that start in the toes and then progress proximally to involve the feet and legs in a stocking distribution. When the disease is well established in the lower limbs in more severe cases, there is upper limb involvement, with a similar progression proximally starting in the fingers. As the disease advances further, motor manifestations, such as wasting of the small muscles of the hands and limb weakness, become apparent. In some cases, there may be sensory loss that the patient may not be aware of, and the first presentation may be a foot ulcer. Approximately 50% of patients with DSPN experience neuropathic symptoms in the lower limbs including uncomfortable tingling (dysesthesia), pain (burning; shooting or “electric-shock like”; lancinating or “knife-like”; “crawling”, or aching etc., in character), evoked pain (allodynia, hyperesthesia), or unusual sensations (such as a feeling of swelling of the feet or severe coldness of the legs when clearly the lower limbs look and feel fine, odd sensations on walking likened to “walking on pebbles” or “walking on hot sand,” etc.). There may be marked pain on walking that may limit exercise and lead to weight gain. Painful DSPN is characteristically more severe at night and often interferes with normal sleep (3). It also has a major impact on the ability to function normally (both mental and physical functioning, e.g., ability to maintain work, mood, and quality of life [QoL]) (3,4). […] The unremitting nature of the pain can be distressing, resulting in mood disorders including depression and anxiety (4). The natural history of painful DSPN has not been well studied […]. However, it is generally believed that painful symptoms may persist over the years (5), occasionally becoming less prominent as the sensory loss worsens (6).”

“There have been relatively few epidemiological studies that have specifically examined the prevalence of painful DSPN, which range from 10–26% (7–9). In a recent study of a large cohort of diabetic patients receiving community-based health care in northwest England (n = 15,692), painful DSPN assessed using neuropathy symptom and disability scores was found in 21% (7). In one population-based study from Liverpool, U.K., the prevalence of painful DSPN assessed by a structured questionnaire and examination was estimated at 16% (8). Notably, it was found that 12.5% of these patients had never reported their symptoms to their doctor and 39% had never received treatment for their pain (8), indicating that there may be considerable underdiagnosis and undertreatment of painful neuropathic symptoms compared with other aspects of diabetes management such as statin therapy and management of hypertension. Risk factors for DSPN per se have been extensively studied, and it is clear that apart from poor glycemic control, cardiovascular risk factors play a prominent role (10): risk factors for painful DSPN are less well known.”

“A broad spectrum of presentations may occur in patients with DSPN, ranging from one extreme of the patient with very severe painful symptoms but few signs, to the other when patients may present with a foot ulcer having lost all sensation without ever having any painful or uncomfortable symptoms […] it is well recognized that the severity of symptoms may not relate to the severity of the deficit on clinical examination (1). […] Because DSPN is a diagnosis of exclusion, a careful clinical history and a peripheral neurological and vascular examination of the lower limbs are essential to exclude other causes of neuropathic pain and leg/foot pain such as peripheral vascular disease, arthritis, malignancy, alcohol abuse, spinal canal stenosis, etc. […] Patients with asymmetrical symptoms and/or signs (such as loss of an ankle jerk in one leg only), rapid progression of symptoms, or predominance of motor symptoms and signs should be carefully assessed for other causes of the findings.”

“The fact that diabetes induces neuropathy and that in a proportion of patients this is accompanied by pain despite the loss of input and numbness suggests that marked changes occur in the processes of pain signaling in the peripheral and central nervous system. Neuropathic pain is characterized by ongoing pain together with exaggerated responses to painful and nonpainful stimuli, hyperalgesia, and allodynia. […] the changes seen suggest altered peripheral signaling and central compensatory changes perhaps driven by the loss of input. […] Very clear evidence points to the key role of changes in ion channels as a consequence of nerve damage and their roles in the disordered activity and transduction in damaged and intact fibers (50). Sodium channels depolarize neurons and generate an action potential. Following damage to peripheral nerves, the normal distribution of these channels along a nerve is disrupted by the neuroma and “ectopic” activity results from the accumulation of sodium channels at or around the site of injury. Other changes in the distribution and levels of these channels are seen and impact upon the pattern of neuronal excitability in the nerve. Inherited pain disorders arise from mutated sodium channels […] and polymorphisms in this channel impact on the level of pain in patients, indicating that inherited differences in channel function might explain some of the variability in pain between patients with DSPN (53). […] Where sodium channels act to generate action potentials, potassium channels serve as the molecular brakes of excitable cells, playing an important role in modulating neuronal hyperexcitability. The drug retigabine, a potassium channel (KV7, M-current) opener, blunts behavioral hypersensitivity in neuropathic rats (56) and also inhibits C and Aδ-mediated responses in dorsal horn neurons in both naïve and neuropathic rats (57), but has yet to reach the clinic as an analgesic”.
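
A crude way to get an intuition for why potassium channels acting as “molecular brakes” matters for hyperexcitability is a leaky integrate-and-fire toy neuron. This is a deliberately oversimplified sketch of my own, not a model from the review, and the parameter values are arbitrary; the leak conductance simply stands in for the stabilizing potassium current:

def count_spikes(leak_conductance, input_current=1.5, steps=2000, dt=0.1):
    """Leaky integrate-and-fire neuron: dV/dt = -g_leak*V + I; spike and reset at threshold."""
    v, threshold, spikes = 0.0, 1.0, 0
    for _ in range(steps):
        v += dt * (-leak_conductance * v + input_current)
        if v >= threshold:
            spikes += 1
            v = 0.0
    return spikes

print(count_spikes(leak_conductance=2.0))   # strong "brake": the membrane settles below threshold, no spikes
print(count_spikes(leak_conductance=0.5))   # weakened "brake": the same input now drives repetitive firing

Nothing in this toy is specific to diabetic neuropathy; it just illustrates the review’s qualitative claim that shifting the balance between depolarizing sodium currents and stabilizing potassium currents changes how readily a fiber fires in response to the same input.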

“[…] and C fibers terminate primarily in the superficial laminae of the dorsal horn where the large majority of neurons are nociceptive specific […]. Some of these neurons gain low threshold inputs after neuropathy and these cells project predominantly to limbic brain areas […] spinal cord neurons provide parallel outputs to the affective and sensory areas of the brain. Changes induced in these neurons by repeated noxious inputs underpin central sensitization where the resultant hyperexcitability of neurons leads to greater responses to all subsequent inputs — innocuous and noxious — expanded receptive fields and enhanced outputs to higher levels of the brain […] As a consequence of these changes in the sending of nociceptive information within the peripheral nerve and then the spinal cord, the information sent to the brain becomes amplified so that pain ratings become higher. Alongside this, the persistent input into the limbic brain areas such as the amygdala is likely to be causal in the comorbidities that patients often report due to ongoing painful inputs disrupting normal function and generating fear, depression, and sleep problems […]. Of course, many patients report that their pains are worse at night, which may be due to nocturnal changes in these central pain processing areas. […] overall, the mechanisms of pain in diabetic neuropathy extend from altered channel function in peripheral nerves through enhanced spinal processing and finally to changes in many higher centers”.

“Pharmacological treatment of painful DSPN is not entirely satisfactory because currently available drugs are often ineffective and complicated by adverse events. Tricyclic compounds (TCAs) have been used as first-line agents for many years, but their use is limited by frequent side effects that may be central or anticholinergic, including dry mouth, constipation, sweating, blurred vision, sedation, and orthostatic hypotension (with the risk of falls particularly in elderly patients). […] Higher doses have been associated with an increased risk of sudden cardiac death, and caution should be taken in any patient with a history of cardiovascular disease (65). […] The selective serotonin noradrenalin reuptake inhibitors (SNRI) duloxetine and venlafaxine have been used for the management of painful DSPN (65). […] there have been several clinical trials involving pregabalin in painful DSPN, and these showed clear efficacy in management of painful DSPN (69). […] The side effects include dizziness, somnolence, peripheral edema, headache, and weight gain.”

“A major deficiency in the area of the treatment of neuropathic pain in diabetes is the relative lack of comparative or combination studies. Virtually all previous trials have been of active agents against placebo, whereas there is a need for more studies that compare a given drug with an active comparator and indeed lower-dose combination treatments (64). […] The European Federation of Neurological Societies proposed that first-line treatments might comprise TCAs, SNRIs, gabapentin, or pregabalin (71). The U.K. National Institute for Health and Care Excellence guidelines on the management of neuropathic pain in nonspecialist settings proposed that duloxetine should be the first-line treatment with amitriptyline as an alternative, and pregabalin as a second-line treatment for painful DSPN (72). […] this recommendation of duloxetine as the first-line therapy was not based on efficacy but rather cost-effectiveness. More recently, the American Academy of Neurology recommended that pregabalin is “established as effective and should be offered for relief of [painful DSPN] (Level A evidence)” (73), whereas venlafaxine, duloxetine, amitriptyline, gabapentin, valproate, opioids, and capsaicin were considered to be “probably effective and should be considered for treatment of painful DSPN (Level B evidence)” (63). […] this recommendation was primarily based on achievement of greater than 80% completion rate of clinical trials, which in turn may be influenced by the length of the trials. […] the International Consensus Panel on Diabetic Neuropathy recommended TCAs, duloxetine, pregabalin, and gabapentin as first-line agents, having carefully reviewed all the available literature regarding the pharmacological treatment of painful DSPN (65), with the final drug choice tailored to the particular patient based on demographic profile and comorbidities. […] The initial selection of a particular first-line treatment will be influenced by the assessment of contraindications, evaluation of comorbidities […], and cost (65). […] caution is advised to start at lower than recommended doses and titrate gradually.”

ii. Sex Differences in All-Cause and Cardiovascular Mortality, Hospitalization for Individuals With and Without Diabetes, and Patients With Diabetes Diagnosed Early and Late.

“A challenge with type 2 diabetes is the late diagnosis of the disease because many individuals who meet the criteria are often asymptomatic. Approximately 183 million people, or half of those who have diabetes, are unaware they have the disease (1). Furthermore, type 2 diabetes can be present for 9 to 12 years before being diagnosed and, as a result, complications are often present at the time of diagnosis (3). […] Cardiovascular disease (CVD) is the most common comorbidity associated with diabetes, and with 50% of those with diabetes dying of CVD it is the most common cause of death (1). […] Newfoundland and Labrador has the highest age-standardized prevalence of diabetes in Canada (2), and the age-standardized mortality and hospitalization rates for CVD, AMI, and stroke are some of the highest in the country (21,22). A better understanding of mortality and hospitalizations associated with diabetes for males and females is important to support diabetes prevention and management. Therefore, the objectives of this study were to compare the risk of all-cause, CVD, AMI, and stroke mortality and hospitalizations for males and females with and without diabetes and those with early and late diagnoses of diabetes. […] We conducted a population-based retrospective cohort study including 73,783 individuals aged 25 years or older in Newfoundland and Labrador, Canada (15,152 with diabetes; 9,517 with late diagnoses). […] mean age at baseline was 60.1 years (SD, 14.3 years). […] Diabetes was classified as being diagnosed “early” and “late” depending on when diabetes-related comorbidities developed. Individuals early in the disease course would not have any diabetes-related comorbidities at the time of their case dates. On the contrary, a late-diagnosed diabetes patient would have comorbidities related to diabetes at the time of diagnosis.”

“For males, 20.5% (n = 7,751) had diabetes, whereas 20.6% (n = 7,401) of females had diabetes. […] Males and females with diabetes were more likely to die, to be younger at death, to have a shorter survival time, and to be admitted to the hospital than males and females without diabetes (P < 0.01). When admitted to the hospital, individuals with diabetes stayed longer than individuals without diabetes […] Both males and females with late diagnoses were significantly older at the time of diagnosis than those with early diagnoses […]. Males and females with late diagnoses of diabetes were more likely to be deceased at the end of the study period compared with those with early diagnoses […]. Those with early diagnoses were younger at death compared with those with late diagnoses (P < 0.01); however, median survival time for both males and females with early diagnoses was significantly longer than that of those with late diagnoses (P < 0.01). During the study period, males and females with late diabetes diagnoses were more likely to be hospitalized (P < 0.01) and have a longer length of hospital stay compared with those with early diagnoses (P < 0.01).”

“[T]he hospitalization results show that an early diagnosis […] increase the risk of all-cause, CVD, and AMI hospitalizations compared with individuals without diabetes. After adjusting for covariates, males with late diabetes diagnoses had an increased risk of all-cause and CVD mortality and hospitalizations compared with males without diabetes. Similar findings were found for females. A late diabetes diagnosis was positively associated with CVD mortality (HR 6.54 [95% CI 4.80–8.91]) and CVD hospitalizations (5.22 [4.31–6.33]) for females, and the risk was significantly higher compared with their male counterparts (3.44 [2.47–4.79] and 3.33 [2.80–3.95]).”
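
The hazard ratios above come out of Cox proportional hazards models adjusted for covariates. For readers who have not seen how such models are fitted in practice, here is a minimal sketch using Python’s lifelines package; the data frame, column names and covariates below are invented placeholders, not the study’s actual registry data or variable list:

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data: follow-up time (years), death indicator, and two covariates.
# All values and column names are placeholders, not the study's variables.
df = pd.DataFrame({
    "time":           [9.1, 4.3, 8.7, 2.5, 9.0, 6.2, 7.8, 1.9, 5.4, 3.6],
    "died":           [0,   1,   0,   1,   0,   1,   0,   1,   0,   1],
    "late_diagnosis": [0,   1,   0,   1,   0,   1,   1,   0,   0,   1],
    "female":         [1,   0,   1,   1,   0,   0,   1,   0,   0,   1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="died")
cph.print_summary()   # the exp(coef) column is the adjusted hazard ratio for each covariate

With the real registry data the procedure is essentially the same, only with the full covariate list and the usual checks of the proportional hazards assumption; the exp(coef) estimates are what get reported as adjusted hazard ratios such as the 6.54 for CVD mortality above.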

iii. Effect of Type 1 Diabetes on Carotid Structure and Function in Adolescents and Young Adults.

I may have discussed some of the results of this study before, but a search of the blog told me that I have not covered the study itself. I thought it couldn’t hurt to add a link and a few highlights here.

“Type 1 diabetes mellitus causes increased carotid intima-media thickness (IMT) in adults. We evaluated IMT in young subjects with type 1 diabetes. […] Participants with type 1 diabetes (N = 402) were matched to controls (N = 206) by age, sex, and race or ethnicity. Anthropometric and laboratory values, blood pressure, and IMT were measured.”

“Youth with type 1 diabetes had thicker bulb IMT, which remained significantly different after adjustment for demographics and cardiovascular risk factors. […] Because the rate of progression of IMT in healthy subjects (mean age, 40 years) in the Bogalusa Heart study was 0.017–0.020 mm/year (4), our difference of 0.016 mm suggests that our type 1 diabetic subjects had a vascular age 1 year advanced from their chronological age. […] adjustment for HbA1c ablated the case-control difference in IMT, suggesting that the thicker carotid IMT in the subjects with diabetes could be attributed to diabetes-related hyperglycemia.”
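
The “vascular age 1 year advanced” remark is simply the case-control IMT difference divided by the Bogalusa progression rate; a quick check of the arithmetic:

imt_difference_mm = 0.016                # case-control difference in bulb IMT
progression_mm_per_year = (0.017, 0.020) # Bogalusa Heart Study progression range

for rate in progression_mm_per_year:
    print(f"{imt_difference_mm / rate:.2f} years")   # roughly 0.8-0.9 years, i.e. about 1 year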

“In the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) study, progression of IMT over the course of 6 years was faster in subjects with type 1 diabetes, yielding a thicker final IMT in cases (5). There was no difference in IMT at baseline. However, DCCT/EDIC did not image the bulb, which is likely the earliest site of thickening according to the Bogalusa Heart Study […] Our analyses reinforce the importance of imaging the carotid bulb, often the site of earliest detectible subclinical atherosclerosis in youth. The DCCT/EDIC study demonstrated that the intensive treatment group had a slower progression of IMT (5) and that mean HbA1c levels explained most of the differences in IMT progression between treatment groups (12). One longitudinal study of youth found children with type 1 diabetes who had progression of IMT over the course of 2 years had higher HbA1c (13). Our data emphasize the role of diabetes-related hyperglycemia in increasing IMT in youth with type 1 diabetes. […] In summary, our study provides novel evidence that carotid thickness is increased in youth with type 1 diabetes compared with healthy controls and that this difference is not accounted for by traditional cardiovascular risk factors. Better control of diabetes-related hyperglycemia may be needed to reduce future cardiovascular disease.”

iv. Factors Associated With Microalbuminuria in 7,549 Children and Adolescents With Type 1 Diabetes in the T1D Exchange Clinic Registry.

“Elevated urinary albumin excretion is an early sign of diabetic kidney disease (DKD). The American Diabetes Association (ADA) recommends screening for microalbuminuria (MA) annually in people with type 1 diabetes after 10 years of age and 5 years of diabetes duration, with a diagnosis of MA requiring two of three tests to be abnormal (1). Early diagnosis of MA is important because effective treatments exist to limit the progression of DKD (1). However, although reduced rates of MA have been reported over the past few decades in some (2–4) but not all (5,6) studies, it has been suggested that the development of proteinuria has not been prevented but, rather, has been delayed by ∼10 years and that further improvements in care are needed (7).

Limited data exist on the frequency of a clinical diagnosis of MA in the pediatric population with type 1 diabetes in the U.S. Our aim was to use the data from the T1D Exchange clinic registry to assess factors associated with MA in 7,549 children and adolescents with type 1 diabetes.”

“The analysis cohort included 7,549 participants, with mean age of 13.8 ± 3.5 years (range 2 to 19), mean age at type 1 diabetes onset of 6.9 ± 3.9 years, and mean diabetes duration of 6.5 ± 3.7 years; 49% were female. The racial/ethnic distribution was 78% non-Hispanic white, 6% non-Hispanic black, 10% Hispanic, and 5% other. The average of all HbA1c levels (for up to the past 13 years) was 8.4 ± 1.3% (69 ± 13.7 mmol/mol) […]. MA was present in 329 of 7,549 (4.4%) participants, with a higher frequency associated with longer diabetes duration, higher mean glycosylated hemoglobin (HbA1c) level, older age, female sex, higher diastolic blood pressure (BP), and lower BMI […] increasing age [was] mainly associated with an increase in the frequency of MA when HbA1c was ≥9.5% (≥80 mmol/mol). […] MA was uncommon (<2%) among participants with HbA1c <7.5% (<58 mmol/mol). Of those with MA, only 36% were receiving ACEI/ARB treatment. […] Our results provide strong support for prior literature in emphasizing the importance of good glycemic and BP control, particularly as diabetes duration increases, in order to reduce the risk of DKD.”

v. Secular Changes in the Age-Specific Prevalence of Diabetes Among U.S. Adults: 1988–2010.

“This study included 22,586 adults sampled in three periods of the National Health and Nutrition Examination Survey (1988–1994, 1999–2004, and 2005–2010). Diabetes was defined as having self-reported diagnosed diabetes or having a fasting plasma glucose level ≥126 mg/dL or HbA1c ≥6.5% (48 mmol/mol). […] The number of adults with diabetes increased by 75% from 1988–1994 to 2005–2010. After adjusting for sex, race/ethnicity, and education level, the prevalence of diabetes increased over the two decades across all age-groups. Younger adults (20–34 years of age) had the lowest absolute increase in diabetes prevalence of 1.0%, followed by middle-aged adults (35–64) at 2.7% and older adults (≥65) at 10.0% (all P < 0.001). Comparing 2005–2010 with 1988–1994, the adjusted prevalence ratios (PRs) by age-group were 2.3, 1.3, and 1.5 for younger, middle-aged, and older adults, respectively (all P < 0.05). After additional adjustment for body mass index (BMI), waist-to-height ratio (WHtR), or waist circumference (WC), the adjusted PR remained statistically significant only for adults ≥65 years of age.

CONCLUSIONS During the past two decades, the prevalence of diabetes increased across all age-groups, but adults ≥65 years of age experienced the largest increase in absolute change. Obesity, as measured by BMI, WHtR, or WC, was strongly associated with the increase in diabetes prevalence, especially in adults <65.”

“The crude prevalence of diabetes changed from 8.4% (95% CI 7.7–9.1%) in 1988–1994 to 12.1% (11.3–13.1%) in 2005–2010, with a relative increase of 44.8% (28.3–61.3%) between the two survey periods. There was less change of prevalence of undiagnosed diabetes (P = 0.053). […] The estimated number (in millions) of adults with diabetes grew from 14.9 (95% CI 13.3–16.4) in 1988–1994 to 26.1 (23.8–28.3) in 2005–2010, resulting in an increase of 11.2 prevalent cases (a 75.5% [52.1–98.9%] increase). Younger adults contributed 5.5% (2.5–8.4%), middle-aged adults contributed 52.9% (43.4–62.3%), and older adults contributed 41.7% (31.9–51.4%) of the increased number of cases. In each survey time period, the number of adults with diabetes increased with age until ∼60–69 years; thereafter, it decreased […] the largest increase of cases occurred in middle-aged and older adults.”
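
The headline percentages are easy to reproduce from the quoted numbers (my own raw arithmetic on the rounded figures; small discrepancies versus the published estimates are expected because the latter are survey-weighted):

# Relative increase in crude prevalence between the two survey periods.
prev_1988_1994 = 8.4
prev_2005_2010 = 12.1
print(f"{100 * (prev_2005_2010 - prev_1988_1994) / prev_1988_1994:.1f} %")  # ~44.0 %, vs the reported 44.8 %

# Relative increase in the estimated number of adults with diabetes (millions).
n_1988_1994 = 14.9
n_2005_2010 = 26.1
print(f"{100 * (n_2005_2010 - n_1988_1994) / n_1988_1994:.1f} %")           # ~75.2 %, vs the reported 75.5 %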

vi. The Expression of Inflammatory Genes Is Upregulated in Peripheral Blood of Patients With Type 1 Diabetes.

“Although much effort has been devoted toward discoveries with respect to gene expression profiling in human T1D in the last decade (1–5), previous studies had serious limitations. Microarray-based gene expression profiling is a powerful discovery platform, but the results must be validated by an alternative technique such as real-time RT-PCR. Unfortunately, few of the previous microarray studies on T1D have been followed by a validation study. Furthermore, most previous gene expression studies had small sample sizes (<100 subjects in each group) that are not adequate for the human population given the expectation of large expression variations among individual subjects. Finally, the selection of appropriate reference genes for normalization of quantitative real-time PCR has a major impact on data quality. Most of the previous studies have used only a single reference gene for normalization. Ideally, gene transcription studies using real-time PCR should begin with the selection of an appropriate set of reference genes to obtain more reliable results (6–8).

We have previously carried out extensive microarray analysis and identified >100 genes with significantly differential expression between T1D patients and control subjects. Most of these genes have important immunological functions and were found to be upregulated in autoantibody-positive subjects, suggesting their potential use as predictive markers and involvement in T1D development (2). In this study, real-time RT-PCR was performed to validate a subset of the differentially expressed genes in a large sample set of 928 T1D patients and 922 control subjects. In addition to the verification of the gene expression associated with T1D, we also identified genes with significant expression changes in T1D patients with diabetes complications.”

“Of the 18 genes analyzed here, eight genes […] had higher expression and three genes […] had lower expression in T1D patients compared with control subjects, indicating that genes involved in inflammation, immune regulation, and antigen processing and presentation are significantly altered in PBMCs from T1D patients. Furthermore, one adhesion molecule […] and three inflammatory genes mainly expressed by myeloid cells […] were significantly higher in T1D patients with complications (odds ratio [OR] 1.3–2.6, adjusted P value = 0.005–10^(-8)), especially those patients with neuropathy (OR 4.8–7.9, adjusted P value <0.005). […] These findings suggest that inflammatory mediators secreted mainly by myeloid cells are implicated in T1D and its complications.”

vii. Overexpression of Hemopexin in the Diabetic Eye – A new pathogenic candidate for diabetic macular edema.

“Diabetic retinopathy remains the leading cause of preventable blindness among working-age individuals in developed countries (1). Whereas proliferative diabetic retinopathy (PDR) is the commonest sight-threatening lesion in type 1 diabetes, diabetic macular edema (DME) is the primary cause of poor visual acuity in type 2 diabetes. Because of the high prevalence of type 2 diabetes, DME is the main cause of visual impairment in diabetic patients (2). When clinically significant DME appears, laser photocoagulation is currently indicated. However, the optimal period for laser treatment is frequently passed and, moreover, is not uniformly successful in halting visual decline. In addition, photocoagulation is not without side effects, with visual field loss and impairment of either adaptation or color vision being the most frequent. Intravitreal corticosteroids have been successfully used in eyes with persistent DME and loss of vision after the failure of conventional treatment. However, reinjections are commonly needed, and there are substantial adverse effects such as infection, glaucoma, and cataract formation. Intravitreal anti–vascular endothelial growth factor (VEGF) agents have also produced an improvement in visual acuity and a decrease in retinal thickness in DME, even in nonresponders to conventional treatment (3). However, apart from local side effects such as endophthalmitis and retinal detachment, the response to treatment of DME by VEGF blockade is not prolonged and is subject to significant variability. For all these reasons, new pharmacological treatments based on the understanding of the pathophysiological mechanisms of DME are needed.”

“Vascular leakage due to the breakdown of the blood-retinal barrier (BRB) is the main event involved in the pathogenesis of DME (4). However, little is known regarding the molecules primarily involved in this event. By means of a proteomic analysis, we have found that hemopexin was significantly increased in the vitreous fluid of patients with DME in comparison with PDR and nondiabetic control subjects (5). Hemopexin is the best characterized permeability factor in steroid-sensitive nephrotic syndrome (6,7). […] T cell–associated cytokines like tumor necrosis factor-α are able to enhance hemopexin production in mesangial cells in vitro, and this effect is prevented by corticosteroids (8). However, whether hemopexin also acts as a permeability factor in the BRB and its potential response to corticosteroids remains to be elucidated. […] the aims of the current study were 1) to compare hemopexin and hemopexin receptor (LDL receptor–related protein [LRP1]) levels in retina and in vitreous fluid from diabetic and nondiabetic patients, 2) to evaluate the effect of hemopexin on the permeability of outer and inner BRB in cell cultures, and 3) to determine whether anti-hemopexin antibodies and dexamethasone were able to prevent an eventual hemopexin-induced hyperpermeability.”

“In the current study, we […] confirmed our previous results obtained by a proteomic approach showing that hemopexin is higher in the vitreous fluid of diabetic patients with DME in comparison with diabetic patients with PDR and nondiabetic subjects. In addition, we provide the first evidence that hemopexin is overexpressed in diabetic eye. Furthermore, we have shown that hemopexin leads to the disruption of RPE [retinal pigment epithelium] cells, thus increasing permeability, and that this effect is prevented by dexamethasone. […] Our findings suggest that hemopexin can be considered a new candidate in the pathogenesis of DME and a new therapeutic target.”

viii. Relationship Between Overweight and Obesity With Hospitalization for Heart Failure in 20,985 Patients With Type 1 Diabetes.

“We studied patients with type 1 diabetes included in the Swedish National Diabetes Registry during 1998–2003, and they were followed up until hospitalization for HF, death, or 31 December 2009. Cox regression was used to estimate relative risks. […] Type 1 diabetes is defined in the NDR as receiving treatment with insulin only and onset at age 30 years or younger. These characteristics previously have been validated as accurate in 97% of cases (11). […] In a sample of 20,985 type 1 diabetic patients (mean age, 38.6 years; mean BMI, 25.0 kg/m2), 635 patients […] (3%) were admitted for a primary or secondary diagnosis of HF during a median follow-up of 9 years, with an incidence of 3.38 events per 1,000 patient-years (95% CI, 3.12–3.65). […] Cox regression adjusting for age, sex, diabetes duration, smoking, HbA1c, systolic and diastolic blood pressures, and baseline and intercurrent comorbidities (including myocardial infarction) showed a significant relationship between BMI and hospitalization for HF (P < 0.0001). In reference to patients in the BMI 20–25 kg/m2 category, hazard ratios (HRs) were as follows: HR 1.22 (95% CI, 0.83–1.78) for BMI <20 kg/m2; HR 0.94 (95% CI, 0.78–1.12) for BMI 25–30 kg/m2; HR 1.55 (95% CI, 1.20–1.99) for BMI 30–35 kg/m2; and HR 2.90 (95% CI, 1.92–4.37) for BMI ≥35 kg/m2.

CONCLUSIONS Obesity, particularly severe obesity, is strongly associated with hospitalization for HF in patients with type 1 diabetes, whereas no similar relation was present in overweight and low body weight.”
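
The quoted incidence can be reproduced approximately from the cohort size, the event count, and the median follow-up (an approximation of mine; the paper will have used exact person-time rather than cohort size multiplied by median follow-up):

patients = 20_985
events = 635
median_follow_up_years = 9

approx_patient_years = patients * median_follow_up_years
rate_per_1000_py = 1000 * events / approx_patient_years
print(f"{rate_per_1000_py:.2f} events per 1,000 patient-years")  # ~3.36, close to the reported 3.38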

“In contrast to type 2 diabetes, obesity is not implicated as a causal factor in type 1 diabetes and maintaining normal weight is accordingly less of a focus in clinical practice of patients with type 1 diabetes. Because most patients with type 2 diabetes are overweight or obese and glucose levels can normalize in some patients after weight reduction, this is usually an important part of integrated diabetes care. Our findings indicate that given the substantial risk of cardiovascular disease in type 1 diabetic patients, it is crucial for clinicians to also address weight issues in type 1 diabetes. Because many patients are normal weight when diabetes is diagnosed, careful monitoring of weight with a view to maintaining normal weight is probably more essential than previously thought. Although overweight was not associated with an increased risk of HF, higher BMI levels probably increase the risk of future obesity. Our finding that 71% of patients with BMI >35 kg/m2 were women is potentially important, although this should be tested in other populations given that it could be a random finding. If not random, especially because the proportion was much higher than in the entire cohort (45%), then it may indicate that severe obesity is a greater problem in women than in men with type 1 diabetes.”

November 30, 2017 Posted by | Cardiology, Diabetes, Genetics, Molecular biology, Nephrology, Neurology, Ophthalmology, Pharmacology, Studies | Leave a comment

Organic Chemistry (II)

I have included some observations from the second half of the book below, as well as some links to topics covered.

“[E]nzymes are used routinely to catalyse reactions in the research laboratory, and for a variety of industrial processes involving pharmaceuticals, agrochemicals, and biofuels. In the past, enzymes had to be extracted from natural sources — a process that was both expensive and slow. But nowadays, genetic engineering can incorporate the gene for a key enzyme into the DNA of fast growing microbial cells, allowing the enzyme to be obtained more quickly and in far greater yield. Genetic engineering has also made it possible to modify the amino acids making up an enzyme. Such modified enzymes can prove more effective as catalysts, accept a wider range of substrates, and survive harsher reaction conditions. […] New enzymes are constantly being discovered in the natural world as well as in the laboratory. Fungi and bacteria are particularly rich in enzymes that allow them to degrade organic compounds. It is estimated that a typical bacterial cell contains about 3,000 enzymes, whereas a fungal cell contains 6,000. Considering the variety of bacterial and fungal species in existence, this represents a huge reservoir of new enzymes, and it is estimated that only 3 per cent of them have been investigated so far.”

“One of the most important applications of organic chemistry involves the design and synthesis of pharmaceutical agents — a topic that is defined as medicinal chemistry. […] In the 19th century, chemists isolated chemical components from known herbs and extracts. Their aim was to identify a single chemical that was responsible for the extract’s pharmacological effects — the active principle. […] It was not long before chemists synthesized analogues of active principles. Analogues are structures which have been modified slightly from the original active principle. Such modifications can often improve activity or reduce side effects. This led to the concept of the lead compound — a compound with a useful pharmacological activity that could act as the starting point for further research. […] The first half of the 20th century culminated in the discovery of effective antimicrobial agents. […] The 1960s can be viewed as the birth of rational drug design. During that period there were important advances in the design of effective anti-ulcer agents, anti-asthmatics, and beta-blockers for the treatment of high blood pressure. Much of this was based on trying to understand how drugs work at the molecular level and proposing theories about why some compounds were active and some were not.”

“[R]ational drug design was boosted enormously towards the end of the century by advances in both biology and chemistry. The sequencing of the human genome led to the identification of previously unknown proteins that could serve as potential drug targets. […] Advances in automated, small-scale testing procedures (high-throughput screening) also allowed the rapid testing of potential drugs. In chemistry, advances were made in X-ray crystallography and NMR spectroscopy, allowing scientists to study the structure of drugs and their mechanisms of action. Powerful molecular modelling software packages were developed that allowed researchers to study how a drug binds to a protein binding site. […] the development of automated synthetic methods has vastly increased the number of compounds that can be synthesized in a given time period. Companies can now produce thousands of compounds that can be stored and tested for pharmacological activity. Such stores have been called chemical libraries and are routinely tested to identify compounds capable of binding with a specific protein target. These advances have boosted medicinal chemistry research over the last twenty years in virtually every area of medicine.”

“Drugs interact with molecular targets in the body such as proteins and nucleic acids. However, the vast majority of clinically useful drugs interact with proteins, especially receptors, enzymes, and transport proteins […] Enzymes are […] important drug targets. Drugs that bind to the active site and prevent the enzyme acting as a catalyst are known as enzyme inhibitors. […] Enzymes are located inside cells, and so enzyme inhibitors have to cross cell membranes in order to reach them—an important consideration in drug design. […] Transport proteins are targets for a number of therapeutically important drugs. For example, a group of antidepressants known as selective serotonin reuptake inhibitors prevent serotonin being transported into neurons by transport proteins.”

“The main pharmacokinetic factors are absorption, distribution, metabolism, and excretion. Absorption relates to how much of an orally administered drug survives the digestive enzymes and crosses the gut wall to reach the bloodstream. Once there, the drug is carried to the liver where a certain percentage of it is metabolized by metabolic enzymes. This is known as the first-pass effect. The ‘survivors’ are then distributed round the body by the blood supply, but this is an uneven process. The tissues and organs with the richest supply of blood vessels receive the greatest proportion of the drug. Some drugs may get ‘trapped’ or sidetracked. For example fatty drugs tend to get absorbed in fat tissue and fail to reach their target. The kidneys are chiefly responsible for the excretion of drugs and their metabolites.”
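
Pharmacokinetic behaviour of this kind is usually summarized with compartment models. Below is a minimal one-compartment model with first-order oral absorption and first-order elimination (the standard Bateman equation from the pharmacokinetics literature, not something taken from this book, and all parameter values are invented for illustration):

import math

def plasma_concentration(t_hours, dose_mg=100.0, bioavailability=0.5,
                         volume_l=40.0, ka_per_h=1.0, ke_per_h=0.1):
    """One-compartment model with first-order oral absorption and elimination (Bateman equation)."""
    scale = (bioavailability * dose_mg * ka_per_h) / (volume_l * (ka_per_h - ke_per_h))
    return scale * (math.exp(-ke_per_h * t_hours) - math.exp(-ka_per_h * t_hours))

for t in (0.5, 1, 2, 4, 8, 12, 24):
    print(f"t = {t:4.1f} h: {plasma_concentration(t):.2f} mg/L")

The bioavailability factor is where the first-pass effect enters: it is the fraction of the oral dose that survives gut-wall and hepatic metabolism. Distribution is collapsed into a single volume term and excretion into the elimination rate constant, which is obviously a drastic simplification of the physiology described above.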

“Having identified a lead compound, it is important to establish which features of the compound are important for activity. This, in turn, can give a better understanding of how the compound binds to its molecular target. Most drugs are significantly smaller than molecular targets such as proteins. This means that the drug binds to quite a small region of the protein — a region known as the binding site […]. Within this binding site, there are binding regions that can form different types of intermolecular interactions such as van der Waals interactions, hydrogen bonds, and ionic interactions. If a drug has functional groups and substituents capable of interacting with those binding regions, then binding can take place. A lead compound may have several groups that are capable of forming intermolecular interactions, but not all of them are necessarily needed. One way of identifying the important binding groups is to crystallize the target protein with the drug bound to the binding site. X-ray crystallography then produces a picture of the complex which allows identification of binding interactions. However, it is not always possible to crystallize target proteins and so a different approach is needed. This involves synthesizing analogues of the lead compound where groups are modified or removed. Comparing the activity of each analogue with the lead compound can then determine whether a particular group is important or not. This is known as an SAR study, where SAR stands for structure–activity relationships. Once the important binding groups have been identified, the pharmacophore for the lead compound can be defined. This specifies the important binding groups and their relative position in the molecule.”
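
The logic of an SAR study can be caricatured in a few lines of code: remove one group at a time, measure the activity of the resulting analogue, and flag the groups whose removal destroys activity. This is purely my own toy illustration with hypothetical analogues and made-up potency values, not anything from the book.

```python
# Hypothetical lead compound with three candidate binding groups, and
# made-up potencies (higher = more active) for analogues lacking each group.
lead_activity = 100.0
analogues = {
    "minus hydroxyl":  5.0,   # activity collapses -> hydroxyl looks essential for binding
    "minus methyl":   95.0,   # little change -> methyl is probably not a binding group
    "minus amine":    12.0,   # large drop -> amine is likely involved in binding
}

for removed_group, activity in analogues.items():
    drop = 1 - activity / lead_activity
    verdict = "important for binding" if drop > 0.5 else "probably not important"
    print(f"{removed_group:>15}: activity {activity:5.1f} ({drop:.0%} drop) -> {verdict}")
```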

“One way of identifying the active conformation of a flexible lead compound is to synthesize rigid analogues where the binding groups are locked into defined positions. This is known as rigidification or conformational restriction. The pharmacophore will then be represented by the most active analogue. […] A large number of rotatable bonds is likely to have an adverse effect on drug activity. This is because a flexible molecule can adopt a large number of conformations, and only one of these shapes corresponds to the active conformation. […] In contrast, a totally rigid molecule containing the required pharmacophore will bind the first time it enters the binding site, resulting in greater activity. […] It is also important to optimize a drug’s pharmacokinetic properties such that it can reach its target in the body. Strategies include altering the drug’s hydrophilic/hydrophobic properties to improve absorption, and the addition of substituents that block metabolism at specific parts of the molecule. […] The drug candidate must [in general] have useful activity and selectivity, with minimal side effects. It must have good pharmacokinetic properties, lack toxicity, and preferably have no interactions with other drugs that might be taken by a patient. Finally, it is important that it can be synthesized as cheaply as possible”.
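
The points about flexibility (rotatable bonds) and hydrophilic/hydrophobic balance are the kind of thing medicinal chemists routinely compute with cheminformatics toolkits. As a small sketch, assuming the open-source RDKit library is installed, one can count rotatable bonds and estimate logP (a standard hydrophobicity measure) for a couple of well-known drugs:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

# SMILES strings for two familiar drug molecules
molecules = {
    "aspirin":   "CC(=O)Oc1ccccc1C(=O)O",
    "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
}

for name, smiles in molecules.items():
    mol = Chem.MolFromSmiles(smiles)
    rot_bonds = Descriptors.NumRotatableBonds(mol)   # proxy for conformational flexibility
    logp = Descriptors.MolLogP(mol)                  # estimated hydrophobicity
    print(f"{name:>9}: {rot_bonds} rotatable bonds, logP = {logp:.2f}")
```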

“Most drugs that have reached clinical trials for the treatment of Alzheimer’s disease have failed. Between 2002 and 2012, 244 novel compounds were tested in 414 clinical trials, but only one drug gained approval. This represents a failure rate of 99.6 per cent as against a failure rate of 81 per cent for anti-cancer drugs.”
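
For what it's worth, the 99.6 per cent figure follows directly from those numbers (one approval out of 244 candidate compounds):

```python
compounds_tested = 244
approved = 1
failure_rate = 1 - approved / compounds_tested
print(f"{failure_rate:.1%}")  # ~99.6%
```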

“It takes about ten years and £160 million to develop a new pesticide […] The volume of global sales increased 47 per cent in the ten-year period between 2002 and 2012, while, in 2012, total sales amounted to £31 billion. […] In many respects, agrochemical research is similar to pharmaceutical research. The aim is to find pesticides that are toxic to ‘pests’, but relatively harmless to humans and beneficial life forms. The strategies used to achieve this goal are also similar. Selectivity can be achieved by designing agents that interact with molecular targets that are present in pests, but not other species. Another approach is to take advantage of any metabolic reactions that are unique to pests. An inactive prodrug could then be designed that is metabolized to a toxic compound in the pest, but remains harmless in other species. Finally, it might be possible to take advantage of pharmacokinetic differences between pests and other species, such that a pesticide reaches its target more easily in the pest. […] Insecticides are being developed that act on a range of different targets as a means of tackling resistance. If resistance should arise to an insecticide acting on one particular target, then one can switch to using an insecticide that acts on a different target. […] Several insecticides act as insect growth regulators (IGRs) and target the moulting process rather than the nervous system. In general, IGRs take longer to kill insects but are thought to cause less detrimental effects to beneficial insects. […] Herbicides control weeds that would otherwise compete with crops for water and soil nutrients. More is spent on herbicides than any other class of pesticide […] The synthetic agent 2,4-D […] was synthesized by ICI in 1940 as part of research carried out on biological weapons […] It was first used commercially in 1946 and proved highly successful in eradicating weeds in cereal grass crops such as wheat, maize, and rice. […] The compound […] is still the most widely used herbicide in the world.”

“The type of conjugated system present in a molecule determines the specific wavelength of light absorbed. In general, the more extended the conjugation, the higher the wavelength absorbed. For example, β-carotene […] is the molecule responsible for the orange colour of carrots. It has a conjugated system involving eleven double bonds, and absorbs light in the blue region of the spectrum. It appears red because the reflected light lacks the blue component. Zeaxanthin is very similar in structure to β-carotene, and is responsible for the yellow colour of corn. […] Lycopene absorbs blue-green light and is responsible for the red colour of tomatoes, rose hips, and berries. Chlorophyll absorbs red light and is coloured green. […] Scented molecules interact with olfactory receptors in the nose. […] there are around 400 different olfactory protein receptors in humans […] The natural aroma of a rose is due mainly to 2-phenylethanol, geraniol, and citronellol.”

“Over the last fifty years, synthetic materials have largely replaced natural materials such as wood, leather, wool, and cotton. Plastics and polymers are perhaps the most visible sign of how organic chemistry has changed society. […] It is estimated that production of global plastics was 288 million tons in 2012 […] Polymerization involves linking [monomers into] molecular strands called polymers […]. By varying the nature of the monomer, a huge range of different polymers can be synthesized with widely differing properties. The idea of linking small molecular building blocks into polymers is not a new one. Nature has been at it for millions of years using amino acid building blocks to make proteins, and nucleotide building blocks to make nucleic acids […] The raw materials for plastics come mainly from oil, which is a finite resource. Therefore, it makes sense to recycle or depolymerize plastics to recover that resource. Virtually all plastics can be recycled, but it is not necessarily economically feasible to do so. Traditional recycling of polyesters, polycarbonates, and polystyrene tends to produce inferior plastics that are suitable only for low-quality goods.”

Adipic acid.
Protease. Lipase. Amylase. Cellulase.
Reflectin.
Agonist.
Antagonist.
Prodrug.
Conformational change.
Process chemistry (chemical development).
Clinical trial.
Phenylbutazone.
Pesticide.
Dichlorodiphenyltrichloroethane.
Aldrin.
N-Methyl carbamate.
Organophosphates.
Pyrethrum.
Neonicotinoid.
Colony collapse disorder.
Ecdysone receptor.
Methoprene.
Tebufenozide.
Fungicide.
Quinone outside inhibitors (QoI).
Allelopathy.
Glyphosate.
11-cis retinal.
Chromophore.
Synthetic dyes.
Methylene blue.
Cryptochrome.
Pheromone.
Artificial sweeteners.
Miraculin.
Addition polymer.
Condensation polymer.
Polyethylene.
Polypropylene.
Polyvinyl chloride.
Bisphenol A.
Vulcanization.
Kevlar.
Polycarbonate.
Polyhydroxyalkanoates.
Bioplastic.
Nanochemistry.
Allotropy.
Allotropes of carbon.
Carbon nanotube.
Rotaxane.
π-interactions.
Molecular switch.

November 11, 2017 Posted by | Biology, Books, Botany, Chemistry, Medicine, Molecular biology, Pharmacology, Zoology | Leave a comment

Organic Chemistry (I)

This book’s a bit longer than most ‘A very short introduction to…’ publications, and it’s quite dense at times, but it includes a lot of interesting stuff. It took me a while to finish, as I put it away for a while when I hit some of the more demanding content, but I picked it up again later and really enjoyed most of the coverage. In the end I decided I wouldn’t be doing the book justice if I limited my coverage to just one post, so this will be the first of two posts about the book, covering roughly the first half of it.

As usual I have included in my post both some observations from the book (with a few links added to the quotes where I figured they might be helpful) and some wiki links to topics discussed in the book.

“Organic chemistry is a branch of chemistry that studies carbon-based compounds in terms of their structure, properties, and synthesis. In contrast, inorganic chemistry covers the chemistry of all the other elements in the periodic table […] carbon-based compounds are crucial to the chemistry of life. [However] organic chemistry has come to be defined as the chemistry of carbon-based compounds, whether they originate from a living system or not. […] To date, 16 million compounds have been synthesized in organic chemistry laboratories across the world, with novel compounds being synthesized every day. […] The list of commodities that rely on organic chemistry include[s] plastics, synthetic fabrics, perfumes, colourings, sweeteners, synthetic rubbers, and many other items that we use every day.”

“For a neutral carbon atom, there are six electrons occupying the space around the nucleus […] The electrons in the outer shell are defined as the valence electrons and these determine the chemical properties of the atom. The valence electrons are easily ‘accessible’ compared to the two electrons in the first shell. […] There is great significance in carbon being in the middle of the periodic table. Elements which are close to the left-hand side of the periodic table can lose their valence electrons to form positive ions. […] Elements on the right-hand side of the table can gain electrons to form negatively charged ions. […] The impetus for elements to form ions is the stability that is gained by having a full outer shell of electrons. […] Ion formation is feasible for elements situated to the left or the right of the periodic table, but it is less feasible for elements in the middle of the table. For carbon to gain a full outer shell of electrons, it would have to lose or gain four valence electrons, but this would require far too much energy. Therefore, carbon achieves a stable, full outer shell of electrons by another method. It shares electrons with other elements to form bonds. Carbon excels in this and can be considered chemistry’s ultimate elemental socialite. […] Carbon’s ability to form covalent bonds with other carbon atoms is one of the principal reasons why so many organic molecules are possible. Carbon atoms can be linked together in an almost limitless way to form a mind-blowing variety of carbon skeletons. […] carbon can form a bond to hydrogen, but it can also form bonds to atoms such as nitrogen, phosphorus, oxygen, sulphur, fluorine, chlorine, bromine, and iodine. As a result, organic molecules can contain a variety of different elements. Further variety can arise because it is possible for carbon to form double bonds or triple bonds to a variety of other atoms. The most common double bonds are formed between carbon and oxygen, carbon and nitrogen, or between two carbon atoms. […] The most common triple bonds are found between carbon and nitrogen, or between two carbon atoms.”

“[C]hirality has huge importance. The two enantiomers of a chiral molecule behave differently when they interact with other chiral molecules, and this has important consequences in the chemistry of life. As an analogy, consider your left and right hands. These are asymmetric in shape and are non-superimposable mirror images. Similarly, a pair of gloves are non-superimposable mirror images. A left hand will fit snugly into a left-hand glove, but not into a right-hand glove. In the molecular world, a similar thing occurs. The proteins in our bodies are chiral molecules which can distinguish between the enantiomers of other molecules. For example, enzymes can distinguish between the two enantiomers of a chiral compound and catalyse a reaction with one of the enantiomers but not the other.”

“A key concept in organic chemistry is the functional group. A functional group is essentially a distinctive arrangement of atoms and bonds. […] Functional groups react in particular ways, and so it is possible to predict how a molecule might react based on the functional groups that are present. […] it is impossible to build a molecule atom by atom. Instead, target molecules are built by linking up smaller molecules. […] The organic chemist needs to have a good understanding of the reactions that are possible between different functional groups when choosing the molecular building blocks to be used for a synthesis. […] There are many […] reasons for carrying out FGTs [functional group transformations], especially when synthesizing complex molecules. For example, a starting material or a synthetic intermediate may lack a functional group at a key position of the molecular structure. Several reactions may then be required to introduce that functional group. On other occasions, a functional group may be added to a particular position then removed at a later stage. One reason for adding such a functional group would be to block an unwanted reaction at that position of the molecule. Another common situation is where a reactive functional group is converted to a less reactive functional group such that it does not interfere with a subsequent reaction. Later on, the original functional group is restored by another functional group transformation. This is known as a protection/deprotection strategy. The more complex the target molecule, the greater the synthetic challenge. Complexity is related to the number of rings, functional groups, substituents, and chiral centres that are present. […] The more reactions that are involved in a synthetic route, the lower the overall yield. […] retrosynthesis is a strategy by which organic chemists design a synthesis before carrying it out in practice. It is called retrosynthesis because the design process involves studying the target structure and working backwards to identify how that molecule could be synthesized from simpler starting materials. […] a key stage in retrosynthesis is identifying a bond that can be ‘disconnected’ to create those simpler molecules.”
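
Retrosynthesis lends itself to a simple computational caricature: treat each known ‘disconnection’ as a rule that breaks a target into simpler precursors, and recurse until everything left is a purchasable starting material. The sketch below is my own toy illustration with invented compound names and rules; it does not correspond to real chemistry, and real retrosynthesis software is vastly more sophisticated.

```python
# Toy retrosynthetic search. Compound names and disconnection rules are invented
# purely for illustration.
disconnections = {
    "target_ester": ["acid_A", "alcohol_B"],   # ester disconnects to an acid and an alcohol
    "acid_A":       ["nitrile_C"],             # the acid could come from hydrolysis of a nitrile
    "nitrile_C":    ["halide_D"],              # the nitrile could come from an alkyl halide + cyanide
}
purchasable = {"alcohol_B", "halide_D"}        # simple starting materials one could just buy

def retrosynthesis(compound, depth=0):
    """Work backwards from the target, printing the tree of simpler precursors."""
    note = " (buy it)" if compound in purchasable else ""
    print("  " * depth + compound + note)
    for precursor in disconnections.get(compound, []):
        retrosynthesis(precursor, depth + 1)

retrosynthesis("target_ester")
```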

“[V]ery few reactions produce the spectacular visual and audible effects observed in chemistry demonstrations. More typically, reactions involve mixing together two colourless solutions to produce another colourless solution. Temperature changes are a bit more informative. […] However, not all reactions generate heat, and monitoring the temperature is not a reliable way of telling whether the reaction has gone to completion or not. A better approach is to take small samples of the reaction solution at various times and to test these by chromatography or spectroscopy. […] If a reaction is taking place very slowly, different reaction conditions could be tried to speed it up. This could involve heating the reaction, carrying out the reaction under pressure, stirring the contents vigorously, ensuring that the reaction is carried out in a dry atmosphere, using a different solvent, using a catalyst, or using one of the reagents in excess. […] There are a large number of variables that can affect how efficiently reactions occur, and organic chemists in industry are often employed to develop the ideal conditions for a specific reaction. This is an area of organic chemistry known as chemical development. […] Once a reaction has been carried out, it is necessary to isolate and purify the reaction product. This often proves more time-consuming than carrying out the reaction itself. Ideally, one would remove the solvent used in the reaction and be left with the product. However, in most reactions this is not possible as other compounds are likely to be present in the reaction mixture. […] it is usually necessary to carry out procedures that will separate and isolate the desired product from these other compounds. This is known as ‘working up’ the reaction.”

“Proteins are large molecules (macromolecules) which serve a myriad of purposes, and are essentially polymers constructed from molecular building blocks called amino acids […]. In humans, there are twenty different amino acids having the same ‘head group’, consisting of a carboxylic acid and an amine attached to the same carbon atom […] The amino acids are linked up by the carboxylic acid of one amino acid reacting with the amine group of another to form an amide link. Since a protein is being produced, the amide bond is called a peptide bond, and the final protein consists of a polypeptide chain (or backbone) with different side chains ‘hanging off’ the chain […]. The sequence of amino acids present in the polypeptide sequence is known as the primary structure. Once formed, a protein folds into a specific 3D shape […] Nucleic acids […] are another form of biopolymer, and are formed from molecular building blocks called nucleotides. These link up to form a polymer chain where the backbone consists of alternating sugar and phosphate groups. There are two forms of nucleic acid — deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). In DNA, the sugar is deoxyribose, whereas the sugar in RNA is ribose. Each sugar ring has a nucleic acid base attached to it. For DNA, there are four different nucleic acid bases called adenine (A), thymine (T), cytosine (C), and guanine (G) […]. These bases play a crucial role in the overall structure and function of nucleic acids. […] DNA is actually made up of two DNA strands […] where the sugar-phosphate backbones are intertwined to form a double helix. The nucleic acid bases point into the centre of the helix, and each nucleic acid base ‘pairs up’ with a nucleic acid base on the opposite strand through hydrogen bonding. The base pairing is specifically between adenine and thymine, or between cytosine and guanine. This means that one polymer strand is complementary to the other, a feature that is crucial to DNA’s function as the storage molecule for genetic information. […] [E]ach strand […] act[s] as the template for the creation of a new strand to produce two identical ‘daughter’ DNA double helices […] [A] genetic alphabet of four letters (A, T, G, C) […] code[s] for twenty amino acids. […] [A]n amino acid is coded, not by one nucleotide, but by a set of three. The number of possible triplet combinations using four ‘letters’ is more than enough to encode all the amino acids.”
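
Two of the points in that passage, complementary base pairing and the triplet code, are easy to make concrete in a few lines. This is my own illustration, not from the book:

```python
from itertools import product

# Watson-Crick pairing: A pairs with T, C pairs with G
complement = {"A": "T", "T": "A", "C": "G", "G": "C"}

strand = "ATGCGTTACC"
partner = "".join(complement[base] for base in strand)
print(strand)
print(partner)            # the complementary strand

# A triplet code over a 4-letter alphabet gives 4**3 = 64 possible codons,
# comfortably more than the 20 amino acids that need encoding.
codons = ["".join(triplet) for triplet in product("ATGC", repeat=3)]
print(len(codons))        # 64
```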

“Proteins have a variety of functions. Some proteins, such as collagen, keratin, and elastin, have a structural role. Others catalyse life’s chemical reactions and are called enzymes. They have a complex 3D shape, which includes a cavity called the active site […]. This is where the enzyme binds the molecules (substrates) that undergo the enzyme-catalysed reaction. […] A substrate has to have the correct shape to fit an enzyme’s active site, but it also needs binding groups to interact with that site […]. These interactions hold the substrate in the active site long enough for a reaction to occur, and typically involve hydrogen bonds, as well as van der Waals and ionic interactions. When a substrate binds, the enzyme normally undergoes an induced fit. In other words, the shape of the active site changes slightly to accommodate the substrate, and to hold it as tightly as possible. […] Once a substrate is bound to the active site, amino acids in the active site catalyse the subsequent reaction.”

“Proteins called receptors are involved in chemical communication between cells and respond to chemical messengers called neurotransmitters if they are released from nerves, or hormones if they are released by glands. Most receptors are embedded in the cell membrane, with part of their structure exposed on the outer surface of the cell membrane, and another part exposed on the inner surface. On the outer surface they contain a binding site that binds the molecular messenger. An induced fit then takes place that activates the receptor. This is very similar to what happens when a substrate binds to an enzyme […] The induced fit is crucial to the mechanism by which a receptor conveys a message into the cell — a process known as signal transduction. By changing shape, the protein initiates a series of molecular events that influences the internal chemistry within the cell. For example, some receptors are part of multiprotein complexes called ion channels. When the receptor changes shape, it causes the overall ion channel to change shape. This opens up a central pore allowing ions to flow across the cell membrane. The ion concentration within the cell is altered, and that affects chemical reactions within the cell, which ultimately lead to observable results such as muscle contraction. Not all receptors are membrane-bound. For example, steroid receptors are located within the cell. This means that steroid hormones need to cross the cell membrane in order to reach their target receptors. Transport proteins are also embedded in cell membranes and are responsible for transporting polar molecules such as amino acids into the cell. They are also important in controlling nerve action since they allow nerves to capture released neurotransmitters, such that they have a limited period of action.”

“RNA […] is crucial to protein synthesis (translation). There are three forms of RNA — messenger RNA (mRNA), transfer RNA (tRNA), and ribosomal RNA (rRNA). mRNA carries the genetic code for a particular protein from DNA to the site of protein production. Essentially, mRNA is a single-strand copy of a specific section of DNA. The process of copying that information is known as transcription. tRNA decodes the triplet code on mRNA by acting as a molecular adaptor. At one end of tRNA, there is a set of three bases (the anticodon) that can base pair to a set of three bases on mRNA (the codon). An amino acid is linked to the other end of the tRNA and the type of amino acid present is related to the anticodon that is present. When tRNA with the correct anticodon base pairs to the codon on mRNA, it brings the amino acid encoded by that codon. rRNA is a major constituent of a structure called a ribosome, which acts as the factory for protein production. The ribosome binds mRNA then coordinates and catalyses the translation process.”
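
As a toy illustration of transcription and translation (mine, not the book's), the snippet below transcribes a short DNA template into mRNA and then translates it using a deliberately tiny subset of the standard genetic code; the full code of course has 64 codons.

```python
# Only a handful of real codons are included here for illustration.
codon_table = {
    "AUG": "Met", "UUU": "Phe", "AAA": "Lys", "GGC": "Gly", "UAA": "STOP",
}

dna = "TACAAATTTCCGATT"    # template strand (directionality and processing ignored here)

# Transcription: template DNA -> mRNA (A->U, T->A, G->C, C->G)
transcribe = {"A": "U", "T": "A", "G": "C", "C": "G"}
mrna = "".join(transcribe[base] for base in dna)
print("mRNA:", mrna)       # AUGUUUAAAGGCUAA

# Translation: read the mRNA three bases (one codon) at a time
for i in range(0, len(mrna), 3):
    codon = mrna[i:i + 3]
    amino_acid = codon_table.get(codon, "?")
    print(codon, "->", amino_acid)
    if amino_acid == "STOP":
        break
```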

Organic chemistry.
Carbon.
Stereochemistry.
Delocalization.
Hydrogen bond.
Van der Waals forces.
Ionic bonding.
Chemoselectivity.
Coupling reaction.
Chemical polarity.
Crystallization.
Elemental analysis.
NMR spectroscopy.
Polymerization.
Miller–Urey experiment.
Vester-Ulbricht hypothesis.
Oligonucleotide.
RNA world.
Ribozyme.

November 9, 2017 Posted by | Biology, Books, Chemistry, Genetics, Molecular biology | Leave a comment