Econstudentlog

Blood (I)

As I also mentioned on goodreads, I was far from impressed with the first few pages of this book – but I read on, and the book actually turned out to include a decent amount of very reasonable coverage. Given how the author started out, the three-star rating should be considered a high rating, and in some parts of the book the author covers very complicated material quite well, considering the format of the book and its target group.

Below I have added some quotes and some links to topics/people/ideas/etc. covered in the first half of the book.

“[Clotting] makes it difficult to study the components of blood. It also [made] it impossible to store blood for transfusion [in the past]. So there was a need to find a way to prevent clotting. Fortunately the discovery that the metal calcium accelerated the rate of clotting enabled the development of a range of compounds that bound calcium and therefore prevented this process. One of them, citrate, is still in common use today [here’s a relevant link, US] when blood is being prepared for storage, or to stop blood from clotting while it is being pumped through kidney dialysis machines and other extracorporeal circuits. Adding citrate to blood, and leaving it alone, will result in gravity gradually separating the blood into three layers; the process can be accelerated by rapid spinning in a centrifuge […]. The top layer is clear and pale yellow or straw-coloured in appearance. This is the plasma, and it contains no cells. The bottom layer is bright red and contains the dense pellet of red cells that have sunk to the bottom of the tube. In-between these two layers is a very narrow layer, called the ‘buffy coat’ because of its pale yellow-brown appearance. This contains white blood cells and platelets. […] red cells, white cells, and platelets […] define the primary functions of blood: oxygen transport, immune defence, and coagulation.”

“The average human has about five trillion red blood cells per litre of blood or thirty trillion […] in total, making up a quarter of the total number of cells in the body. […] It is clear that the red cell has primarily evolved to perform a single function, oxygen transportation. Lacking a nucleus, and the requisite machinery to control the synthesis of new proteins, there is a limited ability for reprogramming or repair. […] each cell [makes] a complete traverse of the body’s circulation about once a minute. In its three- to four-month lifetime, this means every cell will do the equivalent of 150,000 laps around the body. […] Red cells lack mitochondria; they get their energy by fermenting glucose. […] A prosaic explanation for their lack of mitochondria is that it prevents the loss of any oxygen picked up from the lungs on the cells’ journey to the tissues that need it. The shape of the red cell is both deformable and elastic. In the bloodstream each cell is exposed to large shear forces. Yet, due to the properties of the membrane, they are able to constrict to enter blood vessels smaller in diameter than their normal size, bouncing back to their original shape on exiting the vessel the other side. This ability to safely enter very small openings allows capillaries to be very small. This in turn enables every cell in the body to be close to a capillary. Oxygen consequently only needs to diffuse a short distance from the blood to the surrounding tissue; this is vital as oxygen diffusion outside the bloodstream is very slow. Various pathologies, such as diabetes, peripheral vascular disease, and septic shock disturb this deformability of red blood cells, with deleterious consequences.”
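
A quick back-of-the-envelope check of the numbers in that passage – the sketch below assumes a 3.5-month lifespan (the midpoint of the quoted 3–4 months) and a typical adult blood volume of 5–6 litres, neither of which is stated explicitly in the quote:

```python
# Sanity check of the quoted red-cell figures. The lifespan midpoint and the adult
# blood volume are assumptions added here; the per-litre count and the once-a-minute
# circuit time are taken from the quote.
lifespan_days = 3.5 * 30                  # midpoint of the quoted 3-4 month lifetime
laps = lifespan_days * 24 * 60            # one full circuit of the circulation per minute
print(f"laps per red cell: {laps:,.0f}")  # ~151,000 - in line with the quoted 150,000

cells_per_litre = 5e12                    # quoted: ~5 trillion red cells per litre
for blood_volume_litres in (5, 6):        # assumed typical adult blood volume range
    total = cells_per_litre * blood_volume_litres
    print(f"{blood_volume_litres} litres -> {total:.1e} red cells")  # 2.5e13-3.0e13, i.e. 25-30 trillion
```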

“Over thirty different substances, proteins and carbohydrates, contribute to an individual’s blood group. By far the best known are the ABO and Rhesus systems. This is not because the proteins and carbohydrates that comprise these particular blood group types are vitally important for red cell function, but rather because a failure to account for these types during a blood transfusion can have catastrophic consequences. The ABO blood group is sugar-based […] blood from an O person can be safely given to anyone (with no sugar antigens this person is a ‘universal’ donor). […] As all that is needed to convert A and B to O is to remove a sugar, there is commercial and medical interest in devising ways to do this […] the Rh system […] is protein-based rather than sugar based. […] Rh proteins sit in the lipid membrane of the cell and control the transport of molecules into and out of the cell, most probably carbon dioxide and ammonia. The situation is complex, with over thirty different subgroups relating to subtle differences in the protein structure.”

“Unlike the red cells, all white cell subtypes contain nuclei. Some also contain on their surface a set of molecules called the ‘major histocompatibility complex’ (MHC). In humans, these receptors are also called ‘human leucocyte antigens’ (HLA). Their role is to recognize fragments of protein from pathogens and trigger the immune response that will ultimately destroy the invaders. Crudely, white blood cells can be divided into those that attack ‘on sight’ any foreign material — whether it be a fragment of inanimate material such as a splinter or an invading microorganism — and those that form part of a defence mechanism that recognizes specific biomolecules and marshals a slower, but equally devastating response. […] cells of the non-specific (or innate) immune system […] are divided into those that have nuclei with multiple lobed shapes (polymorphonuclear leukocytes or PMN) and those that have a single lobe nucleus ([…] ‘mononuclear leucocytes’ or ‘MN’). PMN contain granules inside them and so are sometimes called ‘granulocytes’.”

“Neutrophils are by far the most abundant PMN, making up over half of the total white blood cell count. The primary role of a neutrophil is to engulf a foreign object such as an invading microorganism. […] Eosinophils and basophils are the least abundant PMN cell type, each making up less than 2 per cent of white blood cells. The role of basophils is to respond to tissue injury by triggering an inflammatory response. […] When activated, basophils and mast cells degranulate, releasing molecules such as histamine, leukotrienes, and cytokines. Some of these molecules trigger an increase in blood flow causing redness and heat in the damaged site, others sensitize the area to pain. Greater permeability of the blood vessels results in plasma leaking out of the vessels and into the surrounding tissue at an increased rate, causing swelling. […] This is probably an evolutionary adaption to prevent overuse of a damaged part of the body but also helps to bring white cells and proteins to the damaged, inflamed area. […] The main function of eosinophils is to tackle invaders too large to be engulfed by neutrophils, such as the multicellular parasitic tapeworms and nematodes. […] Monocytes are a type of mononuclear leucocyte (MN) making up about 5 per cent of white blood cells. They spend even less time in the circulation than neutrophils, generally less than ten hours, but their time in the blood circulation does not end in death. Instead, they are converted into a cell called a ‘macrophage’ […] Their role is similar to the neutrophil, […] the ultimate fate of both the red blood cell and the neutrophil is to be engulfed by a macrophage. An excess of monocytes in a blood count (monocytosis) is an indicator of chronic inflammation”.

“Blood has to flow freely. Therefore, the red cells, white cells, and platelets are all suspended in a watery solution called ‘plasma’. But plasma is more than just water. In fact if it were only water all the cells would burst. Plasma has to have a very similar concentration of molecules and ions as the cells. This is because cells are permeable to water. So if the concentration of dissolved substances in the plasma was significantly higher than that in the cells, water would flow from the cells to the plasma in an attempt to equalize this gradient by diluting the plasma; this would result in cell shrinkage. Even worse, if the concentration in the plasma was lower than in the cells, water would flow into the cells from the plasma, and the resulting pressure increase would burst the cells, releasing all their contents into the plasma in the process. […] Plasma contains much more than just the ions required to prevent cells bursting or shrinking. It also contains key components designed to assist in cellular function. The protein clotting factors that are part of the coagulation cascade are always present in low concentrations […] Low levels of antibodies, produced by the lymphocytes, circulate […] In addition to antibodies, the plasma contains C-reactive proteins, Mannose-binding lectin and complement proteins that function as ‘opsonins’ […] A host of other proteins perform roles independent of oxygen delivery or immune defence. By far the most abundant protein in serum is albumin. […] Blood is the transport infrastructure for any molecule that needs to be moved around the body. Some, such as the water-soluble fuel glucose, and small hormones like insulin, dissolve freely in the plasma. Others that are less soluble hitch a ride on proteins […] Dangerous reactive molecules, such as iron, are also bound to proteins, in this case transferrin.”

“Immunoglobulins are produced by B lymphocytes and either remain bound on the surface of the cell (as part of the B cell receptor) or circulate freely in the plasma (as antibodies). Whatever their location, their purpose is the same – to bind to and capture foreign molecules (antigens). […] To perform the twin role of binding the antigen and the phagocytosing cell, immunoglobulins need to have two distinct parts to their structure — one that recognizes the foreign antigen and one that can be recognized — and destroyed — by the host defence system. The host defence system does not vary; a specific type of immunoglobulin will be recognized by one of the relatively few types of immune cells or proteins. Therefore this part of the immunoglobulin structure is not variable. But the nature of the foreign antigen will vary greatly; so the antigen-recognizing part of the structure must be highly variable. It is this that leads to the great variety of immunoglobulins. […] within the blood there is an army of potential binding sites that can recognize and bind to almost any conceivable chemical structure. Such variety is why the body is able to adapt and kill even organisms it has never encountered before. Indeed the ability to make an immunoglobulin recognize almost any structure has resulted in antibody binding assays being used historically in diagnostic tests ranging from pregnancy to drugs testing.”

“[I]mmunoglobulins consist of two different proteins — a heavy chain and a light chain. In the human heavy chain there are about forty different V (variable) segments, twenty-five different D (Diversity) segments, and six J (Joining) segments. The light chain also contains variable V and J segments. A completed immunoglobulin has a heavy chain with only one V, D, and J segment, and a light chain with only one V and J segment. It is the shuffling of these segments during development of the mature B lymphocyte that creates the diversity required […] the hypervariable regions are particularly susceptible to mutation during development. […] A separate class of immunoglobulin-like molecules also provide the key to cell-to-cell communication in the immune system. In humans, with the exception of the egg and sperm cells, all cells that possess a nucleus also have a protein on their surface called ‘Human Leucocyte Antigen (HLA) Class I’. The function of HLA Class I is to display fragments (antigens) of all the proteins currently being made inside the cell. It therefore acts like a billboard displaying the current highlights of cellular activity. Any proteins recognized as non-self by cytotoxic T cell lymphocytes will result in the whole cell being targeted for destruction […]. Another form of HLA, Class II, is only present on the surface of specialized cells of the immune system termed antigen presenting cells. In contrast to HLA Class I, the surface of HLA Class II cells displays antigens that originate from outside of the cell.”

Galen.
Bloodletting.
Marcello Malpighi.
William Harvey. De Motu Cordis.
Andreas Vesalius. De humani corporis fabrica.
Ibn al-Nafis. Michael Servetus. Realdo Colombo. Andrea Cesalpino.
Pulmonary circulation.
Hematopoietic stem cell. Bone marrow. Erythropoietin.
Hemoglobin.
Anemia.
Peroxidase.
Lymphocytes. NK cells. Granzyme. B lymphocytes. T lymphocytes. Antibody/Immunoglobulin. Lymphoblast.
Platelet. Coagulation cascade. Fibrinogen. Fibrin. Thrombin. Haemophilia. Hirudin. Von Willebrand disease. Haemophilia A. Haemophilia B.
Tonicity. Colloid osmotic pressure.
Adaptive immune system. Vaccination. Variolation. Antiserum. Agostino Bassi. Muscardine. Louis Pasteur. Élie Metchnikoff. Paul Ehrlich.
Humoral immunity. Membrane attack complex.
Niels Kaj Jerne. David Talmage. Frank Burnet. Clonal selection theory. Peter Medawar.
Susumu Tonegawa.

June 2, 2018 Posted by | Biology, Books, Immunology, Medicine, Molecular biology

A few diabetes papers of interest

i. Reevaluating the Evidence for Blood Pressure Targets in Type 2 Diabetes.

“There is general consensus that treating adults with type 2 diabetes mellitus (T2DM) and hypertension to a target blood pressure (BP) of <140/90 mmHg helps prevent cardiovascular disease (CVD). Whether more intensive BP control should be routinely targeted remains a matter of debate. While the American Diabetes Association (ADA) BP guidelines recommend an individualized assessment to consider different treatment goals, the American College of Cardiology/American Heart Association BP guidelines recommend a BP target of <130/80 mmHg for most individuals with hypertension, including those with T2DM (1–3).

In large part, these discrepant recommendations reflect the divergent results of the Action to Control Cardiovascular Risk in Diabetes-BP trial (ACCORD-BP) among people with T2DM and the Systolic Blood Pressure Intervention Trial (SPRINT), which excluded people with diabetes (4,5). Both trials evaluated the effect of intensive compared with standard BP treatment targets (<120 vs. <140 mmHg systolic) on a composite CVD end point of nonfatal myocardial infarction or stroke or death from cardiovascular causes. SPRINT also included unstable angina and acute heart failure in its composite end point. While ACCORD-BP did not show a significant benefit from the intervention (hazard ratio [HR] 0.88; 95% CI 0.73–1.06), SPRINT found a significant 25% relative risk reduction on the primary end point favoring intensive therapy (0.75; 0.64–0.89).”
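
To make the trial numbers in that excerpt a bit easier to parse: the quoted 25% relative risk reduction in SPRINT is simply one minus the hazard ratio, and ‘significant’ here means the 95% confidence interval excludes 1. A minimal Python sketch using only the figures quoted above (treating the hazard ratio as an approximation of relative risk):

```python
# Hazard ratios and 95% CIs copied from the excerpt above.
trials = {
    "ACCORD-BP": (0.88, 0.73, 1.06),
    "SPRINT":    (0.75, 0.64, 0.89),
}

for name, (hr, ci_low, ci_high) in trials.items():
    rrr = (1 - hr) * 100            # implied relative risk reduction in per cent
    significant = ci_high < 1.0     # a 95% CI excluding 1.0 -> statistically significant effect
    print(f"{name}: RRR ~{rrr:.0f}%, 95% CI {ci_low}-{ci_high}, "
          f"{'significant' if significant else 'not significant'}")
```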

“To some extent, CVD mechanisms and causes of death differ in T2DM patients compared with the general population. Microvascular disease (particularly kidney disease), accelerated vascular calcification, and diabetic cardiomyopathy are common in T2DM (13–15). Moreover, the rate of sudden cardiac arrest is markedly increased in T2DM and related, in part, to diabetes-specific factors other than ischemic heart disease (16). Hypoglycemia is a potential cause of CVD mortality that is specific to diabetes (17). In addition, polypharmacy is common and may increase CVD risk (18). Furthermore, nonvascular causes of death account for approximately 40% of the premature mortality burden experienced by T2DM patients (19). Whether these disease processes may render patients with T2DM less amenable to derive a mortality benefit from intensive BP control, however, is not known and should be the focus of future research.

In conclusion, the divergent results between ACCORD-BP and SPRINT are most readily explained by the apparent lack of benefit of intensive BP control on CVD and all-cause mortality in ACCORD-BP, rather than differences in the design, population characteristics, or interventions between the trials. This difference in effects on mortality may be attributable to differential mechanisms underlying CVD mortality in T2DM, to chance, or to both. These observations suggest that caution should be exercised extrapolating the results of SPRINT to patients with T2DM and support current ADA recommendations to individualize BP targets, targeting a BP of <140/90 mmHg in the majority of patients with T2DM and considering lower BP targets when it is anticipated that individual benefits outweigh risks.”

ii. Modelling incremental benefits on complications rates when targeting lower HbA1c levels in people with Type 2 diabetes and cardiovascular disease.

“Glucose‐lowering interventions in Type 2 diabetes mellitus have demonstrated reductions in microvascular complications and modest reductions in macrovascular complications. However, the degree to which targeting different HbA1c reductions might reduce risk is unclear. […] Participant‐level data for Trial Evaluating Cardiovascular Outcomes with Sitagliptin (TECOS) participants with established cardiovascular disease were used in a Type 2 diabetes‐specific simulation model to quantify the likely impact of different HbA1c decrements on complication rates. […] The use of the TECOS data limits our findings to people with Type 2 diabetes and established cardiovascular disease. […] Ten‐year micro‐ and macrovascular rates were estimated with HbA1c levels fixed at 86, 75, 64, 53 and 42 mmol/mol (10%, 9%, 8%, 7% and 6%) while holding other risk factors constant at their baseline levels. Cumulative relative risk reductions for each outcome were derived for each HbA1c decrement. […] Of 5717 participants studied, 72.0% were men and 74.2% White European, with a mean (sd) age of 66.2 (7.9) years, systolic blood pressure 134 (16.9) mmHg, LDL‐cholesterol 2.3 (0.9) mmol/l, HDL‐cholesterol 1.13 (0.3) mmol/l and median Type 2 diabetes duration 9.6 (5.1–15.6) years. Ten‐year cumulative relative risk reductions for modelled HbA1c values of 75, 64, 53 and 42 mmol/mol, relative to 86 mmol/mol, were 4.6%, 9.3%, 15.1% and 20.2% for myocardial infarction; 6.0%, 12.8%, 19.6% and 25.8% for stroke; 14.4%, 26.6%, 37.1% and 46.4% for diabetes‐related ulcer; 21.5%, 39.0%, 52.3% and 63.1% for amputation; and 13.6%, 25.4%, 36.0% and 44.7% for single‐eye blindness. […] We did not investigate outcomes for renal failure or chronic heart failure as previous research conducted to create the model did not find HbA1c to be a statistically significant independent risk factor for either condition, therefore no clinically meaningful differences would be expected from modelling different HbA1c levels 11.”
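
This paper, like the other diabetes papers below, switches between the two HbA1c unit systems; the quoted pairings (86 mmol/mol ≈ 10%, 53 mmol/mol ≈ 7%, and so on) follow from the standard IFCC-to-NGSP master equation, sketched here for reference:

```python
# Conversion between the HbA1c units used in these quotes: IFCC (mmol/mol) and
# NGSP/DCCT (%), via the standard master equation NGSP = 0.09148 * IFCC + 2.152.
def ifcc_to_ngsp(mmol_per_mol: float) -> float:
    """Convert HbA1c from mmol/mol (IFCC) to % (NGSP/DCCT)."""
    return 0.09148 * mmol_per_mol + 2.152

for value in (42, 53, 58, 64, 75, 86):
    print(f"{value} mmol/mol ~ {ifcc_to_ngsp(value):.1f}%")
# 42 -> 6.0%, 53 -> 7.0%, 58 -> 7.5%, 64 -> 8.0%, 75 -> 9.0%, 86 -> 10.0%
```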

“For microvascular complications, the absolute median estimates tended to be lower than for macrovascular complications at the same HbA1c level, but cumulative relative risk reductions were greater. For amputation the 10‐year absolute median estimate for a modelled constant HbA1c of 86 mmol/mol (10%) was 3.8% (3.7, 3.9), with successively lower values for each modelled 1% HbA1c decrement. Compared with the 86 mmol/mol (10%) HbA1c level, median relative risk reductions for amputation were 21.5% (21.1, 21.9) at 75 mmol/mol (9%) increasing to 52.3% (52.0, 52.6) at 53 mmol/mol (7%). […] Relative risk reductions in micro‐ and macrovascular complications for each 1% HbA1c reduction were similar for each decrement. The exception was all‐cause mortality, where the relative risk reductions for 1% HbA1c decrements were greater at higher baseline HbA1c levels. These simulated outcomes differ from the Diabetes Control and Complications Trial outcome in people with Type 1 diabetes, where lowering HbA1c from higher baseline levels had a greater impact on microvascular risk reduction 18.”
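
The risk-reduction estimates quoted above are easier to compare when gathered into one structure; the values below are simply copied from the paper excerpt (reference level: a constant HbA1c of 86 mmol/mol (10%)):

```python
# Ten-year cumulative relative risk reductions (%) relative to a constant HbA1c of
# 86 mmol/mol (10%), as quoted from the TECOS-based simulation above.
rrr_vs_86_mmol_mol = {
    #                          75 (9%),  64 (8%),  53 (7%),  42 (6%)  mmol/mol
    "myocardial infarction":  {75: 4.6,  64: 9.3,  53: 15.1, 42: 20.2},
    "stroke":                 {75: 6.0,  64: 12.8, 53: 19.6, 42: 25.8},
    "diabetes-related ulcer": {75: 14.4, 64: 26.6, 53: 37.1, 42: 46.4},
    "amputation":             {75: 21.5, 64: 39.0, 53: 52.3, 42: 63.1},
    "single-eye blindness":   {75: 13.6, 64: 25.4, 53: 36.0, 42: 44.7},
}

# The microvascular outcomes (ulcer, amputation, blindness) show the larger relative
# reductions at any given HbA1c decrement, as the authors note.
for outcome, by_hba1c in rrr_vs_86_mmol_mol.items():
    print(f"{outcome}: {by_hba1c[53]}% RRR at 53 mmol/mol (7%)")
```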

iii. Laser photocoagulation for proliferative diabetic retinopathy (Cochrane review).

“Diabetic retinopathy is a complication of diabetes in which high blood sugar levels damage the blood vessels in the retina. Sometimes new blood vessels grow in the retina, and these can have harmful effects; this is known as proliferative diabetic retinopathy. Laser photocoagulation is an intervention that is commonly used to treat diabetic retinopathy, in which light energy is applied to the retina with the aim of stopping the growth and development of new blood vessels, and thereby preserving vision. […] The aim of laser photocoagulation is to slow down the growth of new blood vessels in the retina and thereby prevent the progression of visual loss (Ockrim 2010). Focal laser photocoagulation uses the heat of light to seal or destroy abnormal blood vessels in the retina. Individual vessels are treated with a small number of laser burns.

PRP [panretinal photocoagulation, US] aims to slow down the growth of new blood vessels in a wider area of the retina. Many hundreds of laser burns are placed on the peripheral parts of the retina to stop blood vessels from growing (RCOphth 2012). It is thought that the anatomic and functional changes that result from photocoagulation may improve the oxygen supply to the retina, and so reduce the stimulus for neovascularisation (Stefansson 2001). Again the exact mechanisms are unclear, but it is possible that the decreased area of retinal tissue leads to improved oxygenation and a reduction in the levels of vascular endothelial growth factor. A reduction in levels of vascular endothelial growth factor may be important in reducing the risk of harmful new vessels forming. […] Laser photocoagulation is a well-established common treatment for DR and there are many different potential strategies for delivery of laser treatment that are likely to have different effects. A systematic review of the evidence for laser photocoagulation will provide important information on benefits and harms to guide treatment choices. […] This is the first in a series of planned reviews on laser photocoagulation. Future reviews will compare different photocoagulation techniques.

“We identified a large number of trials of laser photocoagulation of diabetic retinopathy (n = 83) but only five of these studies were eligible for inclusion in the review, i.e. they compared laser photocoagulation with currently available lasers to no (or deferred) treatment. Three studies were conducted in the USA, one study in the UK and one study in Japan. A total of 4786 people (9503 eyes) were included in these studies. The majority of participants in four of these trials were people with proliferative diabetic retinopathy; one trial recruited mainly people with non-proliferative retinopathy.”

“At 12 months there was little difference between eyes that received laser photocoagulation and those allocated to no treatment (or deferred treatment), in terms of loss of 15 or more letters of visual acuity (risk ratio (RR) 0.99, 95% confidence interval (CI) 0.89 to 1.11; 8926 eyes; 2 RCTs, low quality evidence). Longer term follow-up did not show a consistent pattern, but one study found a 20% reduction in risk of loss of 15 or more letters of visual acuity at five years with laser treatment. Treatment with laser reduced the risk of severe visual loss by over 50% at 12 months (RR 0.46, 95% CI 0.24 to 0.86; 9276 eyes; 4 RCTs, moderate quality evidence). There was a beneficial effect on progression of diabetic retinopathy with treated eyes experiencing a 50% reduction in risk of progression of diabetic retinopathy (RR 0.49, 95% CI 0.37 to 0.64; 8331 eyes; 4 RCTs, low quality evidence) and a similar reduction in risk of vitreous haemorrhage (RR 0.56, 95% CI 0.37 to 0.85; 224 eyes; 2 RCTs, low quality evidence).”

“Overall there is not a large amount of evidence from RCTs on the effects of laser photocoagulation compared to no treatment or deferred treatment. The evidence is dominated by two large studies conducted in the US population (DRS 1978; ETDRS 1991). These two studies were generally judged to be at low or unclear risk of bias, with the exception of inevitable unmasking of patients due to differences between intervention and control. […] In current clinical guidelines, e.g. RCOphth 2012, PRP is recommended in high-risk PDR. The recommendation is that “as retinopathy approaches the proliferative stage, laser scatter treatment (PRP) should be increasingly considered to prevent progression to high risk PDR” based on other factors such as patients’ compliance or planned cataract surgery.

These recommendations need to be interpreted while considering the risk of visual loss associated with different levels of severity of DR, as well as the risk of progression. Since PRP reduces the risk of severe visual loss, but not moderate visual loss that is more related to diabetic maculopathy, most ophthalmologists judge that there is little benefit in treating non-proliferative DR at low risk of severe visual damage, as patients would incur the known adverse effects of PRP, which, although mild, include pain and peripheral visual field loss and transient DMO [diabetic macular oedema, US]. […] This review provides evidence that laser photocoagulation is beneficial in treating diabetic retinopathy. […] based on the baseline risk of progression of the disease, and risk of visual loss, the current approach of caution in treating non-proliferative DR with laser would appear to be justified.

By current standards the quality of the evidence is not high, however, the effects on risk of progression and risk of severe visual loss are reasonably large (50% relative risk reduction).”

iv. Immune Recognition of β-Cells: Neoepitopes as Key Players in the Loss of Tolerance.

I should probably warn beforehand that this one is rather technical. It relates reasonably closely to topics covered in the molecular biology book I recently covered here on the blog, and if I had not read that book quite recently I almost certainly would not have been able to read the paper – so the coverage below is more ‘for me’ than ‘for you’. Anyway, some quotes:

“Prior to the onset of type 1 diabetes, there is progressive loss of immune self-tolerance, evidenced by the accumulation of islet autoantibodies and emergence of autoreactive T cells. Continued autoimmune activity leads to the destruction of pancreatic β-cells and loss of insulin secretion. Studies of samples from patients with type 1 diabetes and of murine disease models have generated important insights about genetic and environmental factors that contribute to susceptibility and immune pathways that are important for pathogenesis. However, important unanswered questions remain regarding the events that surround the initial loss of tolerance and subsequent failure of regulatory mechanisms to arrest autoimmunity and preserve functional β-cells. In this Perspective, we discuss various processes that lead to the generation of neoepitopes in pancreatic β-cells, their recognition by autoreactive T cells and antibodies, and potential roles for such responses in the pathology of disease. Emerging evidence supports the relevance of neoepitopes generated through processes that are mechanistically linked with β-cell stress. Together, these observations support a paradigm in which neoepitope generation leads to the activation of pathogenic immune cells that initiate a feed-forward loop that can amplify the antigenic repertoire toward pancreatic β-cell proteins.”

“Enzymatic posttranslational processes that have been implicated in neoepitope generation include acetylation (10), citrullination (11), glycosylation (12), hydroxylation (13), methylation (either protein or DNA methylation) (14), phosphorylation (15), and transglutamination (16). Among these, citrullination and transglutamination are most clearly implicated as processes that generate neoantigens in human disease, but evidence suggests that others also play a role in neoepitope formation […] Citrulline, which is among the most studied PTMs in the context of autoimmunity, is a diagnostic biomarker of rheumatoid arthritis (RA). […] Anticitrulline antibodies are among the earliest immune responses that are diagnostic of RA and often correlate with disease severity (18). We have recently documented the biological consequences of citrulline modifications and autoimmunity that arise from pancreatic β-cell proteins in the development of T1D (19). In particular, citrullinated GAD65 and glucose-regulated protein (GRP78) elicit antibody and T-cell responses in human T1D and in NOD diabetes, respectively (20,21).”

“Carbonylation is an irreversible, iron-catalyzed oxidative modification of the side chains of lysine, arginine, threonine, or proline. Mitochondrial functions are particularly sensitive to carbonyl modification, which also has detrimental effects on other intracellular enzymatic pathways (30). A number of diseases have been linked with altered carbonylation of self-proteins, including Alzheimer and Parkinson diseases and cancer (27). There is some data to support that carbonyl PTM is a mechanism that directs unstable self-proteins into cellular degradation pathways. It is hypothesized that carbonyl PTM [post-translational modification] self-proteins that fail to be properly degraded in pancreatic β-cells are autoantigens that are targeted in T1D. Recently submitted studies have identified several carbonylated pancreatic β-cell neoantigens in human and murine models of T1D (27). Among these neoantigens are chaperone proteins that are required for the appropriate folding and secretion of insulin. These studies imply that although some PTM self-proteins may be direct targets of autoimmunity, others may alter, interrupt, or disturb downstream metabolic pathways in the β-cell. In particular, these studies indicated that upstream PTMs resulted in misfolding and/or metabolic disruption between proinsulin and insulin production, which provides one explanation for recent observations of increased proinsulin-to-insulin ratios in the progression of T1D (31).”

“Significant hypomethylation of DNA has been linked with several classic autoimmune diseases, such as SLE, multiple sclerosis, RA, Addison disease, Graves disease, and mixed connective tissue disease (36). Therefore, there is rationale to consider the possible influence of epigenetic changes on protein expression and immune recognition in T1D. Relevant to T1D, epigenetic modifications occur in pancreatic β-cells during progression of diabetes in NOD mice (37). […] Consequently, DNMTs [DNA methyltransferases] and protein arginine methyltransferases are likely to play a role in the regulation of β-cell differentiation and insulin gene expression, both of which are pathways that are altered in the presence of inflammatory cytokines. […] Eizirik et al. (38) reported that exposure of human islets to proinflammatory cytokines leads to modulation of transcript levels and increases in alternative splicing for a number of putative candidate genes for T1D. Their findings suggest a mechanism through which alternative splicing may lead to the generation of neoantigens and subsequent presentation of novel β-cell epitopes (39).”

“The phenomenon of neoepitope recognition by autoantibodies has been shown to be relevant in a variety of autoimmune diseases. For example, in RA, antibody responses directed against various citrullinated synovial proteins are remarkably disease-specific and routinely used as a diagnostic test in the clinic (18). Appearance of the first anticitrullinated protein antibodies occurs years prior to disease onset, and accumulation of additional autoantibody specificities correlates closely with the imminent onset of clinical arthritis (44). There is analogous evidence supporting a hierarchical emergence of autoantibody specificities and multiple waves of autoimmune damage in T1D (3,45). Substantial data from longitudinal studies indicate that insulin and GAD65 autoantibodies appear at the earliest time points during progression, followed by additional antibody specificities directed at IA-2 and ZnT8.”

“Multiple autoimmune diseases often cluster within families (or even within one person), implying shared etiology. Consequently, relevant insights can be gleaned from studies of more traditional autoantibody-mediated systemic autoimmune diseases, such as SLE and RA, where inter- and intramolecular epitope spreading are clearly paradigms for disease progression (47). In general, early autoimmunity is marked by restricted B- and T-cell epitopes, followed by an expanded repertoire coinciding with the onset of more significant tissue pathology […] Akin to T1D, other autoimmune syndromes tend to cluster to subcellular tissues or tissue components that share biological or biochemical properties. For example, SLE is marked by autoimmunity to nucleic acid–bearing macromolecules […] Unlike other systemic autoantibody-mediated diseases, such as RA and SLE, there is no clear evidence that T1D-related autoantibodies play a pathogenic role. Autoantibodies against citrulline-containing neoepitopes of proteoglycan are thought to trigger or intensify arthritis by forming immune complexes with this autoantigen in the joints of RA patients with anticitrullinated protein antibodies. In a similar manner, autoantibodies and immune complexes are hallmarks of tissue pathology in SLE. Therefore, it remains likely that autoantibodies or the B cells that produce them contribute to the pathogenesis of T1D.”

“In summation, the existing literature demonstrates that oxidation, citrullination, and deamidation can have a direct impact on T-cell recognition that contributes to loss of tolerance.”

“There is a general consensus that the pathogenesis of T1D is initiated when individuals who possess a high level of genetic risk (e.g., susceptible HLA, insulin VNTR, PTPN22 genotypes) are exposed to environmental factors (e.g., enteroviruses, diet, microbiome) that precipitate a loss of tolerance that manifests through the appearance of insulin and/or GAD autoantibodies. This early autoimmunity is followed by epitope spreading, increasing both the number of antigenic targets and the diversity of epitopes within these targets. These processes create a feed-forward loop of antigen release that induces increasing inflammation and increasing numbers of distinct T-cell specificities (64). The formation and recognition of neoepitopes represents one mechanism through which epitope spreading can occur. […] mechanisms related to neoepitope formation and recognition can be envisioned at multiple stages of T1D pathogenesis. At the level of genetic risk, susceptible individuals may exhibit a genetically driven impairment of their stress response, increasing the likelihood of neoepitope formation. At the level of environmental exposure, many of the insults that are thought to initiate T1D are known to cause neoepitope formation. During the window of β-cell destruction that encompasses early autoimmunity through dysglycemia and diagnosis of T1D, it remains unclear when neoepitope responses appear in relation to “classic” responses to insulin and GAD65. However, by the time of onset, neoepitope responses are clearly present and remain as part of the ongoing autoimmunity that is present during established T1D. […] The ultimate product of both direct and indirect generation of neoepitopes is an accumulation of robust and diverse autoimmune B- and T-cell responses, accelerating the pathological destruction of pancreatic islets. Clearly, the emergence of sophisticated methods of tissue and single-cell proteomics will identify novel neoepitopes, including some that occur at or near the earliest stages of disease. A detailed mechanistic understanding of the pathways that lead to specific classes of neoepitopes will certainly suggest targets of therapeutic manipulation and intervention that would be hoped to impede the progression of disease.”

v. Diabetes technology: improving care, improving patient‐reported outcomes and preventing complications in young people with Type 1 diabetes.

“With the evolution of diabetes technology, those living with Type 1 diabetes are given a wider arsenal of tools with which to achieve glycaemic control and improve patient‐reported outcomes. Furthermore, the use of these technologies may help reduce the risk of acute complications, such as severe hypoglycaemia and diabetic ketoacidosis, as well as long‐term macro‐ and microvascular complications. […] Unfortunately, diabetes goals are often unmet and people with Type 1 diabetes too frequently experience acute and long‐term complications of this condition, in addition to often having less than ideal psychosocial outcomes. Increasing realization of the importance of patient‐reported outcomes is leading to diabetes care delivery becoming more patient‐centred. […] Optimal diabetes management requires both the medical and psychosocial needs of people with Type 1 diabetes and their caregivers to be addressed. […] The aim of this paper was to demonstrate how, by incorporating technology into diabetes care, we can increase patient‐centered care, reduce acute and chronic diabetes complications, and improve clinical outcomes and quality of life.”

[The paper’s Table 2 on page 422 of the pdf-version is awesome; it includes a lot of different HbA1c estimates from various patient populations all across the world. The numbers included in the table are slightly less awesome, as most populations only achieve suboptimal metabolic control.]

“The risks of all forms of complications increase with higher HbA1c concentration, increasing diabetes duration, hypertension, presence of other microvascular complications, obesity, insulin resistance, hyperlipidaemia and smoking 6. Furthermore, the Diabetes Research in Children Network (DirecNet) study has shown that individuals with Type 1 diabetes have white matter differences in the brain and cognitive differences compared with individuals without Type 1 diabetes. These studies showed that the degree of structural differences in the brain was related to the degree of chronic hyperglycaemia, hypoglycaemia and glucose variability 7. […] In addition to long‐term complications, people with Type 1 diabetes are also at risk of acute complications. Severe hypoglycaemia, a hypoglycaemic event resulting in altered/loss of consciousness or seizures, is a serious complication of insulin therapy. If unnoticed and untreated, severe hypoglycaemia can result in death. […] The incidence of diabetic ketoacidosis, a life‐threatening consequence of diabetes, remains unacceptably high in children with established diabetes (Table 5). The annual incidence of ketoacidosis was 5% in the Prospective Diabetes Follow‐Up Registry (DPV) in Germany and Austria, 6.4% in the National Paediatric Diabetes Audit (NPDA), and 7.1% in the Type 1 Diabetes Exchange (T1DX) registry 10. Psychosocial factors including female gender, non‐white race, lower socio‐economic status, and elevated HbA1c all contribute to increased risk of diabetic ketoacidosis 11.”

“Depression is more common in young people with Type 1 diabetes than in young people without a chronic disease […] Depression can make it more difficult to engage in diabetes self‐management behaviours, and as a result, contributes to suboptimal glycaemic control and lower rates of self‐monitoring of blood glucose (SMBG) in young people with Type 1 diabetes 15. […] Unlike depression, diabetes distress is not a clinical diagnosis but rather emotional distress that comes from the burden of living with and managing diabetes 16. A recent systematic review found that roughly one‐third of young people with Type 1 diabetes (age 10–20 years) have some level of diabetes distress and that diabetes distress was consistently associated with higher HbA1c and worse self‐management 17. […] Eating and weight‐related comorbidities also exist for individuals with Type 1 diabetes. There is a higher incidence of obesity in individuals with Type 1 diabetes on intensive insulin therapy. […] Adolescent girls and young adult women with Type 1 diabetes are more likely to omit insulin for weight loss and have disordered eating habits 20.”

“In addition to screening for and treating depression and diabetes distress to improve overall diabetes management, it is equally important to assess quality of life as well as positive coping factors that may also influence self‐management and well‐being. For example, lower scores on the PROMIS® measure of global health, which assesses social relationships as well as physical and mental well‐being, have been linked to higher depression scores and less frequent blood glucose checks 13. Furthermore, coping strategies such as problem‐solving, emotional expression, and acceptance have been linked to lower HbA1c and enhanced quality of life 21.”

“Self‐monitoring of blood glucose via multiple finger sticks for capillary blood samples per day has been the ‘gold standard’ for glucose monitoring, but SMBG only provides glucose measurements as snapshots in time. Still, the majority of young people with Type 1 diabetes use SMBG as their main method to assess glycaemia. Data from the T1DX registry suggest that an increased frequency of SMBG is associated with lower HbA1c levels 23. The development of continuous glucose monitoring (CGM) provides more values, along with the rate and direction of glucose changes. […] With continued use, CGM has been shown to decrease the incidence of hypoglycaemia and HbA1c levels 26. […] Insulin can be administered via multiple daily injections or continuous subcutaneous insulin infusion (insulin pumps). Over the last 30 years, insulin pumps have become smaller with more features, making them a valuable alternative to multiple daily injections. Insulin pump use in various registries ranges from as low as 5.9% among paediatric patients in the New Zealand national register 28 to as high as 74% in the German/Austrian DPV in children aged <6 years (Table 2) 29. Recent data suggest that consistent use of insulin pumps can result in improved HbA1c values and decreased incidence of severe hypoglycaemia 30, 31. Insulin pumps have been associated with improved quality of life 32. The data on insulin pumps and diabetic ketoacidosis are less clear.”

“The majority of Type 1 diabetes management is carried out outside the clinical setting and in individuals’ daily lives. People with Type 1 diabetes must make complex treatment decisions multiple times daily; thus, diabetes self‐management skills are central to optimal diabetes management. Unfortunately, many people with Type 1 diabetes and their caregivers are not sufficiently familiar with the necessary diabetes self‐management skills. […] Parents are often the first who learn these skills. As children become older, they start receiving more independence over their diabetes care; however, the transition of responsibilities from caregiver to child is often unstructured and haphazard. It is important to ensure that both individuals with diabetes and their caregivers have adequate self‐management skills throughout the diabetes journey.”

“In the developed world (nations with the highest gross domestic product), 87% of the population has access to the internet and 68% report using a smartphone 39. Even in developing countries, 54% of people use the internet and 37% own smartphones 39. In many areas, smartphones are the primary source of internet access and are readily available. […] There are >1000 apps for diabetes on the Apple App Store and the Google Play store. Many of these apps have focused on nutrition, blood glucose logging, and insulin dosing. Given the prevalence of smartphones and the interest in having diabetes apps handy, there is the potential for using a smartphone to deliver education and decision support tools. […] The new psychosocial position statement from the ADA recommends routine psychosocial screening in clinic. These recommendations include screening for: 1) depressive symptoms annually, at diagnosis, or with changes in medical status; 2) anxiety and worry about hypoglycaemia, complications and other diabetes‐specific worries; 3) disordered eating and insulin omission for purposes of weight control; 4) and diabetes distress in children as young as 7 or 8 years old 16. Implementation of in‐clinic screening for depression in young people with Type 1 diabetes has already been shown to be feasible, acceptable and able to identify individuals in need of treatment who may otherwise have gone unnoticed for a longer period of time which would have been having a detrimental impact on physical health and quality of life 13, 40. These programmes typically use tablets […] to administer surveys to streamline the screening process and automatically score measures 13, 40. This automation allows psychologists and social workers to focus on care delivery rather than screening. In addition to depression screening, automated tablet‐based screening for parental depression, distress and anxiety; problem‐solving skills; and resilience/positive coping factors can help the care team understand other psychosocial barriers to care. This approach allows the development of patient‐ and caregiver‐centred interventions to improve these barriers, thereby improving clinical outcomes and complication rates.”

“With the advent of electronic health records, registries and downloadable medical devices, people with Type 1 diabetes have troves of data that can be analysed to provide insights on an individual and population level. Big data analytics for diabetes are still in the early stages, but present great potential for improving diabetes care. IBM Watson Health has partnered with Medtronic to deliver personalized insights to individuals with diabetes based on device data 48. Numerous other systems […] allow people with Type 1 diabetes to access their data, share their data with the healthcare team, and share de‐identified data with the research community. Data analysis and insights such as this can form the basis for the delivery of personalized digital health coaching. For example, historical patterns can be analysed to predict activity and lead to pro‐active insulin adjustment to prevent hypoglycaemia. […] Improvements to diabetes care delivery can occur at both the population level and at the individual level using insights from big data analytics.”

vi. Route to improving Type 1 diabetes mellitus glycaemic outcomes: real‐world evidence taken from the National Diabetes Audit.

“While control of blood glucose levels reduces the risk of diabetes complications, it can be very difficult for people to achieve. There has been no significant improvement in average glycaemic control among people with Type 1 diabetes for at least the last 10 years in many European countries 6.

The National Diabetes Audit (NDA) in England and Wales has shown relatively little change in the levels of HbA1c being achieved in people with Type 1 diabetes over the last 10 years, with >70% of HbA1c results each year being >58 mmol/mol (7.5%) 7.

Data for general practices in England are published by the NDA. NHS Digital publishes annual prescribing data, including British National Formulary (BNF) codes 7, 8. Together, these data provide an opportunity to investigate whether there are systematic associations between HbA1c levels in people with Type 1 diabetes and practice‐level population characteristics, diabetes service levels and use of medication.”

“The Quality and Outcomes Framework (a payment system for general practice performance) provided a baseline list of all general practices in England for each year, the practice list size and number of people (both with Type 1 and Type 2 diabetes) on their diabetes register. General practice‐level data of participating practices were taken from the NDA 2013–2014, 2014–2015 and 2015–2016 (5455 practices in the last year). They include Type 1 diabetes population characteristics, routine review checks and the proportions of people achieving target glycaemic control and/or being at higher glycaemic risk.

Diabetes medication data for all people with diabetes were taken from the general practice prescribing in primary care data for 2013–2014, 2014–2015 and 2015–2016, including insulin and blood glucose monitoring (BGM) […] A total of 20 indicators were created that covered the epidemiological, service, medication, technological, costs and outcomes performance for each practice and year. The variance in these indicators over the 4‐year period and among general practices was also considered. […] The values of the indicators found to be in the 90th percentile were used to quantify the potential of highest performing general practices. […] In total 13 085 practice‐years of data were analysed, covering 437 000 patient‐years of management.”

“There was significant variation among the participating general practices (Fig. 3) in the proportion of people achieving the glycaemic control target [percentage of people with HbA1c ≤58 mmol/mol (7.5%)] and in the proportion at high glycaemic risk [percentage of people with HbA1c >86 mmol/mol (10%)]. […] Our analysis showed that, at general practice level, the median target glycaemic control attainment was 30%, while the 10th percentile was 16%, and the 90th percentile was 45%. The corresponding median for the high glycaemic risk percentage was 16%, while the 10th percentile (corresponding to the best performing practices) was 6% and the 90th percentile (greatest proportion of Type 1 diabetes at high glycaemic risk) was 28%. Practices in the deciles for both lowest target glycaemic control and highest high glycaemic risk had 49% of the results in the 58–86 mmol/mol range. […] A very wide variation was found in the percentage of insulin for presumed pump use (deduced from prescriptions of fast‐acting vial insulin), with a median of 3.8% at general practice level. The 10th percentile was 0% and the 90th percentile was 255% of the median inferred pump usage.”

“[O]ur findings suggest that if all practices optimized service and therapies to the levels achieved by the top decile then 16 100 (7%) more people with Type 1 diabetes would achieve the glycaemic control target of 58 mmol/mol (7.5%) and 11 500 (5%) fewer people would have HbA1c >86 mmol/mol (10%). Put another way, if the results for all practices were at the top decile level, 36% vs 29% of people with Type 1 diabetes would achieve the glycaemic control target of HbA1c ≤ 58 mmol/mol (7.5%), and as few as 10% could have HbA1c levels > 86 mmol/mol (10%) compared with 15% currently (Fig. 6). This has significant implications for the potential to improve the longer‐term outcomes of people with Type 1 diabetes, given the close link between glycaemia and complications in such individuals 5, 10, 11.”
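
The quoted projections are internally consistent, which is easy to verify; the implied size of the audited Type 1 population below is my own back-calculation, not a figure given in the paper:

```python
# Consistency check of the quoted improvement estimates.
more_at_target = 16_100      # quoted: 7% more people reaching HbA1c <= 58 mmol/mol (7.5%)
fewer_high_risk = 11_500     # quoted: 5% fewer people with HbA1c > 86 mmol/mol (10%)

implied_total_1 = more_at_target / 0.07     # ~230,000 people (back-calculated)
implied_total_2 = fewer_high_risk / 0.05    # ~230,000 people (back-calculated)
print(f"{implied_total_1:,.0f} vs {implied_total_2:,.0f}")

# Both agree with the quoted percentage shifts: 29% -> 36% at target is a
# 7-percentage-point change, and 15% -> 10% above 86 mmol/mol is a 5-point change.
```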

“We found that the significant variation among the participating general practices (Fig. 2) in terms of the proportion of people with HbA1c ≤58 mmol/mol (7.5%) was only partially related to a lower proportion of people with HbA1c >86 mmol/mol (10%). There was only a weak relationship between level of target glycaemia achieved and avoidance of very suboptimal glycaemia. The overall r2 value was 0.6. This suggests that there is a degree of independence between these outcomes, so that success factors at a general practice level differ for people achieving optimal glycaemia vs those factors affecting avoiding a level of at risk glycaemia.”

May 30, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Immunology, Medicine, Molecular biology, Ophthalmology, Studies

Molecular biology (II)

Below I have added some more quotes and links related to the book’s coverage:

“[P]roteins are the most abundant molecules in the body except for water. […] Proteins make up half the dry weight of a cell whereas DNA and RNA make up only 3 per cent and 20 per cent respectively. […] The approximately 20,000 protein-coding genes in the human genome can, by alternative splicing, multiple translation starts, and post-translational modifications, produce over 1,000,000 different proteins, collectively called ‘the proteome‘. It is the size of the proteome and not the genome that defines the complexity of an organism. […] For simple organisms, such as viruses, all the proteins coded by their genome can be deduced from its sequence and these comprise the viral proteome. However for higher organisms the complete proteome is far larger than the genome […] For these organisms not all the proteins coded by the genome are found in any one tissue at any one time and therefore a partial proteome is usually studied. What are of interest are those proteins that are expressed in specific cell types under defined conditions.”
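
The jump from ~20,000 genes to >1,000,000 proteins is multiplicative, which a toy calculation makes explicit; the per-gene multipliers below are invented purely for illustration (the book only supplies the start and end points):

```python
# Illustrative only: assumed average multipliers showing how alternative splicing,
# multiple translation starts, and post-translational modifications can expand
# ~20,000 genes into a proteome of over a million distinct protein forms.
protein_coding_genes = 20_000           # quoted figure for the human genome
splice_variants_per_gene = 5            # assumed average (alternative splicing)
translation_starts_per_transcript = 2   # assumed average (multiple translation starts)
ptm_states_per_protein = 5              # assumed average (post-translational modification)

proteome_size = (protein_coding_genes
                 * splice_variants_per_gene
                 * translation_starts_per_transcript
                 * ptm_states_per_protein)
print(f"{proteome_size:,} distinct protein forms")  # 1,000,000
```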

“Enzymes are proteins that catalyze or alter the rate of chemical reactions […] Enzymes can speed up reactions […] but they can also slow some reactions down. Proteins play a number of other critical roles. They are involved in maintaining cell shape and providing structural support to connective tissues like cartilage and bone. Specialized proteins such as actin and myosin are required [for] muscular movement. Other proteins act as ‘messengers’ relaying signals to regulate and coordinate various cell processes, e.g. the hormone insulin. Yet another class of protein is the antibodies, produced in response to foreign agents such as bacteria, fungi, and viruses.”

“Proteins are composed of amino acids. Amino acids are organic compounds with […] an amino group […] and a carboxyl group […] In addition, amino acids carry various side chains that give them their individual functions. The twenty-two amino acids found in proteins are called proteinogenic […] but other amino acids exist that are non-protein functioning. […] A peptide bond is formed between two amino acids by the removal of a water molecule. […] each individual unit in a peptide or protein is known as an amino acid residue. […] Chains of less than 50-70 amino acid residues are known as peptides or polypeptides and >50-70 as proteins, although many proteins are composed of more than one polypeptide chain. […] Proteins are macromolecules consisting of one or more strings of amino acids folded into highly specific 3D-structures. Each amino acid has a different size and carries a different side group. It is the nature of the different side groups that facilitates the correct folding of a polypeptide chain into a functional tertiary protein structure.”

“Atoms scatter the waves of X-rays mainly through their electrons, thus forming secondary or reflected waves. The pattern of X-rays diffracted by the atoms in the protein can be captured on a photographic plate or an image sensor such as a charge coupled device placed behind the crystal. The pattern and relative intensity of the spots on the diffraction image are then used to calculate the arrangement of atoms in the original protein. Complex data processing is required to convert the series of 2D diffraction or scatter patterns into a 3D image of the protein. […] The continued success and significance of this technique for molecular biology is witnessed by the fact that almost 100,000 structures of biological molecules have been determined this way, of which most are proteins.”

“The number of proteins in higher organisms far exceeds the number of known coding genes. The fact that many proteins carry out multiple functions but in a regulated manner is one way a complex proteome arises without increasing the number of genes. Proteins that performed a single role in the ancestral organism have acquired extra and often disparate functions through evolution. […] The active site of an enzyme employed in catalysis is only a small part of the protein, leaving spare capacity for acquiring a second function. […] The glycolytic pathway is involved in the breakdown of sugars such as glucose to release energy. Many of the highly conserved and ancient enzymes from this pathway have developed secondary or ‘moonlighting’ functions. Proteins often change their location in the cell in order to perform a ‘second job’. […] The limited size of the genome may not be the only evolutionary pressure for proteins to moonlight. Combining two functions in one protein can have the advantage of coordinating multiple activities in a cell, enabling it to respond quickly to changes in the environment without the need for lengthy transcription and translational processes.”

“Post-translational modifications (PTMs) […] is [a] process that can modify the role of a protein by addition of chemical groups to amino acids in the peptide chain after translation. Addition of phosphate groups (phosphorylation), for example, is a common mechanism for activating or deactivating an enzyme. Other common PTMs include addition of acetyl groups (acetylation), glucose (glucosylation), or methyl groups (methylation). […] Some additions are reversible, facilitating the switching between active and inactive states, and others are irreversible such as marking a protein for destruction by ubiquitin. [The difference between reversible and irreversible modifications can be quite important in pharmacology, and if you’re curious to know more about these topics Coleman’s drug metabolism text provides great coverage of related topics – US.] Diseases caused by malfunction of these modifications highlight the importance of PTMs. […] in diabetes [h]igh blood glucose leads to unwanted glucosylation of proteins. At the high glucose concentrations associated with diabetes, an unwanted irreversible chemical reaction binds the glucose to amino acid residues such as lysines exposed on the protein surface. The glucosylated proteins then behave badly, cross-linking themselves to the extracellular matrix. This is particularly dangerous in the kidney where it decreases function and can lead to renal failure.”

“Twenty thousand protein-coding genes make up the human genome but for any given cell only about half of these are expressed. […] Many genes get switched off during differentiation and a major mechanism for this is epigenetics. […] an epigenetic trait […] is ‘a stably heritable phenotype resulting from changes in the chromosome without alterations in the DNA sequence’. Epigenetics involves the chemical alteration of DNA by methyl or other small molecular groups to affect the accessibility of a gene by the transcription machinery […] Epigenetics can […] act on gene expression without affecting the stability of the genetic code by modifying the DNA, the histones in chromatin, or a whole chromosome. […] Epigenetic signatures are not only passed on to somatic daughter cells but they can also be transferred through the germline to the offspring. […] At first the evidence appeared circumstantial but more recent studies have provided direct proof of epigenetic changes involving gene methylation being inherited. Rodent models have provided mechanistic evidence. […] the importance of epigenetics in development is highlighted by the fact that low dietary folate, a nutrient essential for methylation, has been linked to higher risk of birth defects in the offspring.” […on the other hand, well…]

“The cell cycle is divided into phases […] Transition from G1 into S phase commits the cell to division and is therefore a very tightly controlled restriction point. Withdrawal of growth factors, insufficient nucleotides, or energy to complete DNA replication, or even a damaged template DNA, would compromise the process. Problems are therefore detected and the cell cycle halted by cell cycle inhibitors before the cell has committed to DNA duplication. […] The cell cycle inhibitors inactivate the kinases that promote transition through the phases, thus halting the cell cycle. […] The cell cycle can also be paused in S phase to allow time for DNA repairs to be carried out before cell division. The consequences of uncontrolled cell division are so catastrophic that evolution has provided complex checks and balances to maintain fidelity. The price of failure is apoptosis […] 50 to 70 billion cells die every day in a human adult by the controlled molecular process of apoptosis.”

“There are many diseases that arise because a particular protein is either absent or a faulty protein is produced. Administering a correct version of that protein can treat these patients. The first commercially available recombinant protein to be produced for medical use was human insulin to treat diabetes mellitus. […] (FDA) approved the recombinant insulin for clinical use in 1982. Since then over 300 protein-based recombinant pharmaceuticals have been licensed by the FDA and the European Medicines Agency (EMA) […], and many more are undergoing clinical trials. Therapeutic proteins can be produced in bacterial cells but more often mammalian cells such as the Chinese hamster ovary cell line and human fibroblasts are used as these hosts are better able to produce fully functional human protein. However, using mammalian cells is extremely expensive and an alternative is to use live animals or plants. This is called molecular pharming and is an innovative way of producing large amounts of protein relatively cheaply. […] In plant pharming, tobacco, rice, maize, potato, carrots, and tomatoes have all been used to produce therapeutic proteins. […] [One] class of proteins that can be engineered using gene-cloning technology is therapeutic antibodies. […] Therapeutic antibodies are designed to be monoclonal, that is, they are engineered so that they are specific for a particular antigen to which they bind, to block the antigen’s harmful effects. […] Monoclonal antibodies are at the forefront of biological therapeutics as they are highly specific and tend not to induce major side effects.”

“In gene therapy the aim is to restore the function of a faulty gene by introducing a correct version of that gene. […] a cloned gene is transferred into the cells of a patient. Once inside the cell, the protein encoded by the gene is produced and the defect is corrected. […] there are major hurdles to be overcome for gene therapy to be effective. One is that the gene construct has to be delivered to the diseased cells or tissues. This can often be difficult […] Mammalian cells […] have complex mechanisms that have evolved to prevent unwanted material such as foreign DNA getting in. Second, introduction of any genetic construct is likely to trigger the patient’s immune response, which can be fatal […] once delivered, expression of the gene product has to be sustained to be effective. One approach to delivering genes to the cells is to use genetically engineered viruses constructed so that most of the viral genome is deleted […] Once inside the cell, some viral vectors such as the retroviruses integrate into the host genome […]. This is an advantage as it provides long-lasting expression of the gene product. However, it also poses a safety risk, as there is little control over where the viral vector will insert into the patient’s genome. If the insertion occurs within a coding gene, this may inactivate gene function. If it integrates close to transcriptional start sites, where promoters and enhancer sequences are located, inappropriate gene expression can occur. This was observed in early gene therapy trials [where some patients who got this type of treatment developed cancer as a result of it. A few more details here – US] […] Adeno-associated viruses (AAVs) […] are often used in gene therapy applications as they are non-infectious, induce only a minimal immune response, and can be engineered to integrate into the host genome […] However, AAVs can only carry a small gene insert and so are limited to use with genes that are of a small size. […] An alternative delivery system to viruses is to package the DNA into liposomes that are then taken up by the cells. This is safer than using viruses as liposomes do not integrate into the host genome and are not very immunogenic. However, liposome uptake by the cells can be less efficient, resulting in lower expression of the gene.”

Links:

One gene–one enzyme hypothesis.
Molecular chaperone.
Protein turnover.
Isoelectric point.
Gel electrophoresis. Polyacrylamide.
Two-dimensional gel electrophoresis.
Mass spectrometry.
Proteomics.
Peptide mass fingerprinting.
Worldwide Protein Data Bank.
Nuclear magnetic resonance spectroscopy of proteins.
Immunoglobulins. Epitope.
Western blot.
Immunohistochemistry.
Crystallin. β-catenin.
Protein isoform.
Prion.
Gene expression. Transcriptional regulation. Chromatin. Transcription factor. Gene silencing. Histone. NF-κB. Chromatin immunoprecipitation.
The agouti mouse model.
X-inactive specific transcript (Xist).
Cell cycle. Cyclin. Cyclin-dependent kinase.
Retinoblastoma protein pRb.
Cytochrome c. Caspase. Bcl-2 family. Bcl-2-associated X protein.
Hybridoma technology. Muromonab-CD3.
Recombinant vaccines and the development of new vaccine strategies.
Knockout mouse.
Adenovirus Vectors for Gene Therapy, Vaccination and Cancer Gene Therapy.
Genetically modified food. Bacillus thuringiensis. Golden rice.

 

May 29, 2018 Posted by | Biology, Books, Chemistry, Diabetes, Engineering, Genetics, Immunology, Medicine, Molecular biology, Pharmacology

Alcohol and Aging (II)

I gave the book 3 stars on goodreads.

As is usual for publications of this nature, the book includes many chapters that cover similar topics and so the coverage can get a bit repetitive if you’re reading it from cover to cover the way I did; most of the various chapter authors obviously didn’t read the other contributions included in the book, and as each chapter is meant to stand on its own you end up with a lot of chapter introductions which cover very similar topics. If you can disregard such aspects it’s a decent book, which covers a wide variety of topics.

Below I have added some observations from some of the chapters of the book which I did not cover in my first post.

“It is widely accepted that consuming heavy amounts of alcohol and binge drinking are detrimental to the brain. Animal studies that have examined the anatomical changes that occur to the brain as a consequence of consuming alcohol indicate that heavy alcohol consumption and binge drinking leads to the death of existing neurons [10, 11] and prevents production of new neurons [12, 13]. […] While animal studies indicate that consuming even moderate amounts of alcohol is detrimental to the brain, the evidence from epidemiological studies is less clear. […] Epidemiological studies that have examined the relationship between late life alcohol consumption and cognition have frequently reported that older adults who consume light to moderate amounts of alcohol are less likely to develop dementia and have higher cognitive functioning compared to older adults who do not consume alcohol. […] In a meta-analysis of 15 prospective cohort studies, consuming light to moderate amounts of alcohol was associated with significantly lower relative risk (RR) for Alzheimer’s disease (RR=0.72, 95% CI=0.61–0.86), vascular dementia (RR=0.75, 95% CI=0.57–0.98), and any type of dementia (RR=0.74, 95% CI=0.61–0.91), but not cognitive decline (RR=0.28, 95% CI=0.03–2.83) [31]. These findings are consistent with a previous meta-analysis by Peters et al. [33] in which light to moderate alcohol consumption was associated with a decreased risk for dementia (RR=0.63, 95% CI=0.53–0.75) and Alzheimer’s disease (RR=0.57, 95% CI=0.44–0.74), but not vascular dementia (RR=0.82, 95% CI=0.50–1.35) or cognitive decline (RR=0.89, 95% CI=0.67–1.17). […] Mild cognitive impairment (MCI) has been used to describe the prodromal stage of Alzheimer’s disease […]. There is no strong evidence to suggest that consuming alcohol is protective against MCI [39, 40] and several studies have reported non-significant findings [41–43].”
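
The pooled relative risks quoted above are the product of standard inverse-variance meta-analysis: each study’s RR is log-transformed, its standard error is recovered from the reported confidence interval, each study is weighted by the inverse of its variance, and the weighted average is transformed back to the RR scale. The chapter doesn’t show the mechanics, but a minimal Python sketch (with made-up study results, not the ones cited above) looks like this:

# Minimal sketch of fixed-effect, inverse-variance pooling of relative risks.
# The study-level (RR, lower CI, upper CI) triples below are invented.
import math

studies = [(0.85, 0.70, 1.03), (0.75, 0.58, 0.97), (0.90, 0.65, 1.25)]

weighted_sum, total_weight = 0.0, 0.0
for rr, lo, hi in studies:
    log_rr = math.log(rr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
    w = 1 / se**2                                    # inverse-variance weight
    weighted_sum += w * log_rr
    total_weight += w

pooled_log_rr = weighted_sum / total_weight
pooled_se = math.sqrt(1 / total_weight)
print("pooled RR = %.2f (95%% CI %.2f-%.2f)" % (
    math.exp(pooled_log_rr),
    math.exp(pooled_log_rr - 1.96 * pooled_se),
    math.exp(pooled_log_rr + 1.96 * pooled_se),
))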

“The majority of research on the relationship between alcohol consumption and cognitive outcomes has focused on the amount of alcohol consumed during old age, but there is a growing body of research that has examined the relationship between alcohol consumption during middle age and cognitive outcomes several years or decades later. The evidence from this area of research is mixed with some studies not detecting a significant relationship [17, 58, 59], while others have reported that light to moderate alcohol consumption is associated with preserved cognition [60] and decreased risk for cognitive impairment [31, 61, 62]. […] Several epidemiological studies have reported that light to moderate alcohol consumption is associated with a decreased risk for stroke, diabetes, and heart disease [36, 84, 85]. Similar to the U-shaped relationship between alcohol consumption and dementia, heavy alcohol consumption has been associated with poor health [86, 87]. The decreased risk for several metabolic and vascular health conditions for alcohol consumers has been attributed to antioxidants [54], greater concentrations of high-density lipoprotein cholesterol in the bloodstream [88], and reduced blood clot formation [89]. Stroke, diabetes, heart disease, and related conditions have all been associated with lower cognitive functioning during old age [90, 91]. The reduced prevalence of metabolic and vascular health conditions among light to moderate alcohol consumers may contribute to the decreased risk for dementia and cognitive decline for older adults who consume alcohol. A limitation of the hypothesis that the reduced risk for dementia among light and moderate alcohol consumers is conferred through the reduced prevalence of adverse health conditions associated with dementia is the possibility that this relationship is confounded by reverse causality. Alcohol consumption decreases with advancing age and adults may reduce their alcohol consumption in response to the onset of adverse health conditions […] the higher prevalence of dementia and lower cognitive functioning among abstainers may be due in part to their worse health rather than their alcohol consumption.”

“A limitation of large cohort studies is that subjects who choose not to participate or are unable to participate are often less healthy than those who do participate. Non-response bias becomes more pronounced with age because only subjects who have survived to old age and are healthy enough to participate are observed. Studies on alcohol consumption and cognition are sensitive to non-response bias because light and moderate drinkers who are not healthy enough to participate in the study will not be observed. Adults who survive to old age despite consuming very high amounts of alcohol represent an even more select segment of the general population because they may have genetic, behavioral, health, social, or other factors that protect them against the negative effects of heavy alcohol consumption. As a result, the analytic sample of epidemiological studies is more likely to be comprised of “healthy” drinkers, which biases results in favor of finding a positive effect of light to moderate alcohol consumption for cognition and health in general. […] The incidence of Alzheimer’s disease doubles every 5 years after 65 years of age [94] and nearly 40% of older adults aged 85 and over are diagnosed with Alzheimer’s disease [7]. The relatively old age of onset for most dementia cases means the observed protective effect of light to moderate alcohol consumption for dementia may be due to alcohol consumers being more likely to die or drop out of a study as a result of their alcohol consumption before they develop dementia. This bias may be especially strong for heavy alcohol consumers. Not properly accounting for death as a competing outcome has been observed to artificially increase the risk of dementia among older adults with diabetes [95] and the effect that death and other competing outcomes may have on the relationship between alcohol consumption and dementia risk is unclear. […] The majority of epidemiological studies that have studied the relationship between alcohol consumption and cognition treat abstainers as the reference category. This can be problematic because often times the abstainer or non-drinking category includes older adults who stopped consuming alcohol because of poor health […] Not differentiating former alcohol consumers from lifelong abstainers has been found to explain some but not all of the benefit of alcohol consumption for preventing mortality from cardiovascular causes [96].”

“It is common for people to engage in other behaviors while consuming alcohol. This complicates the relationship between alcohol consumption and cognition because many of the behaviors associated with alcohol consumption are positively and negatively associated with cognitive functioning. For example, alcohol consumers are more likely to smoke than non-drinkers [104] and smoking has been associated with an increased risk for dementia and cognitive decline [105]. […] The relationship between alcohol consumption and cognition may also differ between people with or without a history of mental illness. Depression reduces the volume of the hippocampus [106] and there is growing evidence that depression plays an important role in dementia. Depression during middle age is recognized as a risk factor for dementia [107], and high depressive symptoms during old age may be an early symptom of dementia [108]. Middle-aged adults with depression or other mental illness who self-medicate with alcohol may be at especially high risk for dementia later in life because of synergistic effects that alcohol and depression have on the brain. […] While current evidence from epidemiological studies indicates that consuming light to moderate amounts of alcohol, in particular wine, does not negatively affect cognition and in many cases is associated with cognitive health, adults who do not consume alcohol should not be encouraged to increase their alcohol consumption until further research clarifies these relationships. Inconsistencies between studies on how alcohol consumption categories are defined make it difficult to determine the “optimal” amount of alcohol consumption to prevent dementia. It is likely that the optimal amount of alcohol varies according to a person’s gender, as well as genetic, physiological, behavioral, and health characteristics, making the issue extremely complex.”

“Falls are the leading cause of both fatal and nonfatal injuries among older adults, with one in three older adults falling each year, and 20–30% of people who fall suffer moderate to severe injuries such as lacerations, hip fractures, and head traumas. In fact, falls are the foremost cause of both fractures and traumatic brain injury (TBI) among older adults […] In 2013, 2.5 million nonfatal falls among older adults were treated in ED and more than 734,000 of these patients were hospitalized. […] Our analysis of the 2012 Nationwide Emergency Department Sample (NEDS) data set shows that fall-related injury was a presenting problem among 12% of all ED visits by those aged 65+, with significant differences among age groups: 9% among the 65–74 age group, 12% among the 75–84 age group, and 18% among the 85+ age group [4]. […] heavy alcohol use predicts fractures. For example, among those 55+ years old in a health survey in England, men who consumed more than 8 units of alcohol and women who consumed more than 6 units on their heaviest drinking day in the past week had significantly increased odds of fractures (OR=1.65, 95% CI=1.37–1.98 for men and OR=2.07, 95% CI=1.28–3.35 for women) [63]. […] The 2008–2009 Canadian Community Health Survey-Healthy Aging also showed that consumption of at least one alcoholic drink per week increased the odds of falling by 40% among those 65+ years [57].”
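
Odds ratios of the kind quoted above come from simple 2×2 tables of exposure (heavy drinking on the heaviest drinking day, say) against outcome (fracture), with the confidence interval computed on the log-odds scale. This is not from the book, but a minimal sketch with invented counts shows the arithmetic:

# Minimal sketch (invented counts, not data from the surveys quoted above) of
# how an odds ratio and its 95% CI are computed from a 2x2 table.
import math

a, b = 120, 880    # heavy drinkers:      fracture / no fracture (hypothetical)
c, d = 150, 1850   # lighter/no drinkers: fracture / no fracture (hypothetical)

odds_ratio = (a / b) / (c / d)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf's standard error of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print("OR = %.2f (95%% CI %.2f-%.2f)" % (odds_ratio, lo, hi))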

At first I was not much impressed by the effect sizes mentioned above, because there are surely 100 relevant variables they didn’t account for or couldn’t account for, but then I thought a bit more about it. An important observation here – they don’t mention it in the coverage, but it sprang to mind – is that sick or frail elderly people consume less alcohol than their healthier counterparts and are more likely not to consume alcohol at all (we know this), and that frail or sick(er) elderly people are more likely to suffer a fall or fracture than relatively healthy people (we know this too). Given those two facts, you’d expect alcohol consumption to be found to have a ‘protective effect’ simply due to confounding by (reverse) indication, unless the researchers were really careful about adjusting for such things – and no such adjustments are mentioned in the coverage, which makes sense as these are just raw numbers being reported. The point is that the null here should not be ‘these groups should be expected to have the same fall/fracture rate’, but rather ‘people who drink alcohol should be expected to be doing better, all else equal’ – yet they aren’t, quite the reverse. So ‘the true effect size’ here may be larger than what you’d think.

I’m reasonably sure things are a lot more complicated than the above makes it appear (because of those 100 relevant variables we were talking about…), but I find it interesting anyway. Two more things to note: 1. Have another look at the numbers above if they didn’t sink in the first time. This is more than 10% of emergency department visits for that age group. Falls are a really big deal. 2. Fractures in the elderly are also a potentially really big deal. Here’s a sample quote: “One-fifth of hip fracture victims will die within 6 months of the injury, and only 50% will return to their previous level of independence.” (link). In some contexts, a fall is worse news than a cancer diagnosis, and they are very common events in the elderly. This also means that even relatively small effect sizes here can translate into quite large public health effects, because baseline incidence is so high.
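
To make the confounding-by-indication argument a bit more concrete, here is a toy simulation (mine, not the book’s, with invented numbers): frail people drink less and fall more, and even though drinking is assumed to be genuinely harmful within each frailty stratum, the crude drinker-vs-abstainer comparison comes out looking protective. An observed crude OR above 1, as in the survey data quoted earlier, may therefore understate the true harm:

# Toy simulation of confounding by (reverse) indication. All numbers invented.
import random

random.seed(1)
n = 200_000
true_or = 1.4  # assumption: drinking multiplies the odds of falling by 1.4 in every stratum

counts = {"drinker": [0, 0], "abstainer": [0, 0]}  # [falls, people]
for _ in range(n):
    frail = random.random() < 0.3
    drinker = random.random() < (0.2 if frail else 0.6)  # frail people drink less
    base_p = 0.25 if frail else 0.05                     # frail people fall more
    odds = base_p / (1 - base_p) * (true_or if drinker else 1.0)
    fell = random.random() < odds / (1 + odds)
    group = "drinker" if drinker else "abstainer"
    counts[group][0] += fell
    counts[group][1] += 1

def group_odds(falls, people):
    p = falls / people
    return p / (1 - p)

crude_or = group_odds(*counts["drinker"]) / group_odds(*counts["abstainer"])
print(f"true within-stratum OR: {true_or:.2f}")
print(f"crude drinker-vs-abstainer OR: {crude_or:.2f}")  # comes out well below 1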

“The older adult population is a disproportionate consumer of prescription and over-the-counter medications. In a nationally representative sample of community-dwelling adults aged 57–84 years from the National Social Life, Health, and Aging Project (NSHAP) in 2005–2006, 81% used at least one prescription medication on a regular basis and 29% used at least five prescription medications. Forty-two percent used at least one nonprescription medication and concurrent use with a prescription medication was common, with 46% of prescription medication users also using OTC medications [2]. Prescription drug use by older adults in the U.S. is also growing. The percentage of older adults taking at least one prescription drug in the last 30 days increased from 73.6% in 1988–1994 to 89.7% in 2007–2010 and the percentage taking five or more prescription drugs in the last 30 days increased from 13.8% in 1988–1994 to 39.7% in 2007–2010 [3].”

“The aging process can affect the response to a medication by altering its pharmacokinetics and pharmacodynamics [9, 10]. Reduced gastrointestinal motility and gastric acidity can alter the rate or extent of drug absorption. Changes in body composition, including decreased total body water and increased body fat, can alter drug distribution. For alcohol, changes in body composition result in higher blood alcohol levels in older adults compared to younger adults after the same dose or quantity of alcohol consumed. Decreased size of the liver, hepatic blood flow, and function of Phase I (oxidation, reduction, and hydrolysis) metabolic pathways result in reduced drug metabolism and increased drug exposure for drugs that undergo Phase I metabolism. Phase II hepatic metabolic pathways are generally preserved with aging. Decreased size of the kidney, renal blood flow, and glomerular filtration result in slower elimination of medications and metabolites by the kidney and increased drug exposure for medications that undergo renal elimination. Age-related impairment of homeostatic mechanisms and changes in receptor number and function can result in changes in pharmacodynamics as well. Older adults are generally more sensitive to the effects of medications and alcohol which act on the central nervous system for example. The consequences of these physiologic changes with aging are that older adults often experience increased drug exposure for the same dose (higher drug concentrations over time) and increased sensitivity to medications (greater response at a given drug concentration) than their younger counterparts.”
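
To put some (entirely hypothetical) numbers on the ‘increased drug exposure for the same dose’ point: in the simplest one-compartment model, the plasma concentration after an intravenous dose is C(t) = (Dose/Vd) * exp(-(CL/Vd) * t), total exposure is AUC = Dose/CL, and half-life is t1/2 = ln(2) * Vd/CL, so halving clearance doubles both the exposure and the half-life. A minimal sketch, with invented parameters for an unnamed drug:

# Minimal one-compartment pharmacokinetic sketch; all parameters are invented.
import math

def concentration(t_hours, dose_mg, vd_litres, cl_l_per_h):
    """Plasma concentration (mg/L) after an IV bolus in a one-compartment model."""
    k_el = cl_l_per_h / vd_litres                 # elimination rate constant (1/h)
    return (dose_mg / vd_litres) * math.exp(-k_el * t_hours)

dose, vd = 100.0, 40.0                            # hypothetical drug: 100 mg dose, Vd 40 L
for label, cl in [("younger adult", 5.0), ("older adult", 2.5)]:   # clearance halved with age
    auc = dose / cl                               # total exposure (mg*h/L)
    half_life = math.log(2) * vd / cl
    print(f"{label}: AUC = {auc:.0f} mg*h/L, t1/2 = {half_life:.1f} h, "
          f"C(12 h) = {concentration(12, dose, vd, cl):.2f} mg/L")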

“Aging-related changes in physiology are not the only sources of variability in pharmacokinetics and pharmacodynamics that must be considered for an individual person. Older adults experience more chronic diseases that may decrease drug metabolism and renal elimination than younger cohorts. Frailty may result in further decline in drug metabolism, including Phase II metabolic pathways in the liver […] Drug interactions must also be considered […] A drug interaction is defined as a clinically meaningful change in the effect of one drug when coadministered with another drug [12]. Many drugs, including alcohol, have the potential for a drug interaction when administered concurrently, but whether a clinically meaningful change in effect occurs for a specific person depends on patient-specific factors including age. Drug interactions are generally classified as pharmacokinetic interactions, where one drug alters the absorption, distribution, metabolism, or elimination of another drug resulting in increased or decreased drug exposure, or pharmacodynamic interactions, where one drug alters the response to another medication through additive or antagonistic pharmacologic effects [13]. An adverse drug event occurs when a pharmacokinetic or pharmacodynamic interaction or combination of both results in changes in drug exposure or response that lead to negative clinical outcomes. The adverse drug event could be a therapeutic failure if drug exposure is decreased or the pharmacologic response is antagonistic. The adverse drug event could be drug toxicity if the drug exposure is increased or the pharmacologic response is additive or synergistic. The threshold for experiencing an adverse event is often lower in older adults due to physiologic changes with aging and medical comorbidities, increasing their risk of experiencing an adverse drug event when medications are taken concurrently.”

“A large number of potential medication–alcohol interactions have been reported in the literature. Mechanisms of these interactions range from pharmacokinetic interactions affecting either alcohol or medication exposure to pharmacodynamic interactions resulting in exaggerated response. […] Epidemiologic evidence suggests that concurrent use of alcohol and medications among older adults is common. […] In a nationally representative U.S. sample of community-dwelling older adults in the National Social Life, Health and Aging Project (NSHAP) 2005–2006, 41% of participants reported consuming alcohol at least once per week and 20% were at risk for an alcohol–medication interaction because they were using both alcohol and alcohol-interacting medications on a regular basis [17]. […] Among participants in the Pennsylvania Assistance Contract for the Elderly program (aged 65–106 years) taking at least one prescription medication, 77% were taking an alcohol-interacting medication and 19% of the alcohol-interacting medication users reported concurrent use of alcohol [18]. […] Although these studies do not document adverse outcomes associated with alcohol–medication interactions, they do document that the potential exists for many older adults. […] High prevalence of concurrent use of alcohol and alcohol-interacting medications has also been reported in Australian men (43% of sedative or anxiolytic users were daily drinkers) [19], in older adults in Finland (42% of at-risk alcohol users were also taking alcohol-interacting medications) [20], and in older Irish adults (72% of participants were exposed to alcohol-interacting medications and 60% of these reported concurrent alcohol use) [21]. Drinking and medication use patterns in older adults may differ across countries, but alcohol–medication interactions appear to be a worldwide concern. […] Polypharmacy in general, and psychotropic burden specifically, has been associated with an increased risk of experiencing a geriatric syndrome such as falls or delirium, in older adults [26, 27]. Based on its pharmacology, alcohol can be considered as a psychotropic drug, and alcohol use should be assessed as part of the medication regimen evaluation to support efforts to prevent or manage geriatric syndromes. […] Combining alcohol and CNS active medications can be particularly problematic […] Older adults suffering from sleep problems or pain may be at particular risk for alcohol–medication interaction-related adverse events.”

“In general, alcohol use in younger couples has been found to be highly concordant, that is, individuals in a relationship tend to engage in similar drinking behaviors [67,68]. Less is known, however, about alcohol use concordance between older couples. Graham and Braun [69] examined similarities in drinking behavior between spouses in a study of 826 community-dwelling older adults in Ontario, Canada. Results showed high concordance of drinking between spouses — whether they drank at all, how much they drank, and how frequently. […] Social learning theory suggests that alcohol use trajectories are strongly influenced by attitudes and behaviors of an individual’s social networks, particularly family and friends. When individuals engage in social activities with family and friends who approve of and engage in drinking, alcohol use, and misuse are reinforced [58, 59]. Evidence shows that among older adults, participation in social activities is correlated with higher levels of alcohol consumption [34, 60]. […] Brennan and Moos [29] […] found that older adults who reported less empathy and support from friends drank more alcohol, were more depressed, and were less self-confident. More stressors involving friends were associated with more drinking problems. Similar to the findings on marital conflict […], conflict in close friendships can prompt alcohol-use problems; conversely, these relationships can suffer as a result of alcohol-related problems. […] As opposed to social network theory […], social selection theory proposes that alcohol consumption changes an individual’s social context [33]. Studies among younger adults have shown that heavier drinkers chose partners and friends who approve of heavier drinking [70] and that excessive drinking can alienate social networks. The Moos study supports the idea that social selection also has a strong influence on drinking behavior among older adults.”

“Traditionally, treatment studies in addiction have excluded patients over the age of 65. This bias has left a tremendous gap in knowledge regarding treatment outcomes and an understanding of the neurobiology of addiction in older adults.”

“Alcohol use causes well-established changes in sleep patterns, such as decreased sleep latency, decreased stage IV sleep, and precipitation or aggravation of sleep apnea [101]. There are also age-associated changes in sleep patterns including increased REM episodes, a decrease in REM length, a decrease in stage III and IV sleep, and increased awakenings. Age-associated changes in sleep can all be worsened by alcohol use and depression. Moeller and colleagues [102] demonstrated in younger subjects that alcohol and depression had additive effects upon sleep disturbances when they occurred together [102]. Wagman and colleagues [101] also have demonstrated that abstinent alcoholics did not sleep well because of insomnia, frequent awakenings, and REM fragmentation [101]; however, when these subjects ingested alcohol, sleep periodicity normalized and REM sleep was temporarily suppressed, suggesting that alcohol use could be used to self-medicate for sleep disturbances. A common anecdote from patients is that alcohol is used to help with sleep problems. […] The use of alcohol to self-medicate is considered maladaptive [34] and is associated with a host of negative outcomes. […] The use of alcohol to aid with sleep has been found to disrupt sleep architecture and cause sleep-related problems and daytime sleepiness [35, 36, 46]. Though alcohol is commonly used to aid with sleep initiation, it can worsen sleep-related breathing disorders and cause snoring and obstructive sleep apnea [36].”

“Epidemiologic studies have clearly demonstrated that comorbidity between alcohol use and other psychiatric symptoms is common in younger age groups. Less is known about comorbidity between alcohol use and psychiatric illness in late life [88]. […] Blow et al. [90] reviewed the diagnosis of 3,986 VA patients between ages 60 and 69 presenting for alcohol treatment [90]. The most common comorbid psychiatric disorder was an affective disorder found in 21% of the patients. […] Blazer et al. [91] studied 997 community-dwelling elderly of whom only 4.5% had a history of alcohol use problems [91]; […] of these subjects, almost half had a comorbid diagnosis of depression or dysthymia. Comorbid depressive symptoms are not only common in late life but are also an important factor in the course and prognosis of psychiatric disorders. Depressed alcoholics have been shown to have a more complicated clinical course of depression with an increased risk of suicide and more social dysfunction than non-depressed alcoholics [92–96]. […] Alcohol use prior to late life has also been shown to influence treatment of late life depression. Cook and colleagues [94] found that a prior history of alcohol use problems predicted a more severe and chronic course for depression [94]. […] The effect of past heavy alcohol use is [also] highlighted in the findings from the Liverpool Longitudinal Study demonstrating a fivefold increase in psychiatric illness among elderly men who had a lifetime history of 5 or more years of heavy drinking [24]. The association between heavy alcohol consumption in earlier years and psychiatric morbidity in later life was not explained by current drinking habits. […] While Wernicke-Korsakoff’s syndrome is well described and often caused by alcohol use disorders, alcohol-related dementia may be difficult to differentiate from Alzheimer’s disease. Clinical diagnostic criteria for alcohol-related dementia (ARD) have been proposed and now validated in at least one trial, suggesting a method for distinguishing ARD, including Wernicke-Korsakoff’s syndrome, from other types of dementia [97, 98]. […] Finlayson et al. [100] found that 49 of 216 (23%) elderly patients presenting for alcohol treatment had dementia associated with alcohol use disorders [100].”

 

May 24, 2018 Posted by | Books, Demographics, Epidemiology, Medicine, Neurology, Pharmacology, Psychiatry, Statistics

Alcohol and Aging

I’m currently reading this book. Below I have added some observations from the first five chapters. The book has 17 chapters in total, covering a wide variety of topics. I like the coverage so far. All the highlighted observations below were highlighted by me; they were not written in bold in the book.

“Alcohol consumption and alcohol-related deaths or problems have recently increased among older age groups in many developed countries […]. This increase in consumption, in combination with the ageing of populations worldwide, means that the absolute number of older people with alcohol problems is on the increase and a real danger exists that a “silent epidemic” may be evolving [2]. Although there is growing recognition of this public health problem, clinicians consistently under-detect alcohol problems and under-deliver behaviour change interventions to older people [8, 9] […] While older adults historically demonstrate much lower rates of alcohol use compared with younger adults [4, 5] and present to substance abuse treatment programs less frequently than their younger counterparts [6], substantial evidence suggests that at-risk alcohol use and alcohol use disorder (AUD) among older adults has been under-identified for decades [7, 8]. […] Individuals who have had alcohol-related problems over several decades and have survived into old age tend to be referred to as early onset drinkers. It is estimated that two-thirds of older drinkers fall into this category [2]. […] Late-onset drinking accounts for the remaining one-third of older people who use alcohol excessively [2]. Late-onset drinkers usually begin drinking in their 50s or 60s and tend to be of a higher socio-economic status than early onset drinkers with higher levels of education and income [2]. Stressful life events, such as bereavement or retirement, may trigger late-onset drinking […]. One study demonstrated that 70 % of late-onset drinkers had experienced stressful life events, compared with 25 % of early onset drinkers [17]. Those whose alcohol problems are of late onset tend to have fewer health problems and are more receptive to treatment than those with early onset problems […] Our data highlighted that losing a parent or partner was often pinpointed as an event that had prompted an escalation in alcohol use […] A recent systematic review which examined the relationship between late-life spousal bereavement and changes in routine health behaviour over 32 different studies [however] found only moderate evidence for increased alcohol consumption [41].”

“Understanding alcohol use among older adults requires a life course perspective [2] […]. Broadly speaking, to understand alcohol consumption patterns and associated risks among older adults, one must consider both biopsychosocial processes that emerge earlier in life and aging-specific processes, such as multimorbidity and retirement. […] In the population overall, older adulthood is a life stage in which overall alcohol consumption decreases, binge drinking becomes less common, and individuals give up drinking. […] data collected internationally supports the assertion that older adulthood is a period of declining drinking. […] Two forces specific to later life may be at work in decreasing levels of alcohol consumption in late life. First, the “sick-quitter” hypothesis [12, 13] suggests that changes in health during the aging process limit alcohol consumption. With declines in health, older adults decrease the quantity and frequency of their drinking leading to lower average consumption in the overall older adult population [11, 14]. Similarly, differential mortality of heavy drinkers may lead to decreases in alcohol use among cohorts of older adults; these changes in average drinking may be a function of early mortality of heavy drinkers [15]. Although alcohol use generally declines throughout the course of older adulthood, the population of older adults exhibits a great deal of variability in drinking patterns. […] longitudinal research studies have found that older men tend to consume alcohol at higher levels than women, and their consumption levels decline more slowly than women’s [6]. […] National survey data [from the UK] estimate that approximately 40–45% of older adults (65+) drank alcohol in the past year […] Numerous studies suggest that lifetime nondrinkers are more likely to be female, display greater religiosity (e.g., attend religious services), and have lower levels of education than their moderate drinking peers [20, 21]. […] Older adult nondrinkers are a heterogeneous population, and as such, lifetime nondrinkers and former drinkers should be studied separately. This is especially important when considering the issue of health and drinking because the context for abstinence may be different in these two groups [23, 24].”

“[V]ersion 5 of the DSM manual abandoned separate alcohol abuse and alcohol dependence diagnoses, and combined them into a single diagnosis: alcohol use disorder (AUD). […] The NSDUH survey estimated a past-year prevalence rate of alcohol abuse or dependence of 6.1% among those aged 50–54 and 2.2% among those ages 65 and older. […] AUD is the most severe manifestation of alcohol-related pathology among older adults, but most alcohol-related harm is not a function of disordered drinking [55]. […] older adults commonly take medications that interact with alcohol. A recent study of community-dwelling older adults (aged 57+) found that 41% consumed alcohol regularly and among regular alcohol consumers, 51% used at least one alcohol interacting medication [57]. An analysis of the Irish Longitudinal Study on Ageing identified a high prevalence of alcohol use (60%) among individuals taking alcohol interacting medications [58]. Falls are also a common health concern for older adults, and there is evidence of increased risk of falls among older adults who drink more than 14 drinks per week [59] […] a study by Holahan and colleagues [44] explored longitudinal outcomes for individuals who were moderate drinkers (below the weekly at-risk threshold) but who engaged in heavy episodic drinking (exceeded day threshold). Individuals were first surveyed between the ages of 55 and 65 and followed for 20 years. Episodic heavy drinkers were twice as likely to have died in the 20-year follow-up period compared with those who were not episodic heavy drinkers […to clarify, none of the episodic heavy drinkers in that study would qualify for a diagnosis of AUD – US] […] Alcohol use in the aging population has been defined through various thresholds of risk. Each approach brings certain advantages and problems. Using alcohol related disorders as a benchmark misses many older adults who may experience alcohol-related consequences to their health and well-being even though they do not meet criteria for disordered drinking. More conservative measures of alcohol risk may identify at-risk drinking in those for whom alcohol use may never compromise their health. […] among light to moderate drinkers, the level of risk is uncertain.”

“Among adults 65 years old and older in 2000–2001, just under 49.6% reported lifetime use [of tobacco] and 14% reported use in the last 12 months [30]. […] Data collected by the Centers for Disease Control in 2008 revealed that only 9% of individuals aged 65 and older reported being current smokers [42]. […] data from the 2001–2002 NESARC reveal a strong relationship between AUDs and tobacco use […] in 2012, 19.3% of adults 65 and older reported having ever used illicit drugs in their lifetime, whereas 47.6% of adults between the ages 60 and 64 reported lifetime drug use. […] In the 2005–2006 NSDUH […] 3.9% of adults aged 50–64, the bulk of the Baby Boomers at that time, reported past year marijuana use, compared to only 0.7% of those 65 years old and older [53]. Among those aged 50 and older reporting marijuana use, 49% reported using marijuana more than 30 days in the past year, with a mean of 81 days. […] The increasingly widespread, legal availability and acceptance of cannabis, for both medicinal and recreational use, may pose unique risks in an aging population. Across age groups, cannabis is known to impair short-term memory, increase one’s heart and respiratory rate, and elevate blood pressure [56]. […] For older adults, these risks may be particularly pronounced, especially for those whose cognitive or cardiovascular systems may already be compromised. […] Most researchers generally consider existing estimations of mental health and substance use disorders to be underestimations among older adults. […] Assumptions that older adults do not drink or use illicit substances should not be made.”

“Although several studies in the United States and elsewhere have shown that moderate alcohol consumption is associated with reduced risk for heart disease [16–20] and that heavy intake is associated with increased risk of CVD incidence [6, 21] and all-cause mortality in various populations […], data specific to effects of alcohol in elderly populations remain scant. The few studies available, e.g., the Cardiovascular Health Study, suggest that moderate alcohol use is beneficial and may be associated with reduced Medicare costs among individuals with CVD [25]. The benefits and risks of alcohol consumption are dose dependent with a consistent cut-point for cardiovascular benefits being 1 drink per day for women and about 2 drinks per day for men [21]. These cut-points have also been observed for associations between alcohol consumption and all-cause mortality [21, 26]. Although there are many similarities in the effects of alcohol on CVD across many populations, the magnitude and significance of the association between amount of alcohol consumed and CVD risk remain inconsistent, especially within countries, regions, age, sex, race, and other population strata […] As shown in a recent review [33], a drinking pattern characterized by moderate drinking without episodes of heavy drinking may be more beneficial for CVD protection when compared to patterns that include heavy drinking episodes. […] In addition to the amount of alcohol consumed per se, the pattern of alcohol consumption, commonly defined as the number of drinking days per week, is also associated with CVD outcomes independent of the amount of alcohol consumed [18, 24, 34–37]. In general, a drinking pattern characterized by alcohol consumption on 4 or more days of the week is inversely associated with MI, stroke, and CVD risk factors.”

“The relation between moderate alcohol consumption and intermediate CVD markers was summarized in two recent reviews [6, 42]. Overall, moderate alcohol consumption is associated with improved concentrations of CVD risk markers, particularly HDL-C concentrations [18, 31, 43, 44]. Whether HDL-C resulting from moderate alcohol intake is functional and beneficial for cardioprotection remains unknown […] While moderate alcohol consumption shows no appreciable benefit on LDL-C, it is associated with significant improvement in insulin sensitivity […] Alcohol intake may also influence CVD markers through its effects on absorption and metabolism of nutrients in the body. This is critical especially in the elderly who may have deficiencies or insufficiencies of nutrients such as folate, vitamin B12, vitamin D, magnesium, and iron. Indeed, moderate alcohol consumption has been shown to improve status of nutrients associated with cardiovascular effects. For example, it improves iron absorption in humans [52, 53] and is associated with higher vitamin D levels in men [54]. […] heavy alcohol consumption [on the other hand] leads to deficiencies of magnesium [55], zinc, folate [56], and other nutrients and damages the intestinal lining and the liver impairing nutrient absorption and metabolism [57]. These effects of alcohol are likely to be worse in the elderly. […] chronic heavy drinking lowers magnesium [55], a nutrient needed for proper metabolism of vitamin D [58], implying that supplementation with vitamin D in heavy drinkers may not be as effective as intended. These effects of alcohol could also extend to prescription medications that are in common use among the elderly. […] Taken together, moderate alcohol seems to protect against cardiovascular disease across the whole life span but the data on older age groups are scanty. Theoretical considerations as well as emerging data on intermediate outcomes such as lipids, suggest that moderate alcohol could beneficially interact with medications such as statins to improve cardiovascular health but heavy alcohol could worsen CVD risk, especially in the elderly.”

“Alcohol is one of the main risk factors for cancer, with alcohol use attributed to up to 44% of some cancers [2, 3] and between 3.2 and 3.7% of all cancer deaths [4, 5]. Since 1988, alcohol has been classified as a carcinogen [6]. Types of cancers linked to alcohol use include cancers of the liver, pancreas, esophagus, breast, pharynx, and larynx with most convincing evidence for alcohol-related cancers of the upper aerodigestive tract, stomach, colorectum, liver, and the lungs [2, 7]. All of these cancers have a much higher incidence and mortality rate in older adults […] For alcohol-associated cancers, 66–95% of new cases appear in those 55 years of age or older [8, 9]. For alcohol-associated cancers, other than breast cancer, 75–95% of new cases occur in those 55 years of age or older [8, 10, 11]. […] Four countries with a decline in alcohol use (France, the UK, Sweden, and US) have […] demonstrated a stabilization or decline in the incidence and mortality rates for types of cancers closely associated with alcohol use [12]. […] The increased risk for cancer related to alcohol use is based on a combination of both quantity/frequency and duration of use, with those consuming alcohol for 20 or more years at increased risk [14]. […] consumption of alcohol at lower levels may also increase the risk for alcohol-related cancers. Nelson et al. reported that daily consumption of 1.5 drinks or greater accounted for 26–35% of alcohol-attributable deaths [5]. Thus, the evidence is growing that daily drinking, even at lower levels, increases the risk for developing cancer in later life with the conclusion that there may be no safe threshold level for alcohol consumption below which there is no risk for cancer [6, 16, 17].”
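
The ‘alcohol-attributable’ percentages quoted above are, as far as I understand, typically derived from population attributable fractions: PAF = p(RR - 1) / (1 + p(RR - 1)), where p is the prevalence of the exposure and RR the associated relative risk. A minimal sketch with illustrative numbers (not figures from the chapter):

def paf(prevalence, relative_risk):
    """Population attributable fraction for a single exposure level."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# Hypothetical example: if 10% of the population drank at a level carrying a
# relative risk of 1.5 for a given cancer, about 4.8% of that cancer would be
# attributable to alcohol.
print(f"{paf(0.10, 1.5):.1%}")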

“The risk for developing alcohol-related cancer is increased among those who have a history of concurrent tobacco use and at-risk alcohol use […] Among individuals who have a history of smoking two or more packs of cigarettes and consuming more than four alcoholic drinks per day, the risk of head and neck cancer is increased greater than 35-fold [22]. […] At least 75% of head and neck cancer is associated with alcohol and tobacco use [9]. […] There are gender differences in alcohol attributable cancer deaths with over half (56–66%) of all alcohol-attributable cancer deaths in females resulting from breast cancer [5]. […] For women, even low-risk alcohol use (5–14.9 g/day or one standard drink of alcohol or less) increases the risk of cancer, mainly breast cancer [18]. […] Alcohol use during cancer treatment can complicate the treatment regimen and lead to poor long-term outcomes. […] Alcohol use is correlated with poor survival outcomes in oncology patients. […] Another issue for patients during cancer treatment is quality of life. Alcohol consumption at higher levels […] or patients who screened positive for a possible AUD during cancer treatment experienced worse quality of life outcomes, including problems with pain, sleep, dyspnea, total distress, anxiety, coping, shortness of breath, diarrhea, poor emotional functioning, fatigue, and poor appetite [58, 59]. Current alcohol use has also been associated with higher pain scores and long-term use of opioids [48, 49].”

May 14, 2018 Posted by | Books, Cancer/oncology, Cardiology, Epidemiology, Medicine

100 cases in emergency medicine and critical care (II)

In this post I’ve added some links to topics covered in the second half of the book, as well as some quotes.

Flexor tenosynovitis. Kanavel’s cardinal signs.
Pelvic Fracture in Emergency Medicine. (“Pelvic injuries may be associated with significant haemorrhage. […] The definitive management of pelvic fractures is surgical.”)
Femur fracture. Girdlestone-Taylor procedure. (“A fall from standing can result in occult cervical spine fractures. If there is any doubt, then the patient should be immobilized and imaged to exclude injury.”)
Anterior Cruciate Ligament Injury. Anterior drawer test. Segond fracture. (“[R]upture of the anterior cruciate ligament (ACL) […] is often seen in younger patients and is associated with high-energy sports such as skiing, football or cycling. […] Take a careful history of all knee injuries including the mechanism of injury and the timing of swelling.”)
Tibial plateau fracture. Schatzker classification of tibial plateau fractures. (“When assessing the older patient with minor trauma resulting in fracture, always investigate the possibility that this may be a pathological fracture (e.g. osteoporosis, malignancy).”)
Ankle Fracture. Maisonneuve fracture.
Acute cholecystitis. Murphy’s sign. Mirizzi syndrome. (“Most patients with gallstones are asymptomatic. However, complications of gallstones range from biliary colic, whereby gallstones irritate or temporarily block the biliary tract, to acute cholecystitis, which is an infection of the gallbladder sometimes due to obstruction of the cystic duct. Gallstones can also become trapped in the common bile duct (choledocholithiasis) causing jaundice and potential ascending cholangitis, which refers to infection of the biliary tree. Ascending cholangitis classically presents with Charcot’s triad of fever, right upper quadrant (RUQ) pain and jaundice. It can be life-threatening. […] Acute cholecystitis requires antibiotic therapy and admission under general surgery, who should decide whether to perform a ‘hot’ emergency cholecystectomy within 24-72 hours of admission. This shortens the hospital stay but can be associated with more surgical complications.”)
Small-Bowel Obstruction. (“SBO is defined as a mechanical obstruction to the passage of contents in the bowel lumen. There can be complete or incomplete obstruction. […] There are many causes of SBO. […] The commonest cause of SBO worldwide is incarcerated herniae, whereas the commonest cause in the Western world is adhesion secondary to previous abdominal surgery. […] A strangulated hernia is […] a surgical emergency associated with a high mortality.”)
Pneumothorax. Flail chest.
Perforated peptic ulcer. (“Immediate onset pain usually signifies a rupture or occlusion of an organ, whereas more insidious onset tends to be infective or inflammatory in origin. […] A perforated peptic ulcer is a surgical emergency that presents with upper abdominal pain, decreased or absent bowel sounds and signs of septic shock.”)
Diverticulitis.
Acute appendicitis. McBurney’s point. Rovsing’s sign. Psoas sign. Obturator sign. (“The lifetime risk of developing appendicitis is 5-10%, and it is the commonest cause of emergency abdominal surgery in the Western world. […] in appendicitis, pain classically precedes vomiting, whereas the opposite occurs in gastroenteritis. […] Appendicitis is the commonest general surgical emergency in pregnant women and may have an atypical presentation with pain anywhere in the right side of the abdomen […] It is estimated that 25% of appendicitis cases will perforate within 24 hours of the onset of symptoms, and 75% by 48 hours.”)
Abdominal aortic aneurysm. (“A ruptured AAA is a surgical emergency with 100% mortality if not immediately repaired. It classically presents with abdominal pain, pulsatile abdominal mass and hypotension. It should be ruled out in all patients over 65 years of age presenting with abdominal, loin or groin pain, especially if they have risk factors including smoking, hypertension, COPD or peripheral vascular disease. […] Do not be lured into a diagnosis of renal colic in an older patient, without definitive imaging to rule out an AAA rupture.”)
Nephrolithiasis. (“up to 30% of patients with kidney stones have a recurrence within 5 years”)
Acute Otitis Media. Mastoiditis. Bezold’s abscess.
Malignant otitis externa. (“Despite the term ‘malignant’, this is not a cancerous process. Rather, it refers to temporal bone (skull base) osteomyelitis. This is an ENT emergency associated with serious morbidity and mortality including cranial nerve palsies. […] The defining features of MOE are severe otalgia, often exceeding oral analgesics, in the older diabetic patient. Other symptoms such as hearing loss, otorrhoea, vertigo and tinnitus may also be present”)
Post-tonsillectomy hemorrhage. (“Post-tonsillectomy bleeding (PTB) is a common but potentially serious complication occurring in around 5%-10% of patients undergoing tonsillectomy. The majority are self-limiting but around 1% require a return to theatre to stop the bleeding. All patients must be assessed immediately and admitted for observation as a self-limiting bleed can precede a larger bleed within 24 hours. […] [PTB] should be treated as an airway emergency due to the possibility of obstruction.”)
Acute rhinosinusitis. (“Periorbital cellulitis is a potentially sight-threatening emergency. It is often precipitated by an upper respiratory tract infection, rhinosinusitis or local trauma (injury, insect bite).”)
Corneal Foreign Body. Seidel test. (“Pain with photosensitivity, watery discharge and foreign body sensation are cardinal features of corneal irritation. […] Abnormal pupil shape, iris defect and shallow anterior chamber are red flags for possible ocular perforation or penetrating ocular injury. […] Most conjunctival foreign bodies can be removed by simply irrigating the eye […] Removing a corneal foreign body […] requires more skill and an experienced operator should be sought. […] Iron, steel, copper and wood are known to cause severe ocular reactions”)
Acanthamoeba Keratitis. Bacterial Keratitis. Fungal keratitis. (“In patients with red eyes, reduced vision with severe to moderate pain should be prompted to an early ophthalmology review. Pre-existing ocular surface disease and contact lens wear are high risk factors for microbial keratitis.”)
Globe rupture. Acute orbital compartment syndrome. Lateral Canthotomy and Cantholysis. (“Thirty percent of all facial fractures involve the orbit […] In open globe injuries with visible penetrating objects, it may be tempting to remove the object; however, avoid this as it may cause the globe to collapse.”)
Mandibular fracture. Guardsman fracture. (“Jaw pain, altered bite, numbness of lower lip, trismus or difficulty moving the jaw are the cardinal symptoms of possible mandibular fracture or dislocation.”)
Bronchiolitis. (“This is an acute respiratory condition, resulting in inflammation of the bronchioles. […] Bronchiolitis occurs in children under 2 years of age and most commonly presents in infants aged 3 to 6 months. […] Around 3% of all infants under 1 year old are admitted to hospital with bronchiolitis. […] Not all patients require hospital admission.”)
Fever of Unknown Origin. (“Fever is a very common presentation in the Emergency Department, and in the immunocompetent child is usually caused by a simple infection […] it is important to look for concerning features. Tachycardia is a particular feature that should not be ignored […] red-flag signs for serious illness [include:] • Grunting, tachypnoea or other signs of respiratory distress • Mottled, pale skin with cool peripheries […] Irritability […] not responding to social cues • Difficulty to rouse […] Consider Kawasaki disease in fever lasting more than 5 days.”)
Pediatric gastroenteritis. Rotavirus.
Acute Pyelonephritis. (“Female infants have a two- to fourfold higher prevalence of UTI than male infants”)
Gastroesophageal Reflux Disease. (“Reflux describes the passage of gastric contents into the oesophagus with or without regurgitation and vomiting. This is a very common, normal, physiological process and occurs in 5% of babies up to six times per day. GORD presents when reflux causes troublesome symptoms or complications. This has a prevalence of 10%– 20% […] No investigations are required in the Emergency Department if there is a suspicion of GORD; this is usually a clinical diagnosis alone.”)
Head injury. (“Head injuries are common in children […] Clinical features of concern in head injuries include multiple episodes of vomiting […] significant scalp haematoma, prolonged loss of consciousness, confusion and seizures.”)
Pertussis. (“In the twentieth century, pertussis was one of the most common childhood diseases and a major cause of childhood mortality. Since use of the immunisation began, incidence has decreased more than 75%.”)
Hyperemesis gravidarum. (“[HG] is defined as severe or long-lasting nausea and vomiting, appearing for the first time within the first trimester of pregnancy, and is so severe that weight loss, dehydration and electrolyte imbalance may occur. It affects less than 4% of pregnant women, although up to 80% of women suffer from some degree of nausea and vomiting throughout their pregnancy. […] Classically, patients present with a long history of nausea and vomiting that becomes progressively worse, despite treatment with simple antiemetics.”)
Ectopic pregnancy. (“Abdominal pain and collapse with a positive pregnancy test must be treated as a ruptured ectopic pregnancy until proven otherwise. […] In cases where the patient is stable and an intact ectopic is suspected, this is not an emergency and patients can be brought back the next day […] if seen out of hours”)
Recurrent miscarriage. Antiphospholipid syndrome. (“Bleeding in early pregnancy is common and does not necessarily lead to miscarriage.”)
Ovarian torsion. (“Torsion of the ovary and/ or fallopian tube account for between 2.4% and 7.4% of all gynaecological emergencies, and rapid intervention is required in order to preserve ovarian function. […] Ovarian torsion is unfortunately often misdiagnosed due to its non-specific symptoms and lack of diagnostic tools. […] Suspect ovarian torsion in women with severe sudden onset unilateral pelvic pain.”)
Pelvic Inflammatory Disease. Fitz-Hugh–Curtis syndrome.
Ovarian hyperstimulation syndrome. (“OHSS is an iatrogenic complication of fertility treatment with exogenous gonadotrophins to promote oocyte formation. Hyperstimulation of the ovaries leads to ovarian enlargement, and subsequent exposure to human chorionic gonadotrophin (hCG) causes production of proinflammatory mediators, primarily vascular endothelial growth factor (VEGF). The effects of proinflammatory mediators lead to increased vascular permeability and a loss of fluid from intravascular to third space compartments. This gives rise to ascites, pleural effusions and in some cases pericardial effusions. Women with severe OHSS can typically lose up to 20% of their circulating volume in the acute phase […] OHSS patients are also at high risk of developing a thromboembolism […] In conventional IVF, around one-third of cycles are affected by mild OHSS. The combined incidence of moderate or severe OHSS is reported as between 3.1% and 8%.”)
Pulmonary embolism. (“The overall prevalence of PE in pregnancy is between 2% and 6%. Pregnancy increases the risk of developing a venous thromboembolism by four to five times, compared to non-pregnant women of the same age.”)
Postpartum psychosis.
Informed consent. Gillick competency and Fraser guidelines.
Duty of candour. Never events.

May 8, 2018 Posted by | Books, Gastroenterology, Infectious disease, Medicine, Nephrology, Ophthalmology | Leave a comment

Molecular biology (I?)

“This is a great publication, considering the format. These authors in my opinion managed to get quite close to what I’d consider to be ‘the ideal level of coverage’ for books of this nature.”

The above was what I wrote in my short goodreads review of the book. In this post I’ve added some quotes from the first chapters of the book and some links to topics covered.

Quotes:

“Once the base-pairing double helical structure of DNA was understood it became apparent that by holding and preserving the genetic code DNA is the source of heredity. The heritable material must also be capable of faithful duplication every time a cell divides. The DNA molecule is ideal for this. […] The effort then concentrated on how the instructions held by the DNA were translated into the choice of the twenty different amino acids that make up proteins. […] George Gamow [yes, that George Gamow! – US] made the suggestion that information held in the four bases of DNA (A, T, C, G) must be read as triplets, called codons. Each codon, made up of three nucleotides, codes for one amino acid or a ‘start’ or ‘stop’ signal. This information, which determines an organism’s biochemical makeup, is known as the genetic code. An encryption based on three nucleotides means that there are sixty-four possible three-letter combinations. But there are only twenty amino acids that are universal. […] some amino acids can be coded for by more than one codon.”
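The ‘sixty-four possible three-letter combinations’ figure is just 4³, and the degeneracy of the code is easy to illustrate in a few lines of Python. The codon assignments below are only a small excerpt of the standard genetic code, included purely for illustration:

```python
from itertools import product

bases = "ACGT"
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]
print(len(codons))  # 4**3 = 64 possible codons, yet only 20 universal amino acids

# Small excerpt of the standard genetic code (DNA sense-strand codons);
# note that several different codons can code for the same amino acid.
codon_excerpt = {
    "TTT": "Phe", "TTC": "Phe",
    "CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",
    "ATG": "Met (start)",
    "TAA": "stop", "TAG": "stop", "TGA": "stop",
}
for codon in ("TTT", "TTC", "ATG", "TGA"):
    print(codon, "->", codon_excerpt[codon])
```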

“The mechanism of gene expression whereby DNA transfers its information into proteins was determined in the early 1960s by Sydney Brenner, Francois Jacob, and Matthew Meselson. […] Francis Crick proposed in 1958 that information flowed in one direction only: from DNA to RNA to protein. This was called the ‘Central Dogma‘ and describes how DNA is transcribed into RNA, which then acts as a messenger carrying the information to be translated into proteins. Thus the flow of information goes from DNA to RNA to proteins and information can never be transferred back from protein to nucleic acid. DNA can be copied into more DNA (replication) or into RNA (transcription) but only the information in mRNA [messenger RNA] can be translated into protein”.

“The genome is the entire DNA contained within the forty-six chromosomes located in the nucleus of each human somatic (body) cell. […] The complete human genome is composed of over 3 billion bases and contains approximately 20,000 genes that code for proteins. This is much lower than earlier estimates of 80,000 to 140,000 and astonished the scientific community when revealed through human genome sequencing. Equally surprising was the finding that genomes of much simpler organisms sequenced at the same time contained a higher number of protein-coding genes than humans. […] It is now clear that the size of the genome does not correspond with the number of protein-coding genes, and these do not determine the complexity of an organism. Protein-coding genes can be viewed as ‘transcription units’. These are made up of sequences called exons that code for amino acids, and separated by non-coding sequences called introns. Associated with these are additional sequences termed promoters and enhancers that control the expression of that gene.”

“Some sections of the human genome code for RNA molecules that do not have the capacity to produce proteins. […] it is now becoming apparent that many play a role in controlling gene expression. Despite the importance of proteins, less than 1.5 per cent of the genome is made up of exon sequences. A recent estimate is that about 80 per cent of the genome is transcribed or involved in regulatory functions with the rest mainly composed of repetitive sequences. […] Satellite DNA […] is a short sequence repeated many thousands of times in tandem […] A second type of repetitive DNA is the telomere sequence. […] Their role is to prevent chromosomes from shortening during DNA replication […] Repetitive sequences can also be found distributed or interspersed throughout the genome. These repeats have the ability to move around the genome and are referred to as mobile or transposable DNA. […] Such movements can be harmful sometimes as gene sequences can be disrupted causing disease. […] The vast majority of transposable sequences are no longer able to move around and are considered to be ‘silent’. However, these movements have contributed, over evolutionary time, to the organization and evolution of the genome, by creating new or modified genes leading to the production of proteins with novel functions.”

“A very important property of DNA is that it can make an accurate copy of itself. This is necessary since cells die during the normal wear and tear of tissues and need to be replenished. […] DNA replication is a highly accurate process with an error occurring every 10,000 to 1 million bases in human DNA. This low frequency is because the DNA polymerases carry a proofreading function. If an incorrect nucleotide is incorporated during DNA synthesis, the polymerase detects the error and excises the incorrect base. Following excision, the polymerase reinserts the correct base and replication continues. Any errors that are not corrected through proofreading are repaired by an alternative mismatch repair mechanism. In some instances, proofreading and repair mechanisms fail to correct errors. These become permanent mutations after the next cell division cycle as they are no longer recognized as errors and are therefore propagated each time the DNA replicates.”

“DNA sequencing identifies the precise linear order of the nucleotide bases A, C, G, T, in a DNA fragment. It is possible to sequence individual genes, segments of a genome, or whole genomes. Sequencing information is fundamental in helping us understand how our genome is structured and how it functions. […] The Human Genome Project, which used Sanger sequencing, took ten years to sequence and cost 3 billion US dollars. Using high-throughput sequencing, the entire human genome can now be sequenced in a few days at a cost of 3,000 US dollars. These costs are continuing to fall, making it more feasible to sequence whole genomes. The human genome sequence published in 2003 was built from DNA pooled from a number of donors to generate a ‘reference’ or composite genome. However, the genome of each individual is unique and so in 2005 the Personal Genome Project was launched in the USA aiming to sequence and analyse the genomes of 100,000 volunteers across the world. Soon after, similar projects followed in Canada and Korea and, in 2013, in the UK. […] To store and analyze the huge amounts of data, computational systems have developed in parallel. This branch of biology, called bioinformatics, has become an extremely important collaborative research area for molecular biologists drawing on the expertise of computer scientists, mathematicians, and statisticians.”

“[T]he structure of RNA differs from DNA in three fundamental ways. First, the sugar is a ribose, whereas in DNA it is a deoxyribose. Secondly, in RNA the nucleotide bases are A, G, C, and U (uracil) instead of A, G, C, and T. […] Thirdly, RNA is a single-stranded molecule unlike double-stranded DNA. It is not helical in shape but can fold to form a hairpin or stem-loop structure by base-pairing between complementary regions within the same RNA molecule. These two-dimensional secondary structures can further fold to form complex three-dimensional, tertiary structures. An RNA molecule is able to interact not only with itself, but also with other RNAs, with DNA, and with proteins. These interactions, and the variety of conformations that RNAs can adopt, enables them to carry out a wide range of functions. […] RNAs can influence many normal cellular and disease processes by regulating gene expression. RNA interference […] is one of the main ways in which gene expression is regulated.”

“Translation of the mRNA to a protein takes place in the cell cytoplasm on ribosomes. Ribosomes are cellular structures made up primarily of rRNA and proteins. At the ribosomes, the mRNA is decoded to produce a specific protein according to the rules defined by the genetic code. The correct amino acids are brought to the mRNA at the ribosomes by molecules called transfer RNAs (tRNAs). […] At the start of translation, a tRNA binds to the mRNA at the start codon AUG. This is followed by the binding of a second tRNA matching the adjacent mRNA codon. The two neighbouring amino acids linked to the tRNAs are joined together by a chemical bond called the peptide bond. Once the peptide bond forms, the first tRNA detaches leaving its amino acid behind. The ribosome then moves one codon along the mRNA and a third tRNA binds. In this way, tRNAs sequentially bind to the mRNA as the ribosome moves from codon to codon. Each time a tRNA molecule binds, the linked amino acid is transferred to the growing amino acid chain. Thus the mRNA sequence is translated into a chain of amino acids connected by peptide bonds to produce a polypeptide chain. Translation is terminated when the ribosome encounters a stop codon […]. After translation, the chain is folded and very often modified by the addition of sugar or other molecules to produce fully functional proteins.”
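The codon-by-codon logic described in the quote is simple enough to mimic in a toy script. The mRNA string and the tiny codon table below are made up for illustration (the real table has 64 entries), and the sketch of course ignores the actual machinery (tRNA charging, peptide bonds, release factors):

```python
# Toy model of translation: find the start codon, then read the mRNA three
# bases at a time until a stop codon is reached. Tiny illustrative codon table.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    start = mrna.find("AUG")              # translation begins at the start codon
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":             # stop codon terminates translation
            break
        peptide.append(residue)           # amino acid added to the growing chain
    return peptide

print(translate("GGAUGUUUGGCAAAUAGCC"))   # ['Met', 'Phe', 'Gly', 'Lys']
```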

“The naturally occurring RNAi pathway is now extensively exploited in the laboratory to study the function of genes. It is possible to design synthetic siRNA molecules with a sequence complementary to the gene under study. These double-stranded RNA molecules are then introduced into the cell by special techniques to temporarily knock down the expression of that gene. By studying the phenotypic effects of this severe reduction of gene expression, the function of that gene can be identified. Synthetic siRNA molecules also have the potential to be used to treat diseases. If a disease is caused or enhanced by a particular gene product, then siRNAs can be designed against that gene to silence its expression. This prevents the protein which drives the disease from being produced. […] One of the major challenges to the use of RNAi as therapy is directing siRNA to the specific cells in which gene silencing is required. If released directly into the bloodstream, enzymes in the bloodstream degrade siRNAs. […] Other problems are that siRNAs can stimulate the body’s immune response and can produce off-target effects by silencing RNA molecules other than those against which they were specifically designed. […] considerable attention is currently focused on designing carrier molecules that can transport siRNA through the bloodstream to the diseased cell.”
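The ‘sequence complementary to the gene under study’ part is the bit that is easy to show in code: the minimal sketch below just computes the reverse complement of a short, invented target mRNA stretch. Real siRNA design also worries about length (around 21 nt), GC content, strand asymmetry and off-target matches, none of which is modelled here:

```python
# Minimal sketch: the guide strand of an siRNA is complementary (and
# antiparallel) to its target mRNA. The target sequence below is invented.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def sirna_guide(target_mrna):
    return "".join(COMPLEMENT[base] for base in reversed(target_mrna))

target = "AGCUUAGGCAUCGAUCGGAAU"     # hypothetical 21-nt stretch of a target mRNA
print(sirna_guide(target))           # -> AUUCCGAUCGAUGCCUAAGCU
```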

“Both Northern blotting and RT-PCR enable the expression of one or a few genes to be measured simultaneously. In contrast, the technique of microarrays allows gene expression to be measured across the full genome of an organism in a single step. This massive scale genome analysis technique is very useful when comparing gene expression profiles between two samples. […] This can identify gene subsets that are under- or over-expressed in one sample relative to the second sample to which it is compared.”

Links:

Molecular biology.
Charles Darwin. Alfred Wallace. Gregor Mendel. Wilhelm Johannsen. Heinrich Waldeyer. Theodor Boveri. Walter Sutton. Friedrich Miescher. Phoebus Levene. Oswald Avery. Colin MacLeod. Maclyn McCarty. James Watson. Francis Crick. Rosalind Franklin. Andrew Fire. Craig Mello.
Gene. Genotype. Phenotype. Chromosome. Nucleotide. DNA. RNA. Protein.
Chargaff’s rules.
Photo 51.
Human Genome Project.
Long interspersed nuclear elements (LINEs). Short interspersed nuclear elements (SINEs).
Histone. Nucleosome.
Chromatin. Euchromatin. Heterochromatin.
Mitochondrial DNA.
DNA replication. Helicase. Origin of replication. DNA polymerase. Okazaki fragments. Leading strand and lagging strand. DNA ligase. Semiconservative replication.
Mutation. Point mutation. Indel. Frameshift mutation.
Genetic polymorphism. Single-nucleotide polymorphism (SNP).
Genome-wide association study (GWAS).
Molecular cloning. Restriction endonuclease. Multiple cloning site (MCS). Bacterial artificial chromosome.
Gel electrophoresis. Southern blot. Polymerase chain reaction (PCR). Reverse transcriptase PCR (RT-PCR). Quantitative PCR (qPCR).
GenBank. European Molecular Biology Laboratory (EMBL). Encyclopedia of DNA Elements (ENCODE).
RNA polymerase II. TATA box. Transcription factor IID. Stop codon.
Protein biosynthesis.
snRNA (small nuclear RNA).
Untranslated region (UTR sequences).
Transfer RNA.
Micro RNA (miRNA).
Dicer (enzyme).
RISC (RNA-induced silencing complex).
Argonaute.
Lipid-Based Nanoparticles for siRNA Delivery in Cancer Therapy.
Long non-coding RNA.
Ribozyme/catalytic RNA.
RNA-sequencing (RNA-seq).

May 5, 2018 Posted by | Biology, Books, Chemistry, Genetics, Medicine, Molecular biology | Leave a comment

Trade-offs when doing medical testing

I was considering whether or not to blog today the molecular biology text I recently read, but I decided against it. However, as I did feel like blogging today, I decided instead to add here a few comments I left on SCC. I rarely leave comments on other blogs, but it does happen, and the question I was ‘answering’ (partially – other guys had already added some pretty good comments by the time I joined the debate) is probably one that I imagine a lot of e.g. undergrads are asking themselves, namely: “What’s the standard procedure, when designing a medical test, to determine the right tradeoff between sensitivity and specificity (where I’m picturing a tradeoff involved in choosing the threshold for a positive test or something similar)?”

The ‘short version’, if you want an answer to this question, is probably to read Newman and Kohn’s wonderful book on these – and related – topics (which I blogged here), but that’s not actually a ‘short answer’ in terms of how people usually think about these things. I’ll just reproduce my own comment here, and mention that other guys had already covered some key topics by the time I joined ‘the fray’:

“Some good comments already. I don’t know to which extent the following points have been included in the links provided, but I decided to add them here anyway.

One point worth emphasizing is that you’ll always want a mixture of sensitivity and specificity (or, more broadly, test properties) that means your test has clinical relevance. This relates both to the type of test you consider and to when/whether to test at all (rather than treat/not treat without testing first). If you’re worried someone has disease X, and the clinical presentation means there’s a high risk of said individual having the disease, some tests will for example be inappropriate even if they are very good at distinguishing individuals who require treatment from individuals who do not, for example because they take time to perform which the patient might not have – not an uncommon situation in emergency medicine. If you’re so worried you’d treat him regardless of the test result, you shouldn’t test. And the same goes for e.g. low-sensitivity screens; if a positive test result of a screen does not imply that you’ll actually act on the result of the screen, you shouldn’t perform it (in screening contexts cost effectiveness is usually critically dependent on how you follow up on the test result, and in many contexts inadequate follow-up means that the value of the test goes down a lot […on a related note I have been thinking that I was perhaps not as kind as I could have been when I reviewed Juth & Munthe’s book, and I have actually considered whether or not to change my rating of the book; it does give a decent introduction to some key trade-offs with which you’re confronted when you’re dealing with topics related to screening]).

Cost effectiveness is another variable that would/should probably (in an ideal world?) enter the analysis when you’re judging what is or is not a good mixture of sensitivity and specificity – you should be willing to pay more for more precise tests, but only to the extent that those more precise tests lead to better outcomes (you’re usually optimizing over patient outcomes, not test accuracy).

Skef also mentions this, but the relative values of specificity and sensitivity may well vary during the diagnostic process; i.e. the (ideal) trade-off will depend on what you plan to use the test for. Is the idea behind testing this guy to make (reasonably?) sure he doesn’t have colon cancer, or to figure out if he needs a more accurate, but also more expensive, test? Screening setups will usually involve a multi-level testing structure, and tests at different levels will not treat these trade-offs the same way, nor should they. This also means that the properties of individual tests cannot really be viewed in isolation, which makes the problem of finding ‘the ideal mix’ of test properties (whatever these might be) even harder; if you have three potential tests for example, it’s not enough to compare the tests individually against each other; you’d ideally also want to implicitly take into account that different combinations of tests have different properties, and that the timing of the test may also be an important parameter in the decision problem.”
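To make the ‘clinical relevance depends on the setting’ point a bit more concrete, here is a small sketch computing positive and negative predictive values for a hypothetical test with fixed sensitivity and specificity at a few different pre-test probabilities. All the numbers are invented, and the sketch deliberately ignores the cost, timing and multi-level-testing aspects discussed above:

```python
# A test with fixed sensitivity/specificity performs very differently in a
# low-prevalence screening setting than in a high-risk clinical setting.
def predictive_values(sens, spec, prevalence):
    tp = sens * prevalence              # true positives (per unit of population)
    fn = (1 - sens) * prevalence        # false negatives
    tn = spec * (1 - prevalence)        # true negatives
    fp = (1 - spec) * (1 - prevalence)  # false positives
    return tp / (tp + fp), tn / (tn + fn)   # PPV, NPV

for prev in (0.001, 0.05, 0.5):   # population screen, GP setting, 'already very worried'
    ppv, npv = predictive_values(sens=0.95, spec=0.90, prevalence=prev)
    print(f"pre-test probability {prev:>5.1%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```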

On a related note I think that, in general, looking for some kind of ‘approved method’ that you can use to save yourself from thinking is a very dangerous approach when you’re doing applied statistics. If you’re not thinking about the relevant trade-offs and how to deal with them, odds are you’re missing a big part of the picture. If somebody claims to have somehow discovered some simple approach to dealing with all of the relevant trade-offs, well, you should be very skeptical. Statistics usually don’t work like that.

May 4, 2018 Posted by | Medicine, Statistics | Leave a comment

100 cases in emergency medicine and critical care (I)

“This book has been written for medical students, doctors and nurse practitioners. One of the best methods of learning is case-based learning. This book presents a hundred such ‘cases’ or ‘patients’ which have been arranged by system. Each case has been written to stand alone […] the focus of each case is to recognise the initial presentation, the underlying pathophysiology, and to understand broad treatment principles.”

I really liked the book; as was also the case for the surgery book I recently read, the cases included in these publications are slightly longer than they were in some of the previous publications in the series I’ve read, and I think this makes a big difference in terms of how much you actually get out of each case.

Below I have added some links and quotes related to the first half of the book’s coverage.

Tracheostomy.
Malnutrition (“it is estimated that around a quarter of hospital inpatients are inadequately nourished. This may be due to increased nutritional requirements […], nutritional losses (e.g. malabsorption, vomiting, diarrhoea) or reduced intake […] A patient’s basal energy expenditure is doubled in head injuries and burns.”)
Acute Adult Supraglottitis. (“It is important to appreciate that halving the radius of the airway will increase its resistance by 16 times (Poiseuille’s equation), and hearing stridor means there is around 75% airway obstruction.”) A quick check of the ‘16 times’ arithmetic is included below this list of links.
Out-of-hospital cardiac arrest. (“After successful resuscitation from an OHCA, only 10% of patients will survive to discharge, and many of these individuals will have significant neurologic disability.”)
Bacterial meningitis. (“Meningococcal meningitis has a high mortality, with 10%-15% of patients dying of the disease despite appropriate therapy.”)
Diabetic ketoacidosis.
Anaphylaxis (“Always think of anaphylaxis when seeing patients with skin/mucosal symptoms, respiratory difficulty and/or hypotension, especially after exposure to a potential allergen.”)
Early goal-directed therapy. (“While randomised evidence on the benefit of [this approach] is conflicting, it is standard practice in most centres.” I’m not sure I’d agree with the authors that the evidence is ‘conflicting’; it looks to me like it’s reasonably clear at this point: “In this meta-analysis of individual patient data, EGDT did not result in better outcomes than usual care and was associated with higher hospitalization costs across a broad range of patient and hospital characteristics.”)
Cardiac tamponade. Hypovolaemic shock. Permissive hypotension. Focused Assessment with Sonography in Trauma (FAST). (“Shock refers to inadequate tissue perfusion and tissue oxygenation. The commonest cause in an injured patient is hypovolaemic shock due to blood loss, but other causes include cardiogenic shock due to myocardial dysfunction, neurogenic shock due to sympathetic dysfunction or obstructive shock due to obstruction of the great vessels or heart. […] tachycardia, cool skin and reduced pulse pressure are early signs of shock until proven otherwise.”)
Intravenous therapy. A Comparison of Albumin and Saline for Fluid Resuscitation in the Intensive Care Unit.
Thermal burns. Curling’s ulcer. Escharotomy. Wallace rule of nines. Fluid management in major burn injuries. (“Alkali burns are more harmful than acidic. […] Electrical burns cause more destruction than the external burn may suggest. They are associated with internal destruction, as the path of least resistance is nerves and blood vessels. They can also cause arrhythmias and an electrocardiogram should be performed.”)
Stevens-Johnson syndrome. Nikolsky’s sign. SCORTEN scale.
Cardiac arrest. (“The mantra in the ED is that ‘you are not dead until you are warm and dead'”).
Myocardial infarction. (“The most important goal of the acute management of STEMI is coronary reperfusion, which may be achieved either by percutaneous coronary intervention (PCI) or use of fibrinolytic agents (thrombolysis). PCI is the preferred strategy if it can be delivered within 120 minutes of first medical contact (and ideally within 90 minutes) […] several randomised trials have shown that PCI provides improved short- and long-term survival outcomes compared to fibrinolysis, providing it can be performed within the appropriate time frame.”)
Asthma exacerbation. (“the prognosis for asthmatics admitted to the Intensive Care Unit is guarded, with an in-hospital mortality of 7% in those who are mechanically ventilated.”)
Acute exacerbation of COPD. Respiratory Failure.
Pulmonary embolism. CT pulmonary angiography. (“Obstructive cardiopulmonary disease is the main diagnosis to exclude in patients presenting with shortness of breath and syncope.”)
Sepsis. Sepsis Six. qSOFA. (“The main clinical features of sepsis include hypotension […], tachycardia […], a high (>38.3°C) or low (<36°C) temperature, altered mental status and signs of peripheral shutdown (cool skin, prolonged capillary refill, cyanosis) in severe cases. […] Sepsis is associated with substantial in-hospital morbidity and mortality, and an increased risk of death and re-admission to hospital even if the patient survives until discharge. Prognostic factors in sepsis include patient factors (increasing age, higher comorbidity), site of infection (urosepsis is associated with better outcomes compared to other sources), type of pathogen (nosocomial infections have higher mortality), early administration of antibiotics (which may reduce mortality by 50%) and restoration of perfusion.”)
Acute kidney injury. (“Classically there are three major causative categories of AKI: (i) pre-renal (i.e. hypoperfusion), (ii) renal (i.e. an intrinsic process with the kidneys) and (iii) post-renal (i.e. urinary tract obstruction). The initial evaluation should attempt to determine which of these are leading to AKI in the patient. […] two main complications that arise with AKI [are] volume and electrolyte issues.”)
Acute chest syndrome.
Thrombotic thrombocytopenic purpura. Schistocyte. Plasmapheresis.
Lower gastrointestinal bleeding. Warfarin. Prothrombin complex concentrate. (“Warfarin is associated with a 1%-3% risk of bleeding each year in patients with atrial fibrillation, and the main risk factors for this include presence of comorbidities, interacting medications, poor patient compliance, acute illness and dietary variation in vitamin K intake.”)
Acute back pain. Malignant spinal cord compression (-MSCC). (“Acute back pain is not an uncommon reason for presentation to the Emergency Department […] Although the majority of such presentations represent benign pathology, it is important to exclude more serious pathology such as cord or cauda equina compression, infection or abscess. Features in the history warranting greater concern include a prior history of cancer, recent infection or steroid use, fever, pain in the thoracic region, pain that improves with rest and the presence of urinary symptoms. Similarly, ‘red flag’ examination findings include gait ataxia, generalized weakness, upper motor neurone signs (clonus, hyper-reflexia, extensor plantars), a palpable bladder, saddle anaesthesia and reduced anal tone. […] MSCC affects up to 5% of all cancer patients and is the first manifestation of cancer in a fifth of patients.”)
Neutropenic sepsis. (“Neutropaenic sepsis […] arises as a result of cytotoxic chemotherapy suppressing the bone marrow, leading to depletion of white blood cells and leaving the individual vulnerable to infection. It is one of the most common complications of cancer therapy, carrying a significant mortality rate of ~5%-10%, and should be regarded as a medical emergency. Any patient receiving chemotherapy and presenting with a fever should be assumed to have neutropaenic sepsis until proven otherwise.”)
Bacterial Pneumonia. CURB-65 Pneumonia Severity Score.
Peptic ulcer disease. Upper gastrointestinal bleeding. Glasgow-Blatchford score. Rockall score.
Generalised tonic-clonic seizure. Status Epilepticus.
“Chest pain is an extremely common presentation in the ED […] Key features that may help point towards particular diagnoses include • Location and radiation – Central chest pain that radiates to the face, neck or arms is classic for MI, whereas the pain may be more posterior (between shoulder blades) in aortic dissection and unilateral in lung disease. • Onset – Sudden or acute onset pain usually indicates a vascular cause (e.g. PE or aortic dissection), whereas cardiac chest pain is typically more subacute in onset and increases over time. • Character – Cardiac pain is usually described as crushing but may often be a gnawing discomfort, whereas pain associated with aortic dissection and gastrointestinal disorders is usually tearing/ripping and burning, respectively. • Exacerbation/alleviation […] myocardial ischaemia will manifest as pain brought on by exercise and relieved by rest, which is a good discriminator between cardiac and non-cardiac pain.”
Syncope. Mobitz type II AV block. (“The differential diagnosis for syncope is seizure, and the two may be distinguished by the absence of a quick or spontaneous recovery with a seizure, where a post-ictal state (sleepiness, confusion, lethargy) is present.”)
Atrial Fibrillation. CHA2DS2-VASc and HAS-BLED risk scores. (“AF with rapid ventricular rates is generally managed with control of heart rates through use of beta-blockers or calcium-channel blockers. • Unstable patients with AF may require electrical cardioversion to restore sinus rhythm.”)
Typhoid fever. Dysentery.
Alcohol toxicity. (“Differentials which may mimic acute alcohol intoxication include • Hypoglycemia • Electrolyte disturbance • Vitamin depletion (B12/folate) • Head trauma • Sepsis • Other toxins or drug overdose • Other causes for CNS depression”)
Tricyclic Antidepressant Toxicity. (“Over 50% of suicidal overdoses involve more than one medication and are often taken with alcohol.”)
Suicide. SADPERSONS scale. (“Intentional self-harm results in around 150,000 attendances to the ED [presumably ‘every year’ – US]. These patients are 100 times more likely to commit suicide within the next year compared to the general population. Self-harm and suicide are often used interchangeably, but are in fact two separate entities. Suicide is a self-inflicted intentional act to cause death, whereas self-harm is a complex behaviour to inflict harm but not associated with the thought of dying – a method to relieve mental stress by inflicting physical pain.”)
Cauda equina syndrome (-CES). (“signs and symptoms of lower extremity weakness and pain developing acutely after heavy lifting should raise suspicion for a herniated intervertebral disc, which is the commonest cause of CES. […] CES is a neurosurgical emergency. The goal is to prevent irreversible loss of bowel and bladder function and motor function of the lower extremities. […] A multitude of alternative diagnoses may masquerade as CES – stroke, vascular claudication, deep venous thrombosis, muscle cramps and peripheral neuropathy.”)
Concussion.
Subarachnoid hemorrhage. Arteriovenous malformation.
Ischemic Stroke. Alteplase. Mechanical thrombectomy for acute ischemic stroke. (“evaluation and treatment should be based on the understanding that the damage that is done (infarcted brain) is likely to be permanent, and the goal is to prevent further damage (ischaemic brain) and treat reversible causes (secondary prevention). Along those lines, time is critical to the outcome of the patient.”)
Mechanical back pain. Sciatica.
Dislocated shoulder. Bankart lesion. Hill-Sachs lesion. Kocher’s method.
Supracondylar Humerus Fractures. (“Supracondylar fractures in the adult are relatively uncommon but are seen in major trauma or in elderly patients where bone quality may be compromised. Elbow fractures need careful neurovascular evaluation […] There are three major nerves that pass through the region: 1. The median nerve […] 2. The radial nerve […] 3. The ulnar nerve […] It is important to assess these three nerves and to document their function individually. The brachial artery passes through the cubital fossa and may be directly injured by bone fragments or suffer intimal damage. […] This is a true orthopaedic and vascular emergency as the upper limb can only tolerate an ischaemia time of around 90 minutes before irreparable damage is sustained.”)
Boxer’s fracture.
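As for the Poiseuille point quoted in the supraglottitis entry above, the ‘16 times’ figure follows directly from resistance scaling with the fourth power of the radius (R = 8μL/(πr^4) for laminar flow, which is of course a simplification when applied to airways). A two-line check, holding viscosity and length fixed:

```python
# Poiseuille: R = 8 * mu * L / (pi * r**4), so with viscosity (mu) and tube
# length (L) held fixed, resistance scales as 1 / r**4.
def relative_resistance(radius_fraction):
    return 1 / radius_fraction ** 4

print(relative_resistance(0.5))    # halving the radius -> 16.0 times the resistance
print(relative_resistance(0.75))   # a 25% narrowing -> ~3.2 times the resistance
```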

May 2, 2018 Posted by | Books, Cancer/oncology, Cardiology, Infectious disease, Medicine, Nephrology, Neurology, Psychiatry, Studies | Leave a comment

Endocrinology (part 6 – neuroendocrine disorders and Paget’s disease)

I’m always uncertain as to how much content to cover when blogging books like this one, and I usually cover handbooks in (relatively) less detail than other books because of the amount of work it takes to cover all topics of interest. However, I didn’t feel after writing my last post in the series that I had really finished with this book, in terms of blogging it; in fact I distinctly remember feeling a bit annoyed towards the end of writing my fifth post that I couldn’t justify covering the detailed account of Paget’s disease included in the last part of the chapter, even though all of that stuff was new knowledge to me, and quite interesting. But these posts take some effort, and sometimes I cut them short just to at least blog something, rather than have an unpublished draft lying around.

In this post I’ll first include some belated coverage of Paget’s disease, which is from the book’s chapter 6, and then I’ll cover some of the stuff included in chapter 8 of the book, about neuroendocrine disorders. Chapter 8 deals exclusively with various types of (usually quite rare) tumours. I decided to not cover chapter 7, which is devoted to paediatric endocrinology.

“Paget’s disease is the result of greatly increased local bone turnover, which occurs particularly in the elderly […] The 1° abnormality in Paget’s disease is gross overactivity of the osteoclasts, resulting in greatly ↑ bone resorption. This secondarily results in ↑ osteoblastic activity. The new bone is laid down in a highly disorganized manner […] Paget’s disease can affect any bone in the skeleton […] In most patients, it affects several sites, but, in about 20% of cases, a single bone is affected (monostotic disease). Typically, the disease will start in one end of a long bone and spread along the bone at a rate of about 1cm per year. […] Paget’s disease alters the mechanical properties of the bone. Thus, pagetic bones are more likely to bend under normal physiological loads and are thus liable to fracture. […] Pagetic bones are also larger than their normal counterparts. This can lead to ↑ arthritis at adjacent joints and to pressure on nerves, leading to neurological compression syndromes and, when it occurs in the skull base, sensorineural deafness.”

“Paget’s disease is present in about 2% of the UK population over the age of 55. Its prevalence increases with age, and it is more common in ♂ than ♀. Only about 10% of affected patients will have symptomatic disease. […] Most notable feature is pain. […] The diagnosis of Paget’s disease is primarily radiological. […] An isotope bone scan is frequently helpful in assessing the extent of skeletal involvement […] Deafness is present in up to half of cases of skull base Paget’s. • Other neurological complications are rare. […] Osteogenic sarcoma [is a] very rare complication of Paget’s disease. […] Any increase of pain in a patient with Paget’s disease should arouse suspicion of sarcomatous degeneration. A more common cause, however, is resumption of activity of disease. […] Treatment with agents that decrease bone turnover reduces disease activity […] Although such treatment has been shown to help pain, there is little evidence that it benefits other consequences of Paget’s disease. In particular, the deafness of Paget’s disease does not regress after treatment […] Bisphosphonates have become the mainstay of treatment. […] Goals of treatment [are to:] • Minimize symptoms. • Prevent long-term complications. • Normalize bone turnover. • Alkaline phosphatase in normal range. • No actual evidence that treatment achieves this.”

The rest of this post will be devoted to covering topics from chapter 8:

“Neuroendocrine cells are found in many sites throughout the body. They are particularly prominent in the GI tract and pancreas and […] have the ability to synthesize, store, and release peptide hormones. […] the majority of neuroendocrine tumours occur within the gastroenteropancreatic axis. […] >50% are traditionally termed carcinoid tumours […] with the remainder largely comprising pancreatic islet cell tumours. • Carcinoid and islet cell tumours are generally slow-growing. […] There is a move towards standardizing the terminology of these tumours […] The term NEN [neuroendocrine neoplasia] included low- and intermediate-grade neoplasia (previously referred to as carcinoid or atypical carcinoid) which are now referred to as neuroendocrine tumours (NETs) and high-grade neoplasia (neuroendocrine carcinoma, NEC). There is a confusing array of classifications of NENs, based on anatomical origin, histology, and secretory activity. • Many of these classifications are well established and widely used.”

“It is important to understand the differences between ‘differentiation’, which is the extent to which the neoplastic cells resemble their non-tumourous counterparts, and ‘grade’, which is the inherent aggressiveness of the tumour. […] Neuroendocrine carcinomas are the most aggressive NENs and can be either small or large cell type. […] NENs are diagnosed based on histological features of biopsy specimens. The presenting features of the tumours vary like any other tumour, based on their anatomical location, such as abdominal pain, intestinal obstruction. Many are incidentally discovered during endoscopy or imaging for unrelated conditions. In a database study, 49% of NENs were localized, 24% had regional metastases, and 27% had distant metastases. […] These tumours rarely manifest themselves due to their secretory effect. [This is quite different from some of the other tumours they covered elsewhere in the book – US] […] Only a third of patients with neuroendocrine tumours develop symptoms due to hormone secretion.”

“Surgery is the treatment of choice for NENs grades 1 and 2, except in the presence of widespread distant metastases and extensive local invasion. […] Somatostatin analogues (SSA) have relatively minor side effects and provide long-term symptom control. •Octreotide and lanreotide […] reduce the level of biochemical tumour markers in the majority of patients and control symptoms in around 70% of cases. […] A combination of interferon with octreotide has been shown to produce biochemical and symptomatic improvement in patients who have previously had no significant benefit from either drug alone. […] Cytotoxic chemotherapy may be considered in patients with progressive, advanced, or uncontrolled symptomatic disease.”

“Despite the changes in nomenclature of NENs […] the ‘carcinoid crisis’ [apparently also termed ‘malignant carcinoid syndrome’, US] is still an important descriptive term. It is a potentially life-threatening condition that should be prevented, where possible, and treated as an emergency. • Clinical features include hypotension, tachycardia, arrhythmias, flushing, diarrhoea, bronchospasm, and altered sensorium. […] carcinoid crisis can be triggered by manipulation of the tumours, such as during biopsy, surgery, or palpation. • These result in the release of biologically active compounds from the tumours. […] Carcinoid heart disease […] result in valvular stenosis or regurgitation and eventually heart failure. This condition is seen in 40-50% of patients with carcinoid syndrome and 3-4% of patients with neuroendocrine tumours”.

“An insulinoma is a functioning neuroendocrine tumour of the pancreas that causes hypoglycemia through inappropriate secretion of insulin. • Unlike other neuroendocrine tumours of the pancreas, more than 90% of insulinomas are benign. […] annual incidence of insulinomas is of the order of 1-2 per million population. […] The treatment of choice in all but poor surgical candidates is operative removal. […] In experienced surgical hands, the mortality is less than 1%. […] Following the removal of solitary insulinoma [>80% of cases], life expectancy is restored to normal. Malignant insulinomas, with metastases usually to the liver, have a natural history of years, rather than months, and may be controlled with medical therapy or specific antitumour therapy […] • Average 5-year survival estimated to be approximately 35% for malignant insulinomas. […] Gastrinomas are the most common functional malignant pancreatic endocrine tumours. […] The incidence of gastrinomas is 0.5-2/million population/year. […] Gastrin […] is the principal gut hormone stimulating gastric acid secretion. • The Zollinger-Ellison (ZE) syndrome is characterized by gastric acid oversecretion and manifests itself as severe peptic ulcer disease (PUD), gastro-oesophageal reflux, and diarrhoea. […] 10-year survival [in patients with gastrinomas] without liver metastases is 95%. […] Where there are diffuse metastases, […] a 10-year survival of approximately 15% [is observed].”

One of the things I was thinking about before deciding whether or not to blog this chapter was whether the (fortunately!) rare conditions encountered in the chapter really ‘deserved’ to be covered. Unlike what is the case for, say, breast cancer or colon cancer, most people won’t know someone who’ll die from malignant insulinoma. However, although these conditions are very rare, I can’t stop myself from thinking that they’re also quite interesting, and I don’t care much about whether I know someone with a disease I’ve read about. And if you think these conditions are rare, well, for glucagonomas “The annual incidence is estimated at 1 per 20 million population”. These very rare conditions really serve as a reminder of how great our bodies are at dealing with all kinds of problems we’ve never even thought about. We don’t think about them precisely because a problem so rarely arises – but just now and then, well…

Let’s talk a little bit more about those glucagonomas:

“Glucagonomas are neuroendocrine tumours that usually arise from the α cells of the pancreas and produce the glucagonoma syndrome through the secretion of glucagon and other peptides derived from the preproglucagon gene. • The large majority of glucagonomas are malignant, but they are also very indolent tumours, and the diagnosis may be overlooked for many years. • Up to 90% of patients will have lymph node or liver metastases at the time of presentation. • They are classically associated with the rash of necrolytic migratory erythema. […] The characteristic rash […] occurs in >70% of cases […] glucose intolerance is a frequent association (>90%). • Sustained gluconeogenesis also causes amino acid deficiencies and results in protein catabolism which can be associated with unrelenting weight loss in >60% of patients. • Glucagon has a direct suppressive effect on the bone marrow, resulting in a normochromic normocytic anaemia in almost all patients. […] Surgery is the only curative option, but the potential for a complete cure may be as low as 5%.”

“In 1958, Verner and Morrison first described a syndrome consisting of refractory watery diarrhoea and hypokalaemia, associated with a neuroendocrine tumour of the pancreas. • The syndrome of watery diarrhea, hypokalaemia and acidosis (WDHA) is due to secretion of vasoactive intestinal polypeptide (VIP). • Tumours that secrete VIP are known as VIPomas. VIPomas account for <10% of islet cell tumours and mainly occur as solitary tumours. >60% are malignant […] The most prominent symptom in most patients is profuse watery diarrhoea […] Surgery to remove the tumour is the treatment of first choice […] and may be curative in around 40% of patients. […] Somatostatin analogues produce effective symptomatic relief from the diarrhoea in most patients. Long-term use does not result in tumour regression. […] Chemotherapy […] has resulted in response rates of >30%.”

So by now we know that somatostatin analogues can provide symptom relief in a variety of contexts when you’re dealing with these conditions. But wait, what happens if you get a functional tumour of the cells that produce somatostatin? Will this mean that you just feel great all the time, or that you at least don’t have any symptoms of disease? Well, not exactly…

“Somatostatinomas are very rare neuroendocrine tumours, occurring both in the pancreas and in the duodenum. • >60% are large tumours located in the head or body of the pancreas. • The clinical syndrome may be diagnosed late in the course of disease when metastatic spread to local lymph nodes and the liver has already occurred. […] • Glucose intolerance or frank diabetes mellitus may have been observed for many years prior to the diagnosis and retrospectively often represents the first clinical sign. It is probably due to the inhibitory effect of somatostatin on insulin secretion. • A high incidence of gallstones has been described similar to that seen as a side effect with long-term somatostatin analogue therapy. • Diarrhoea, steatorrhoea, and weight loss appear to be consistent clinical features […this despite the fact that you use the hormone produced by these tumours to manage diarrhea in other endocrine tumours – it’s stuff like this which makes these rare disorders far from boring to read about! US] and may be associated with inhibition of the exocrine pancreas by somatostatin.”

May 1, 2018 Posted by | Books, Cancer/oncology, Cardiology, Diabetes, Epidemiology, Medicine, Neurology, Pharmacology | Leave a comment

Medical Statistics (III)

In this post I’ll include some links and quotes related to topics covered in chapters 4, 6, and 7 of the book. Before diving in, I’ll however draw attention to some of Gerd Gigerenzer’s work, as it is quite relevant in particular to the coverage included in chapter 4 (‘Presenting research findings’), even if the authors seem unaware of this. One of Gigerenzer’s key insights, which I consider important and which I have thus tried to keep in mind, unfortunately goes unmentioned in the book; namely the idea that how you communicate risk may be very important in terms of whether or not people actually understand what you are trying to tell them. A related observation is that people have studied these things and figured out that some types of risk communication are demonstrably better than others at enabling people to understand the issues at hand and the trade-offs involved in a given situation. I covered some of these ideas in a comment on SCC some time ago; if those comments spark your interest you should definitely go read the book.

IMRAD format.
CONSORT Statement (randomized trials).
Equator Network.

“Abstracts may appear easy to write since they are very short […] and often required to be written in a structured format. It is therefore perhaps surprising that they are sometimes poorly written, too bland, contain inaccuracies, and/or are simply misleading. The reasons for poor quality abstracts are complex; abstracts are often written at the end of a long process of data collection, analysis, and writing up, when time is short and researchers are weary. […] statistical issues […] can lead to an abstract that is not a fair representation of the research conducted. […] it is important that the abstract is consistent with the body of text and that it gives a balanced summary of the work. […] To maximize its usefulness, a summary or abstract should include estimates and confidence intervals for the main findings and not simply present P values.”

“The methods section should describe how the study was conducted. […] it is important to include the following: *The setting or area […] The date(s) […] subjects included […] study design […] measurements used […] source of any non-original data […] sample size, including a justification […] statistical methods, including any computer software used […] The discussion section is where the findings of the study are discussed and interpreted […] this section tends to include less statistics than the results section […] Some medical journals have a specific structure for the discussion for researchers to follow, and so it is important to check the journal’s guidelines before submitting. […] [When] reporting statistical analyses from statistical programs: *Don’t put unedited computer output into a research document. *Extract the relevant data only and reformat as needed […] Beware of presenting percentages for very small samples as they may be misleading. Simply give the numbers alone. […] In general the following is recommended for P values: *Give the actual P value whenever possible. *Rounding: Two significant figures are usually enough […] [Confidence intervals] should be given whenever possible to indicate the precision of estimates. […] Avoid graphs with missing zeros or stretched scales […] a table or graph should stand alone so that a reader does not need to read the […] article to be able to understand it.”
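The reporting advice above (actual P values given to about two significant figures, and confidence intervals alongside the estimates) is easy to wrap in a small helper. The function below is only a sketch meant to illustrate the recommendations; the numbers in the example call are invented, and it assumes a Normal-approximation 95% interval (estimate ± 1.96 standard errors):

```python
# Format an estimate per the quoted advice: estimate with a 95% CI, plus an
# actual P value rounded to two significant figures (example numbers invented).
def report(estimate, se, p, z=1.96):
    lower, upper = estimate - z * se, estimate + z * se
    p_str = f"P = {p:.2g}" if p >= 0.001 else "P < 0.001"
    return f"{estimate:.2f} (95% CI {lower:.2f} to {upper:.2f}); {p_str}"

print(report(estimate=1.70, se=0.40, p=0.0234))
# -> 1.70 (95% CI 0.92 to 2.48); P = 0.023
```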

Statistical data type.
Level of measurement.
Descriptive statistics.
Summary statistics.
Geometric mean.
Harmonic mean.
Mode.
Interquartile range.
Histogram.
Stem and leaf plot.
Box and whisker plot.
Dot plot.

“Quantitative data are data that can be measured numerically and may be continuous or discrete. *Continuous data lie on a continuum and so can take any value between two limits. […] *Discrete data do not lie on a continuum and can only take certain values, usually counts (integers) […] On an interval scale, differences between values at different points of the scale have the same meaning […] Data can be regarded as on a ratio scale if the ratio of the two measurements has a meaning. For example we can say that twice as many people in one group had a particular characteristic compared with another group and this has a sensible meaning. […] Quantitative data are always ordinal – the data values can be arranged in a numerical order from the smallest to the largest. […] *Interval scale data are always ordinal. Ratio scale data are always interval scale data and therefore must also be ordinal. *In practice, continuous data may look discrete because of the way they are measured and/or reported. […] All continuous measurements are limited by the accuracy of the instrument used to measure them, and many quantities such as age and height are reported in whole numbers for convenience”.

“Categorical data are data where individuals fall into a number of separate categories or classes. […] Different categories of categorical data may be assigned a number for coding purposes […] and if there are several categories, there may be an implied ordering, such as with stage of cancer where stage I is the least advanced and stage IV is the most advanced. This means that such data are ordinal but not interval because the ‘distance’ between adjacent categories has no real measurement attached to it. The ‘gap’ between stages I and II disease is not necessarily the same as the ‘gap’ between stages III and IV. […] Where categorical data are coded with numerical codes, it might appear that there is an ordering but this may not necessarily be so. It is important to distinguish between ordered and non-ordered data because it affects the analysis.”

“It is usually useful to present more than one summary measure for a set of data […] If the data are going to be analyzed using methods based on means then it makes sense to present means rather than medians. If the data are skewed they may need to be transformed before analysis and so it is best to present summaries based on the transformed data, such as geometric means. […] For very skewed data rather than reporting the median, it may be helpful to present a different percentile (i.e. not the 50th), which better reflects the shape of the distribution. […] Some researchers are reluctant to present the standard deviation when the data are skewed and so present the median and range and/or quartiles. If analyses are planned which are based on means then it makes sense to be consistent and give standard deviations. Further, the useful relationship that approximately 95% of the data lie between mean +/- 2 standard deviations, holds even for skewed data […] If data are transformed, the standard deviation cannot be back-transformed correctly and so for transformed data a standard deviation cannot be given. In this case the untransformed standard deviation can be given or another measure of spread. […] For discrete data with a narrow range, such as stage of cancer, it may be better to present the actual frequency distribution to give a fair summary of the data, rather than calculate a mean or dichotomize it. […] It is often useful to tabulate one categorical variable against another to show the proportions or percentages of the categories of one variable by the other”.
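A small illustration of the geometric-mean point, on an invented and positively skewed sample: the arithmetic mean gets pulled up by the long right tail, while the back-transformed mean of the logs (the geometric mean) sits much closer to the median. The standard deviation of the logs, by contrast, only describes spread on the log scale and cannot simply be back-transformed, which is the point made in the quote:

```python
import math
import statistics as st

x = [1, 1, 2, 2, 3, 3, 4, 5, 8, 21]    # invented, positively skewed sample
log_x = [math.log(value) for value in x]

print("arithmetic mean:", st.mean(x))                         # 5, pulled up by the 21
print("median:", st.median(x))                                # 3.0
print("geometric mean:", round(math.exp(st.mean(log_x)), 2))  # ~3.2
print("SD of the logs:", round(st.stdev(log_x), 2))           # spread on the log scale only
```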

Random variable.
Independence (probability theory).
Probability.
Probability distribution.
Binomial distribution.
Poisson distribution.
Continuous probability distribution.
Normal distribution.
Uniform distribution.

“The central limit theorem is a very important mathematical theorem that links the Normal distribution with other distributions in a unique and surprising way and is therefore very useful in statistics. *The sum of a large number of independent random variables will follow an approximately Normal distribution irrespective of their underlying distributions. *This means that any random variable which can be regarded as the sum of a large number of small, independent contributions is likely to follow the Normal distribution. [I didn’t really like this description as it’s insufficiently detailed for my taste (and this was pretty much all they wrote about the CLT in that chapter); and one problem with the CLT is that people often think it applies when it might not actually do so, because the data restrictions implied by the theorem(s) are not really fully appreciated. On a related note people often seem to misunderstand what these theorems actually say and where they apply – see e.g. paragraph 10 in this post. See also the wiki link above for a more comprehensive treatment of these topics – US] *The Normal distribution can be used as an approximation to the Binomial distribution when n is large […] The Normal distribution can be used as an approximation to the Poisson distribution as the mean of the Poisson distribution increases […] The main advantage in using the Normal rather than the Binomial or the Poisson distribution is that it makes it easier to calculate probabilities and confidence intervals”
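Since I complain above that the book’s treatment of the CLT is thin, here is a quick simulation of the basic idea under the standard assumptions (independent, identically distributed draws with finite variance): means of samples from a heavily skewed exponential distribution have a sampling distribution whose spread shrinks like 1/sqrt(n) and whose shape approaches the Normal as n grows. The sketch below only checks the mean and the spread; a histogram of the simulated means would show the shape converging as well:

```python
import random
import statistics as st

random.seed(1)

def simulated_sample_means(n, reps=20_000):
    # Means of n i.i.d. exponential(1) draws (a skewed distribution with mean 1, variance 1).
    return [st.mean(random.expovariate(1.0) for _ in range(n)) for _ in range(reps)]

for n in (1, 5, 30):
    means = simulated_sample_means(n)
    print(f"n={n:>2}: mean of sample means {st.mean(means):.3f} (theory 1), "
          f"sd {st.stdev(means):.3f} (theory {(1 / n) ** 0.5:.3f})")
```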

“The t distribution plays an important role in statistics as the sampling distribution of the sample mean divided by its standard error and is used in significance testing […] The shape is symmetrical about the mean value, and is similar to the Normal distribution but with a higher peak and longer tails to take account of the reduced precision in smaller samples. The exact shape is determined by the mean and variance plus the degrees of freedom. As the degrees of freedom increase, the shape comes closer to the Normal distribution […] The chi-squared distribution also plays an important role in statistics. If we take several variables, say n, which each follow a standard Normal distribution, and square each and add them, the sum of these will follow a chi-squared distribution with n degrees of freedom. This theoretical result is very useful and widely used in statistical testing […] The chi-squared distribution is always positive and its shape is uniquely determined by the degrees of freedom. The distribution becomes more symmetrical as the degrees of freedom increases. […] [The (noncentral) F distribution] is the distribution of the ratio of two chi-squared distributions and is used in hypothesis testing when we want to compare variances, such as in doing analysis of variance […] Sometimes data may follow a positively skewed distribution which becomes a Normal distribution when each data point is log-transformed […] In this case the original data can be said to follow a lognormal distribution. The transformation of such data from log-normal to Normal is very useful in allowing skewed data to be analysed using methods based on the Normal distribution since these are usually more powerful than alternative methods”.
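The chi-squared construction in the quote (take n standard Normal variables, square each, and add them) is easy to sanity-check by simulation, since a chi-squared distribution with n degrees of freedom should have mean n and variance 2n. A minimal sketch:

```python
import random
import statistics as st

random.seed(2)

def chi_squared_draw(df):
    # Sum of df squared standard Normal variables, as in the quoted definition.
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))

df = 4
draws = [chi_squared_draw(df) for _ in range(50_000)]
print(f"simulated mean {st.mean(draws):.2f} (theory: {df}), "
      f"variance {st.variance(draws):.2f} (theory: {2 * df})")
```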

Half-Normal distribution.
Bivariate Normal distribution.
Negative binomial distribution.
Beta distribution.
Gamma distribution.
Conditional probability.
Bayes theorem.

April 26, 2018 Posted by | Books, Data, Mathematics, Medicine, Statistics

A few diabetes papers of interest

i. Economic Costs of Diabetes in the U.S. in 2017.

“This study updates previous estimates of the economic burden of diagnosed diabetes and quantifies the increased health resource use and lost productivity associated with diabetes in 2017. […] The total estimated cost of diagnosed diabetes in 2017 is $327 billion, including $237 billion in direct medical costs and $90 billion in reduced productivity. For the cost categories analyzed, care for people with diagnosed diabetes accounts for 1 in 4 health care dollars in the U.S., and more than half of that expenditure is directly attributable to diabetes. People with diagnosed diabetes incur average medical expenditures of ∼$16,750 per year, of which ∼$9,600 is attributed to diabetes. People with diagnosed diabetes, on average, have medical expenditures ∼2.3 times higher than what expenditures would be in the absence of diabetes. Indirect costs include increased absenteeism ($3.3 billion) and reduced productivity while at work ($26.9 billion) for the employed population, reduced productivity for those not in the labor force ($2.3 billion), inability to work because of disease-related disability ($37.5 billion), and lost productivity due to 277,000 premature deaths attributed to diabetes ($19.9 billion). […] After adjusting for inflation, economic costs of diabetes increased by 26% from 2012 to 2017 due to the increased prevalence of diabetes and the increased cost per person with diabetes. The growth in diabetes prevalence and medical costs is primarily among the population aged 65 years and older, contributing to a growing economic cost to the Medicare program.”

The paper includes a lot of details about how they went about estimating these things, but I decided against including these details here – read the full paper if you’re interested. I did however want to add some additional details, so here goes:

“Absenteeism is defined as the number of work days missed due to poor health among employed individuals, and prior research finds that people with diabetes have higher rates of absenteeism than the population without diabetes. Estimates from the literature range from no statistically significant diabetes effect on absenteeism to studies reporting 1–6 extra missed work days (and odds ratios of more absences ranging from 1.5 to 3.3) (12–14). Analyzing 2014–2016 NHIS data and using a negative binomial regression to control for overdispersion in self-reported missed work days, we estimate that people with diabetes have statistically higher missed work days—ranging from 1.0 to 4.2 additional days missed per year by demographic group, or 1.7 days on average — after controlling for age-group, sex, race/ethnicity, diagnosed hypertension status (yes/no), and body weight status (normal, overweight, obese, unknown). […] Presenteeism is defined as reduced productivity while at work among employed individuals and is generally measured through worker responses to surveys. Multiple recent studies report that individuals with diabetes display higher rates of presenteeism than their peers without diabetes (12,15–17). […] We model productivity loss associated with diabetes-attributed presenteeism using the estimate (6.6%) from the 2012 study—which is toward the lower end of the 1.8–38% range reported in the literature. […] Reduced performance at work […] accounted for 30% of the indirect cost of diabetes.”
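For readers curious about the negative binomial regression mentioned in the quote, here’s a minimal sketch of how such an overdispersed count model might be fitted with statsmodels. The data are simulated and the variable names are hypothetical; this is not the paper’s actual specification or data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2_000
df = pd.DataFrame({
    "diabetes": rng.integers(0, 2, n),        # hypothetical 0/1 indicator
    "age_group": rng.integers(0, 4, n),       # hypothetical covariate
})
# simulate overdispersed counts of missed work days (gamma-Poisson mixture)
mu = np.exp(0.3 + 0.4*df["diabetes"] + 0.1*df["age_group"])
df["missed_days"] = rng.poisson(mu * rng.gamma(shape=1.0, scale=1.0, size=n))

fit = smf.negativebinomial("missed_days ~ diabetes + C(age_group)", data=df).fit(disp=0)
print(fit.summary())
print("rate ratio for diabetes:", np.exp(fit.params["diabetes"]))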

It is of note that even with a somewhat conservative estimate of presenteeism, this cost component is an order of magnitude larger than the absenteeism variable. It is worth keeping in mind that this ratio is likely to be different elsewhere; due to the way the American health care system is structured/financed – health insurance is to a significant degree linked to employment – you’d expect the estimated ratio to be different from what you might observe in countries like the UK or Denmark. Some more related numbers from the paper:

“Inability to work associated with diabetes is estimated using a conservative approach that focuses on unemployment related to long-term disability. Logistic regression with 2014–2016 NHIS data suggests that people aged 18–65 years with diabetes are significantly less likely to be in the workforce than people without diabetes. […] we use a conservative approach (which likely underestimates the cost associated with inability to work) to estimate the economic burden associated with reduced labor force participation. […] Study results suggest that people with diabetes have a 3.1 percentage point higher rate of being out of the workforce and receiving disability payments compared with their peers without diabetes. The diabetes effect increases with age and varies by demographic — ranging from 2.1 percentage points for non-Hispanic white males aged 60–64 years to 10.6 percentage points for non-Hispanic black females aged 55–59 years.”
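Similarly, a hypothetical sketch of the kind of logistic regression used for the labour force participation estimates, with the diabetes effect expressed as an average marginal effect on the participation probability (again simulated data and made-up variable names, not the paper’s model):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 5_000
df = pd.DataFrame({"diabetes": rng.integers(0, 2, n), "age": rng.integers(18, 65, n)})
logit_p = 1.5 - 0.25*df["diabetes"] - 0.01*df["age"]                  # made-up coefficients
df["in_workforce"] = (rng.random(n) < 1/(1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("in_workforce ~ diabetes + age", data=df).fit(disp=0)
print(fit.get_margeff().summary())   # average marginal effects on the participation probability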

“In 2017, an estimated 24.7 million people in the U.S. are diagnosed with diabetes, representing ∼7.6% of the total population (and 9.7% of the adult population). The estimated national cost of diabetes in 2017 is $327 billion, of which $237 billion (73%) represents direct health care expenditures attributed to diabetes and $90 billion (27%) represents lost productivity from work-related absenteeism, reduced productivity at work and at home, unemployment from chronic disability, and premature mortality. Particularly noteworthy is that excess costs associated with medications constitute 43% of the total direct medical burden. This includes nearly $15 billion for insulin, $15.9 billion for other antidiabetes agents, and $71.2 billion in excess use of other prescription medications attributed to higher disease prevalence associated with diabetes. […] A large portion of medical costs associated with diabetes costs is for comorbidities.”

Insulin is ~$15 billion/year, out of a total estimated cost of $327 billion. This is less than 5% of the total cost. Take note also of the ~$70 billion in excess use of other (non-antidiabetes) prescription medications. I know I’ve said this before, but it bears repeating: Most of diabetes-related costs are not related to insulin.

“…of the projected 162 million hospital inpatient days in the U.S. in 2017, an estimated 40.3 million days (24.8%) are incurred by people with diabetes [who make up ~7.6% of the population – see above], of which 22.6 million days are attributed to diabetes. About one-fourth of all nursing/residential facility days are incurred by people with diabetes. About half of all physician office visits, emergency department visits, hospital outpatient visits, and medication prescriptions (excluding insulin and other antidiabetes agents) incurred by people with diabetes are attributed to their diabetes. […] The largest contributors to the cost of diabetes are higher use of prescription medications beyond antihyperglycemic medications ($71.2 billion), higher use of hospital inpatient services ($69.7 billion), medications and supplies to directly treat diabetes ($34.6 billion), and more office visits to physicians and other health providers ($30.0 billion). Approximately 61% of all health care expenditures attributed to diabetes are for health resources used by the population aged ≥65 years […] we estimate the average annual excess expenditures for the population aged <65 years and ≥65 years, respectively, at $6,675 and $13,239. Health care expenditures attributed to diabetes generally increase with age […] The population with diabetes is older and sicker than the population without diabetes, and consequently annual medical expenditures are much higher (on average) than for people without diabetes“.

“Of the estimated 24.7 million people with diagnosed diabetes, analysis of NHIS data suggests that ∼8.1 million are in the workforce. If people with diabetes participated in the labor force at rates similar to their peers without diabetes, there would be ∼2 million additional people aged 18–64 years in the workforce.”

“Comparing the 2017 estimates with those produced for 2012, the overall cost of diabetes appears to have increased by ∼25% after adjusting for inflation, reflecting an 11% increase in national prevalence of diagnosed diabetes and a 13% increase in the average annual diabetes-attributed cost per person with diabetes.”
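A quick sanity check of how those two growth components combine multiplicatively into the overall figure (my own arithmetic, not the paper’s):

prevalence_growth = 1.11        # 11% more people with diagnosed diabetes
cost_per_person_growth = 1.13   # 13% higher diabetes-attributed cost per person
print(round((prevalence_growth * cost_per_person_growth - 1) * 100, 1))   # ~25 (per cent)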

ii. Current Challenges and Opportunities in the Prevention and Management of Diabetic Foot Ulcers.

“Diabetic foot ulcers remain a major health care problem. They are common, result in considerable suffering, frequently recur, and are associated with high mortality, as well as considerable health care costs. While national and international guidance exists, the evidence base for much of routine clinical care is thin. It follows that many aspects of the structure and delivery of care are susceptible to the beliefs and opinion of individuals. It is probable that this contributes to the geographic variation in outcome that has been documented in a number of countries. This article considers these issues in depth and emphasizes the urgent need to improve the design and conduct of clinical trials in this field, as well as to undertake systematic comparison of the results of routine care in different health economies. There is strong suggestive evidence to indicate that appropriate changes in the relevant care pathways can result in a prompt improvement in clinical outcomes.”

“Despite considerable advances made over the last 25 years, diabetic foot ulcers (DFUs) continue to present a very considerable health care burden — one that is widely unappreciated. DFUs are common, the median time to healing without surgery is of the order of 12 weeks, and they are associated with a high risk of limb loss through amputation (14). The 5-year survival following presentation with a new DFU is of the order of only 50–60% and hence worse than that of many common cancers (4,5). While there is evidence that mortality is improving with more widespread use of cardiovascular risk reduction (6), the most recent data — derived from a Veterans Health Administration population—reported that 1-, 2-, and 5-year survival was only 81, 69, and 29%, respectively, and the association between mortality and DFU was stronger than that of any macrovascular disease (7). […] There is […] wide variation in clinical outcome within the same country (13–15), suggesting that some people are being managed considerably less well than others.”

“Data on community-wide ulcer incidence are very limited. Overall incidences of 5.8 and 6.0% have been reported in selected populations of people with diabetes in the U.S. (2,12,20) while incidences of 2.1 and 2.2% have been reported from less selected populations in Europe—either in all people with diabetes (21) or in those with type 2 disease alone (22). It is not known whether the incidence is changing […] Although a number of risk factors associated with the development of ulceration are well recognized (23), there is no consensus on which dominate, and there are currently no reports of any studies that might justify the adoption of any specific strategy for population selection in primary prevention.”

“The incidence of major amputation is used as a surrogate measure of the failure of DFUs to heal. Its main value lies in the relative ease of data capture, but its value is limited because it is essentially a treatment and not a true measure of disease outcome. In no other major disease (including malignancies, cardiovascular disease, or cerebrovascular disease) is the number of treatments used as a measure of outcome. But despite this and other limitations of major amputation as an outcome measure (36), there is evidence that the overall incidence of major amputation is falling in some countries with nationwide databases (37,38). Perhaps the most convincing data come from the U.K., where the unadjusted incidence has fallen dramatically from about 3.0–3.5 per 1,000 people with diabetes per year in the mid-1990s to 1.0 or less per 1,000 per year in both England and Scotland (14,39).”

“New ulceration after healing is high, with ∼40% of people having a new ulcer (whether at the same site or another) within 12 months (10). This is a critical aspect of diabetic foot disease—emphasizing that when an ulcer heals, foot disease must be regarded not as cured, but in remission (10). In this respect, diabetic foot disease is directly analogous to malignancy. It follows that the person whose foot disease is in remission should receive the same structured follow-up as a person who is in remission following treatment for cancer. Of all areas concerned with the management of DFUs, this long-term need for specialist surveillance is arguably the one that should command the greatest attention.”

“There is currently little evidence to justify the adoption of very many of the products and procedures currently promoted for use in clinical practice. Guidelines are required to encourage clinicians to adopt only those treatments that have been shown to be effective in robust studies and principally in RCTs. The design and conduct of such RCTs needs improved governance because many are of low standard and do not always provide the evidence that is claimed.”

Incidence numbers like the ones included above will not always give you the full picture when there are a lot of overlapping data points in the sample (due to recurrence), but sometimes that’s all you have. However in the type 1 context we also do have some additional numbers that make it easier to appreciate the scale of the problem. Here are a few additional data from a related publication I blogged some time ago (do keep in mind that estimates are likely to be lower in community samples of type 2 diabetics, even if perhaps nobody actually knows precisely how much lower):

“The rate of nontraumatic amputation in T1DM is high, occurring at 0.4–7.2% per year (28). By 65 years of age, the cumulative probability of lower-extremity amputation in a Swedish administrative database was 11% for women with T1DM and 20.7% for men (10). In this Swedish population, the rate of lower-extremity amputation among those with T1DM was nearly 86-fold that of the general population.” (link)

Do keep in mind that people don’t stop getting ulcers once they reach retirement age (the 11%/20.7% is not lifetime risk, it’s a biased lower bound).

iii. Excess Mortality in Patients With Type 1 Diabetes Without Albuminuria — Separating the Contribution of Early and Late Risks.

“The current study investigated whether the risk of mortality in patients with type 1 diabetes without any signs of albuminuria is different than in the general population and matched control subjects without diabetes.”

“Despite significant improvements in management, type 1 diabetes remains associated with an increase in mortality relative to the age- and sex-matched general population (1,2). Acute complications of diabetes may initially account for this increased risk (3,4). However, with increasing duration of disease, the leading contributor to excess mortality is its vascular complications including diabetic kidney disease (DKD) and cardiovascular disease (CVD). Consequently, patients who subsequently remain free of complications may have little or no increased risk of mortality (1,2,5).”

“Mortality was evaluated in a population-based cohort of 10,737 children (aged 0–14 years) with newly diagnosed type 1 diabetes in Finland who were listed on the National Public Health Institute diabetes register, Central Drug Register, and Hospital Discharge Register in 1980–2005 […] We excluded patients with type 2 diabetes and diabetes occurring secondary to other conditions, such as steroid use, Down syndrome, and congenital malformations of the pancreas. […] FinnDiane participants who died were more likely to be male, older, have a longer duration of diabetes, and later age of diabetes onset […]. Notably, none of the conventional variables associated with complications (e.g., HbA1c, hypertension, smoking, lipid levels, or AER) were associated with all-cause mortality in this cohort of patients without albuminuria. […] The most frequent cause of death in the FinnDiane cohort was IHD [ischaemic heart disease, US] […], largely driven by events in patients with long-standing diabetes and/or previously established CVD […]. The mortality rate ratio for IHD was 4.34 (95% CI 2.49–7.57, P < 0.0001). There remained a number of deaths due to acute complications of diabetes, including ketoacidosis and hypoglycemia. This was most significant in patients with a shorter duration of diabetes but still apparent in those with long-standing diabetes[…]. Notably, deaths due to “risk-taking behavior” were lower in adults with type 1 diabetes compared with matched individuals without diabetes: mortality rate ratio was 0.42 (95% CI 0.22–0.79, P = 0.006) […] This was largely driven by the 80% reduction (95% CI 0.06–0.66) in deaths due to alcohol and drugs in males with type 1 diabetes (Table 3). No reduction was observed in female patients (rate ratio 0.90 [95% CI 0.18–4.44]), although the absolute event rate was already more than seven times lower in Finnish women than in men.”

“The chief determinant of excess mortality in patients with type 1 diabetes is its complications. In the first 10 years of type 1 diabetes, the acute complications of diabetes dominate and result in excess mortality — more than twice that observed in the age- and sex-matched general population. This early excess explains why registry studies following patients with type 1 diabetes from diagnosis have consistently reported reduced life expectancy, even in patients free of chronic complications of diabetes (6–8). By contrast, studies of chronic complications, like FinnDiane and the Pittsburgh Epidemiology of Diabetes Complications Study (1,2), have followed participants with, usually, >10 years of type 1 diabetes at baseline. In these patients, the presence or absence of chronic complications of diabetes is critical for survival. In particular, the presence and severity of albuminuria (as a marker of vascular burden) is strongly associated with mortality outcomes in type 1 diabetes (1). […] the FinnDiane normoalbuminuric patients showed increased all-cause mortality compared with the control subjects without diabetes in contrast to when the comparison was made with the Finnish general population, as in our previous publication (1). Two crucial causes behind the excess mortality were acute diabetes complications and IHD. […] Comparisons with the general population, rather than matched control subjects, may overestimate expected mortality, diluting the SMR estimate”.

“Despite major improvements in the delivery of diabetes care and other technological advances, acute complications remain a major cause of death both in children and in adults with type 1 diabetes. Indeed, the proportion of deaths due to acute events has not changed significantly over the last 30 years. […] Even in patients with long-standing diabetes (>20 years), the risk of death due to hypoglycemia or ketoacidosis remains a constant companion. […] If it were possible to eliminate all deaths from acute events, the observed mortality rate would have been no different from the general population in the early cohort. […] In long-term diabetes, avoiding chronic complications may be associated with mortality rates comparable with those of the general population; although death from IHD remains increased, this is offset by reduced risk-taking behavior, especially in men.”

“It is well-known that CVD is strongly associated with DKD (15). However, in the current study, mortality from IHD remained higher in adults with type 1 diabetes without albuminuria compared with matched control subjects in both men and women. This is concordant with other recent studies also reporting increased mortality from CVD in patients with type 1 diabetes in the absence of DKD (7,8) and reinforces the need for aggressive cardiovascular risk reduction even in patients without signs of microvascular disease. However, it is important to note that the risk of death from CVD, though significant, is still at least 10-fold lower than observed in patients with albuminuria (1). Alcohol- and drug-related deaths were substantially lower in patients with type 1 diabetes compared with the age-, sex-, and region-matched control subjects. […] This may reflect a selection bias […] Nonparticipation in health studies is associated with poorer health, stress, and lower socioeconomic status (17,18), which are in turn associated with increased risk of premature mortality. It can be speculated that with inclusion of patients with risk-taking behavior, the mortality rate in patients with diabetes would be even higher and, consequently, the SMR would also be significantly higher compared with the general population. Selection of patients who despite long-standing diabetes remained free of albuminuria may also have included individuals more accepting of general health messages and less prone to depression and nihilism arising from treatment failure.”

I think the selection bias problem is likely to be quite significant, as these results don’t really match what I’ve seen in the past. For example a recent Norwegian study on young type 1 diabetics found high mortality in their sample, to a significant degree due to alcohol-related causes and suicide: “A relatively high proportion of deaths were related to alcohol. […] Death was related to alcohol in 15% of cases. SMR for alcohol-related death was 6.8 (95% CI 4.5–10.3), for cardiovascular death was 7.3 (5.4–10.0), and for violent death was 3.6 (2.3–5.3).” That doesn’t sound very similar to the study above, and that study’s also from Scandinavia. In this study, in which they used data from diabetic organ donors, they found that a large proportion of the diabetics included used illegal drugs: “we observed a high rate of illicit substance abuse: 32% of donors reported or tested positive for illegal substances (excluding marijuana), and multidrug use was common.”

Do keep in mind that one of the main reasons why ‘alcohol-related’ deaths are higher in diabetes is likely to be that ‘drinking while diabetic’ is a lot more risky than is ‘drinking while not diabetic’. On a related note, diabetics may not appreciate the level of risk they’re actually exposed to while drinking, due to community norms etc., so there might be a disconnect between risk preferences and observed behaviour (i.e., a diabetic might be risk averse but still engage in risky behaviours because he doesn’t know how risky those behaviours in which he’s engaging actually are).

Although the illicit drugs study indicates that diabetics at least in some samples are not averse to engaging in risky behaviours, a note of caution is probably warranted in the alcohol context: High mortality from alcohol-mediated acute complications needn’t be an indication that diabetics drink more than non-diabetics; that’s a separate question, and you might see numbers like these even if diabetics in general drink less. And a young type 1 diabetic who suffers a cardiac arrhythmia secondary to long-standing nocturnal hypoglycemia and subsequently is found ‘dead in bed’ after a bout of drinking is conceptually very different from a 50-year old alcoholic dying from a variceal bleed or acute pancreatitis. Parenthetically, if it is true that illicit drug use is common in type 1 diabetics, one reason might be that they are aware of the risks associated with alcohol (which is particularly nasty in terms of the metabolic/glycemic consequences in diabetes, compared to some other drugs) and thus deliberately decide to substitute it with other drugs less likely to cause acute complications like severe hypoglycemic episodes or DKA (depending on the setting and the specifics, alcohol might be a contributor to both of these complications). If so, classical ‘risk behaviours’ may not always be ‘risk behaviours’ in diabetes. You need to be careful, this stuff’s complicated.

iv. Are All Patients With Type 1 Diabetes Destined for Dialysis if They Live Long Enough? Probably Not.

“Over the past three decades there have been numerous innovations, supported by large outcome trials that have resulted in improved blood glucose and blood pressure control, ultimately reducing cardiovascular (CV) risk and progression to nephropathy in type 1 diabetes (T1D) (1,2). The epidemiological data also support the concept that 25–30% of people with T1D will progress to end-stage renal disease (ESRD). Thus, not everyone develops progressive nephropathy that ultimately requires dialysis or transplantation. This is a result of numerous factors […] Data from two recent studies reported in this issue of Diabetes Care examine the long-term incidence of chronic kidney disease (CKD) in T1D. Costacou and Orchard (7) examined a cohort of 932 people evaluated for 50-year cumulative kidney complication risk in the Pittsburgh Epidemiology of Diabetes Complications study. They used both albuminuria levels and ESRD/transplant data for assessment. By 30 years’ duration of diabetes, ESRD affected 14.5% and by 40 years it affected 26.5% of the group with onset of T1D between 1965 and 1980. For those who developed diabetes between 1950 and 1964, the proportions developing ESRD were substantially higher at 34.6% at 30 years, 48.5% at 40 years, and 61.3% at 50 years. The authors called attention to the fact that ESRD decreased by 45% after 40 years’ duration between these two cohorts, emphasizing the beneficial roles of improved glycemic control and blood pressure control. It should also be noted that at 40 years even in the later cohort (those diagnosed between 1965 and 1980), 57.3% developed >300 mg/day albuminuria (7).”

Numbers like these may seem like ancient history (data from the 60s and 70s), but it’s important to keep in mind that many type 1 diabetics are diagnosed in early childhood, and that they don’t ‘get better’ later on – if they’re still alive, they’re still diabetic. …And very likely macroalbuminuric, at least if they’re from Pittsburgh. I was diagnosed in ’87.

“Gagnum et al. (8), using data from a Norwegian registry, also examined the incidence of CKD development over a 42-year follow-up period in people with childhood-onset (<15 years of age) T1D (8). The data from the Norwegian registry noted that the cumulative incidence of ESRD was 0.7% after 20 years and 5.3% after 40 years of T1D. Moreover, the authors noted the risk of developing ESRD was lower in women than in men and did not identify any difference in risk of ESRD between those diagnosed with diabetes in 1973–1982 and those diagnosed in 1989–2012. They concluded that there is a very low incidence of ESRD among patients with childhood-onset T1D diabetes in Norway, with a lower risk in women than men and among those diagnosed at a younger age. […] Analyses of population-based studies, similar to the Pittsburgh and Norway studies, showed that after 30 years of T1D the cumulative incidences of ESRD were only 10% for those diagnosed with T1D in 1961–1984 and 3% for those diagnosed in 1985–1999 in Japan (11), 3.3% for those diagnosed with T1D in 1977–2007 in Sweden (12), and 7.8% for those diagnosed with T1D in 1965–1999 in Finland (13) (Table 1).”

Do note that ESRD (end stage renal disease) is not the same thing as DKD (diabetic kidney disease), and that e.g. many of the Norwegians who did not develop ESRD nevertheless likely have kidney complications from their diabetes. That 5.3% is not the proportion of diabetics in that cohort who developed diabetes-related kidney complications; it’s the proportion who did and who as a result needed a new kidney or dialysis in order not to die very soon. Do also keep in mind that both microalbuminuria and macroalbuminuria will substantially increase the risk of cardiovascular disease and cardiac death. I recall a study where they looked at the various endpoints and found that more diabetics with microalbuminuria eventually died of cardiovascular disease than ever developed kidney failure – cardiac risk goes up a lot long before end-stage renal disease. ESRD estimates don’t account for the full risk profile, and even if you look only at mortality risk the ESRD number accounts for perhaps less than half of the total risk attributable to DKD. One thing the ESRD diagnosis does have going for it is that it’s a much more reliable variable indicative of significant pathology than is e.g. microalbuminuria (see e.g. this paper). The paper is short and not at all detailed, but they do briefly discuss/mention these issues:

“…there is a substantive difference between the numbers of people with stage 3 CKD (estimated glomerular filtration rate [eGFR] 30–59 mL/min/1.73 m2) versus those with stages 4 and 5 CKD (eGFR <30 mL/min/1.73 m2): 6.7% of the National Health and Nutrition Examination Survey (NHANES) population compared with 0.1–0.3%, respectively (14). This is primarily because of competing risks, such as death from CV disease that occurs in stage 3 CKD; hence, only the survivors are progressing into stages 4 and 5 CKD. Overall, these studies are very encouraging. Since the 1980s, risk of ESRD has been greatly reduced, while risk of CKD progression persists but at a slower rate. This reduced ESRD rate and slowed CKD progression is largely due to improvements in glycemic and blood pressure control and probably also to the institution of RAAS blockers in more advanced CKD. These data portend even better future outcomes if treatment guidance is followed. […] many medications are effective in blood pressure control, but RAAS blockade should always be a part of any regimen when very high albuminuria is present.”

v. New Understanding of β-Cell Heterogeneity and In Situ Islet Function.

“Insulin-secreting β-cells are heterogeneous in their regulation of hormone release. While long known, recent technological advances and new markers have allowed the identification of novel subpopulations, improving our understanding of the molecular basis for heterogeneity. This includes specific subpopulations with distinct functional characteristics, developmental programs, abilities to proliferate in response to metabolic or developmental cues, and resistance to immune-mediated damage. Importantly, these subpopulations change in disease or aging, including in human disease. […] We will discuss recent findings revealing functional β-cell subpopulations in the intact islet, the underlying basis for these identified subpopulations, and how these subpopulations may influence in situ islet function.”

I won’t cover this one in much detail, but this part was interesting:

“Gap junction (GJ) channels electrically couple β-cells within mouse and human islets (25), serving two main functions. First, GJ channels coordinate oscillatory dynamics in electrical activity and Ca2+ under elevated glucose or GLP-1, allowing pulsatile insulin secretion (26,27). Second, GJ channels lower spontaneous elevations in Ca2+ under low glucose levels (28). GJ coupling is also heterogeneous within the islet (29), leading to some β-cells being highly coupled and others showing negligible coupling. Several studies have examined how electrically heterogeneous cells interact via GJ channels […] This series of experiments indicate a “bistability” in islet function, where a threshold number of poorly responsive β-cells is sufficient to totally suppress islet function. Notably, when islets lacking GJ channels are treated with low levels of the KATP activator diazoxide or the GCK inhibitor mannoheptulose, a subpopulation of cells are silenced, presumably corresponding to the less functional population (30). Only diazoxide/mannoheptulose concentrations capable of silencing >40% of these cells will fully suppress Ca2+ elevations in normal islets. […] this indicates that a threshold number of poorly responsive cells can inhibit the whole islet. Thus, if there exists a threshold number of functionally competent β-cells (∼60–85%), then the islet will show coordinated elevations in Ca2+ and insulin secretion.

Below this threshold number, the islet will lack Ca2+ elevation and insulin secretion (Fig. 2). The precise threshold depends on the characteristics of the excitable and inexcitable populations: small numbers of inexcitable cells will increase the number of functionally competent cells required for islet activity, whereas small numbers of highly excitable cells will do the opposite. However, if GJ coupling is lowered, then inexcitable cells will exert a reduced suppression, also decreasing the threshold required. […] Paracrine communication between β-cells and other endocrine cells is also important for regulating insulin secretion. […] Little is known how these paracrine and juxtacrine mechanisms impact heterogeneous cells.”

vi. Closing in on the Mechanisms of Pulsatile Insulin Secretion.

“Insulin secretion from pancreatic islet β-cells occurs in a pulsatile fashion, with a typical period of ∼5 min. The basis of this pulsatility in mouse islets has been investigated for more than four decades, and the various theories have been described as either qualitative or mathematical models. In many cases the models differ in their mechanisms for rhythmogenesis, as well as other less important details. In this Perspective, we describe two main classes of models: those in which oscillations in the intracellular Ca2+ concentration drive oscillations in metabolism, and those in which intrinsic metabolic oscillations drive oscillations in Ca2+ concentration and electrical activity. We then discuss nine canonical experimental findings that provide key insights into the mechanism of islet oscillations and list the models that can account for each finding. Finally, we describe a new model that integrates features from multiple earlier models and is thus called the Integrated Oscillator Model. In this model, intracellular Ca2+ acts on the glycolytic pathway in the generation of oscillations, and it is thus a hybrid of the two main classes of models. It alone among models proposed to date can explain all nine key experimental findings, and it serves as a good starting point for future studies of pulsatile insulin secretion from human islets.”

This one covers material closely related to the study above, so if you find one of these papers interesting you might want to check out the other one as well. The paper is quite technical, but if you were wondering why people are interested in this kind of stuff, one reason is that there’s good evidence at this point that insulin pulsatility is disturbed in type 2 diabetics, and so it’d be nice to know why that is, so that new drugs can be developed to correct it.

April 25, 2018 Posted by | Biology, Cardiology, Diabetes, Epidemiology, Health Economics, Medicine, Nephrology, Pharmacology, Studies

100 cases in surgery (II)

Below I have added some links and quotes related to the last half of the book’s coverage.

Ischemic rest pain. (“Rest pain indicates inadequate tissue perfusion. *Urgent investigation and treatment is required to salvage the limb. […] The material of choice for bypass grafting is autogenous vein. […] The long-term patency of prosthetic grafts is inferior compared with autogenous vein.”)
Deep vein thrombosis.
Lymphedema. (“In lymphoedema, the vast majority of patients (>90 per cent) are treated conservatively. […] Debulking operations […] are only considered for a selected few patients where the function of the limb is impaired or those with recurrent attacks of severe cellulitis.”)
Varicose veins. Trendelenburg Test. (“Surgery on the superficial venous system should be avoided in patients with an incompetent deep venous system.”)
Testicular Torsion.
Benign Prostatic Hyperplasia.
Acute pyelonephritis. (“In patients with recurrent infection in the urinary system, significant pathology needs excluding such as malignancy, urinary tract stone disease and abnormal urinary tract anatomy.”)
Renal cell carcinoma. von Hippel-Lindau syndrome. (“Approximately one-quarter to one-third of patients with renal cell carcinomas have metastases at presentation. […] The classic presenting triad of loin pain, a mass and haematuria only occurs in about 10 per cent of patients. More commonly, one of these features appears in isolation.”)
Haematuria. (“When taking the history, it is important to elicit the following: • Visible or non-visible: duration of haematuria • Age: cancers are more common with increasing age • Sex: females more likely to have urinary tract infections • Location: during micturition, was the haematuria always present (indicative of renal, ureteric or bladder pathology) or was it only present initially (suggestive of anterior urethral pathology) or present at the end of the stream (posterior urethra, bladder neck)? • Pain: more often associated with infection/inflammation/calculi, whereas malignancy tends to be painless • Associated lower urinary tract symptoms that will be helpful in determining aetiology • History of trauma • Travel abroad […] • Previous urological surgery/history/recent instrumentation/prostatic biopsy • Medication, e.g. anticoagulants • Family history • Occupational history, e.g. rubber/dye occupational hazards are risk factors for developing transitional carcinoma of the bladder […] • Smoking status: increased risk, particularly of bladder cancer • General status, e.g. weight loss, reduced appetite […] Anticoagulation can often unmask other pathology in the urinary tract. […] Patients on oral anticoagulation who develop haematuria still require investigation.”)
Urinary retention. (“Acute and chronic retention are usually differentiated by the presence or absence of pain. Acute retention is painful, unlike chronic retention, when the bladder accommodates the increase in volume over time.”)
Colles’ fracture/Distal Radius Fractures. (“In all fractures the distal neurological and vascular status should be assessed.”)
Osteoarthritis. (“Radiological evidence of osteoarthritis is common, with 80 per cent of individuals over 80 years demonstrating some evidence of the condition. […] The commonest symptoms are pain, a reduction in mobility, and deformity of the affected joint.”)
Simmonds’ test.
Patella fracture.
Dislocated shoulder.
Femur fracture. (“Fractured neck of the femur is a relatively common injury following a fall in the elderly population. The rate of hip fracture doubles every decade from the age of 50 years. There is a female preponderance of three to one. […] it is important to take a comprehensive history, concentrating on the mechanism of injury. It is incorrect to assume that all falls are mechanical; it is not uncommon to find that the cause of the fall is actually due to a urinary or chest infection or even a silent myocardial infarction.”)
The Ottawa Ankle Rules.
Septic arthritis.
Carpal tunnel syndrome. Tinel’s test. Phalen’s Test. (“It is important, when examining a patient with suspected carpal tunnel syndrome, to carefully examine their neck, shoulder, and axilla. […] the source of the neurological compression may be proximal to the carpal tunnel”)
Acute Compartment Syndrome. (“Within the limbs there are a number of myofascial compartments. These consist of muscles contained within a relatively fixed-volume structure, bounded by fascial layers and bone. After trauma the pressure in the myofascial compartment increases. This pressure may exceed the venous capillary pressure, resulting in a loss of venous outflow from the compartment. The failure to clear metabolites also leads to the accumulation of fluid as a result of osmosis. If left untreated, the pressure will eventually exceed arterial pressure, leading to significant tissue ischaemia. The damage is irreversible after 4–6 h. Tibial fractures are the commonest cause of an acute compartment syndrome, which is thought to complicate up to 20 per cent of these injuries. […] The classical description of ‘pain out of proportion to the injury’ may [unfortunately] be difficult to determine if the clinician is inexperienced.”)
Hemarthrosis. (“Most knee injuries result in swelling which develops over hours rather than minutes. [A] history of immediate knee swelling suggests that there is a haemarthrosis.”)
Sickle cell crisis.
Cervical Spine Fracture. Neurogenic shock. NEXUS Criteria for C-Spine Imaging.
Slipped Capital Femoral Epiphysis. Trethowan sign. (“At any age, a limp in a child should always be taken seriously.”)

ATLS guidelines. (“The ATLS protocol should be followed even in the presence of obvious limb deformity, to ensure a potentially life-threatening injury is not missed.”)
Peritonsillar Abscess.
Epistaxis. Little’s area.
Croup. Acute epiglottitis. (“Acute epiglottitis is an absolute emergency and is usually caused by Haemophilus influenzae. There is significant swelling, and any attempt to examine the throat may result in airway obstruction. […] In adults it tends to cause a supraglottitis. It has a rapid progression and can lead to total airway obstruction. […] Stridor is an ominous sign and needs to be taken seriously.”)
Bell’s palsy.
Subarachnoid hemorrhage. International subarachnoid aneurysm trial.
Chronic subdural hematoma. (“This condition is twice as common in men as women. Risk factors include chronic alcoholism, epilepsy, anticoagulant therapy (including aspirin) and thrombocytopenia. CSDH is more common in elderly patients due to cerebral atrophy. […] Initial misdiagnosis is, unfortunately, quite common. […] a chronic subdural haematoma should be suspected in confused patients with a history of a fall.”)
Extradural Haematoma. Cushing response. (“A direct blow to the temporo-parietal area is the commonest cause of an extradural haematoma. The bleed is normally arterial in origin. In 85 per cent of cases there is an associated skull fracture that causes damage to the middle meningeal artery. […] This situation represents a neurosurgical emergency. Without urgent decompression the patient will die. Unlike the chronic subdural, which can be treated with Burr hole drainage, the more dense acute arterial haematoma requires a craniotomy in order to evacuate it.”)
Cauda equina syndrome. Neurosurgery for Cauda Equina Syndrome.
ASA classification. (“Patients having an operation within 3 months of a myocardial infarction carry a 30 per cent risk of reinfarction or cardiac death. This drops to 5 per cent after 6 months. […] Patients with COPD have difficulty clearing secretions from the lungs during the postoperative period. They also have a higher risk of basal atelectasis and are more prone to chest infections. These factors in combination with postoperative pain (especially in thoracic or abdominal major surgery) make them prone to respiratory complications. […] Patients with diabetes have an increased risk of postoperative complications because of the presence of microvascular and macrovascular disease: • Atherosclerosis: ischaemic heart disease/peripheral vascular disease/cerebrovascular disease • Nephropathy: renal insufficiency […] • Autonomic neuropathy: gastroparesis, decreased bladder tone • Peripheral neuropathy: lower-extremity ulceration, infection, gangrene • Poor wound healing • Increased risk of infection. Tight glycaemic control (6–10 mmol/L) and the prevention of hypoglycaemia are critical in preventing perioperative and postoperative complications. The patient with diabetes should be placed first on the operating list to avoid prolonged fasting.”)
Malnutrition. Hartmann’s procedure. (“Malnutrition leads to delayed wound healing, reduced ventilatory capacity, reduced immunity and an increased risk of infection. […] The two main methods of feeding are either by the enteral route or the parenteral route. Enteral feeding is via the gastrointestinal tract. It is less expensive and is associated with fewer complications than feeding by the parenteral route. […] The parenteral route should only be used if there is an inability to ingest, digest, absorb or propulse nutrients through the gastrointestinal tract. It can be administered by either a peripheral or central line. Peripheral parenteral nutrition can cause thrombophlebitis […] Sepsis is the most frequent and serious complication of centrally administered parenteral nutrition.”)
Acute Kidney Injury. (“The aetiology of acute renal failure can be thought of in three main categories: •Pre-renal: the glomerular filtration is reduced because of poor renal perfusion. This is usually caused by hypovolaemia as a result of acute blood loss, fluid depletion or hypotension. […] • Renal: this is the result of damage directly to the glomerulus or tubule. The use of drugs such as NSAIDs, contrast agents or aminoglycosides all have direct nephrotoxic effects. Acute tubular necrosis can occur as a result of prolonged hypoperfusion […]. Pre-existing renal disease such as diabetic nephropathy or glomerulonephritis makes patients more susceptible to further renal injury. •Post-renal: this can be simply the result of a blocked catheter. […] Calculi, blood clots, ureteric ligation and prostatic hypertrophy can also all lead to obstruction of urinary flow.”)
Post-operative ileus.

Pulmonary embolism.

April 18, 2018 Posted by | Books, Cancer/oncology, Cardiology, Gastroenterology, Infectious disease, Medicine, Nephrology, Neurology

Medical Statistics (II)

In this post I’ll include some links and quotes related to topics covered in chapters 2 and 3 of the book. Chapter 2 is about ‘Collecting data’ and chapter 3 is about ‘Handling data: what steps are important?’

“Data collection is a key part of the research process, and the collection method will impact on later statistical analysis of the data. […] Think about the anticipated data analysis [in advance] so that data are collected in the appropriate format, e.g. if a mean will be needed for the analysis, then don’t record the data in categories, record the actual value. […] *It is useful to pilot the data collection process in a range of circumstances to make sure it will work in practice. *This usually involves trialling the data collection form on a smaller sample than intended for the study and enables problems with the data collection form to be identified and resolved prior to main data collection […] In general don’t expect the person filling out the form to do calculations as this may lead to errors, e.g. calculating a length of time between two dates. Instead, record each piece of information to allow computation of the particular value later […] The coding scheme should be designed at the same time as the form so that it can be built into the form. […] It may be important to distinguish between data that are simply missing from the original source and data that the data extractor failed to record. This can be achieved using different codes […] The use of numerical codes for non-numerical data may give the false impression that these data can be treated as if they were numerical data in the statistical analysis. This is not so.”
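A small illustration of a couple of these points (recording raw values and letting the computer derive quantities like durations, and using distinct codes for different kinds of missingness). This is a hypothetical pandas sketch of mine, not something from the book:

import pandas as pd

# hypothetical extract: record both dates on the form and compute the interval afterwards
df = pd.DataFrame({
    "id": [1, 2, 3],
    "date_admitted":   pd.to_datetime(["2018-01-03", "2018-02-10", "2018-03-01"]),
    "date_discharged": pd.to_datetime(["2018-01-09", None, "2018-03-04"]),
    "smoker": [1, 9, 8],   # hypothetical codes: 1=yes, 0=no, 8=not recorded by extractor, 9=missing in source
})
df["length_of_stay_days"] = (df["date_discharged"] - df["date_admitted"]).dt.days

# a numerical code does not make 'smoker' a numerical variable
df["smoker"] = df["smoker"].astype("category")
print(df)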

“It is critical that data quality is monitored and that this happens as the study progresses. It may be too late if problems are only discovered at the analysis stage. If checks are made during the data collection then problems can be corrected. More frequent checks may be worthwhile at the beginning of data collection when processes may be new and staff may be less experienced. […] The layout […] affects questionnaire completion rates and therefore impacts on the overall quality of the data collected.”

“Sometimes researchers need to develop a new measurement or questionnaire scale […] To do this rigorously requires a thorough process. We will outline the main steps here and note the most common statistical measures used in the process. […] Face validity *Is the scale measuring what it sets out to measure? […] Content validity *Does the scale cover all the relevant areas? […] *Between-observers consistency: is there agreement between different observers assessing the same individuals? *Within-observers consistency: is there agreement between assessments on the same individuals by the same observer on two different occasions? *Test-retest consistency: are assessments made on two separate occasions on the same individual similar? […] If a scale has several questions or items which all address the same issue then we usually expect each individual to get similar scores for those questions, i.e. we expect their responses to be internally consistent. […] Cronbach’s alpha […] is often used to assess the degree of internal consistency. [It] is calculated as an average of all correlations among the different questions on the scale. […] *Values are usually expected to be above 0.7 and below 0.9 *Alpha below 0.7 broadly indicates poor internal consistency *Alpha above 0.9 suggests that the items are very similar and perhaps fewer items could be used to obtain the same overall information”.
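For reference, Cronbach’s alpha is easy to compute directly. Below is a small sketch using the usual item-variance formula, which for standardized items corresponds to the average-correlation description given in the quote; the data are simulated and the setup is mine, not the book’s:

import numpy as np

def cronbach_alpha(items):
    """items: 2D array with rows = subjects and columns = questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(5)
latent = rng.normal(size=(200, 1))                       # one underlying construct
items = latent + rng.normal(scale=0.8, size=(200, 5))    # five noisy items measuring it
print(cronbach_alpha(items))                             # with these settings, typically around 0.85-0.9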

Bland–Altman plot.
Coefficient of variation.
Intraclass correlation.
Cohen’s kappa.
Likert scale. (“The key characteristic of Likert scales is that the scale is symmetrical. […] Care is needed when analyzing Likert scale data even though a numerical code is assigned to the responses, since the data are ordinal and discrete. Hence an average may be misleading […] It is quite common to collapse Likert scales into two or three categories such as agree versus disagree, but this has the disadvantage that data are discarded.”)
Visual analogue scale. (“VAS scores can be treated like continuous data […] Where it is feasible to use a VAS, it is preferable as it provides greater statistical power than a categorical scale”)

“Correct handling of data is essential to produce valid and reliable statistics. […] Data from research studies need to be coded […] It is important to document the coding scheme for categorical variables such as sex where it will not be obviously [sic, US] what the values mean […] It is strongly recommended that a unique numerical identifier is given to each subject, even if the research is conducted anonymously. […] Computerized datasets are often stored in a spreadsheet format with rows and columns of data. For most statistical analyses it is best to enter the data so that each row represents a different subject and each column a different variable. […] Prefixes or suffixes can be used to denote […] repeated measurements. If there are several repeated variables, use the same ‘scheme’ for all to avoid confusion. […] Try to avoid mixing suffixes and prefixes as it can cause confusion.”

“When data are entered onto a computer at different times it may be necessary to join datasets together. […] It is important to avoid over-writing a current dataset with a new updated version without keeping the old version as a separate file […] the two datasets must use exactly the same variable names for the same variables and the same coding. Any spelling mistakes will prevent a successful joining. […] It is worth checking that the joining has worked as expected by checking that the total number of observations in the updated file is the sum of the two previous files, and that the total number of variables is unchanged. […] When new data are collected on the same individuals at a later stage […], it may [again] be necessary to merge datasets. In order to do this the unique subject identifier must be used to identify the records that must be matched. For the merge to work, all variable names in the two datasets must be different except for the unique identifier. […] Spreadsheets are useful for entering and storing data. However, care should be taken when cutting and pasting different datasets to avoid misalignment of data. […] it is best not to join or sort datasets using a spreadsheet […in some research contexts, I’d add, this is also just plain impossible to even try, due to the amount of data involved – US…] […] It is important to ensure that a unique copy of the current file, the ‘master copy’, is stored at all times. Where the study involves more than one investigator, everyone needs to know who has responsibility for this. It is also important to avoid having two people revising the same file at the same time. […] It is important to keep a record of any changes that are made to the dataset and keep dated copies of datasets as changes are made […] Don’t overwrite datasets with edited versions as older versions may be needed later on.”
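Checks like these are easy to automate. Here’s a minimal pandas sketch (hypothetical file contents and variable names) of a one-to-one merge on the unique identifier, plus the row and column checks mentioned in the quote:

import pandas as pd

# hypothetical baseline and follow-up datasets sharing only the unique identifier 'id'
baseline  = pd.DataFrame({"id": [1, 2, 3], "sex": [0, 1, 1], "sbp_0": [132, 118, 145]})
follow_up = pd.DataFrame({"id": [1, 2, 3], "sbp_1": [128, 120, 140]})

merged = baseline.merge(follow_up, on="id", how="left", validate="one_to_one")
assert len(merged) == len(baseline)                                   # no rows gained or lost
assert merged.shape[1] == baseline.shape[1] + follow_up.shape[1] - 1  # 'id' counted only once

# appending records entered at a different time: names and coding must match exactly
new_batch = pd.DataFrame({"id": [4], "sex": [0], "sbp_0": [150]})
updated = pd.concat([baseline, new_batch], ignore_index=True)
assert len(updated) == len(baseline) + len(new_batch)
assert updated.shape[1] == baseline.shape[1]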

“Where possible, it is important to do some [data entry] checks early on to leave time for addressing problems while the study is in progress. […] *Check a random sample of forms for data entry accuracy. If this reveals problems then further checking may be needed. […] If feasible, consider checking data entry forms for key variables, e.g. the primary outcome. […] Range checks: […] tabulate all data to ensure there are no invalid values […] make sure responses are consistent with each other within subjects, e.g. check for any impossible or unlikely combination of responses such as a male with a pregnancy […] Check where feasible that any gaps are true gaps and not missed data entry […] Sometimes finding one error may lead to others being uncovered. For example, if a spreadsheet was used for data entry and one entry was missed, all following entries may be in the wrong columns. Hence, always consider if the discovery of one error may imply that there are others. […] Plots can be useful for checking larger datasets.”
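Most of these checks can be scripted; a short hypothetical example of tabulations, range checks and impossible-combination checks in pandas (the data and codes are made up):

import pandas as pd

entered = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "sex": [1, 2, 1, 3],       # 1 = male, 2 = female, so 3 is an invalid code
    "age": [34, 210, 56, 41],  # 210 is outside any plausible range
    "pregnant": [0, 1, 1, 0],  # subject 3 is coded both male and pregnant
})

print(entered["sex"].value_counts(dropna=False))                        # tabulate to spot invalid values
print(entered.loc[~entered["age"].between(0, 120)])                     # range check
print(entered.loc[(entered["sex"] == 1) & (entered["pregnant"] == 1)])  # impossible combination
print(entered.isna().sum())                                             # are gaps true gaps or missed entry?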

Data monitoring committee.
Damocles guidelines.
Overview of stopping rules for clinical trials.
Pocock boundary.
Haybittle–Peto boundary.

“Trials are only stopped early when it is considered that the evidence for either benefit or harm is overwhelmingly strong. In such cases, the effect size will inevitably be larger than anticipated at the outset of the trial in order to trigger the early stop. Hence effect estimates from trials stopped early tend to be more extreme than would be the case if these trials had continued to the end, and so estimates of the efficacy or harm of a particular treatment may be exaggerated. This phenomenon has been demonstrated in recent reviews.1,2 […] Sometimes it becomes apparent part way through a trial that the assumptions made in the original sample size calculations are not correct. For example, where the primary outcome is a continuous variable, an estimate of the standard deviation (SD) is needed to calculate the required sample size. When the data are summarized during the trial, it may become apparent that the observed SD is different from that expected. This has implications for the statistical power. If the observed SD is smaller than expected then it may be reasonable to reduce the sample size but if it is bigger then it may be necessary to increase it.”
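To see how sensitive the required sample size is to the assumed SD, here’s a small sketch based on the standard approximate formula for comparing two means; my own illustration with arbitrary numbers, not the book’s worked example:

from scipy import stats

def n_per_group(delta, sd, alpha=0.05, power=0.9):
    """Approximate sample size per group for a two-sided comparison of two means."""
    z_a = stats.norm.ppf(1 - alpha/2)
    z_b = stats.norm.ppf(power)
    return 2 * ((z_a + z_b) * sd / delta) ** 2

print(n_per_group(delta=5, sd=10))   # design-stage assumption: SD = 10 -> roughly 84 per group
print(n_per_group(delta=5, sd=14))   # interim data suggest SD = 14 -> roughly 165 per group

The required n scales with the square of the SD, which is why an underestimated SD at the design stage can leave a trial badly underpowered.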

April 16, 2018 Posted by | Books, Medicine, Statistics

100 cases in surgery (I)

“We hope this book will give a good introduction to common surgical conditions seen in everyday surgical practice. Each question has been followed up with a brief overview of the condition and its immediate management. The book should act as an essential revision aid for surgical finals and as a basis for practising surgery after qualification.”

This book is far from the first book I read in this series, and the format is the same as usual: There are 100 cases included, with a variety of different organ systems and diagnoses/settings encountered. The first page of a case presents a basic history and some key findings (lab tests, x-rays, results of imaging studies) and asks you a few questions about the case; the second and sometimes third page then provides answers to the questions and some important observations of note. Cases have of course been chosen in order to illustrate a wide variety of different medical scenarios involving many different organ systems and types of complaints. All cases are ‘to some extent’ surgical in nature, but in far from all cases will surgery necessarily be the required/indicated treatment option in the specific context; sometimes non-surgical management will be preferable, sometimes (much too often, in some oncological settings..) tumours are not resectable, some of the cases deal with complications to surgical procedures, etc.

The degree with which I was familiar with the topics covered in the book was highly variable; I’ve never really read any previous medical textbooks (…more or less-) exclusively devoted to surgical topics, but I have previously in a variety of contexts read about topics such as neurosurgery, cardiovascular surgery, and the recent endocrinology text of course covered surgical topics within this field in some detail; on the other hand my knowledge of (e.g.) otorhinolaryngology is, well, …limited. Part of my motivation for having a go at this book was precisely that my knowledge of the field of surgery felt a bit too fragmented (…and, in some cases, non-existent) even if I still didn’t feel like reading, say, an 800-page handbook like this one on these topics. Despite the more modest page-count of this book I would caution against thinking this is a particularly easy/fast read; there are a lot of cases and each of them has something to teach you – and as should also be easily inferred from the quote from the preface included above, this book is probably not readable if you don’t have some medical background of one kind or another (‘read fluent medical textbook’).

Below I have added some links to topics covered in the first half of the book, as well as a few observations from the coverage.

Abdominal hernias.
Appendicitis.
Large-bowel obstruction. Small-bowel obstruction.
Perianal abscess.
Malignant melanoma. (“Factors in the history that are suggestive of malignant change in a mole[:] *Change in surface *itching *increase in size/shape/thickness *Change in colour *bleeding/ulceration *brown/pink halo […] *enlarged local lymph nodes”)
Meckel’s diverticulum.
Rectal cancer. Colorectal Cancer. (“Colorectal cancer is the second commonest cancer causing death in the UK […]. Right-sided lesions can present with iron-deficiency anaemia, weight loss or a right iliac fossa mass. Left-sided lesions present with alteration in bowel habit, rectal bleeding, or as an emergency with obstruction or perforation.”)
Sigmoid and cecal volvulus.
Anal fissure.
Diverticular disease.
Hemorrhoids.
Crohn Disease Pathology. (“Increasing frequency of stool, anorexia, low-grade fever, abdominal tenderness and anaemia suggest an inflammatory bowel disease. […] The initial management of uncomplicated Crohn’s disease should be medical.”)
Ulcerative colitis. (“Long-standing ulcerative colitis carries an approximate 3 per cent risk of malignant change after 10 years”).
Acute Cholecystitis and Biliary Colic. (“The majority of episodes of acute cholecystitis settle with analgesia and antibiotics.”)
Acute pancreatitis. (“Ranson’s criteria are used to grade the severity of alcoholic pancreatitis […] Each fulfilled criterion scores a point and the total indicates the severity. […] Estimates on mortality are based on the number of points scored: 0–2 = 2 per cent; 3–4 = 15 per cent; 5–6 = 40 per cent; >7 = 100 per cent. […] The aim of treatment is to halt the progression of local inflammation into systemic inflammation, which can result in multi-organ failure. Patients will often require nursing in a high-dependency or intensive care unit. They require prompt fluid resuscitation, a urinary catheter and central venous pressure monitoring. Early enteral feeding is advocated by some specialists. If there is evidence of sepsis, the patient should receive broad-spectrum antibiotics. […] patients should be managed aggressively”) – a small sketch translating a Ranson point total into the quoted mortality estimates has been added after this list.
Ascending cholangitis.
Surgical Treatment of Perforated Peptic Ulcer.
Splenic rupture. Kehr’s sign.
Barrett’s esophagus. Peptic strictures of the esophagus. (“Proton pump inhibitors are effective in reducing stricture recurrence and in the treatment of Barrett’s oesophagus. If frequent dilatations are required despite acid suppression, then surgery should be considered. […] The risk of cancer is increased by up to 30 times in patients with Barrett’s oesophagus. If Barrett’s oesophagus is found at endoscopy, then the patient should be started on lifelong acid suppression. The patient should then have endoscopic surveillance to detect dysplasia before progression to carcinoma.”)
Esophageal Cancer. (“oesophageal carcinoma […] typically affects patients between 60 and 70 years of age and has a higher incidence in males. […] Dysphagia is the most common presenting symptom and is often associated with weight loss. […] Approximately 40 per cent of patients are suitable for surgical resection.”)
Pancreatic cancer. Courvoisier’s law. (“Pancreatic cancer classically presents with painless jaundice from biliary obstruction at the head of the pancreas and is associated with a distended gallbladder. Patients with pancreatic cancer can also present with epigastric pain, radiating through to the back, and vomiting due to duodenal obstruction. Pancreatic cancer occurs in patients between 60 and 80 years of age […] Roughly three-quarters have metastases at presentation […] Only approximately 15 per cent of pancreatic malignancies are surgically resectable.”)
Chronic pancreatitis. (“Chronic pancreatitis is an irreversible inflammation causing pancreatic fibrosis and calcification. Patients usually present with chronic abdominal pain and normal or mildly elevated pancreatic enzyme levels. The pancreas may have lost its endocrine and exocrine function, leading to diabetes mellitus and steatorrhea. […] The mean age of onset is 40 years, with a male preponderance of 4:1. […] thirty per cent of cases of chronic pancreatitis are idiopathic.”)
Myelofibrosis.
Gastric cancer. (“Gastric carcinoma is the second commonest cause of cancer worldwide. […] The highest incidence is in Eastern Asia, with a falling incidence in Western Europe. Diet and H. pylori infection are thought to be the two most important environmental factors in the development of gastric cancer. Diets rich in pickled vegetables, salted fish and smoked meats are thought to predispose to gastric cancer. […] Gastric cancer typically presents late and is associated with a poor prognosis. […] Surgical resection is not possible in the majority of patients.”)
Fibroadenomas of the breast. (“On examination, [benign fibroadenomas] tend to be spherical, smooth and sometimes lobulated with a rubbery consistency. The differential diagnosis includes fibrocystic disease (fluctuation in size with menstrual cycle and often associated with mild tenderness), a breast cyst (smooth, well-defined consistency like fibroadenoma but a hard as opposed to a rubbery consistency) or breast carcinoma (irregular, indistinct surface and shape with hard consistency).”)
Graves’ disease. (“Patients often present with many symptoms including palpitations, anxiety, thirst, sweating, weight loss, heat intolerance and increased bowel frequency. Enhanced activity of the adrenergic system also leads to agitation and restlessness. Approximately 25–30 per cent of patients with Graves’ disease have clinical evidence of ophthalmopathy. This almost only occurs in Graves’ disease (very rarely found in hypothyroidism)”)
Ruptured abdominal aortic aneurysm: a surgical emergency with many clinical presentations.
Temporal arteritis.
Transient ischemic attack. (“A stenosis of more than 70 per cent in the internal carotid artery is an indication for carotid endarterectomy in a patient with TIAs […]. The procedure should be carried out as soon as possible and within 2 weeks of the symptoms to prevent a major stroke.”)
Acute Mesenteric Ischemia.
Acute limb ischaemia. (“Signs and symptoms of acute limb ischaemia – the six Ps: •Pain •Pulseless •Pallor •Paraesthesia •Perishingly cold •Paralysis”).
Cervical rib.
Peripheral Arterial Occlusive Disease. (“The disease will only progress in one in four patients with intermittent claudication: therefore, unless the disease is very disabling for the patient, treatment is conservative. […] Investigations should include ankle–brachial pressure index (ABPI): this is typically <0.9 in patients with claudication; however, calcified vessels (typically in patients with diabetes) may result in an erroneously normal or high ABPI. […] Regular exercise has been shown to increase the claudication distance. In the minority of cases that do require intervention (i.e. severe short distance claudication not improving with exercise), angioplasty and bypass surgery are considered.”)
Venous ulcer. Marjolin’s ulcer. (“It is important to distinguish arterial from venous ulceration, as use of compression to treat the former type of ulcer is contraindicated.”)
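As promised above (see the acute pancreatitis entry): since the quoted Ranson grading is just a point count mapped to a mortality band, here is a minimal sketch of that mapping. The function name is my own, the mortality figures are the ones quoted above, and this is an illustration of the lookup, not a clinical tool.

```python
def ranson_mortality_estimate(points: int) -> str:
    """Map a Ranson point total to the approximate mortality quoted above.
    Note the quoted bands leave a score of exactly 7 ambiguous; it is grouped
    with the highest band here. Illustrative only, not a clinical tool."""
    if points <= 2:
        return "~2 per cent"
    if points <= 4:
        return "~15 per cent"
    if points <= 6:
        return "~40 per cent"
    return "~100 per cent (i.e. very high)"

print(ranson_mortality_estimate(3))   # -> ~15 per cent
```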

April 14, 2018 Posted by | Books, Cancer/oncology, Gastroenterology, Medicine | Leave a comment

Medical Statistics (I)

I was more than a little critical of the book in my review on goodreads, and the review is sufficiently detailed that I thought it would be worth including it in this post. Here’s what I wrote on goodreads (slightly edited to take full advantage of the better editing options on wordpress):

“The coverage is excessively focused on significance testing. The book also provides very poor coverage of model selection topics, where the authors not once but repeatedly recommend employing statistically invalid approaches to model selection (the authors recommend using hypothesis testing mechanisms to guide model selection, as well as using adjusted R-squared for model selection decisions – both of which are frankly awful ideas, for reasons which are obvious to people familiar with the field of model selection. “Generally, hypothesis testing is a very poor basis for model selection […] There is no statistical theory that supports the notion that hypothesis testing with a fixed α level is a basis for model selection.” “While adjusted R2 is useful as a descriptive statistic, it is not useful in model selection” – quotes taken directly from Burnham & Anderson’s book Model Selection and Multi-Model Inference: A Practical Information-Theoretic Approach).

The authors do not at any point in the coverage even mention the option of using statistical information criteria to guide model selection decisions, and frankly repeatedly recommend doing things which are known to be deeply problematic. The authors also cover material from Borenstein and Hedges’ meta-analysis text in the book, yet still somehow manage to give poor advice in the context of meta-analysis along similar lines (implicitly advising people to base model decisions within the context of whether to use fixed effects or random effects on the results of heterogeneity tests, despite this approach being criticized as problematic in the formerly mentioned text).

Basic and not terrible, but there are quite a few problems with this text.”

I’ll add a few more details about the above-mentioned problems before moving on to the main coverage. As for the model selection topic I refer specifically to my coverage of Burnham and Anderson’s book here and here – these guys spent a lot of pages talking about why you shouldn’t do what the authors of this book recommend, and I’m sort of flabbergasted medical statisticians don’t know this kind of stuff by now. To people who’ve read both these books, it’s not really in question who’s in the right here.
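To make the contrast concrete, here is a minimal sketch (my own, on simulated data; not code from either book) of comparing two nested linear models with an information criterion rather than with a significance test or adjusted R². The Gaussian AIC is computed directly from the residual sum of squares, up to an additive constant that cancels in comparisons.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                      # irrelevant predictor
y = 1.0 + 0.5 * x1 + rng.normal(size=n)

def gaussian_aic(X, y):
    """AIC for an OLS fit with Gaussian errors: n*log(RSS/n) + 2k, constant dropped."""
    _, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    k = X.shape[1] + 1                       # regression coefficients + error variance
    return len(y) * np.log(rss[0] / len(y)) + 2 * k

X_small = np.column_stack([np.ones(n), x1])
X_big = np.column_stack([np.ones(n), x1, x2])
print(gaussian_aic(X_small, y), gaussian_aic(X_big, y))
# The model without the irrelevant predictor usually has the lower (better) AIC here.
# The comparison trades fit against complexity directly, instead of asking whether a
# p-value for x2 happens to cross an arbitrary threshold.
```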

I believe part of the reason why I was very annoyed at the authors at times was that they seem to promote exactly the sort of blind, unthinking hypothesis-testing approach to things that is unfortunately very common – the entire book is saturated with hypothesis testing stuff, which means that many other topics are woefully insufficiently covered. The meta-analysis example is probably quite illustrative; the authors spend multiple pages on study heterogeneity and how to deal with it, but the entire coverage there is centered around the discussion of a most-likely underpowered test, the result of which should perhaps, in the best-case scenario, direct the researcher’s attention to topics he should have been thinking carefully about from the very start of his data analysis. You don’t need to quote many words from Borenstein and Hedges (here’s a relevant link) to get to the heart of the matter here:

“It makes sense to use the fixed-effect model if two conditions are met. First, we believe that all the studies included in the analysis are functionally identical. Second, our goal is to compute the common effect size for the identified population, and not to generalize to other populations. […] this situation is relatively rare. […] By contrast, when the researcher is accumulating data from a series of studies that had been performed by researchers operating independently, it would be unlikely that all the studies were functionally equivalent. Typically, the subjects or interventions in these studies would have differed in ways that would have impacted on the results, and therefore we should not assume a common effect size. Therefore, in these cases the random-effects model is more easily justified than the fixed-effect model.

A report should state the computational model used in the analysis and explain why this model was selected. A common mistake is to use the fixed-effect model on the basis that there is no evidence of heterogeneity. As [already] explained […], the decision to use one model or the other should depend on the nature of the studies, and not on the significance of this test [because the test will often have low power anyway].”

Yet these guys spend their efforts here talking about a test that is unlikely to yield useful information and which, if anything, probably distracts the reader from the main issues at hand: are the studies functionally equivalent? Do we assume there’s one (‘true’) effect size, or many? What do those coefficients we’re calculating actually mean? The authors do in fact include a lot of cautionary notes about how to interpret the test, but in my view all this means is that they’re devoting critical pages to peripheral issues – and perhaps even reinforcing the view that the test is important, or why else would they spend so much effort on it? – rather than promoting good thinking about the key topics at hand.
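To illustrate what the modelling choice actually involves, here is a minimal sketch of inverse-variance pooling under a fixed-effect model and under a DerSimonian–Laird random-effects model; the effect sizes and variances are toy numbers of my own, not data or code from either book.

```python
# Toy meta-analysis: study effect estimates y_i with within-study variances v_i.
y = [0.10, 0.45, 0.80, 0.15, 0.60]      # made-up effect sizes
v = [0.02, 0.03, 0.04, 0.02, 0.05]      # made-up within-study variances

# Fixed-effect model: one common true effect, weights 1/v_i.
w_fe = [1 / vi for vi in v]
pooled_fe = sum(wi * yi for wi, yi in zip(w_fe, y)) / sum(w_fe)

# DerSimonian-Laird estimate of the between-study variance tau^2.
k = len(y)
Q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w_fe, y))
C = sum(w_fe) - sum(wi ** 2 for wi in w_fe) / sum(w_fe)
tau2 = max(0.0, (Q - (k - 1)) / C)

# Random-effects model: weights 1/(v_i + tau^2).
w_re = [1 / (vi + tau2) for vi in v]
pooled_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)

print(round(pooled_fe, 3), round(pooled_re, 3), round(tau2, 3))
# The two pooled estimates answer different questions (a common effect vs. the mean of
# a distribution of effects); choosing between them is a judgement about the studies,
# not something the Q statistic above should be allowed to decide.
```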

Anyway, enough of the critical comments. Below a few links related to the first chapter of the book, as well as some quotes.

Declaration of Helsinki.
Randomized controlled trial.
Minimization (clinical trials).
Blocking (statistics).
Informed consent.
Blinding (RCTs). (…related xkcd link).
Parallel study. Crossover trial.
Zelen’s design.
Superiority, equivalence, and non-inferiority trials.
Intention-to-treat concept: A review.
Case-control study. Cohort study. Nested case-control study. Cross-sectional study.
Bradford Hill criteria.
Research protocol.
Sampling.
Type 1 and type 2 errors.
Clinical audit. A few quotes on this topic:

“‘Clinical audit’ is a quality improvement process that seeks to improve the patient care and outcomes through systematic review of care against explicit criteria and the implementation of change. Aspects of the structures, processes and outcomes of care are selected and systematically evaluated against explicit criteria. […] The aim of audit is to monitor clinical practice against agreed best practice standards and to remedy problems. […] the choice of topic is guided by indications of areas where improvement is needed […] Possible topics [include] *Areas where a problem has been identified […] *High volume practice […] *High risk practice […] *High cost […] *Areas of clinical practice where guidelines or firm evidence exists […] The organization carrying out the audit should have the ability to make changes based on their findings. […] In general, the same methods of statistical analysis are used for audit as for research […] The main difference between audit and research is in the aim of the study. A clinical research study aims to determine what practice is best, whereas an audit checks to see that best practice is being followed.”

A few more quotes from the end of the chapter:

“In clinical medicine and in medical research it is fairly common to categorize a biological measure into two groups, either to aid diagnosis or to classify an outcome. […] It is often useful to categorize a measurement in this way to guide decision-making, and/or to summarize the data but doing this leads to a loss of information which in turn has statistical consequences. […] If a continuous variable is used for analysis in a research study, a substantially smaller sample size will be needed than if the same variable is categorized into two groups […] *Categorization of a continuous variable into two groups loses much data and should be avoided whenever possible *Categorization of a continuous variable into several groups is less problematic”
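The statistical cost of dichotomizing is easy to demonstrate by simulation. The sketch below (mine, not the book’s) compares the power of a two-sample t-test on the raw measurements with a chi-squared test on the same data split at the pooled median; the group size, effect size, and number of simulations are arbitrary illustration values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, delta, reps, alpha = 100, 0.4, 2000, 0.05
hits_cont = hits_dich = 0

for _ in range(reps):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(delta, 1.0, n)
    # Analysis on the continuous scale.
    if stats.ttest_ind(a, b).pvalue < alpha:
        hits_cont += 1
    # Same data dichotomized at the pooled median ("high" vs "low").
    cut = np.median(np.concatenate([a, b]))
    table = [[np.sum(a > cut), np.sum(a <= cut)],
             [np.sum(b > cut), np.sum(b <= cut)]]
    if stats.chi2_contingency(table)[1] < alpha:
        hits_dich += 1

print(hits_cont / reps, hits_dich / reps)
# The dichotomized analysis rejects the null noticeably less often, i.e. a larger
# sample would be needed to recover the power thrown away by categorization.
```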

“Research studies require certain specific data which must be collected to fulfil the aims of the study, such as the primary and secondary outcomes and main factors related to them. Beyond these data there are often other data that could be collected and it is important to weigh the costs and consequences of not collecting data that will be needed later against the disadvantages of collecting too much data. […] collecting too much data is likely to add to the time and cost to data collection and processing, and may threaten the completeness and/or quality of all of the data so that key data items are threatened. For example if a questionnaire is overly long, respondents may leave some questions out or may refuse to fill it out at all.”

“Stratified samples are used when fixed numbers are needed from particular sections or strata of the population in order to achieve balance across certain important factors. For example a study designed to estimate the prevalence of diabetes in different ethnic groups may choose a random sample with equal numbers of subjects in each ethnic group to provide a set of estimates with equal precision for each group. If a simple random sample is used rather than a stratified sample, then estimates for minority ethnic groups may be based on small numbers and have poor precision. […] Cluster samples may be chosen where individuals fall naturally into groups or clusters. For example, patients on hospital wards or patients in a GP practice. If a sample is needed of these patients, it may be easier to list the clusters and then to choose a random sample of clusters, rather than to choose a random sample of the whole population. […] Cluster sampling is less efficient statistically than simple random sampling […] the ICC summarizes the extent of the ‘clustering effect’. When individuals in the same cluster are much more alike than individuals in different clusters with respect to an outcome, then the clustering effect is greater and the impact on the required sample size is correspondingly greater. In practice there can be a substantial effect on the sample size even when the ICC is quite small. […] As well as considering how representative a sample is, it is important […] to consider the size of the sample. A sample may be unbiased and therefore representative, but too small to give reliable estimates. […] Prevalence estimates from small samples will be imprecise and therefore may be misleading. […] The greater the variability of a measure, the greater the number of subjects needed in the sample to estimate it precisely. […] the power of a study is the ability of the study to detect a difference if one exists.”
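The sample-size penalty from cluster sampling that the quote alludes to is usually expressed through the design effect, DEFF = 1 + (m − 1) × ICC for clusters of (average) size m. A minimal sketch with made-up numbers, showing how even a small ICC inflates the required sample size:

```python
def design_effect(cluster_size: int, icc: float) -> float:
    """DEFF = 1 + (m - 1) * ICC for equal-sized clusters."""
    return 1 + (cluster_size - 1) * icc

n_srs = 400          # sample size that would be needed under simple random sampling
for icc in (0.01, 0.02, 0.05):
    deff = design_effect(cluster_size=30, icc=icc)
    print(icc, round(deff, 2), round(n_srs * deff))   # inflated sample size
# Even an ICC of 0.02 with 30 subjects per cluster (e.g. patients within GP practices)
# inflates the required sample size by nearly 60%.
```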

April 9, 2018 Posted by | Books, Epidemiology, Medicine, Statistics | Leave a comment

A few (more) diabetes papers of interest

Earlier this week I covered a couple of papers, but the second paper turned out to include a lot of interesting stuff, so I decided to cut that post short and postpone my coverage of the other papers I’d intended to include until a later point in time; this post covers some of those papers.

i. TCF7L2 Genetic Variants Contribute to Phenotypic Heterogeneity of Type 1 Diabetes.

“Although the autoimmune destruction of β-cells has a major role in the development of type 1 diabetes, there is growing evidence that the differences in clinical, metabolic, immunologic, and genetic characteristics among patients (1) likely reflect diverse etiology and pathogenesis (2). Factors that govern this heterogeneity are poorly understood, yet these may have important implications for prognosis, therapy, and prevention.

The transcription factor 7 like 2 (TCF7L2) locus contains the single nucleotide polymorphism (SNP) most strongly associated with type 2 diabetes risk, with an ∼30% increase per risk allele (3). In a U.S. cohort, heterozygous and homozygous carriers of the at-risk alleles comprised 40.6% and 7.9%, respectively, of the control subjects and 44.3% and 18.3%, respectively, of the individuals with type 2 diabetes (3). The locus has no known association with type 1 diabetes overall (4–8), with conflicting reports in latent autoimmune diabetes in adults (8–16). […] Our studies in two separate cohorts have shown that the type 2 diabetes–associated TCF7L2 genetic variant is more frequent among specific subsets of individuals with autoimmune type 1 diabetes, specifically those with fewer markers of islet autoimmunity (22,23). These observations support a role of this genetic variant in the pathogenesis of diabetes at least in a subset of individuals with autoimmune diabetes. However, whether individuals with type 1 diabetes and this genetic variant have distinct metabolic abnormalities has not been investigated. We aimed to study the immunologic and metabolic characteristics of individuals with type 1 diabetes who carry a type 2 diabetes–associated allele of the TCF7L2 locus.”

“We studied 810 TrialNet participants with newly diagnosed type 1 diabetes and found that among individuals 12 years and older, the type 2 diabetes–associated TCF7L2 genetic variant is more frequent in those presenting with a single autoantibody than in participants who had multiple autoantibodies. These TCF7L2 variants were also associated with higher mean C-peptide AUC and lower mean glucose AUC levels at the onset of type 1 diabetes. […] These findings suggest that, besides the well-known link with type 2 diabetes, the TCF7L2 locus may play a role in the development of type 1 diabetes. The type 2 diabetes–associated TCF7L2 genetic variant identifies a subset of individuals with autoimmune type 1 diabetes and fewer markers of islet autoimmunity, lower glucose, and higher C-peptide at diagnosis. […] A possible interpretation of these data is that TCF7L2-encoded diabetogenic mechanisms may contribute to diabetes development in individuals with limited autoimmunity […]. Because the risk of progression to type 1 diabetes is lower in individuals with single compared with multiple autoantibodies, it is possible that in the absence of this type 2 diabetes–associated TCF7L2 variant, these individuals may have not manifested diabetes. If that is the case, we would postulate that disease development in these patients may have a type 2 diabetes–like pathogenesis in which islet autoimmunity is a significant component but not necessarily the primary driver.”

“The association between this genetic variant and single autoantibody positivity was present in individuals 12 years or older but not in children younger than 12 years. […] The results in the current study suggest that the type 2 diabetes–associated TCF7L2 genetic variant plays a larger role in older individuals. There is mounting evidence that the pathogenesis of type 1 diabetes varies by age (31). Younger individuals appear to have a more aggressive form of disease, with faster decline of β-cell function before and after onset of disease, higher frequency and severity of diabetic ketoacidosis, which is a clinical correlate of severe insulin deficiency, and lower C-peptide at presentation (31–35). Furthermore, older patients are less likely to have type 1 diabetes–associated HLA alleles and islet autoantibodies (28). […] Taken together, we have demonstrated that individuals with autoimmune type 1 diabetes who carry the type 2 diabetes–associated TCF7L2 genetic variant have a distinct phenotype characterized by milder immunologic and metabolic characteristics than noncarriers, closer to those of type 2 diabetes, with an important effect of age.”

ii. Heart Failure: The Most Important, Preventable, and Treatable Cardiovascular Complication of Type 2 Diabetes.

“Concerns about cardiovascular disease in type 2 diabetes have traditionally focused on atherosclerotic vasculo-occlusive events, such as myocardial infarction, stroke, and limb ischemia. However, one of the earliest, most common, and most serious cardiovascular disorders in patients with diabetes is heart failure (1). Following its onset, patients experience a striking deterioration in their clinical course, which is marked by frequent hospitalizations and eventually death. Many sudden deaths in diabetes are related to underlying ventricular dysfunction rather than a new ischemic event. […] Heart failure and diabetes are linked pathophysiologically. Type 2 diabetes and heart failure are each characterized by insulin resistance and are accompanied by the activation of neurohormonal systems (norepinephrine, angiotensin II, aldosterone, and neprilysin) (3). The two disorders overlap; diabetes is present in 35–45% of patients with chronic heart failure, whether they have a reduced or preserved ejection fraction.”

“Treatments that lower blood glucose do not exert any consistently favorable effect on the risk of heart failure in patients with diabetes (6). In contrast, treatments that increase insulin signaling are accompanied by an increased risk of heart failure. Insulin use is independently associated with an enhanced likelihood of heart failure (7). Thiazolidinediones promote insulin signaling and have increased the risk of heart failure in controlled clinical trials (6). With respect to incretin-based secretagogues, liraglutide increases the clinical instability of patients with existing heart failure (8,9), and the dipeptidyl peptidase 4 inhibitors saxagliptin and alogliptin are associated with an increased risk of heart failure in diabetes (10). The likelihood of heart failure with the use of sulfonylureas may be comparable to that with thiazolidinediones (11). Interestingly, the only two classes of drugs that ameliorate hyperinsulinemia (metformin and sodium–glucose cotransporter 2 inhibitors) are also the only two classes of antidiabetes drugs that appear to reduce the risk of heart failure and its adverse consequences (12,13). These findings are consistent with experimental evidence that insulin exerts adverse effects on the heart and kidneys that can contribute to heart failure (14). Therefore, physicians can prevent many cases of heart failure in type 2 diabetes by careful consideration of the choice of agents used to achieve glycemic control. Importantly, these decisions have an immediate effect; changes in risk are seen within the first few months of changes in treatment. This immediacy stands in contrast to the years of therapy required to see a benefit of antidiabetes drugs on microvascular risk.”

“As reported by van den Berge et al. (4), the prognosis of patients with heart failure has improved over the past two decades; heart failure with a reduced ejection fraction is a treatable disease. Inhibitors of the renin-angiotensin system are a cornerstone of the management of both disorders; they prevent the onset of heart failure and the progression of nephropathy in patients with diabetes, and they reduce the risk of cardiovascular death and hospitalization in those with established heart failure (3,15). Diabetes does not influence the magnitude of the relative benefit of ACE inhibitors in patients with heart failure, but patients with diabetes experience a greater absolute benefit from treatment (16).”

“The totality of evidence from randomized trials […] demonstrates that in patients with diabetes, heart failure is not only common and clinically important, but it can also be prevented and treated. This conclusion is particularly significant because physicians have long ignored heart failure in their focus on glycemic control and their concerns about the ischemic macrovascular complications of diabetes (1).”

iii. Closely related to the above study: Mortality Reduction Associated With β-Adrenoceptor Inhibition in Chronic Heart Failure Is Greater in Patients With Diabetes.

“Diabetes increases mortality in patients with chronic heart failure (CHF) and reduced left ventricular ejection fraction. Studies have questioned the safety of β-adrenoceptor blockers (β-blockers) in some patients with diabetes and reduced left ventricular ejection fraction. We examined whether β-blockers and ACE inhibitors (ACEIs) are associated with differential effects on mortality in CHF patients with and without diabetes. […] We conducted a prospective cohort study of 1,797 patients with CHF recruited between 2006 and 2014, with mean follow-up of 4 years.”

“RESULTS Patients with diabetes were prescribed larger doses of β-blockers and ACEIs than were patients without diabetes. Increasing β-blocker dose was associated with lower mortality in patients with diabetes (8.9% per mg/day; 95% CI 5–12.6) and without diabetes (3.5% per mg/day; 95% CI 0.7–6.3), although the effect was larger in people with diabetes (interaction P = 0.027). Increasing ACEI dose was associated with lower mortality in patients with diabetes (5.9% per mg/day; 95% CI 2.5–9.2) and without diabetes (5.1% per mg/day; 95% CI 2.6–7.6), with similar effect size in these groups (interaction P = 0.76).”

“Our most important findings are:

  • Higher-dose β-blockers are associated with lower mortality in patients with CHF and LVSD, but patients with diabetes may derive more benefit from higher-dose β-blockers.

  • Higher-dose ACEIs were associated with comparable mortality reduction in people with and without diabetes.

  • The association between higher β-blocker dose and reduced mortality is most pronounced in patients with diabetes who have more severely impaired left ventricular function.

  • Among patients with diabetes, the relationship between β-blocker dose and mortality was not associated with glycemic control or insulin therapy.”

“We make the important observation that patients with diabetes may derive more prognostic benefit from higher β-blocker doses than patients without diabetes. These data should provide reassurance to patients and health care providers and encourage careful but determined uptitration of β-blockers in this high-risk group of patients.”

iv. Diabetes, Prediabetes, and Brain Volumes and Subclinical Cerebrovascular Disease on MRI: The Atherosclerosis Risk in Communities Neurocognitive Study (ARIC-NCS).

“Diabetes and prediabetes are associated with accelerated cognitive decline (1), and diabetes is associated with an approximately twofold increased risk of dementia (2). Subclinical brain pathology, as defined by small vessel disease (lacunar infarcts, white matter hyperintensities [WMH], and microhemorrhages), large vessel disease (cortical infarcts), and smaller brain volumes also are associated with an increased risk of cognitive decline and dementia (3–7). The mechanisms by which diabetes contributes to accelerated cognitive decline and dementia are not fully understood, but contributions of hyperglycemia to both cerebrovascular disease and primary neurodegenerative disease have been suggested in the literature, although results are inconsistent (2,8). Given that diabetes is a vascular risk factor, brain atrophy among individuals with diabetes may be driven by increased cerebrovascular disease. Brain magnetic resonance imaging (MRI) provides a noninvasive opportunity to study associations of hyperglycemia with small vessel disease (lacunar infarcts, WMH, microhemorrhages), large vessel disease (cortical infarcts), and brain volumes (9).”

“Overall, the mean age of participants [(n = 1,713)] was 75 years, 60% were women, 27% were black, 30% had prediabetes (HbA1c 5.7 to <6.5%), and 35% had diabetes. Compared with participants without diabetes and HbA1c <5.7%, those with prediabetes (HbA1c 5.7 to <6.5%) were of similar age (75.2 vs. 75.0 years; P = 0.551), were more likely to be black (24% vs. 11%; P < 0.001), have less than a high school education (11% vs. 7%; P = 0.017), and have hypertension (71% vs. 63%; P = 0.012) (Table 1). Among participants with diabetes, those with HbA1c <7.0% versus ≥7.0% were of similar age (75.4 vs. 75.1 years; P = 0.481), but those with diabetes and HbA1c ≥7.0% were more likely to be black (39% vs. 28%; P = 0.020) and to have less than a high school education (23% vs. 16%; P = 0.031) and were more likely to have a longer duration of diabetes (12 vs. 8 years; P < 0.001).”

“Compared with participants without diabetes and HbA1c <5.7%, those with diabetes and HbA1c ≥7.0% had smaller total brain volume (β −0.20 SDs; 95% CI −0.31, −0.09) and smaller regional brain volumes, including frontal, temporal, occipital, and parietal lobes; deep gray matter; Alzheimer disease signature region; and hippocampus (all P < 0.05) […]. Compared with participants with diabetes and HbA1c <7.0%, those with diabetes and HbA1c ≥7.0% had smaller total brain volume (P < 0.001), frontal lobe volume (P = 0.012), temporal lobe volume (P = 0.012), occipital lobe volume (P = 0.008), parietal lobe volume (P = 0.015), deep gray matter volume (P < 0.001), Alzheimer disease signature region volume (P = 0.031), and hippocampal volume (P = 0.016). Both participants with diabetes and HbA1c <7.0% and those with prediabetes (HbA1c 5.7 to <6.5%) had similar total and regional brain volumes compared with participants without diabetes and HbA1c <5.7% (all P > 0.05). […] No differences in the presence of lobar microhemorrhages, subcortical microhemorrhages, cortical infarcts, and lacunar infarcts were observed among the diabetes-HbA1c categories (all P > 0.05) […]. Compared with participants without diabetes and HbA1c <5.7%, those with diabetes and HbA1c ≥7.0% had increased WMH volume (P = 0.016). The WMH volume among participants with diabetes and HbA1c ≥7.0% was also significantly greater than among those with diabetes and HbA1c <7.0% (P = 0.017).”

“Those with diabetes duration ≥10 years were older than those with diabetes duration <10 years (75.9 vs. 75.0 years; P = 0.041) but were similar in terms of race and sex […]. Compared with participants with diabetes duration <10 years, those with diabetes duration ≥10 years had smaller adjusted total brain volume (β −0.13 SDs; 95% CI −0.20, −0.05) and smaller temporal lobe (β −0.14 SDs; 95% CI −0.24, −0.03), parietal lobe (β −0.11 SDs; 95% CI −0.21, −0.01), and hippocampal (β −0.16 SDs; 95% CI −0.30, −0.02) volumes […]. Participants with diabetes duration ≥10 years also had a 2.44 times increased odds (95% CI 1.46, 4.05) of lacunar infarcts compared with those with diabetes duration <10 years”.

“Conclusions
In this community-based population, we found that ARIC-NCS participants with diabetes with HbA1c ≥7.0% have smaller total and regional brain volumes and an increased burden of WMH, but those with prediabetes (HbA1c 5.7 to <6.5%) and diabetes with HbA1c <7.0% have brain volumes and markers of subclinical cerebrovascular disease similar to those without diabetes. Furthermore, among participants with diabetes, those with more-severe disease (as measured by higher HbA1c and longer disease duration) had smaller total and regional brain volumes and an increased burden of cerebrovascular disease compared with those with lower HbA1c and shorter disease duration. However, we found no evidence that associations of diabetes with smaller brain volumes are mediated by cerebrovascular disease.

The findings of this study extend the current literature that suggests that diabetes is strongly associated with brain volume loss (11,25–27). Global brain volume loss (11,25–27) has been consistently reported, but associations of diabetes with smaller specific brain regions have been less robust (27,28). Similar to prior studies, the current results show that compared with individuals without diabetes, those with diabetes have smaller total brain volume (11,25–27) and regional brain volumes, including frontal and occipital lobes, deep gray matter, and the hippocampus (25,27). Furthermore, the current study suggests that greater severity of disease (as measured by HbA1c and diabetes duration) is associated with smaller total and regional brain volumes. […] Mechanisms whereby diabetes may contribute to brain volume loss include accelerated amyloid-β and hyperphosphorylated tau deposition as a result of hyperglycemia (29). Another possible mechanism involves pancreatic amyloid (amylin) infiltration of the brain, which then promotes amyloid-β deposition (29). […] Taken together, […] the current results suggest that diabetes is associated with both lower brain volumes and increased cerebrovascular pathology (WMH and lacunes).”

v. Interventions to increase attendance for diabetic retinopathy screening (Cochrane review).

“The primary objective of the review was to assess the effectiveness of quality improvement (QI) interventions that seek to increase attendance for DRS in people with type 1 and type 2 diabetes.

Secondary objectives were:
To use validated taxonomies of QI intervention strategies and behaviour change techniques (BCTs) to code the description of interventions in the included studies and determine whether interventions that include particular QI strategies or component BCTs are more effective in increasing screening attendance;
To explore heterogeneity in effect size within and between studies to identify potential explanatory factors for variability in effect size;
To explore differential effects in subgroups to provide information on how equity of screening attendance could be improved;
To critically appraise and summarise current evidence on the resource use, costs and cost effectiveness.”

“We included 66 RCTs conducted predominantly (62%) in the USA. Overall we judged the trials to be at low or unclear risk of bias. QI strategies were multifaceted and targeted patients, healthcare professionals or healthcare systems. Fifty-six studies (329,164 participants) compared intervention versus usual care (median duration of follow-up 12 months). Overall, DRS [diabetic retinopathy screening] attendance increased by 12% (risk difference (RD) 0.12, 95% confidence interval (CI) 0.10 to 0.14; low-certainty evidence) compared with usual care, with substantial heterogeneity in effect size. Both DRS-targeted (RD 0.17, 95% CI 0.11 to 0.22) and general QI interventions (RD 0.12, 95% CI 0.09 to 0.15) were effective, particularly where baseline DRS attendance was low. All BCT combinations were associated with significant improvements, particularly in those with poor attendance. We found higher effect estimates in subgroup analyses for the BCTs ‘goal setting (outcome)’ (RD 0.26, 95% CI 0.16 to 0.36) and ‘feedback on outcomes of behaviour’ (RD 0.22, 95% CI 0.15 to 0.29) in interventions targeting patients, and ‘restructuring the social environment’ (RD 0.19, 95% CI 0.12 to 0.26) and ‘credible source’ (RD 0.16, 95% CI 0.08 to 0.24) in interventions targeting healthcare professionals.”

“Ten studies (23,715 participants) compared a more intensive (stepped) intervention versus a less intensive intervention. In these studies DRS attendance increased by 5% (RD 0.05, 95% CI 0.02 to 0.09; moderate-certainty evidence).”

“Overall, we found that there is insufficient evidence to draw robust conclusions about the relative cost effectiveness of the interventions compared to each other or against usual care.”

“The results of this review provide evidence that QI interventions targeting patients, healthcare professionals or the healthcare system are associated with meaningful improvements in DRS attendance compared to usual care. There was no statistically significant difference between interventions specifically aimed at DRS and those which were part of a general QI strategy for improving diabetes care.”

vi. Diabetes in China: Epidemiology and Genetic Risk Factors and Their Clinical Utility in Personalized Medication.

“The incidence of type 2 diabetes (T2D) has rapidly increased over recent decades, and T2D has become a leading public health challenge in China. Compared with European descents, Chinese patients with T2D are diagnosed at a relatively young age and low BMI. A better understanding of the factors contributing to the diabetes epidemic is crucial for determining future prevention and intervention programs. In addition to environmental factors, genetic factors contribute substantially to the development of T2D. To date, more than 100 susceptibility loci for T2D have been identified. Individually, most T2D genetic variants have a small effect size (10–20% increased risk for T2D per risk allele); however, a genetic risk score that combines multiple T2D loci could be used to predict the risk of T2D and to identify individuals who are at a high risk. […] In this article, we review the epidemiological trends and recent progress in the understanding of T2D genetic etiology and further discuss personalized medicine involved in the treatment of T2D.”

“Over the past three decades, the prevalence of diabetes in China has sharply increased. The prevalence of diabetes was reported to be less than 1% in 1980 (2), 5.5% in 2001 (3), 9.7% in 2008 (4), and 10.9% in 2013, according to the latest published nationwide survey (5) […]. The prevalence of diabetes was higher in the senior population, men, urban residents, individuals living in economically developed areas, and overweight and obese individuals. The estimated prevalence of prediabetes in 2013 was 35.7%, which was much higher than the estimate of 15.5% in the 2008 survey. Similarly, the prevalence of prediabetes was higher in the senior population, men, and overweight and obese individuals. However, prediabetes was more prevalent in rural residents than in urban residents. […] the 2013 survey also compared the prevalence of diabetes among different races. The crude prevalence of diabetes was 14.7% in the majority group, i.e., Chinese Han, which was higher than that in most minority ethnic groups, including Tibetan, Zhuang, Uyghur, and Muslim. The crude prevalence of prediabetes was also higher in the Chinese Han ethnic group. The Tibetan participants had the lowest prevalence of diabetes and prediabetes (4.3% and 31.3%).”

“[T]he prevalence of diabetes in young people is relatively high and increasing. The prevalence of diabetes in the 20- to 39-year age-group was 3.2%, according to the 2008 national survey (4), and was 5.9%, according to the 2013 national survey (5). The prevalence of prediabetes also increased from 9.0% in 2008 to 28.8% in 2013 […]. Young people suffering from diabetes have a higher risk of chronic complications, which are the major cause of mortality and morbidity in diabetes. According to a study conducted in Asia (6), patients with young-onset diabetes had higher mean concentrations of HbA1c and LDL cholesterol and a higher prevalence of retinopathy (20% vs. 18%, P = 0.011) than those with late-onset diabetes. In the Chinese, patients with early-onset diabetes had a higher risk of nonfatal cardiovascular disease (7) than did patients with late-onset diabetes (odds ratio [OR] 1.91, 95% CI 1.81–2.02).”

“As approximately 95% of patients with diabetes in China have T2D, the rapid increase in the prevalence of diabetes in China may be attributed to the increasing rates of overweight and obesity and the reduction in physical activity, which is driven by economic development, lifestyle changes, and diet (3,11). According to a series of nationwide surveys conducted by the China Physical Fitness Surveillance Center (12), the prevalence of overweight (BMI ≥23.0 to <27.5 kg/m2) in Chinese adults aged 20–59 years increased from 37.4% in 2000 to 39.2% in 2005, 40.7% in 2010, and 41.2% in 2014, with an estimated increase of 0.27% per year. The prevalence of obesity (BMI ≥27.5 kg/m2) increased from 8.6% in 2000 to 10.3% in 2005, 12.2% in 2010, and 12.9% in 2014, with an estimated increase of 0.32% per year […]. The prevalence of central obesity increased from 13.9% in 2000 to 18.3% in 2005, 22.1% in 2010, and 24.9% in 2014, with an estimated increase of 0.78% per year. Notably, T2D develops at a considerably lower BMI in the Chinese population than that in European populations. […] The relatively high risk of diabetes at a lower BMI could be partially attributed to the tendency toward visceral adiposity in East Asian populations, including the Chinese population (13). Moreover, East Asian populations have been found to have a higher insulin sensitivity with a much lower insulin response than European descent and African populations, implying a lower compensatory β-cell function, which increases the risk of progressing to overt diabetes (14).”

“Over the past two decades, linkage analyses, candidate gene approaches, and large-scale GWAS have successfully identified more than 100 genes that confer susceptibility to T2D among the world’s major ethnic populations […], most of which were discovered in European populations. However, less than 50% of these European-derived loci have been successfully confirmed in East Asian populations. […] there is a need to identify specific genes that are associated with T2D in other ethnic populations. […] Although many genetic loci have been shown to confer susceptibility to T2D, the mechanism by which these loci participate in the pathogenesis of T2D remains unknown. Most T2D loci are located near genes that are related to β-cell function […] most single nucleotide polymorphisms (SNPs) contributing to the T2D risk are located in introns, but whether these SNPs directly modify gene expression or are involved in linkage disequilibrium with unknown causal variants remains to be investigated. Furthermore, the loci discovered thus far collectively account for less than 15% of the overall estimated genetic heritability.”

“The areas under the receiver operating characteristic curves (AUCs) are usually used to assess the discriminative accuracy of an approach. The AUC values range from 0.5 to 1.0, where an AUC of 0.5 represents a lack of discrimination and an AUC of 1 represents perfect discrimination. An AUC ≥0.75 is considered clinically useful. The dominant conventional risk factors, including age, sex, BMI, waist circumference, blood pressure, family history of diabetes, physical activity level, smoking status, and alcohol consumption, can be combined to construct conventional risk factor–based models (CRM). Several studies have compared the predictive capacities of models with and without genetic information. The addition of genetic markers to a CRM could slightly improve the predictive performance. For example, one European study showed that the addition of an 11-SNP GRS to a CRM marginally improved the risk prediction (AUC was 0.74 without and 0.75 with the genetic markers, P < 0.001) in a prospective cohort of 16,000 individuals (37). A meta-analysis (38) consisting of 23 studies investigating the predictive performance of T2D risk models also reported that the AUCs only slightly increased with the addition of genetic information to the CRM (median AUC was increased from 0.78 to 0.79). […] Despite great advances in genetic studies, the clinical utility of genetic information in the prediction, early identification, and prevention of T2D remains in its preliminary stage.”
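The ‘marginal AUC gain’ point is easy to reproduce on simulated data. The sketch below is entirely synthetic (not the cited studies’ data, models, or code): it computes the AUC – the probability that a randomly chosen case scores higher than a randomly chosen control – for a conventional-risk-factor score alone and after adding a weak genetic risk score.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
conventional = rng.normal(size=n)   # stand-in for a conventional-risk-factor (CRM) score
grs = rng.normal(size=n)            # stand-in for a genetic risk score
# Simulated disease risk: conventional factors dominate, the GRS adds a little.
p = 1 / (1 + np.exp(-(-2.5 + 1.2 * conventional + 0.3 * grs)))
disease = rng.random(n) < p

def auc(score, outcome):
    """AUC as a concordance probability, via the rank-sum (Mann-Whitney) identity."""
    ranks = np.empty(len(score))
    ranks[np.argsort(score)] = np.arange(1, len(score) + 1)  # ties ignored (continuous scores)
    n_pos = outcome.sum()
    n_neg = len(outcome) - n_pos
    return (ranks[outcome].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(round(auc(conventional, disease), 3))               # CRM alone
print(round(auc(conventional + 0.25 * grs, disease), 3))  # CRM plus GRS (true relative weight)
# The combined score discriminates better, but the AUC only moves slightly - the same
# kind of marginal improvement (e.g. 0.74 to 0.75) reported in the quote above.
```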

“An increasing number of studies have highlighted that early nutrition has a persistent effect on the risk of diabetes in later life (40,41). China’s Great Famine of 1959–1962 is considered to be the largest and most severe famine of the 20th century […] Li et al. (43) found that offspring of mothers exposed to the Chinese famine have a 3.9-fold increased risk of diabetes or hyperglycemia as adults. A more recent study (the Survey on Prevalence in East China for Metabolic Diseases and Risk Factors [SPECT-China]) conducted in 2014, among 6,897 adults from Shanghai, Jiangxi, and Zhejiang provinces, had the same conclusion that famine exposure during the fetal period (OR 1.53, 95% CI 1.09–2.14) and childhood (OR 1.82, 95% CI 1.21–2.73) was associated with diabetes (44). These findings indicate that undernutrition during early life increases the risk of hyperglycemia in adulthood and this association is markedly exaggerated when facing overnutrition in later life.”

February 23, 2018 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Health Economics, Immunology, Medicine, Neurology, Ophthalmology, Pharmacology, Studies | Leave a comment

Endocrinology (part 5 – calcium and bone metabolism)

Some observations from chapter 6:

“*Osteoclasts – derived from the monocytic cells; resorb bone. *Osteoblasts – derived from the fibroblast-like cells; make bone. *Osteocytes – buried osteoblasts; sense mechanical strain in bone. […] In order to ensure that bone can undertake its mechanical and metabolic functions, it is in a constant state of turnover […] Bone is laid down rapidly during skeletal growth at puberty. Following this, there is a period of stabilization of bone mass in early adult life. After the age of ~40, there is a gradual loss of bone in both sexes. This occurs at the rate of approximately 0.5% annually. However, in ♀ after the menopause, there is a period of rapid bone loss. The accelerated loss is maximal in the first 2-5 years after the cessation of ovarian function and then gradually declines until the previous gradual rate of loss is once again established. The excess bone loss associated with the menopause is of the order of 10% of skeletal mass. This menopause-associated loss, coupled with higher peak bone mass acquisition in ♂, largely explains why osteoporosis and its associated fractures are more common in ♀.”

“The clinical utility of routine measurements of bone turnover markers is not yet established. […] Skeletal radiology[:] *Useful for: *Diagnosis of fracture. *Diagnosis of specific diseases (e.g. Paget’s disease and osteomalacia). *Identification of bone dysplasia. *Not useful for assessing bone density. […] Isotope bone scans are useful for identifying localized areas of bone disease, such as fracture, metastases, or Paget’s disease. […] Isotope bone scans are particularly useful in Paget’s disease to establish the extent and sites of skeletal involvement and the underlying disease activity. […] Bone biopsy is occasionally necessary for the diagnosis of patients with complex metabolic bone diseases. […] Bone biopsy is not indicated for the routine diagnosis of osteoporosis. It should only be undertaken in highly specialist centres with appropriate expertise. […] Measurement of 24h urinary excretion of calcium provides a measure of risk of renal stone formation or nephrocalcinosis in states of chronic hypercalcaemia. […] 25OH vitamin D […] is the main storage form of vitamin D, and the measurement of ‘total vitamin D’ is the most clinically useful measure of vitamin D status. Internationally, there remains controversy around a ‘normal’ or ‘optimal’ concentration of vitamin D. Levels over 50nmol/L are generally accepted as satisfactory and values <25nmol/L representing deficiency. True osteomalacia occurs with vitamin D values <15 nmol/L. Low levels of 25OHD can result from a variety of causes […] Bone mass is quoted in terms of the number of standard deviations from an expected mean. […] A reduction of one SD in bone density will approximately double the risk of fracture.”

[I should perhaps add a cautionary note here that while this variable is very useful in general, it is more useful in some contexts than in others; and in some specific disease process contexts it is quite clear that it will tend to underestimate the fracture risk. Type 1 diabetes is a clear example. For more details, see this post.]
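The quoted rule of thumb – roughly a doubling of fracture risk per SD reduction in bone density – implies an exponential gradient of risk, which is simple to write down. This is purely an illustration of that rule (my own framing, not a clinical calculator, and subject to exactly the caveats in the note above):

```python
def relative_fracture_risk(sd_below_expected_mean: float, risk_ratio_per_sd: float = 2.0) -> float:
    """Relative risk implied by 'each SD reduction roughly doubles fracture risk'.
    Illustrative only; the gradient per SD differs by site, age, and disease context."""
    return risk_ratio_per_sd ** sd_below_expected_mean

for sds in (1.0, 2.5):
    print(sds, relative_fracture_risk(sds))
# 1 SD below -> ~2x risk; 2.5 SD below (the usual densitometric threshold for
# osteoporosis) -> ~5.7x risk relative to the expected mean, on this rule of thumb.
```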

“Hypercalcaemia is found in 5% of hospital patients and in 0.5% of the general population. […] Many different disease states can lead to hypercalcaemia. […] In asymptomatic community-dwelling subjects, the vast majority of hypercalcaemia is the result of hyperparathyroidism. […] The clinical features of hypercalcaemia are well recognized […]; unfortunately, they are non-specific […] [They include:] *Polyuria. *Polydipsia. […] *Anorexia. *Vomiting. *Constipation. *Abdominal pain. […] *Confusion. *Lethargy. *Depression. […] Clinical signs of hypercalcaemia are rare. […] the presence of bone pain or fracture and renal stones […] indicate the presence of chronic hypercalcaemia. […] Hypercalcaemia is usually a late manifestation of malignant disease, and the primary lesion is usually evident by the time hypercalcaemia is expressed (50% of patients die within 30 days).”

“Primary hyperparathyroidism [is] [p]resent in up to 1 in 500 of the general population where it is predominantly a disease of post-menopausal ♀ […] The normal physiological response to hypocalcaemia is an increase in PTH secretion. This is termed 2° hyperparathyroidism and is not pathological in as much as the PTH secretion remains under feedback control. Continued stimulation of the parathyroid glands can lead to autonomous production of PTH. This, in turn, causes hypercalcaemia which is termed tertiary hyperparathyroidism. This is usually seen in the context of renal disease […] In majority of patients [with hyperparathyroidism] without end-organ damage, disease is benign and stable. […] Investigation is, therefore, primarily aimed at determining the presence of end-organ damage from hypercalcaemia in order to determine whether operative intervention is indicated. […] It is generally accepted that all patients with symptomatic hyperparathyroidism or evidence of end-organ damage should be considered for parathyroidectomy. This would include: *Definite symptoms of hypercalcaemia. […] *Impaired renal function. *Renal stones […] *Parathyroid bone disease, especially osteitis fibrosa cystica. *Pancreatitis. […] Patients not managed with surgery require regular follow-up. […] <5% fail to become normocalcaemic [after surgery], and these should be considered for a second operation. […] Patients rendered permanently hypoparathyroid by surgery require lifelong supplements of active metabolites of vitamin D with calcium. This can lead to hypercalciuria, and the risk of stone formation may still be present in these patients. […] In hypoparathyroidism, the target serum calcium should be at the low end of the reference range. […] any attempt to raise the plasma calcium well into the normal range is likely to result in unacceptable hypercalciuria”.

“Although hypocalcaemia can result from failure of any of the mechanisms by which serum calcium concentration is maintained, it is usually the result of either failure of PTH secretion or because of the inability to release calcium from bone. […] The clinical features of hypocalcaemia are largely as a result of neuromuscular excitability. In order of  severity, these include: *Tingling – especially of fingers, toes, or lips. *Numbness – especially of fingers, toes, or lips. *Cramps. *Carpopedal spasm. *Stridor due to laryngospasm. *Seizures. […] symptoms of hypocalcaemia tend to reflect the severity and rapidity of onset of the metabolic abnormality. […] there may be clinical signs and symptoms associated with the underlying condition: *Vitamin D deficiency may be associated with generalized bone pain, fractures, or proximal myopathy […] *Hypoparathyroidism can be accompanied by mental slowing and personality disturbances […] *If hypocalcaemia is present during the development of permanent teeth, these may show areas of enamel hypoplasia. This can be a useful physical sign, indicating that the hypocalcaemia is long-standing. […] Acute symptomatic hypocalcaemia is a medical emergency and demands urgent treatment whatever the cause […] *Patients with tetany or seizures require urgent IV treatment with calcium gluconate […] Care must be taken […] as too rapid elevation of the plasma calcium can cause arrhythmias. […] *Treatment of chronic hypocalcaemia is more dependent on the cause. […] In patients with mild parathyroid dysfunction, it may be possible to achieve acceptable calcium concentrations by using calcium supplements alone. […] The majority of patients will not achieve adequate control with such treatment. In those cases, it is necessary to use vitamin D or its metabolites in pharmacological doses to maintain plasma calcium.”

“Pseudohypoparathyroidism[:] *Resistance to parathyroid hormone action. *Due to defective signalling of PTH action via cell membrane receptor. *Also affects TSH, LH, FSH, and GH signalling. […] Patients with the most common type of pseudohypoparathyroidism (type 1a) have a characteristic set of skeletal abnormalities, known as Albright’s hereditary osteodystrophy. This comprises: *Short stature. *Obesity. *Round face. *Short metacarpals. […] The principles underlying the treatment of pseudohypoparathyroidism are the same as those underlying hypoparathyroidism. *Patients with the most common form of pseudohypoparathyroidism may have resistance to the action of other hormones which rely on G protein signalling. They, therefore, need to be assessed for thyroid and gonadal dysfunction (because of defective TSH and gonadotrophin action). If these deficiencies are present, they need to be treated in the conventional manner.”

“Osteomalacia occurs when there is inadequate mineralization of mature bone. Rickets is a disorder of the growing skeleton where there is inadequate mineralization of bone as it is laid down at the epiphysis. In most instances, osteomalacia leads to build-up of excessive unmineralized osteoid within the skeleton. In rickets, there is build-up of unmineralized osteoid in the growth plate. […] These two related conditions may coexist. […] Clinical features [of osteomalacia:] *Bone pain. *Deformity. *Fracture. *Proximal myopathy. *Hypocalcaemia (in vitamin D deficiency). […] The majority of patients with osteomalacia will show no specific radiological abnormalities. *The most characteristic abnormality is the Looser’s zone or pseudofracture. If these are present, they are virtually pathognomonic of osteomalacia. […] Oncogenic osteomalacia[:] Certain tumours appear to be able to produce FGF23 which is phosphaturic. This is rare […] Clinically, such patients usually present with profound myopathy as well as bone pain and fracture. […] Complete removal of the tumour results in resolution of the biochemical and skeletal abnormalities. If this is not possible […], treatment with vitamin D metabolites and phosphate supplements […] may help the skeletal symptoms.”

“Hypophosphataemia[:] Phosphate is important for normal mineralization of bone. In the absence of sufficient phosphate, osteomalacia results. […] In addition, phosphate is important in its own right for neuromuscular function, and profound hypophosphataemia can be accompanied by encephalopathy, muscle weakness, and cardiomyopathy. It must be remembered that, as phosphate is primarily an intracellular anion, a low plasma phosphate does not necessarily represent actual phosphate depletion. […] Mainstay [of treatment] is phosphate replacement […] *Long-term administration of phosphate supplements stimulates parathyroid activity. This can lead to hypercalcaemia, a further fall in phosphate, with worsening of the bone disease […] To minimize parathyroid stimulation, it is usual to give one of the active metabolites of vitamin D in conjunction with phosphate.”

“Although the term osteoporosis refers to the reduction in the amount of bony tissue within the skeleton, this is generally associated with a loss of structural integrity of the internal architecture of the bone. The combination of both these changes means that osteoporotic bone is at high risk of fracture, even after trivial injury. […] Historically, there has been a primary reliance on bone mineral density as a threshold for treatment, whereas currently there is far greater emphasis on assessing individual patients’ risk of fracture that incorporates multiple clinical risk factors as well as bone mineral density. […] Osteoporosis may arise from a failure of the body to lay down sufficient bone during growth and maturation; an earlier than usual onset of bone loss following maturity; or an increased rate of that loss. […] Early menopause or late puberty (in ♂ or ♀) is associated with risk of osteoporosis. […] Lifestyle factors affecting bone mass [include:] *weight-bearing exercise [increases bone mass] […] *Smoking. *Excessive alcohol. *Nulliparity. *Poor calcium nutrition. [These all decrease bone mass] […] The risk of osteoporotic fracture increases with age. Fracture rates in ♂ are approximately half of those seen in ♀ of the same age. A ♀ aged 50 has approximately a 1:2 chance [risk, surely… – US] of sustaining an osteoporotic fracture in the rest of her life. The corresponding figure for a ♂ is 1:5. […] One-fifth of hip fracture victims will die within 6 months of the injury, and only 50% will return to their previous level of independence.”

“Any fracture, other than those affecting fingers, toes, or face, which is caused by a fall from standing height or less is called a fragility (low-trauma) fracture, and underlying osteoporosis should be considered. Patients suffering such a fracture should be considered for investigation and/or treatment for osteoporosis. […] [Osteoporosis is] [u]sually clinically silent until an acute fracture. *Two-thirds of vertebral fractures do not come to clinical attention. […] Osteoporotic vertebral fractures only rarely lead to neurological impairment. Any evidence of spinal cord compression should prompt a search for malignancy or other underlying cause. […] Osteoporosis does not cause generalized skeletal pain. […] Biochemical markers of bone turnover may be helpful in the calculation of fracture risk and in judging the response to drug therapies, but they have no role in the diagnosis of osteoporosis. […] An underlying cause for osteoporosis is present in approximately 10-30% of women and up to 50% of men with osteoporosis. […] 2° causes of osteoporosis are more common in ♂ and need to be excluded in all ♂ with osteoporotic fracture. […] Glucocorticoid treatment is one of the major 2° causes of osteoporosis.”

February 22, 2018 Posted by | Books, Cancer/oncology, Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Pharmacology | Leave a comment

A few diabetes papers of interest

(I hadn’t expected to only cover two papers in this post, but the second paper turned out to include a lot of stuff I figured might be worth adding here. I might add another post later this week including some of the other studies I had intended to cover in this post.)

i. Burden of Mortality Attributable to Diagnosed Diabetes: A Nationwide Analysis Based on Claims Data From 65 Million People in Germany.

“Diabetes is among the 10 most common causes of death worldwide (2). Between 1990 and 2010, the number of deaths attributable to diabetes has doubled (2). People with diabetes have a reduced life expectancy of ∼5 to 6 years (3). The most common cause of death in people with diabetes is cardiovascular disease (3,4). Over the past few decades, a reduction of diabetes mortality has been observed in several countries (5–9). However, the excess risk of death is still higher than in the population without diabetes, particularly in younger age-groups (4,9,10). Unfortunately, in most countries worldwide, reliable data on diabetes mortality are lacking (1). In a few European countries, such as Denmark (5) and Sweden (4), mortality analyses are based on national diabetes registries that include all age-groups. However, Germany and many other European countries do not have such national registries. Until now, age-standardized hazard ratios for diabetes mortality between 1.4 and 2.6 have been published for Germany on the basis of regional studies and surveys with small respondent numbers (11–14). To the best of our knowledge, no nationwide estimates of the number of excess deaths due to diabetes have been published for Germany, and no information on older age-groups >79 years is currently available.

In 2012, changes in the regulation of data transparency enabled the use of nationwide routine health care data from the German statutory health insurance system, which insures ∼90% of the German population (15). These changes have allowed for new possibilities for estimating the burden of diabetes in Germany. Hence, this study estimates the number of excess deaths due to diabetes (ICD-10 codes E10–E14) and type 2 diabetes (ICD-10 code E11) in Germany, which is the number of deaths that could have been prevented if the diabetes mortality rate was as high as that of the population without diabetes.”

“Nationwide data on mortality ratios for diabetes and no diabetes are not available for Germany. […] the age- and sex-specific mortality rate ratios between people with diabetes and without diabetes were used from a Danish study wherein the Danish National Diabetes Register was linked to the individual mortality data from the Civil Registration System that includes all people residing in Denmark (5). Because the Danish National Diabetes Register is one of the most accurate diabetes registries in Europe, with a sensitivity of 86% and positive predictive value of 90% (5), we are convinced that the Danish estimates are highly valid and reliable. Denmark and Germany have a comparable standard of living and health care system. The diabetes prevalence in these countries is similar (Denmark 7.2%, Germany 7.4% [20]) and mortality of people with and without diabetes comparable, as shown in the European mortality database”
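
A quick illustration of how such an excess-mortality calculation works in practice (my own Python sketch with made-up numbers for a single stratum, not the authors’ code or data):

def excess_deaths(n_diabetes, mortality_rate_no_diabetes, rate_ratio):
    # Deaths expected among people with diabetes at their (higher) mortality rate,
    # minus the deaths expected if they instead died at the rate of the population
    # without diabetes. The rate ratio is the kind of age- and sex-specific figure
    # taken from the Danish register study.
    observed = n_diabetes * mortality_rate_no_diabetes * rate_ratio
    expected = n_diabetes * mortality_rate_no_diabetes
    return observed - expected

# Illustrative stratum only: 500,000 people with diagnosed diabetes, a background
# mortality rate of 2% per year, and a mortality rate ratio of 1.5.
print(excess_deaths(500_000, 0.02, 1.5))  # -> 5000.0 excess deaths in this stratum

Summing such stratum-level numbers over all age- and sex-groups is what yields nationwide totals of the kind reported in the paper.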

“In total, 174,627 excess deaths (137,950 from type 2 diabetes) could have been prevented in 2010 if mortality was the same in people with and without diabetes. Overall, 21% of all deaths in Germany were attributable to diabetes, and 16% were attributable to type 2 diabetes […] Most of the excess deaths occurred in the 70- to 79- and 80- to 89-year-old age-groups (∼34% each) […]. Substantial sex differences were found in diabetes-related excess deaths. From the age of ∼40 years, the number of male excess deaths due to diabetes started to grow, but the number of female excess deaths increased with a delay. Thus, the highest number of male excess deaths due to diabetes occurred at the age of ∼75 years, whereas the peak of female excess deaths was ∼10 years later. […] The diabetes mortality rates increased with age and were always higher than in the population without diabetes. The largest differences in mortality rates between people with and without diabetes were observed in the younger age-groups. […] These results are in accordance with previous studies worldwide (3,4,7,9) and regional studies in Germany (11–13).”

“According to official numbers from the Federal Statistical Office, 858,768 people died in Germany in 2010, with 23,131 deaths due to diabetes, representing 2.7% of the all-cause mortality (26). Hence, in Germany, diabetes is not ranked among the top 10 most common causes of death […]. We found that 21% of all deaths were attributable to diabetes and 16% were attributable to type 2 diabetes; hence, we suggest that the number of excess deaths attributable to diabetes is strongly underestimated if we rely on reported causes of death from death certificates, as official statistics do. Estimating diabetes-related mortality is challenging because most people die as a result of diabetes complications and comorbidities, such as cardiovascular disease and renal failure, which often are reported as the underlying cause of death (1,23). For this reason, another approach is to focus not only on the underlying cause of death but also on the multiple causes of death to assess any mention of a disease on the death certificate (27). In a study from Italy, the method of assessing multiple causes of death revealed that in 12.3% of all studied death certificates, diabetes was mentioned, whereas only 2.9% reported diabetes as the underlying cause of death (27), corresponding to a four times higher proportion of death related to diabetes. Another nationwide analysis from Canada found that diabetes was more than twice as likely to be a contributing factor to death than the underlying cause of death from the years 2004–2008 (28). A recently published study from the U.S. that was based on two representative surveys from 1997 to 2010 found that 11.5% of all deaths were attributable to diabetes, which reflects a three to four times higher proportion of diabetes-related deaths (29). Overall, these results, together with the current calculations, demonstrate that deaths due to diabetes contribute to a much higher burden than previously assumed.”

ii. Standardizing Clinically Meaningful Outcome Measures Beyond HbA1c for Type 1 Diabetes: A Consensus Report of the American Association of Clinical Endocrinologists, the American Association of Diabetes Educators, the American Diabetes Association, the Endocrine Society, JDRF International, The Leona M. and Harry B. Helmsley Charitable Trust, the Pediatric Endocrine Society, and the T1D Exchange.

“Type 1 diabetes is a life-threatening, autoimmune disease that strikes children and adults and can be fatal. People with type 1 diabetes have to test their blood glucose multiple times each day and dose insulin via injections or an infusion pump 24 h a day every day. Too much insulin can result in hypoglycemia, seizures, coma, or death. Hyperglycemia over time leads to kidney, heart, nerve, and eye damage. Even with diligent monitoring, the majority of people with type 1 diabetes do not achieve recommended target glucose levels. In the U.S., approximately one in five children and one in three adults meet hemoglobin A1c (HbA1c) targets and the average patient spends 7 h a day hyperglycemic and over 90 min hypoglycemic (1–3). […] HbA1c is a well-accepted surrogate outcome measure for evaluating the efficacy of diabetes therapies and technologies in clinical practice as well as in research (4–6). […] While HbA1c is used as a primary outcome to assess glycemic control and as a surrogate for risk of developing complications, it has limitations. As a measure of mean blood glucose over 2 or 3 months, HbA1c does not capture short-term variations in blood glucose or exposure to hypoglycemia and hyperglycemia in individuals with type 1 diabetes; HbA1c also does not capture the impact of blood glucose variations on individuals’ quality of life. Recent advances in type 1 diabetes technologies have made it feasible to assess the efficacy of therapies and technologies using a set of outcomes beyond HbA1c and to expand definitions of outcomes such as hypoglycemia. While definitions for hypoglycemia in clinical care exist, they have not been standardized […]. The lack of standard definitions impedes and can confuse their use in clinical practice, impedes development processes for new therapies, makes comparison of studies in the literature challenging, and may lead to regulatory and reimbursement decisions that fail to meet the needs of people with diabetes. To address this vital issue, the type 1 diabetes–stakeholder community launched the Type 1 Diabetes Outcomes Program to develop consensus definitions for a set of priority outcomes for type 1 diabetes. […] The outcomes prioritized under the program include hypoglycemia, hyperglycemia, time in range, diabetic ketoacidosis (DKA), and patient-reported outcomes (PROs).”

“Hypoglycemia is a significant — and potentially fatal — complication of type 1 diabetes management and has been found to be a barrier to achieving glycemic goals (9). Repeated exposure to severe hypoglycemic events has been associated with an increased risk of cardiovascular events and all-cause mortality in people with type 1 or type 2 diabetes (10,11). Hypoglycemia can also be fatal, and severe hypoglycemic events have been associated with increased mortality (12–14). In addition to the physical aspects of hypoglycemia, it can also have negative consequences on emotional status and quality of life.

While there is some variability in how and when individuals manifest symptoms of hypoglycemia, beginning at blood glucose levels <70 mg/dL (3.9 mmol/L) (which is at the low end of the typical post-absorptive plasma glucose range), the body begins to increase its secretion of counterregulatory hormones including glucagon, epinephrine, cortisol, and growth hormone. The release of these hormones can cause moderate autonomic effects, including but not limited to shaking, palpitations, sweating, and hunger (15). Individuals without diabetes do not typically experience dangerously low blood glucose levels because of counterregulatory hormonal regulation of glycemia (16). However, in individuals with type 1 diabetes, there is often a deficiency of the counterregulatory response […]. Moreover, as people with diabetes experience an increased number of episodes of hypoglycemia, the risk of hypoglycemia unawareness, impaired glucose counterregulation (for example, in hypoglycemia-associated autonomic failure [17]), and level 2 and level 3 hypoglycemia […] all increase (18). Therefore, it is important to recognize and treat all hypoglycemic events in people with type 1 diabetes, particularly in populations (children, the elderly) that may not have the ability to recognize and self-treat hypoglycemia. […] More notable clinical symptoms begin at blood glucose levels <54 mg/dL (3.0 mmol/L) (19,20). As the body’s primary utilizer of glucose, the brain is particularly sensitive to decreases in blood glucose concentrations. Both experimental and clinical evidence has shown that, at these levels, neurogenic and neuroglycopenic symptoms including impairments in reaction times, information processing, psychomotor function, and executive function begin to emerge. These neurological symptoms correlate to altered brain activity in multiple brain areas including the prefrontal cortex and medial temporal lobe (21–24). At these levels, individuals may experience confusion, dizziness, blurred or double vision, tremors, and tingling sensations (25). Hypoglycemia at this glycemic level may also increase proinflammatory and prothrombotic markers (26). Left untreated, these symptoms can become severe to the point that an individual will require assistance from others to move or function. Prolonged untreated hypoglycemia that continues to drop below 50 mg/dL (2.8 mmol/L) increases the risk of seizures, coma, and death (27,28). Hypoglycemia that affects cognition and stamina may also increase the risk of accidents and falls, which is a particular concern for older adults with diabetes (29,30).

The glycemic thresholds at which these symptoms occur, as well as the severity with which they manifest themselves, may vary in individuals with type 1 diabetes depending on the number of hypoglycemic episodes they have experienced (31–33). Counterregulatory physiological responses may evolve in patients with type 1 diabetes who endure repeated hypoglycemia over time (34,35).”

“The Steering Committee defined three levels of hypoglycemia […] Level 1 hypoglycemia is defined as a measurable glucose concentration <70 mg/dL (3.9 mmol/L) but ≥54 mg/dL (3.0 mmol/L) that can alert a person to take action. A blood glucose concentration of 70 mg/dL (3.9 mmol/L) has been recognized as a marker of physiological hypoglycemia in humans, as it approximates the glycemic threshold for neuroendocrine responses to falling glucose levels in individuals without diabetes. As such, blood glucose in individuals without diabetes is generally 70–100 mg/dL (3.9–5.6 mmol/L) upon waking and 70–140 mg/dL (3.9–7.8 mmol/L) after meals, and any excursions beyond those levels are typically countered with physiological controls (16,37). However, individuals with diabetes who have impaired or altered counterregulatory hormonal and neurological responses do not have the same internal regulation as individuals without diabetes to avoid dropping below 70 mg/dL (3.9 mmol/L) and becoming hypoglycemic. Recurrent episodes of hypoglycemia lead to increased hypoglycemia unawareness, which can become dangerous as individuals cease to experience symptoms of hypoglycemia, allowing their blood glucose levels to continue falling. Therefore, glucose levels <70 mg/dL (3.9 mmol/L) are clinically important, independent of the severity of acute symptoms.

Level 2 hypoglycemia is defined as a measurable glucose concentration <54 mg/dL (3.0 mmol/L) that needs immediate action. At ∼54 mg/dL (3.0 mmol/L), neurogenic and neuroglycopenic hypoglycemic symptoms begin to occur, ultimately leading to brain dysfunction at levels <50 mg/dL (2.8 mmol/L) (19,20). […] Level 3 hypoglycemia is defined as a severe event characterized by altered mental and/or physical status requiring assistance. Severe hypoglycemia captures events during which the symptoms associated with hypoglycemia impact a patient to such a degree that the patient requires assistance from others (27,28). […] Hypoglycemia that sets in relatively rapidly, such as in the case of a significant insulin overdose, may induce level 2 or level 3 hypoglycemia with little warning (38).”
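
The three levels translate directly into a simple classification rule. Here is a minimal sketch (my own, in Python) of the definitions above; note that level 3 is defined by clinical status (requiring assistance) rather than by any particular glucose value:

def hypoglycemia_level(glucose_mg_dl, requires_assistance=False):
    # Level 3: severe event with altered mental and/or physical status requiring assistance.
    if requires_assistance:
        return 3
    # Level 2: <54 mg/dL (3.0 mmol/L), needs immediate action.
    if glucose_mg_dl < 54:
        return 2
    # Level 1: <70 mg/dL (3.9 mmol/L) but >=54 mg/dL, can alert a person to take action.
    if glucose_mg_dl < 70:
        return 1
    return 0  # not hypoglycemic under these definitions

print(hypoglycemia_level(62))  # -> 1
print(hypoglycemia_level(50))  # -> 2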

“The data regarding the effects of chronic hyperglycemia on long-term outcomes is conclusive, indicating that chronic hyperglycemia is a major contributor to morbidity and mortality in type 1 diabetes (41,43–45). […] Although the correlation between long-term poor glucose control and type 1 diabetes complications is well established, the impact of short-term hyperglycemia is not as well understood. However, hyperglycemia has been shown to have physiological effects and in an acute-care setting is linked to morbidity and mortality in people with and without diabetes. Short-term hyperglycemia, regardless of diabetes diagnosis, has been shown to reduce survival rates among patients admitted to the hospital with stroke or myocardial infarction (47,48). In addition to increasing mortality, short-term hyperglycemia is correlated with stroke severity and poststroke disability (49,50).

The effects of short-term hyperglycemia have also been observed in nonacute settings. Evidence indicates that hyperglycemia alters retinal cell firing through sensitization in patients with type 1 diabetes (51). This finding is consistent with similar findings showing increased oxygen consumption and blood flow in the retina during hyperglycemia. Because retinal cells absorb glucose through an insulin-independent process, they respond more strongly to increases in glucose in the blood than other cells in patients with type 1 diabetes. The effects of acute hyperglycemia on retinal response may underlie part of the development of retinopathy known to be a long-term complication of type 1 diabetes.”

“The Steering Committee defines hyperglycemia for individuals with type 1 diabetes as the following:

  • Level 1—elevated glucose: glucose >180 mg/dL (10 mmol/L) and glucose ≤250 mg/dL (13.9 mmol/L)

  • Level 2—very elevated glucose: glucose >250 mg/dL (13.9 mmol/L) […]

Elevated glucose is defined as a glucose concentration >180 mg/dL (10.0 mmol/L) but ≤250 mg/dL (13.9 mmol/L). In clinical practice, measures of hyperglycemia differ based on time of day (e.g., pre- vs. postmeal). This program, however, focused on defining outcomes for use in product development that are universally applicable. Glucose profiles and postprandial blood glucose data for individuals without diabetes suggest that 140 mg/dL (7.8 mmol/L) is the appropriate threshold for defining hyperglycemia. However, data demonstrate that the majority of individuals without diabetes exceed this threshold every day. Moreover, people with diabetes spend >60% of their day above this threshold, which suggests that 140 mg/dL (7.8 mmol/L) is too low of a threshold for measuring hyperglycemia in individuals with diabetes. Current clinical guidelines for people with diabetes indicate that peak prandial glucose should not exceed 180 mg/dL (10.0 mmol/L). As such, the Steering Committee identified 180 mg/dL (10.0 mmol/L) as the initial threshold defining elevated glucose. […]

Very elevated glucose is defined as a glucose concentration >250 mg/dL (13.9 mmol/L). Evidence examining the impact of hyperglycemia does not examine the incremental effects of increasing blood glucose. However, blood glucose values exceeding 250 mg/dL (13.9 mmol/L) increase the risk for DKA (58), and HbA1c readings at that level have been associated with a high likelihood of complications.”
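
And a companion to the hypoglycemia sketch above for the two hyperglycemia levels (again my own illustration, not anything from the report):

def hyperglycemia_level(glucose_mg_dl):
    if glucose_mg_dl > 250:    # >13.9 mmol/L: very elevated glucose
        return 2
    if glucose_mg_dl > 180:    # >10.0 mmol/L but <=250 mg/dL: elevated glucose
        return 1
    return 0

print(hyperglycemia_level(200), hyperglycemia_level(300))  # -> 1 2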

“An individual whose blood glucose levels rarely extend beyond the thresholds defined for hypo- and hyperglycemia is less likely to be subject to the short-term or long-term effects experienced by those with frequent excursions beyond one or both thresholds. It is also evident that if the intent of a given intervention is to safely manage blood glucose but the intervention does not reliably maintain blood glucose within safe levels, then the intervention should not be considered effective.

The time in range outcome is distinguished from traditional HbA1c testing in several ways (4,59). Time in range captures fluctuations in glucose levels continuously, whereas HbA1c testing is done at static points in time, usually months apart (60). Furthermore, time in range is more specific and sensitive than traditional HbA1c testing; for example, a treatment that addresses acute instances of hypo- or hyperglycemia may be detected in a time in range assessment but not necessarily in an HbA1c assessment. As a percentage, time in range is also more likely to be comparable across patients than HbA1c values, which are more likely to have patient-specific variations in significance (61). Finally, time in range may be more likely than HbA1c levels to correlate with PROs, such as quality of life, because the outcome is more representative of the whole patient experience (62). Table 3 illustrates how the concept of time in range differs from current HbA1c testing. […] [V]ariation in what is considered “normal” glucose fluctuations across populations, as well as what is realistically achievable for people with type 1 diabetes, must be taken into account so as not to make the target range definition too restrictive.”

“The Steering Committee defines time in range for individuals with type 1 diabetes as the following:

  • Percentage of readings in the range of 70–180 mg/dL (3.9–10.0 mmol/L) per unit of time

The Steering Committee considered it important to keep the time in range definition wide in order to accommodate variations across the population with type 1 diabetes — including different age-groups — but limited enough to preclude the possibility of negative outcomes. The upper and lower bounds of the time in range definition are consistent with the definitions for hypo- and hyperglycemia defined above. For individuals without type 1 diabetes, 70–140 mg/dL (3.9–7.8 mmol/L) represents a normal glycemic range (66). However, spending most of the day in this range is not generally achievable for people with type 1 diabetes […] To date, there is limited research correlating time in range with positive short-term and long-term type 1 diabetes outcomes, as opposed to the extensive research demonstrating the negative consequences of excursions into hyper- or hypoglycemia. More substantial evidence demonstrating a correlation or a direct causative relationship between time in range for patients with type 1 diabetes and positive health outcomes is needed.”
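
A minimal sketch (my own, in Python) of the time in range outcome as defined above, assuming equally spaced readings (e.g. a CGM sampling every five minutes) so that the share of readings approximates the share of time:

def time_in_range(readings_mg_dl, low=70, high=180):
    # Percentage of readings falling within 70–180 mg/dL (3.9–10.0 mmol/L).
    in_range = sum(low <= g <= high for g in readings_mg_dl)
    return 100.0 * in_range / len(readings_mg_dl)

readings = [65, 90, 110, 150, 190, 240, 170, 85]  # hypothetical CGM trace
print(round(time_in_range(readings), 1))  # -> 62.5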

“DKA is often associated with hyperglycemia. In most cases, in an individual with diabetes, the cause of hyperglycemia is also the cause of DKA, although the two conditions are distinct. DKA develops when a lack of glucose in cells prompts the body to begin breaking down fatty acid reserves. This increases the levels of ketones in the body (ketosis) and causes a drop in blood pH (acidosis). At its most severe, DKA can cause cerebral edema, acute respiratory distress, thromboembolism, coma, and death (69,70). […] Although the current definition for DKA includes a list of multiple criteria that must be met, not all information currently included in the accepted definition is consistently gathered or required to diagnose DKA. The Steering Committee defines DKA in individuals with type 1 diabetes in a clinical setting as the following:

  • Elevated serum or urine ketones (greater than the upper limit of the normal range), and

  • Serum bicarbonate <15 mmol/L or blood pH <7.3

Given the seriousness of DKA, it is unnecessary to stratify DKA into different levels or categories, as the presence of DKA—regardless of the differences observed in the separate biochemical tests—should always be considered serious. In individuals with known diabetes, plasma glucose values are not necessary to diagnose DKA. Further, new therapeutic agents, specifically sodium–glucose cotransporter 2 inhibitors, have been linked to euglycemic DKA, or DKA with blood glucose values <250 mg/dL (13.9 mmol/L).”
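
The two-part definition translates into an equally simple rule; here is a minimal sketch (my own, in Python). The deliberate absence of a glucose criterion reflects the euglycemic-DKA point made above:

def meets_dka_definition(ketones_above_normal, bicarbonate_mmol_l=None, blood_ph=None):
    # Elevated serum or urine ketones AND (serum bicarbonate <15 mmol/L OR blood pH <7.3).
    acidosis = ((bicarbonate_mmol_l is not None and bicarbonate_mmol_l < 15) or
                (blood_ph is not None and blood_ph < 7.3))
    return ketones_above_normal and acidosis

print(meets_dka_definition(True, bicarbonate_mmol_l=12))  # -> True
print(meets_dka_definition(True, blood_ph=7.35))          # -> False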

“In guidance released in 2009 (72), the U.S. Food and Drug Administration (FDA) defined PROs as “any report of the status of a patient’s health condition that comes directly from the patient, without interpretation of the patient’s response by a clinician or anyone else.” In the same document, the FDA clearly acknowledged the importance of PROs, advising that they be used to gather information that is “best known by the patient or best measured from the patient perspective.”

Measuring and using PROs is increasingly seen as essential to evaluating care from a patient-centered perspective […] Given that type 1 diabetes is a chronic condition primarily treated on an outpatient basis, much of what people with type 1 diabetes experience is not captured through standard clinical measurement. Measures that capture PROs can fill these important information gaps. […] The use of validated PROs in type 1 diabetes clinical research is not currently widespread, and challenges to effectively measuring some PROs, such as quality of life, continue to confront researchers and developers.”

February 20, 2018 Posted by | Cardiology, Diabetes, Medicine, Neurology, Ophthalmology, Studies | Leave a comment

Systems Biology (III)

Some observations from chapter 4 below:

“The need to maintain a steady state ensuring homeostasis is an essential concern in nature while the negative feedback loop is the fundamental way to ensure that this goal is met. The regulatory system determines the interdependences between individual cells and the organism, subordinating the former to the latter. In trying to maintain homeostasis, the organism may temporarily upset the steady state conditions of its component cells, forcing them to perform work for the benefit of the organism. […] On a cellular level signals are usually transmitted via changes in concentrations of reaction substrates and products. This simple mechanism is made possible due to the limited volume of each cell. Such signaling plays a key role in maintaining homeostasis and ensuring cellular activity. On the level of the organism signal transmission is performed by hormones and the nervous system. […] Most intracellular signal pathways work by altering the concentrations of selected substances inside the cell. Signals are registered by forming reversible complexes consisting of a ligand (reaction product) and an allosteric receptor complex. When coupled to the ligand, the receptor inhibits the activity of its corresponding effector, which in turn shuts down the production of the controlled substance ensuring the steady state of the system. Signals coming from outside the cell are usually treated as commands (covalent modifications), forcing the cell to adjust its internal processes […] Such commands can arrive in the form of hormones, produced by the organism to coordinate specialized cell functions in support of general homeostasis (in the organism). These signals act upon cell receptors and are usually amplified before they reach their final destination (the effector).”
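
A toy discrete-time sketch (my own, in Python, with made-up numbers) of the kind of negative feedback loop described above: a receptor senses the product concentration and shuts down synthesis whenever the concentration exceeds its set point, while degradation and consumption keep removing product:

def simulate_feedback(steps=50, set_point=10.0, synthesis_per_step=2.0, loss_fraction=0.15):
    concentration = 0.0
    trace = []
    for _ in range(steps):
        # Receptor/effector: synthesis proceeds only while the product is below the set point.
        if concentration < set_point:
            concentration += synthesis_per_step
        # Degradation and consumption remove a fixed fraction each step.
        concentration *= (1.0 - loss_fraction)
        trace.append(round(concentration, 2))
    return trace

print(simulate_feedback()[-5:])  # the concentration ends up hovering near the set point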

“Each concentration-mediated signal must first be registered by a detector. […] Intracellular detectors are typically based on allosteric proteins. Allosteric proteins exhibit a special property: they have two stable structural conformations and can shift from one form to the other as a result of changes in ligand concentrations. […] The concentration of a product (or substrate) which triggers structural realignment in the allosteric protein (such as a regulatory enzyme) depends on the genetically-determined affinity of the active site to its ligand. Low affinity results in high target concentration of the controlled substance while high affinity translates into lower concentration […]. In other words, high concentration of the product is necessary to trigger a low-affinity receptor (and vice versa). Most intracellular regulatory mechanisms rely on noncovalent interactions. Covalent bonding is usually associated with extracellular signals, generated by the organism and capable of overriding the cell’s own regulatory mechanisms by modifying the sensitivity of receptors […]. Noncovalent interactions may be compared to requests while covalent signals are treated as commands. Signals which do not originate in the receptor’s own feedback loop but modify its affinity are known as steering signals […] Hormones which act upon cells are, by their nature, steering signals […] Noncovalent interactions — dependent on substance concentrations — impose spatial restrictions on regulatory mechanisms. Any increase in cell volume requires synthesis of additional products in order to maintain stable concentrations. The volume of a spherical cell is given as V = (4/3)πr³, where r indicates cell radius. Clearly, even a slight increase in r translates into a significant increase in cell volume, diluting any products dispersed in the cytoplasm. This implies that cells cannot expand without incurring great energy costs. It should also be noted that cell expansion reduces the efficiency of intracellular regulatory mechanisms because signals and substrates need to be transported over longer distances. Thus, cells are universally small, regardless of whether they make up a mouse or an elephant.”
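
The cube law is easy to make concrete; a small sketch (my own) of how a modest increase in radius dilutes a product kept at a fixed copy number:

import math

def cell_volume(radius_um):
    # V = (4/3) * pi * r^3
    return (4.0 / 3.0) * math.pi * radius_um ** 3

v_small, v_large = cell_volume(5.0), cell_volume(6.0)  # hypothetical radii in micrometres
print(round(v_large / v_small, 2))  # -> 1.73: a 20% larger radius dilutes products by ~42%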

“An effector is an element of a regulatory loop which counteracts changes in the regulated quantity […] Synthesis and degradation of biological compounds often involves numerous enzymes acting in sequence. The product of one enzyme is a substrate for another enzyme. With the exception of the initial enzyme, each step of this cascade is controlled by the availability of the supplied substrate […] The effector consists of a chain of enzymes, each of which depends on the activity of the initial regulatory enzyme […] as well as on the activity of its immediate predecessor which supplies it with substrates. The function of all enzymes in the effector chain is indirectly dependent on the initial enzyme […]. This coupling between the receptor and the first link in the effector chain is a universal phenomenon. It can therefore be said that the initial enzyme in the effector chain is, in fact, a regulatory enzyme. […] Most cell functions depend on enzymatic activity. […] It seems that a set of enzymes associated with a specific process which involves a negative feedback loop is the most typical form of an intracellular regulatory effector. Such effectors can be controlled through activation or inhibition of their associated enzymes.”

“The organism is a self-contained unit represented by automatic regulatory loops which ensure homeostasis. […] Effector functions are conducted by cells which are usually grouped and organized into tissues and organs. Signal transmission occurs by way of body fluids, hormones or nerve connections. Cells can be treated as automatic and potentially autonomous elements of regulatory loops, however their specific action is dependent on the commands issued by the organism. This coercive property of organic signals is an integral requirement of coordination, allowing the organism to maintain internal homeostasis. […] Activities of the organism are themselves regulated by their own negative feedback loops. Such regulation differs however from the mechanisms observed in individual cells due to its place in the overall hierarchy and differences in signal properties, including in particular:
• Significantly longer travel distances (compared to intracellular signals);
• The need to maintain hierarchical superiority of the organism;
• The relative autonomy of effector cells. […]
The relatively long distance travelled by the organism’s signals and their dilution (compared to intracellular ones) call for amplification. As a consequence, any errors or random distortions in the original signal may be drastically exacerbated. A solution to this problem comes in the form of encoding, which provides the signal with sufficient specificity while enabling it to be selectively amplified. […] a loudspeaker can […] assist in acoustic communication, but due to the lack of signal encoding it cannot compete with radios in terms of communication distance. The same reasoning applies to organism-originated signals, which is why information regarding blood glucose levels is not conveyed directly by glucose but instead by adrenalin, glucagon or insulin. Information encoding is handled by receptors and hormone-producing cells. Target cells are capable of decoding such signals, thus completing the regulatory loop […] Hormonal signals may be effectively amplified because the hormone itself does not directly participate in the reaction it controls — rather, it serves as an information carrier. […] strong amplification invariably requires encoding in order to render the signal sufficiently specific and unambiguous. […] Unlike organisms, cells usually do not require amplification in their internal regulatory loops — even the somewhat rare instances of intracellular amplification only increase signal levels by a small amount. Without the aid of an amplifier, messengers coming from the organism level would need to be highly concentrated at their source, which would result in decreased efficiency […] Most signals originating at the organism’s level travel with body fluids; however, if a signal has to reach its destination very rapidly (for instance in muscle control) it is sent via the nervous system”.

“Two types of amplifiers are observed in biological systems:
1. cascade amplifier,
2. positive feedback loop. […]
A cascade amplifier is usually a collection of enzymes which perform their action by activation in strict sequence. This mechanism resembles multistage (sequential) synthesis or degradation processes, however instead of exchanging reaction products, amplifier enzymes communicate by sharing activators or by directly activating one another. Cascade amplifiers are usually contained within cells. They often consist of kinases. […] Amplification effects occurring at each stage of the cascade contribute to its final result. […] While the kinase amplification factor is estimated to be on the order of 10³, the phosphorylase cascade results in 10¹⁰-fold amplification. It is a stunning value, though it should also be noted that the hormones involved in this cascade produce particularly powerful effects. […] A positive feedback loop is somewhat analogous to a negative feedback loop, however in this case the input and output signals work in the same direction — the receptor upregulates the process instead of inhibiting it. Such upregulation persists until the available resources are exhausted.
Positive feedback loops can only work in the presence of a control mechanism which prevents them from spiraling out of control. They cannot be considered self-contained and only play a supportive role in regulation. […] In biological systems positive feedback loops are sometimes encountered in extracellular regulatory processes where there is a need to activate slowly-migrating components and greatly amplify their action in a short amount of time. Examples include blood coagulation and complement factor activation […] Positive feedback loops are often coupled to negative loop-based control mechanisms. Such interplay of loops may impart the signal with desirable properties, for instance by transforming a flat signal into a sharp spike required to overcome the activation threshold for the next stage in a signalling cascade. An example is the ejection of calcium ions from the endoplasmic reticulum in the phospholipase C cascade, itself subject to a negative feedback loop.”
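
The arithmetic behind the quoted amplification figures is simply multiplicative; a minimal sketch (my own, with illustrative per-stage gains) of how a handful of stages on the order of 10³ each compound to roughly 10¹⁰:

from math import prod

def cascade_gain(stage_gains):
    # The overall gain of a cascade amplifier is the product of the per-stage gains.
    return prod(stage_gains)

print(f"{cascade_gain([1e3, 1e3, 1e4]):.0e}")  # -> 1e+10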

“Strong signal amplification carries an important drawback: it tends to “overshoot” its target activity level, causing wild fluctuations in the process it controls. […] Nature has evolved several means of signal attenuation. The most typical mechanism superimposes two regulatory loops which affect the same parameter but act in opposite directions. An example is the stabilization of blood glucose levels by two contradictory hormones: glucagon and insulin. Similar strategies are exploited in body temperature control and many other biological processes. […] The coercive properties of signals coming from the organism carry risks associated with the possibility of overloading cells. The regulatory loop of an autonomous cell must therefore include an “off switch”, controlled by the cell. An autonomous cell may protect itself against excessive involvement in processes triggered by external signals (which usually incur significant energy expenses). […] The action of such mechanisms is usually timer-based, meaning that they inactivate signals following a set amount of time. […] The ability to interrupt signals protects cells from exhaustion. Uncontrolled hormone-induced activity may have detrimental effects upon the organism as a whole. This is observed e.g. in the case of the Vibrio cholerae toxin which causes prolonged activation of intestinal epithelial cells by locking the G protein in its active state (resulting in severe diarrhea which can dehydrate the organism).”
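
A toy sketch (my own, illustrative numbers only) of the superimposed opposing loops mentioned above: one loop pulls the controlled quantity down when it is above target (insulin-like) and the other pushes it up when it is below (glucagon-like), so disturbances in either direction get damped:

def regulate(value, target=90.0, gain=0.2):
    # Two opposing proportional responses acting on the same quantity.
    if value > target:
        value -= gain * (value - target)   # insulin-like correction
    elif value < target:
        value += gain * (target - value)   # glucagon-like correction
    return value

glucose = 140.0
for _ in range(25):
    glucose = regulate(glucose)
print(round(glucose, 1))  # -> 90.2, i.e. close to the 90 mg/dL target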

“Biological systems in which information transfer is affected by high entropy of the information source and ambiguity of the signal itself must include discriminatory mechanisms. These mechanisms usually work by eliminating weak signals (which are less specific and therefore introduce ambiguities). They create additional obstacles (thresholds) which the signals must overcome. A good example is the mechanism which eliminates the ability of weak, random antigens to activate lymphatic cells. It works by inhibiting blastic transformation of lymphocytes until a so-called receptor cap has accumulated on the surface of the cell […]. Only under such conditions can the activation signal ultimately reach the cell nucleus […] and initiate gene transcription. […] weak, reversible nonspecific interactions do not permit sufficient aggregation to take place. This phenomenon can be described as a form of discrimination against weak signals. […] Discrimination may also be linked to effector activity. […] Cell division is counterbalanced by programmed cell death. The most typical example of this process is apoptosis […] Each cell is prepared to undergo controlled death if required by the organism, however apoptosis is subject to tight control. Cells protect themselves against accidental triggering of the process via IAP proteins. Only strong proapoptotic signals may overcome this threshold and initiate cellular suicide”.

“Simply knowing the sequences, structures or even functions of individual proteins does not provide sufficient insight into the biological machinery of living organisms. The complexity of individual cells and entire organisms calls for functional classification of proteins. This task can be accomplished with a proteome — a theoretical construct where individual elements (proteins) are grouped in a way which acknowledges their mutual interactions and interdependencies, characterizing the information pathways in a complex organism.
Most ongoing proteome construction projects focus on individual proteins as the basic building blocks […] [We would instead argue in favour of a model in which] [t]he basic unit of the proteome is one negative feedback loop (rather than a single protein) […]
Due to the relatively large number of proteins (between 25 and 40 thousand in the human organism), presenting them all on a single graph, with vertex lengths corresponding to the relative duration of interactions, would be unfeasible. This is why proteomes are often subdivided into functional subgroups such as the metabolome (proteins involved in metabolic processes), interactome (complex-forming proteins), kinomes (proteins which belong to the kinase family) etc.”

February 18, 2018 Posted by | Biology, Books, Chemistry, Genetics, Medicine, Molecular biology | Leave a comment