# Econstudentlog

## Materials… (II)

“Whether materials are stiff and strong, or hard or weak, is the territory of mechanics. […] the 19th century continuum theory of linear elasticity is still the basis of much of modern solid mechanics. A stiff material is one which does not deform much when a force acts on it. Stiffness is quite distinct from strength. A material may be stiff but weak, like a piece of dry spaghetti. If you pull it, it stretches only slightly […], but as you ramp up the force it soon breaks. To put this on a more scientific footing, so that we can compare different materials, we might devise a test in which we apply a force to stretch a bar of material and measure the increase in length. The fractional change in length is the strain; and the applied force divided by the cross-sectional area of the bar is the stress. To check that it is Hookean, we double the force and confirm that the strain has also doubled. To check that it is truly elastic, we remove the force and check that the bar returns to the same length that it started with. […] then we calculate the ratio of the stress to the strain. This ratio is the Young’s modulus of the material, a quantity which measures its stiffness. […] While we are measuring the change in length of the bar, we might also see if there is a change in its width. It is not unreasonable to think that as the bar stretches it also becomes narrower. The Poisson’s ratio of the material is defined as the ratio of the transverse strain to the longitudinal strain (without the minus sign).”
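The stress/strain bookkeeping described in this passage is easy to sketch in code. A minimal example with made-up test numbers (the force, bar dimensions, and measured extensions below are illustrative, roughly steel-like, not data from the text; note the sketch uses the more common sign convention in which Poisson's ratio carries a minus sign so that it comes out positive):

```python
# Tension-test bookkeeping: stress, strain, Young's modulus, Poisson's ratio.
# All input numbers are illustrative (roughly steel-like), not measured data.

force = 1000.0          # applied axial force, N
area = 1.0e-4           # cross-sectional area of the bar, m^2 (1 cm^2)
length = 1.0            # original length of the bar, m
delta_length = 5.0e-5   # measured extension, m
width = 0.01            # original width, m
delta_width = -1.5e-7   # measured change in width, m (negative: bar narrows)

stress = force / area                   # Pa
strain = delta_length / length          # dimensionless (fractional change)
youngs_modulus = stress / strain        # Pa; measures stiffness

transverse_strain = delta_width / width
poissons_ratio = -transverse_strain / strain  # minus sign makes it positive

print(f"stress          = {stress:.2e} Pa")
print(f"strain          = {strain:.2e}")
print(f"Young's modulus = {youngs_modulus:.2e} Pa")  # ~200 GPa, steel-like
print(f"Poisson's ratio = {poissons_ratio:.2f}")     # ~0.3, typical of metals
```

Doubling `force` and confirming that `strain` doubles is exactly the Hookean check the passage describes; removing the force and checking the bar returns to its original length is the elasticity check.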

“There was much argument between Cauchy and Lamé and others about whether there are two stiffness moduli or one. […] In fact, there are two stiffness moduli. One describes the resistance of a material to shearing and the other to compression. The shear modulus is the stiffness in distortion, for example in twisting. It captures the resistance of a material to changes of shape, with no accompanying change of volume. The compression modulus (usually called the bulk modulus) expresses the resistance to changes of volume (but not shape). This is what occurs as a cube of material is lowered deep into the sea, and is squeezed on all faces by the water pressure. The Young’s modulus [is] a combination of the more fundamental shear and bulk moduli, since stretching in one direction produces changes in both shape and volume. […] A factor of about 10,000 covers the useful range of Young’s modulus in engineering materials. The stiffness can be traced back to the forces acting between atoms and molecules in the solid state […]. Materials like diamond or tungsten with strong bonds are stiff in the bulk, while polymer materials with weak intermolecular forces have low stiffness.”
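For an isotropic material, the "combination" the passage refers to has a standard closed form: E = 9KG/(3K + G), and similarly ν = (3K − 2G)/(2(3K + G)). These are textbook elasticity relations, not formulas from the quoted text, and the moduli below are illustrative, roughly steel-like values:

```python
# Standard isotropic elasticity relations linking Young's modulus E to the
# bulk modulus K and shear modulus G (textbook formulas, illustrative values).

def young_from_bulk_shear(K, G):
    """E = 9KG / (3K + G) for an isotropic material."""
    return 9 * K * G / (3 * K + G)

def poisson_from_bulk_shear(K, G):
    """nu = (3K - 2G) / (2(3K + G)) for an isotropic material."""
    return (3 * K - 2 * G) / (2 * (3 * K + G))

K = 160e9  # bulk modulus, Pa (resistance to volume change)
G = 80e9   # shear modulus, Pa (resistance to shape change)

E = young_from_bulk_shear(K, G)
nu = poisson_from_bulk_shear(K, G)
print(f"E  = {E / 1e9:.1f} GPa")  # ~205.7 GPa
print(f"nu = {nu:.3f}")           # ~0.286
```

The formula makes the passage's point concrete: E depends on both K and G, because a uniaxial stretch changes both volume and shape.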

“In pure compression, the concept of ‘strength’ has no meaning, since the material cannot fail or rupture. But materials can and do fail in tension or in shear. To judge how strong a material is we can go back for example to the simple tension arrangement we used for measuring stiffness, but this time make it into a torture test in which the specimen is put on the rack. […] We find […] that we reach a strain at which the material stops being elastic and is permanently stretched. We have reached the yield point, and beyond this we have damaged the material but it has not failed. After further yielding, the bar may fail by fracture […]. On the other hand, with a bar of cast iron, there comes a point where the bar breaks, noisily and without warning, and without yield. This is a failure by brittle fracture. The stress at which it breaks is the tensile strength of the material. For the ductile material, the stress at which plastic deformation starts is the tensile yield stress. Both are measures of strength. It is in metals that yield and plasticity are of the greatest significance and value. In working components, yield provides a safety margin between small-strain elasticity and catastrophic rupture. […] plastic deformation is [also] exploited in making things from metals like steel and aluminium. […] A useful feature of plastic deformation in metals is that plastic straining raises the yield stress, particularly at lower temperatures.”

“Brittle failure is not only noisy but often scary. Engineers keep well away from it. An elaborate theory of fracture mechanics has been built up to help them avoid it, and there are tough materials to hand which do not easily crack. […] Since small cracks and flaws are present in almost any engineering component […], the trick is not to avoid cracks but to avoid long cracks which exceed [a] critical length. […] In materials which can yield, the tip stress can be relieved by plastic deformation, and this is a potent toughening mechanism in some materials. […] The trick of compressing a material to suppress cracking is a powerful way to toughen materials.”

“Hardness is a property which materials scientists think of in a particular and practical way. It tells us how well a material resists being damaged or deformed by a sharp object. That is useful information and it can be obtained easily. […] Soft is sometimes the opposite of hard […] But a different kind of soft is squidgy. […] In the soft box, we find many everyday materials […]. Some soft materials such as adhesives and lubricants are of great importance in engineering. For all of them, the model of a stiff crystal lattice provides no guidance. There is usually no crystal. The units are polymer chains, or small droplets of liquids, or small solid particles, with weak forces acting between them, and little structural organization. Structures when they exist are fragile. Soft materials deform easily when forces act on them […]. They sit as a rule somewhere between rigid solids and simple liquids. Their mechanical behaviour is dominated by various kinds of plasticity.”

“In pure metals, the resistivity is extremely low […] and a factor of ten covers all of them. […] the low resistivity (or, put another way, the high conductivity) arises from the existence of a conduction band in the solid which is only partly filled. Electrons in the conduction band are mobile and drift in an applied electric field. This is the electric current. The electrons are subject to some scattering from lattice vibrations which impedes their motion and generates an intrinsic resistance. Scattering becomes more severe as the temperature rises and the amplitude of the lattice vibrations becomes greater, so that the resistivity of metals increases with temperature. Scattering is further increased by microstructural heterogeneities, such as grain boundaries, lattice distortions, and other defects, and by phases of different composition. So alloys have appreciably higher resistivities than their pure parent metals. Adding 5 per cent nickel to iron doubles the resistivity, although the resistivities of the two pure metals are similar. […] Resistivity depends fundamentally on band structure. […] Plastics and rubbers […] are usually insulators. […] Electronically conducting plastics would have many uses, and some materials [e.g. this one] are now known. […] The electrical resistivity of many metals falls to exactly zero as they are cooled to very low temperatures. The critical temperature at which this happens varies, but for pure metallic elements it always lies below 10 K. For a few alloys, it is a little higher. […] Superconducting windings provide stable and powerful magnetic fields for magnetic resonance imaging, and many industrial and scientific uses.”
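The scattering picture sketched above (phonon scattering that grows with temperature, plus defect/impurity scattering that does not) is commonly summarized by Matthiessen's rule, with a linear temperature coefficient near room temperature. A rough sketch; the copper numbers are textbook-style illustrative values, not taken from the quoted text:

```python
# Matthiessen's-rule sketch: resistivity = temperature-dependent phonon term
# + temperature-independent defect/impurity term. The linear approximation
# only holds near the reference temperature. Illustrative copper values.

def resistivity(T, rho_ref, alpha, rho_defect=0.0, T_ref=293.0):
    """Resistivity (ohm-m) at temperature T (K), linear approximation."""
    return rho_ref * (1 + alpha * (T - T_ref)) + rho_defect

rho_cu = 1.68e-8   # pure copper near room temperature, ohm-m (approximate)
alpha_cu = 0.0039  # temperature coefficient, per kelvin (approximate)

print(resistivity(293.0, rho_cu, alpha_cu))           # room temperature
print(resistivity(373.0, rho_cu, alpha_cu))           # hotter -> more scattering
print(resistivity(293.0, rho_cu, alpha_cu, 1.0e-8))   # alloying adds a fixed term
```

The fixed `rho_defect` term is why alloys sit well above their pure parent metals even when the parents themselves have similar resistivities, as in the iron–nickel example.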

“A permanent magnet requires no power. Its magnetization has its origin in the motion of electrons in atoms and ions in the solid, but only a few materials have the favourable combination of quantum properties to give rise to useful ferromagnetism. […] Ferromagnetism disappears completely above the so-called Curie temperature. […] Below the Curie temperature, ferromagnetic alignment throughout the material can be established by imposing an external polarizing field to create a net magnetization. In this way a practical permanent magnet is made. The ideal permanent magnet has an intense magnetization (a strong field) which remains after the polarizing field is switched off. It can only be demagnetized by applying a strong polarizing field in the opposite direction: the size of this field is the coercivity of the magnet material. For a permanent magnet, it should be as high as possible. […] Permanent magnets are ubiquitous but more or less invisible components of umpteen devices. There are a hundred or so in every home […]. There are also important uses for ‘soft’ magnetic materials, in devices where we want the ferromagnetism to be temporary, not permanent. Soft magnets lose their magnetization after the polarizing field is removed […] They have low coercivity, approaching zero. When used in a transformer, such a soft ferromagnetic material links the input and output coils by magnetic induction. Ideally, the magnetization should reverse during every cycle of the alternating current to minimize energy losses and heating. […] Silicon transformer steels yielded large gains in efficiency in electrical power distribution when they were first introduced in the 1920s, and they remain pre-eminent.”

“At least 50 families of plastics are produced commercially today. […] These materials all consist of linear string molecules, most with simple carbon backbones, a few with carbon-oxygen backbones […] Plastics as a group are valuable because they are lightweight and work well in wet environments, and don’t go rusty. They are mostly unaffected by acids and salts. But they burn, and they don’t much like sunlight as the ultraviolet light can break the polymer backbone. Most commercial plastics are mixed with substances which make it harder for them to catch fire and which filter out the ultraviolet light. Above all, plastics are used because they can be formed and shaped so easily. The string molecule itself is held together by strong chemical bonds and is resilient, but the forces between the molecules are weak. So plastics melt at low temperatures to produce rather viscous liquids […]. And with modest heat and a little pressure, they can be injected into moulds to produce articles of almost any shape”.

“The downward cascade of high purity to adulterated materials in recycling is a kind of entropy effect: unmixing is thermodynamically hard work. But there is an energy-driven problem too. Most materials are thermodynamically unstable (or metastable) in their working environments and tend to revert to the substances from which they were made. This is well-known in the case of metals, and is the usual meaning of corrosion. The metals are more stable when combined with oxygen than uncombined. […] Broadly speaking, ceramic materials are more stable thermodynamically, since they already contain much oxygen in chemical combination. Even so, ceramics used in the open usually fall victim to some environmental predator. Often it is water that causes damage. Water steals sodium and potassium from glass surfaces by slow leaching. The surface shrinks and cracks, so the glass loses its transparency. […] Stones and bricks may succumb to the stresses of repeated freezing when wet; limestones decay also by the chemical action of sulfur and nitrogen gases in polluted rainwater. Even buried archaeological pots slowly react with water in a remorseless process similar to that of rock weathering.”

## A few diabetes papers of interest

“More than 30 years ago in Diabetes Care, Schmidt et al. (1) defined “dawn phenomenon,” the night-to-morning elevation of blood glucose (BG) before and, to a larger extent, after breakfast in subjects with type 1 diabetes (T1D). Shortly after, a similar observation was made in type 2 diabetes (T2D) (2), and the physiology of glucose homeostasis at night was studied in normal, nondiabetic subjects (3–5). Ever since the first description, the dawn phenomenon has been studied extensively with at least 187 articles published as of today (6). […] what have we learned from the last 30 years of research on the dawn phenomenon? What is the appropriate definition, the identified mechanism(s), the importance (if any), and the treatment of the dawn phenomenon in T1D and T2D?”

“Physiology of glucose homeostasis in normal, nondiabetic subjects indicates that BG and plasma insulin concentrations remain remarkably flat and constant overnight, with a modest, transient increase in insulin secretion just before dawn (3,4) to restrain hepatic glucose production (4) and prevent hyperglycemia. Thus, normal subjects do not exhibit the dawn phenomenon sensu strictiori because they secrete insulin to prevent it.

In T1D, the magnitude of BG elevation at dawn first reported was impressive and largely secondary to the decrease of plasma insulin concentration overnight (1), commonly observed with evening administration of NPH or lente insulins (8) (Fig. 1). Even in early studies with intravenous insulin by the “artificial pancreas” (Biostator) (2), plasma insulin decreased overnight because of progressive inactivation of insulin in the pump (9). This artifact exaggerated the dawn phenomenon, now defined as need for insulin to limit fasting hyperglycemia (2). When the overnight waning of insulin was prevented by continuous subcutaneous insulin infusion (CSII) […] or by the long-acting insulin analogs (LA-IAs) (8), it was possible to quantify the real magnitude of the dawn phenomenon — 15–25 mg/dL BG elevation from nocturnal nadir to before breakfast […]. Nocturnal spikes of growth hormone secretion are the most likely mechanism of the dawn phenomenon in T1D (13,14). The observation from early pioneering studies in T1D (10–12) that insulin sensitivity is higher after midnight until 3 a.m. as compared to the period 4–8 a.m., soon translated into use of more physiological replacement of basal insulin […] to reduce risk of nocturnal hypoglycemia while targeting fasting near-normoglycemia”.

“In T2D, identification of diurnal changes in BG goes back decades, but only quite recently has fasting hyperglycemia been attributed to a transient increase in hepatic glucose production (both glycogenolysis and gluconeogenesis) at dawn in the absence of compensatory insulin secretion (15–17). Monnier et al. (7) report on the overnight (interstitial) glucose concentration (IG), as measured by continuous ambulatory IG monitoring, in three groups of 248 subjects with T2D […] Importantly, the dawn phenomenon had an impact on mean daily IG and A1C (mean increase of 0.39% [4.3 mmol/mol]), which was independent of treatment. […] Two messages from the data of Monnier et al. (7) are important. First, the dawn phenomenon is confirmed as a frequent event across the heterogeneous population of T2D independent of (oral) treatment and studied in everyday life conditions, not only in the setting of specialized clinical research units. Second, the article reaffirms that the primary target of treatment in T2D is to reestablish near-normoglycemia before and after breakfast (i.e., to treat the dawn phenomenon) to lower mean daily BG and A1C (8). […] the dawn phenomenon induces hyperglycemia not only before, but, to a larger extent, after breakfast as well (7,18). Over the years, fasting (and postbreakfast) hyperglycemia in T2D worsens as a result of progressively impaired pancreatic β-cell function on the background of continued insulin resistance primarily at dawn (8,15–18) and independently of age (19). Because it is an early metabolic abnormality leading over time to the vicious circle of “hyperglycemia begets hyperglycemia” by glucotoxicity and lipotoxicity, the dawn phenomenon in T2D should be treated early and appropriately before A1C continues to increase (20).”

“Oral medications do not adequately control the dawn phenomenon, even when given in combination (7,18). […] The evening replacement of basal insulin, which abolishes the dawn phenomenon by restraining hepatic glucose production and lipolysis (21), is an effective treatment as it mimics the physiology of glucose homeostasis in normal, nondiabetic subjects (4). Early use of basal insulin in T2D is an add-on treatment option after failure of metformin to control A1C <7.0% (20). However, […] it would be wise to consider initiation of basal insulin […] before — not after — A1C has increased well beyond 7.0%, as is usually done in current practice.”

“Diabetic peripheral neuropathy (DPN) is among the most distressing of all the chronic complications of diabetes and is a cause of significant disability and poor quality of life (4). Depending on the patient population and diagnostic criteria, the prevalence of DPN among adults with diabetes ranges from 30 to 70% (5–7). However, there are insufficient data on the prevalence and predictors of DPN among the pediatric population. Furthermore, early detection and good glycemic control have been proven to prevent or delay adverse outcomes associated with DPN (5,8,9). Near-normal control of blood glucose beginning as soon as possible after the onset of diabetes may delay the development of clinically significant nerve impairment (8,9). […] The American Diabetes Association (ADA) recommends screening for DPN in children and adolescents with type 2 diabetes at diagnosis and 5 years after diagnosis for those with type 1 diabetes, followed by annual evaluations thereafter, using simple clinical tests (10). Since subclinical signs of DPN may precede development of frank neuropathic symptoms, systematic, preemptive screening is required in order to identify DPN in its earliest stages.

There are various measures that can be used for the assessment of DPN. The Michigan Neuropathy Screening Instrument (MNSI) is a simple, sensitive, and specific tool for the screening of DPN (11). It was validated in large independent cohorts (12,13) and has been widely used in clinical trials and longitudinal cohort studies […] The aim of this pilot study was to provide preliminary estimates of the prevalence of and factors associated with DPN among children and adolescents with type 1 and type 2 diabetes.”

“A total of 399 youth (329 with type 1 and 70 with type 2 diabetes) participated in the pilot study. Youth with type 1 diabetes were younger (mean age 15.7 ± 4.3 years) and had a shorter duration of diabetes (mean duration 6.2 ± 0.9 years) compared with youth with type 2 diabetes (mean age 21.6 ± 4.1 years and mean duration 7.6 ± 1.8 years). Participants with type 2 diabetes had a higher BMI z score and waist circumference, were more likely to be smokers, and had higher blood pressure and lipid levels than youth with type 1 diabetes (all P < 0.001). A1C, however, did not significantly differ between the two groups (mean A1C 8.8 ± 1.8% [73 ± 2 mmol/mol] for type 1 diabetes and 8.5 ± 2.9% [72 ± 3 mmol/mol] for type 2 diabetes; P = 0.5) but was higher than that recommended by the ADA for this age-group (A1C ≤7.5%) (10). The prevalence of DPN (defined as the MNSIE score >2) was 8.2% among youth with type 1 diabetes and 25.7% among those with type 2 diabetes. […] Youth with DPN were older and had a longer duration of diabetes, greater central obesity (increased waist circumference), higher blood pressure, an atherogenic lipid profile (low HDL cholesterol and marginally high triglycerides), and microalbuminuria. A1C […] was not significantly different between those with and without DPN (9.0% ± 2.0 […] vs. 8.8% ± 2.1 […], P = 0.58). Although nearly 37% of youth with type 2 diabetes came from lower-income families with annual income <25,000 USD (as opposed to 11% for type 1 diabetes), socioeconomic status was not significantly associated with DPN (P = 0.77).”

“In the unadjusted logistic regression model, the odds of having DPN were nearly four times higher among those with type 2 diabetes compared with youth with type 1 diabetes (odds ratio [OR] 3.8 [95% CI 1.9–7.5], P < 0.0001). This association was attenuated, but remained significant, after adjustment for age and sex (OR 2.3 [95% CI 1.1–5.0], P = 0.03). However, this association was no longer significant (OR 2.1 [95% CI 0.3–15.9], P = 0.47) when additional covariates […] were added to the model […] The loss of the association between diabetes type and DPN with addition of covariates in the fully adjusted model could be due to power loss, given the small number of youth with DPN in the sample, or indicative of stronger associations between these covariates and DPN such that conditioning on them eliminates the observed association between DPN and diabetes type.”
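The unadjusted odds ratio and confidence interval quoted here can be roughly reproduced from the prevalences reported earlier (8.2% of 329 youth with type 1 diabetes, 25.7% of 70 with type 2). The rounded 2×2 counts below are my reconstruction, not figures from the paper, and the interval uses the standard Woolf (log-odds) approximation:

```python
import math

# Reconstructed 2x2 counts (approximate; derived from reported prevalences).
t1_total, t1_dpn = 329, 27   # ~8.2% of 329 youth with type 1 diabetes
t2_total, t2_dpn = 70, 18    # ~25.7% of 70 youth with type 2 diabetes

odds_t1 = t1_dpn / (t1_total - t1_dpn)   # odds of DPN given type 1
odds_t2 = t2_dpn / (t2_total - t2_dpn)   # odds of DPN given type 2
odds_ratio = odds_t2 / odds_t1
print(f"unadjusted OR = {odds_ratio:.2f}")  # close to the reported 3.8

# Approximate 95% CI on the log scale (Woolf method).
se = math.sqrt(1 / t1_dpn + 1 / (t1_total - t1_dpn)
               + 1 / t2_dpn + 1 / (t2_total - t2_dpn))
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"95% CI = ({lo:.1f}, {hi:.1f})")     # close to the reported 1.9-7.5
```

That the reconstruction lands near the reported OR and CI is a useful sanity check; the adjusted ORs in the passage require the full covariate data and cannot be recovered this way.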

“The prevalence of DPN among type 1 diabetes youth in our pilot study is lower than that reported by Eppens et al. (15) among 1,433 Australian adolescents with type 1 diabetes assessed by thermal threshold testing and VPT (prevalence of DPN 27%; median age and duration 15.7 and 6.8 years, respectively). A much higher prevalence was also reported among Danish (62.5%) and Brazilian (46%) cohorts of type 1 diabetes youth (16,17) despite a younger age (mean age among Danish children 13.7 years and Brazilian cohort 12.9 years). The prevalence of DPN among youth with type 2 diabetes (26%) found in our study is comparable to that reported among the Australian cohort (21%) (15). The wide ranges in the prevalence estimates of DPN among the young cannot solely be attributed to the inherent racial/ethnic differences in this population but could potentially be due to the differing criteria and diagnostic tests used to define and characterize DPN.”

“In our study, the duration of diabetes was significantly longer among those with DPN, but A1C values did not differ significantly between the two groups, suggesting that a longer duration with its sustained impact on peripheral nerves is an important determinant of DPN. […] Cho et al. (22) reported an increase in the prevalence of DPN from 14 to 28% over 17 years among 819 Australian adolescents with type 1 diabetes aged 11–17 years at baseline, despite improvements in care and minor improvements in A1C (8.2–8.7%). The prospective Danish Study Group of Diabetes in Childhood also found no association between DPN (assessed by VPT) and glycemic control (23).”

“In conclusion, our pilot study found evidence that the prevalence of DPN in adolescents with type 2 diabetes approaches rates reported in adults with diabetes. Several CVD risk factors such as central obesity, elevated blood pressure, dyslipidemia, and microalbuminuria, previously identified as predictors of DPN among adults with diabetes, emerged as independent predictors of DPN in this young cohort and likely accounted for the increased prevalence of DPN in youth with type 2 diabetes.”

“Type 1 diabetes appears to be a risk factor for the development of disturbed eating behavior (DEB) (1,2). Estimates of the prevalence of DEB among individuals with type 1 diabetes range from 10 to 49% (3,4), depending on methodological issues such as the definition and measurement of DEB. Some studies only report the prevalence of full-threshold diagnoses of anorexia nervosa, bulimia nervosa, and eating disorders not otherwise specified, whereas others also include subclinical eating disorders (1). […] Although different terminology complicates the interpretation of prevalence rates across studies, the findings are sufficiently robust to indicate that there is a higher prevalence of DEB in type 1 diabetes compared with healthy controls. A meta-analysis reported a three-fold increase of bulimia nervosa, a two-fold increase of eating disorders not otherwise specified, and a two-fold increase of subclinical eating disorders in patients with type 1 diabetes compared with controls (2). No elevated rates of anorexia nervosa were found.”

“When DEB and type 1 diabetes co-occur, rates of morbidity and mortality are dramatically increased. A Danish study of comorbid type 1 diabetes and anorexia nervosa showed that the crude mortality rate at 10-year follow-up was 2.5% for type 1 diabetes and 6.5% for anorexia nervosa, but the rate increased to 34.8% when occurring together (the standardized mortality rates were 4.06, 8.86, and 14.5, respectively) (9). The presence of DEB in general also can severely impair metabolic control and advance the onset of long-term diabetes complications (4). Insulin reduction or omission is an efficient weight loss strategy uniquely available to patients with type 1 diabetes and has been reported in up to 37% of patients (10–12). Insulin restriction is associated with poorer metabolic control, and previous research has found that self-reported insulin restriction at baseline leads to a three-fold increased risk of mortality at 11-year follow-up (10).

Few population-based studies have specifically investigated the prevalence of and relationship between DEBs and insulin restriction. The generalizability of existing research remains limited by relatively small samples and a lack of males. Further, many studies have relied on generic measures of DEBs, which may not be appropriate for use in individuals with type 1 diabetes. The Diabetes Eating Problem Survey–Revised (DEPS-R) is a newly developed and diabetes-specific screening tool for DEBs. A recent study demonstrated satisfactory psychometric properties of the Norwegian version of the DEPS-R among children and adolescents with type 1 diabetes 11–19 years of age (13). […] This study aimed to assess young patients with type 1 diabetes to assess the prevalence of DEBs and frequency of insulin omission or restriction, to compare the prevalence of DEB between males and females across different categories of weight and age, and to compare the clinical features of participants with and without DEBs and participants who restrict and do not restrict insulin. […] The final sample consisted of 770 […] children and adolescents with type 1 diabetes 11–19 years of age. There were 380 (49.4%) males and 390 (50.6%) females.”

“27.7% of female and 9% of male children and adolescents with type 1 diabetes receiving intensified insulin treatment scored above the predetermined cutoff on the DEPS-R, suggesting a level of disturbed eating that warrants further attention by treatment providers. […] Significant differences emerged across age and weight categories, and notable sex-specific trends were observed. […] For the youngest (11–13 years) and underweight (BMI <18.5) categories, the proportion of DEB was <10% for both sexes […]. Among females, the prevalence of DEB increased dramatically with age to ∼33% among 14 to 16 year olds and to nearly 50% among 17 to 19 year olds. Among males, the rate remained low at 7% for 14 to 16 year olds and doubled to ∼15% for 17 to 19 year olds.

A similar sex-specific pattern was detected across weight categories. Among females, the prevalence of DEB increased steadily and significantly from 9% among the underweight category to 23% for normal weight, 42% for overweight, and 53% for the obese categories. Among males, ∼6–7% of both the underweight and normal weight groups reported DEB, with rates increasing to ∼15% for both the overweight and obese groups. […] When separated by sex, females scoring above the cutoff on the DEPS-R had significantly higher HbA1c (9.2% [SD, 1.9]) than females scoring below the cutoff (8.4% [SD, 1.3]; P < 0.001). The same trend was observed among males (9.2% [SD, 1.6] vs. 8.4% [SD, 1.3]; P < 0.01). […] A total of 31.6% of the participants reported using less insulin and 6.9% reported skipping their insulin dose entirely at least occasionally after overeating. When assessing the sexes separately, we found that 36.8% of females reported restricting and 26.2% reported skipping insulin because of overeating. The rates for males were 9.4 and 4.5%, respectively.”

“The finding that DEBs are common in young patients with type 1 diabetes is in line with previous literature (2). However, because of different assessment methods and different definitions of DEB, direct comparison with other studies is complicated, especially because this is the first study to have used the DEPS-R in a prevalence study. However, two studies using the original DEPS have reported similar results, with 37.9% (23) and 53.8% (24) of the participants reporting engaging in unhealthy weight control practices. In our study, females scored significantly higher than males, which is not surprising given previous studies demonstrating an increased risk of development of DEB in nondiabetic females compared with males. In addition, the prevalence rates increased considerably by increasing age and weight. A relationship between eating pathology and older age and higher BMI also has been demonstrated in previous research conducted in both diabetic and nondiabetic adolescent populations.”

“Consistent with existent literature (10–12,27), we found a high frequency of insulin restriction. For example, Bryden et al. (11) assessed 113 males and females (aged 17–25 years) with type 1 diabetes and found that a total of 37% of the females (no males) reported a history of insulin omission or reduction for weight control purposes. Peveler et al. (12) investigated 87 females with type 1 diabetes aged 11–25 years, and 36% reported intentionally reducing or omitting their insulin doses to control their weight. Finally, Goebel-Fabbri et al. (10) examined 234 females 13–60 years of age and found that 30% reported insulin restriction. Similarly, 36.8% of the participants in our study reported reducing their insulin doses occasionally or more often after overeating.”

“Despite good-quality evidence of tight glycemic control, particularly early in the disease trajectory (3), people with type 2 diabetes often do not reach recommended glycemic targets. Baseline characteristics in observational studies indicate that both insulin-experienced and insulin-naïve people may have mean HbA1c above the recommended target levels, reflecting the existence of patients with poor glycemic control in routine clinical care (8–10). […] U.K. data, based on an analysis reflecting previous NICE guidelines, show that it takes a mean of 7.7 years to initiate insulin after the start of the last OAD [oral antidiabetes drugs] (in people taking two or more OADs) and that mean HbA1c is ~10% (86 mmol/mol) at the time of insulin initiation (12). […] This failure to intensify treatment in a timely manner has been termed clinical inertia; however, data are lacking on clinical inertia in the diabetes-management pathway in a real-world primary care setting, and studies that have been carried out are, relatively speaking, small in scale (13,14). This retrospective cohort analysis investigates time to intensification of treatment in people with type 2 diabetes treated with OADs and the associated levels of glycemic control, and compares these findings with recommended treatment guidelines for diabetes.”

“We used the Clinical Practice Research Datalink (CPRD) database. This is the world’s largest computerized database, representing the primary care longitudinal records of >13 million patients from across the U.K. The CPRD is representative of the U.K. general population, with age and sex distributions comparable with those reported by the U.K. National Population Census (15). All information collected in the CPRD has been subjected to validation studies and been proven to contain consistent and high-quality data (16).”

“This analysis shows that there is a delay in intensifying treatment in people with type 2 diabetes with suboptimal glycemic control, with patients remaining in poor glycemic control for >7 years before intensification of treatment with insulin. In patients taking one, two, or three OADs, median time from initiation of treatment to intensification with an additional OAD for any patient exceeded the maximum follow-up time of 7.2–7.3 years, dependent on subcohort. […] Despite having HbA1c levels for which diabetes guidelines recommend treatment intensification, few people appeared to undergo intensification (4,6,7). The highest proportion of people with clinical inertia was for insulin initiation in people taking three OADs. Consequently, these people experienced prolonged periods in poor glycemic control, which is detrimental to long-term outcomes.”

“Previous studies in U.K. general practice have shown similar findings. A retrospective study involving 14,824 people with type 2 diabetes from 154 general practice centers contributing to the Doctors Independent Network Database (DIN-LINK) between 1995 and 2005 observed that median time to insulin initiation for people prescribed multiple OADs was 7.7 years (95% CI 7.4–8.5 years); mean HbA1c before insulin was 9.85% (84 mmol/mol), which decreased by 1.34% (95% CI 1.24–1.44%) after therapy (12). A longitudinal observational study from health maintenance organization data in 3,891 patients with type 2 diabetes in the U.S. observed that, despite continued HbA1c levels >7% (>53 mmol/mol), people treated with sulfonylurea and metformin did not start insulin for almost 3 years (21). Another retrospective cohort study, using data from the Health Improvement Network database of 2,501 people with type 2 diabetes, estimated that only 25% of people started insulin within 1.8 years of multiple OAD failure, if followed for 5 years, and that 50% of people delayed starting insulin for almost 5 years after failure of glycemic control with multiple OADs (22). The U.K. cohort of a recent, 26-week observational study examining insulin initiation in clinical practice reported a large proportion of insulin-naïve people with HbA1c >9% (>75 mmol/mol) at baseline (64%); the mean HbA1c in the global cohort was 8.9% (74 mmol/mol) (10). Consequently, our analysis supports previous findings concerning clinical inertia in both U.K. and U.S. general practice and reflects little improvement in recent years, despite updated treatment guidelines recommending tight glycemic control.”

“How hyperglycemia may cause damage to the nervous system is not fully understood. One consequence of hyperglycemia is the generation of advanced glycation end products (AGEs) that can form nonenzymatically between glucose, lipids, and amino groups. It is believed that AGEs are involved in the pathophysiology of neuropathy. AGEs tend to affect cellular function by altering protein function (11). One of the AGEs, N-ε-(carboxymethyl)lysine (CML), has been found in excessive amounts in the human diabetic peripheral nerve (12). High levels of methylglyoxal in serum have been found to be associated with painful peripheral neuropathy (13). In recent years, differentiation of affected nerves is possible by virtue of specific function tests to distinguish which fibers are damaged in diabetic polyneuropathy: large myelinated (Aα, Aβ), small thinly myelinated (Aδ), or small nonmyelinated (C) fibers. […] Our aims were to evaluate large- and small-nerve fiber function in long-term type 1 diabetes and to search for longitudinal associations with HbA1c and the AGEs CML and methylglyoxal-derived hydroimidazolone.”

“27 persons with type 1 diabetes of 40 ± 3 years duration underwent large-nerve fiber examinations, with nerve conduction studies at baseline and years 8, 17, and 27. Small-fiber functions were assessed by quantitative sensory thresholds (QST) and intraepidermal nerve fiber density (IENFD) at year 27. HbA1c was measured prospectively through 27 years. […] Fourteen patients (52%) reported sensory symptoms. Nine patients reported symptoms of a sensory neuropathy (reduced sensibility in feet or impaired balance), while three of these patients described pain. Five patients had symptoms compatible with carpal tunnel syndrome (pain or paresthesias within the innervation territory of the median nerve […]. An additional two had no symptoms but abnormal neurological tests with absent tendon reflexes and reduced sensibility. A total of 16 (59%) of the patients had symptoms or signs of neuropathy. […] No patient with symptoms of neuropathy had normal neurophysiological findings. […] Abnormal autonomic testing was observed in 7 (26%) of the patients and occurred together with neurophysiological signs of peripheral neuropathy. […] Twenty-two (81%) had small-fiber dysfunction by QST. Heat pain thresholds in the foot were associated with hydroimidazolone and HbA1c. IENFD was abnormal in 19 (70%) and significantly lower in diabetic patients than in age-matched control subjects (4.3 ± 2.3 vs. 11.2 ± 3.5 mm, P < 0.001). IENFD correlated negatively with HbA1c over 27 years (r = −0.4, P = 0.04) and CML (r = −0.5, P = 0.01). After adjustment for age, height, and BMI in a multiple linear regression model, CML was still independently associated with IENFD.”

“Our study shows that small-fiber dysfunction is more prevalent than large-fiber dysfunction in diabetic neuropathy after long duration of type 1 diabetes. Although large-fiber abnormalities were less common than small-fiber abnormalities, almost 60% of the participants had their large nerves affected after 40 years with diabetes. Long-term blood glucose estimated by HbA1c measured prospectively through 27 years and AGEs predict large- and small-nerve fiber function.”

“Subarachnoid hemorrhage (SAH) is a life-threatening cerebrovascular event, which is usually caused by a rupture of a cerebrovascular aneurysm. These aneurysms are mostly found in relatively large-caliber (≥1 mm) vessels and can often be considered as macrovascular lesions. The overall incidence of SAH has been reported to be 10.3 per 100,000 person-years (1), even though the variation in incidence between countries is substantial (1). Notably, the population-based incidence of SAH is 35 per 100,000 person-years in the adult (≥25 years of age) Finnish population (2). The incidence of nonaneurysmal SAH is globally unknown, but it is commonly believed that 5–15% of all SAHs are of nonaneurysmal origin. Prospective, long-term, population-based SAH risk factor studies suggest that smoking (2–4), high blood pressure (2–4), age (2,3), and female sex (2,4) are the most important risk factors for SAH, whereas diabetes (both types 1 and 2) does not appear to be associated with an increased risk of SAH (2,3).

An increased risk of cardiovascular disease is well recognized in people with diabetes. There are, however, very few studies on the risk of cerebrovascular disease in type 1 diabetes since most studies have focused on type 2 diabetes alone or together with type 1 diabetes. Cerebrovascular mortality in the 20–39-year age-group of people with type 1 diabetes is increased five- to sevenfold in comparison with the general population but accounts only for 15% of all cardiovascular deaths (5). Of the cerebrovascular deaths in patients with type 1 diabetes, 23% are due to hemorrhagic strokes (5). However, the incidence of SAH in type 1 diabetes is unknown. […] In this prospective cohort study of 4,083 patients with type 1 diabetes, we aimed to determine the incidence and characteristics of SAH.”

“52% [of participants] were men, the mean age was 37.4 ± 11.8 years, and the duration of diabetes was 21.6 ± 12.1 years at enrollment. The FinnDiane Study is a nationwide multicenter cohort study of genetic, clinical, and environmental risk factors for microvascular and macrovascular complications in type 1 diabetes. […] all type 1 diabetic patients in the FinnDiane database with follow-up data and without a history of stroke at baseline were included. […] Fifteen patients were confirmed to have an SAH, and thus the crude incidence of SAH was 40.9 (95% CI 22.9–67.4) per 100,000 person-years. Ten out of these 15 SAHs were nonaneurysmal SAHs […] The crude incidence of nonaneurysmal SAH was 27.3 (13.1–50.1) per 100,000 person-years. None of the 10 nonaneurysmal SAHs were fatal. […] Only 3 out of 10 patients did not have verified diabetic microvascular or macrovascular complications prior to the nonaneurysmal SAH event. […] Four patients with type 1 diabetes had a fatal SAH, and all these patients died within 24 h after SAH.”

“The presented study results suggest that the incidence of nonaneurysmal SAH is high among patients with type 1 diabetes. […] It is of note that smoking type 1 diabetic patients had a significantly increased risk of nonaneurysmal and all-cause SAHs. Smoking also increases the risk of microvascular complications in insulin-treated diabetic patients, and these patients more often have retinal and renal microangiopathy than never-smokers (8). […] Given the high incidence of nonaneurysmal SAH in patients with type 1 diabetes and microvascular changes (i.e., diabetic retinopathy and nephropathy), the results support the hypothesis that nonaneurysmal SAH is a microvascular rather than macrovascular subtype of stroke.”

“Only one patient with type 1 diabetes had a confirmed aneurysmal SAH. Four other patients died suddenly due to an SAH. If these four patients with type 1 diabetes and a fatal SAH had an aneurysmal SAH, which, taking into account the autopsy reports and imaging findings, is very likely, aneurysmal SAH may be an exceptionally deadly event in type 1 diabetes. Population-based evidence suggests that up to 45% of people die during the first 30 days after SAH, and 18% die at emergency rooms or outside hospitals (9). […] Contrary to aneurysmal SAH, nonaneurysmal SAH is virtually always a nonfatal event (10–14). This also supports the view that nonaneurysmal SAH is a disease of small intracranial vessels, i.e., a microvascular disease. Diabetic retinopathy, a chronic microvascular complication, has been associated with an increased risk of stroke in patients with diabetes (15,16). Embryonically, the retina is an outgrowth of the brain and is similar in its microvascular properties to the brain (17). Thus, it has been suggested that assessments of the retinal vasculature could be used to determine the risk of cerebrovascular diseases, such as stroke […] Most interestingly, the incidence of nonaneurysmal SAH was at least two times higher than the incidence of aneurysmal SAH in type 1 diabetic patients. In comparison, the incidence of nonaneurysmal SAH is >10 times lower than the incidence of aneurysmal SAH in the general adult population (21).”

Keep in mind when looking at these data that this is type 2 data. Type 1 diabetes is very rare in Japan and the rest of East Asia.

“The risk for cardiovascular death was evaluated in a large cohort of participants selected randomly from the overall Japanese population. A total of 7,120 participants (2,962 men and 4,158 women; mean age 52.3 years) free of previous CVD were followed for 15 years. Adjusted hazard ratios (HRs) and 95% CIs among categories of HbA1c (<5.0%, 5.0–5.4%, 5.5–5.9%, 6.0–6.4%, and ≥6.5%) for participants without treatment for diabetes and HRs for participants with diabetes were calculated using a Cox proportional hazards model.

RESULTS During the study, there were 1,104 deaths, including 304 from CVD, 61 from coronary heart disease, and 127 from stroke (78 from cerebral infarction, 25 from cerebral hemorrhage, and 24 from unclassified stroke). Relations to HbA1c with all-cause mortality and CVD death were graded and continuous, and multivariate-adjusted HRs for CVD death in participants with HbA1c 6.0–6.4% and ≥6.5% were 2.18 (95% CI 1.22–3.87) and 2.75 (1.43–5.28), respectively, compared with participants with HbA1c <5.0%. Similar associations were observed between HbA1c and death from coronary heart disease and death from cerebral infarction.

CONCLUSIONS High HbA1c levels were associated with increased risk for all-cause mortality and death from CVD, coronary heart disease, and cerebral infarction in general East Asian populations, as in Western populations.”

November 15, 2017

## Materials (I)…

“Useful matter is a good definition of materials. […] Materials are materials because inventive people find ingenious things to do with them. Or just because people use them. […] Materials science […] explains how materials are made and how they behave as we use them.”

I recently read this book, which I liked. Below I have added some quotes from the first half of the book, with some hopefully helpful links added, as well as a collection of links at the bottom of the post to other topics covered.

“We understand all materials by knowing about composition and microstructure. Despite their extraordinary minuteness, the atoms are the fundamental units, and they are real, with precise attributes, not least size. Solid materials tend towards crystallinity (for the good thermodynamic reason that it is the arrangement of lowest energy), and they usually achieve it, though often in granular, polycrystalline forms. Processing conditions greatly influence microstructures which may be mobile and dynamic, particularly at high temperatures. […] The idea that we can understand materials by looking at their internal structure in finer and finer detail goes back to the beginnings of microscopy […]. This microstructural view is more than just an important idea, it is the explanatory framework at the core of materials science. Many other concepts and theories exist in materials science, but this is the framework. It says that materials are intricately constructed on many length-scales, and if we don’t understand the internal structure we shall struggle to explain or to predict material behaviour.”

“Oxygen is the most abundant element in the earth’s crust and silicon the second. In nature, silicon occurs always in chemical combination with oxygen, the two forming the strong Si–O chemical bond. The simplest combination, involving no other elements, is silica; and most grains of sand are crystals of silica in the form known as quartz. […] The quartz crystal comes in right- and left-handed forms. Nothing like this happens in metals but arises frequently when materials are built from molecules and chemical bonds. The crystal structure of quartz has to incorporate two different atoms, silicon and oxygen, each in a repeating pattern and in the precise ratio 1:2. There is also the severe constraint imposed by the Si–O chemical bonds which require that each Si atom has four O neighbours arranged around it at the corners of a tetrahedron, every O bonded to two Si atoms. The crystal structure which quartz adopts (which of all possibilities is the one of lowest energy) is made up of triangular and hexagonal units. But within this there are buried helixes of Si and O atoms, and a helix must be either right- or left-handed. Once a quartz crystal starts to grow as right- or left-handed, its structure templates all the other helices with the same handedness. Equal numbers of right- and left-handed crystals occur in nature, but each is unambiguously one or the other.”

“In the living tree, and in the harvested wood that we use as a material, there is a hierarchy of structural levels, climbing all the way from the molecular to the scale of branch and trunk. The stiff cellulose chains are bundled into fibrils, which are themselves bonded by other organic molecules to build the walls of cells; which in turn form channels for the transport of water and nutrients, the whole having the necessary mechanical properties to support its weight and to resist the loads of wind and rain. In the living tree, the structure allows also for growth and repair. There are many things to be learned from biological materials, but the most universal is that biology builds its materials at many structural levels, and rarely makes a distinction between the material and the organism. Being able to build materials with hierarchical architectures is still more or less out of reach in materials engineering. Understanding how materials spontaneously self-assemble is the biggest challenge in contemporary nanotechnology.”

“The example of diamond shows two things about crystalline materials. First, anything we know about an atom and its immediate environment (neighbours, distances, angles) holds for every similar atom throughout a piece of material, however large; and second, everything we know about the unit cell (its size, its shape, and its symmetry) also applies throughout an entire crystal […] and by extension throughout a material made of a myriad of randomly oriented crystallites. These two general propositions provide the basis and justification for lattice theories of material behaviour which were developed from the 1920s onwards. We know that every solid material must be held together by internal cohesive forces. If it were not, it would fly apart and turn into a gas. A simple lattice theory says that if we can work out what forces act on the atoms in one unit cell, then this should be enough to understand the cohesion of the entire crystal. […] In lattice models which describe the cohesion and dynamics of the atoms, the role of the electrons is mainly in determining the interatomic bonding and the stiffness of the bond-spring. But in many materials, and especially in metals and semiconductors, some of the electrons are free to move about within the lattice. A lattice model of electron behaviour combines a geometrical description of the lattice with a more or less mechanical view of the atomic cores, and a fully quantum theoretical description of the electrons themselves. We need only to take account of the outer electrons of the atoms, as the inner electrons are bound tightly into the cores and are not itinerant. The outer electrons are the ones that form chemical bonds, so they are also called the valence electrons.”

“It is harder to push atoms closer together than to pull them further apart. While atoms are soft on the outside, they have harder cores, and pushed together the cores start to collide. […] when we bring a trillion atoms together to form a crystal, it is the valence electrons that are disturbed as the atoms approach each other. As the atomic cores come close to the equilibrium spacing of the crystal, the electron states of the isolated atoms morph into a set of collective states […]. These collective electron states have a continuous distribution of energies up to a top level, and form a ‘band’. But the separation of the valence electrons into distinct electron-pair states is preserved in the band structure, so that we find that the collective states available to the entire population of valence electrons in the entire crystal form a set of bands […]. Thus in silicon, there are two main bands.”

“The perfect crystal has atoms occupying all the positions prescribed by the geometry of its crystal lattice. But real crystalline materials fall short of perfection […] For instance, an individual site may be unoccupied (a vacancy). Or an extra atom may be squeezed into the crystal at a position which is not a lattice position (an interstitial). An atom may fall off its lattice site, creating a vacancy and an interstitial at the same time. Sometimes a site is occupied by the wrong kind of atom. Point defects of this kind distort the crystal in their immediate neighbourhood. Vacancies free up diffusional movement, allowing atoms to hop from site to site. Larger scale defects invariably exist too. A complete layer of atoms or unit cells may terminate abruptly within the crystal to produce a line defect (a dislocation). […] There are materials which try their best to crystallize, but find it hard to do so. Many polymer materials are like this. […] The best they can do is to form small crystalline regions in which the molecules lie side by side over limited distances. […] Often the crystalline domains comprise about half the material: it is a semicrystal. […] Crystals can be formed from the melt, from solution, and from the vapour. All three routes are used in industry and in the laboratory. As a rule, crystals that grow slowly are good crystals. Geological time can give wonderful results. Often, crystals are grown on a seed, a small crystal of the same material deliberately introduced into the crystallization medium. If this is a melt, the seed can gradually be pulled out, drawing behind it a long column of new crystal material. This is the Czochralski process, an important method for making semiconductors. […] However it is done, crystals invariably grow by adding material to the surface of a small particle to make it bigger.”

“As we go down the Periodic Table of elements, the atoms get heavier much more quickly than they get bigger. The mass of a single atom of uranium at the bottom of the Table is about 25 times greater than that of an atom of the lightest engineering metal, beryllium, at the top, but its radius is only 40 per cent greater. […] The density of solid materials of every kind is fixed mainly by where the constituent atoms are in the Periodic Table. The packing arrangement in the solid has only a small influence, although the crystalline form of a substance is usually a little denser than the amorphous form […] The range of solid densities available is therefore quite limited. At the upper end we hit an absolute barrier, with nothing denser than osmium (22,590 kg/m³). At the lower end we have some slack, as we can make lighter materials by the trick of incorporating holes to make foams and sponges and porous materials of all kinds. […] in the entire catalogue of available materials there is a factor of about a thousand for ingenious people to play with, from say 20 to 20,000 kg/m³.”
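The quoted figures are easy to sanity-check. A small Python sketch; the atomic masses are standard periodic-table values (my numbers, not the book's):

```python
# Quick check of the quoted figures. Atomic masses (in unified atomic
# mass units) are standard periodic-table values, not from the book.
m_uranium = 238.03
m_beryllium = 9.012

mass_ratio = m_uranium / m_beryllium
print(round(mass_ratio, 1))  # ~26, i.e. "about 25 times greater"

# Quoted density range: light foams (~20 kg/m^3) up to osmium.
density_range_factor = 22590 / 20
print(round(density_range_factor))  # ~1100: "a factor of about a thousand"
```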

“The expansion of materials as we increase their temperature is a universal tendency. It occurs because as we raise the temperature the thermal energy of the atoms and molecules increases correspondingly, and this fights against the cohesive forces of attraction. The mean distance of separation between atoms in the solid (or the liquid) becomes larger. […] As a general rule, the materials with small thermal expansivities are metals and ceramics with high melting temperatures. […] Although thermal expansion is a smooth process which continues from the lowest temperatures to the melting point, it is sometimes interrupted by sudden jumps […]. Changes in crystal structure at precise temperatures are commonplace in materials of all kinds. […] There is a cluster of properties which describe the thermal behaviour of materials. Besides the expansivity, there is the specific heat, and also the thermal conductivity. These properties show us, for example, that it takes about four times as much energy to increase the temperature of 1 kilogram of aluminium by 1°C as 1 kilogram of silver; and that good conductors of heat are usually also good conductors of electricity. At everyday temperatures there is not a huge difference in specific heat between materials. […] In all crystalline materials, thermal conduction arises from the diffusion of phonons from hot to cold regions. As they travel, the phonons are subject to scattering both by collisions with other phonons, and with defects in the material. This picture explains why the thermal conductivity falls as temperature rises”.
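The "four times as much energy" comparison follows directly from Q = mcΔT. A minimal sketch; the specific heats are handbook values for aluminium and silver (my assumptions, not figures from the book):

```python
# Energy to heat a mass by dT degrees: Q = m * c * dT.
# Specific heats below are handbook values in J/(kg*K) -- assumed
# for illustration, not taken from the book.
C_ALUMINIUM = 897.0
C_SILVER = 235.0

def heat_energy(mass_kg, specific_heat, delta_t):
    """Q = m * c * dT, in joules."""
    return mass_kg * specific_heat * delta_t

q_al = heat_energy(1.0, C_ALUMINIUM, 1.0)
q_ag = heat_energy(1.0, C_SILVER, 1.0)
print(round(q_al / q_ag, 1))  # ~3.8: "about four times as much energy"
```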

## Common Errors in Statistics… (III)

This will be my last post about the book. I liked most of it, and I gave it four stars on goodreads, but that doesn’t mean there weren’t any observations included in the book with which I took issue/disagreed. Here’s one of the things I didn’t like:

“In the univariate [model selection] case, if the errors were not normally distributed, we could take advantage of permutation methods to obtain exact significance levels in tests of the coefficients. Exact permutation methods do not exist in the multivariable case.

When selecting variables to incorporate in a multivariable model, we are forced to perform repeated tests of hypotheses, so that the resultant p-values are no longer meaningful. One solution, if sufficient data are available, is to divide the dataset into two parts, using the first part to select variables, and the second part to test these same variables for significance.” (chapter 13)

The basic idea is to use the results of hypothesis tests to decide which variables to include in the model. This is common practice, and it is bad practice. I found it surprising that such a piece of advice would be included in this book, as I’d figured beforehand that this would be precisely the sort of thing a book like this one would tell people not to do. I’ve said this before multiple times on this blog, but I’ll keep saying it, especially if/when I find this sort of advice in statistics textbooks: Using hypothesis testing as a basis for model selection is an invalid approach to model selection, and it’s in general a terrible idea. “There is no statistical theory that supports the notion that hypothesis testing with a fixed α level is a basis for model selection.” (Burnham & Anderson). Use information criteria, not hypothesis tests, to make your model selection decisions. (And read Burnham & Anderson’s book on these topics.)
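To make the information-criterion alternative concrete, here is a minimal numpy sketch using the Gaussian-error AIC, n·ln(RSS/n) + 2k (up to an additive constant). The data and model specifications are invented for illustration:

```python
import numpy as np

def aic_ols(y, X):
    """Gaussian-error AIC up to an additive constant:
    n*log(RSS/n) + 2k, where k is the number of fitted coefficients."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)   # true model uses x1

aic_true = aic_ols(y, np.column_stack([np.ones(n), x1]))  # intercept + x1
aic_null = aic_ols(y, np.ones((n, 1)))                    # intercept only
print(aic_true < aic_null)  # True: the criterion prefers the model with x1
```

The point is that candidate models are ranked by a single criterion that trades fit against the 2k complexity penalty, rather than by a sequence of fixed-α hypothesis tests.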

Anyway, much of the stuff included in the book was good stuff and it’s a very decent book. I’ve added some quotes and observations from the last part of the book below.

“OLS is not the only modeling technique. To diminish the effect of outliers, and treat prediction errors as proportional to their absolute magnitude rather than their squares, one should use least absolute deviation (LAD) regression. This would be the case if the conditional distribution of the dependent variable were characterized by a distribution with heavy tails (compared to the normal distribution, increased probability of values far from the mean). One should also employ LAD regression when the conditional distribution of the dependent variable given the predictors is not symmetric and we wish to estimate its median rather than its mean value.
If it is not clear which variable should be viewed as the predictor and which the dependent variable, as is the case when evaluating two methods of measurement, then one should employ Deming or error in variable (EIV) regression.
If one’s primary interest is not in the expected value of the dependent variable but in its extremes (the number of bacteria that will survive treatment or the number of individuals who will fall below the poverty line), then one ought consider the use of quantile regression.
If distinct strata exist, one should consider developing separate regression models for each stratum, a technique known as ecological regression […] If one’s interest is in classification or if the majority of one’s predictors are dichotomous, then one should consider the use of classification and regression trees (CART) […] If the outcomes are limited to success or failure, one ought employ logistic regression. If the outcomes are counts rather than continuous measurements, one should employ a generalized linear model (GLM).”
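To illustrate the LAD alternative mentioned above, here is a crude numpy sketch using iteratively reweighted least squares on simulated data; a real analysis would use a dedicated routine (LAD is quantile regression at the median), but the sketch shows why absolute-error loss resists outliers:

```python
import numpy as np

def lad_fit(X, y, n_iter=50, eps=1e-8):
    """Least absolute deviations via iteratively reweighted least
    squares: each pass downweights points in proportion to the size
    of their current residual, so outliers count by |r|, not r^2."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # start from the OLS fit
    for _ in range(n_iter):
        r = y - X @ beta
        w = np.sqrt(1.0 / np.maximum(np.abs(r), eps))
        beta, *_ = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)
    return beta

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)  # true intercept 1, slope 2
y[:10] += 50.0                          # ten gross outliers

X = np.column_stack([np.ones(n), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_lad = lad_fit(X, y)
print("OLS:", beta_ols, "LAD:", beta_lad)  # LAD stays near (1, 2)
```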

“Linear regression is a much misunderstood and mistaught concept. If a linear model provides a good fit to data, this does not imply that a plot of the dependent variable with respect to the predictor would be a straight line, only that a plot of the dependent variable with respect to some not-necessarily monotonic function of the predictor would be a line. For example, y = A + B log[x] and y = A cos(x) + B sin(x) are both linear models whose coefficients A and B might be derived by OLS or LAD methods. Y = Ax^5 is a linear model. Y = x^A is nonlinear. […] Perfect correlation (ρ² = 1) does not imply that two variables are identical but rather that one of them, Y, say, can be written as a linear function of the other, Y = a + bX, where b is the slope of the regression line and a is the intercept. […] Nonlinear regression methods are appropriate when the form of the nonlinear model is known in advance. For example, a typical pharmacological model will have the form A exp[bX] + C exp[dW]. The presence of numerous locally optimal but globally suboptimal solutions creates challenges, and validation is essential. […] To be avoided are a recent spate of proprietary algorithms available solely in software form that guarantee to find a best-fitting solution. In the words of John von Neumann, “With four parameters I can fit an elephant and with five I can make him wiggle his trunk.””
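The "linear in the coefficients" point is easy to demonstrate: y = A + B log[x] can be fitted by ordinary least squares once the design matrix holds log(x). A small numpy sketch on simulated data (the parameter values are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.5, 5.0, size=200)
y = 3.0 + 1.5 * np.log(x) + 0.05 * rng.normal(size=200)

# The model y = A + B*log(x) is linear *in A and B*, so ordinary
# least squares applies once the design matrix holds [1, log(x)],
# even though y versus x is a curve.
X = np.column_stack([np.ones_like(x), np.log(x)])
(A, B), *_ = np.linalg.lstsq(X, y, rcond=None)
print(A, B)  # close to the true values 3.0 and 1.5
```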

“[T]he most common errors associated with quantile regression include: 1. Failing to evaluate whether the model form is appropriate, for example, forcing a linear fit through an obvious nonlinear response. (Of course, this is also a concern with mean regression, OLS, LAD, or EIV.) 2. Trying to overinterpret a single quantile estimate (say 0.85) with a statistically significant nonzero slope (p < 0.05) when the majority of adjacent quantiles (say 0.50–0.84 and 0.86–0.95) are clearly zero (p > 0.20). 3. Failing to use all the information a quantile regression provides. Even if you think you are only interested in relations near the maximum (say 0.90–0.99), your understanding will be enhanced by having estimates (and sampling variation via confidence intervals) across a wide range of quantiles (say 0.01–0.99).”

“Survival analysis is used to assess time-to-event data including time to recovery and time to revision. Most contemporary survival analysis is built around the Cox model […] Possible sources of error in the application of this model include all of the following: *Neglecting the possible dependence of the baseline function λ₀ on the predictors. *Overmatching, that is, using highly correlated predictors that may well mask each other’s effects. *Using the parametric Breslow or Kaplan–Meier estimators of the survival function rather than the nonparametric Nelson–Aalen estimator. *Excluding patients based on post-hoc criteria. Pathology workups on patients who died during the study may reveal that some of them were wrongly diagnosed. Regardless, patients cannot be eliminated from the study as we lack the information needed to exclude those who might have been similarly diagnosed but who are still alive at the conclusion of the study. *Failure to account for differential susceptibility (frailty) of the patients”.
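Since the quote singles out the Nelson–Aalen estimator, here is a minimal numpy sketch of it on invented toy data: at each distinct event time t, the cumulative hazard increases by d_t/n_t, the number of events at t over the number still at risk:

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen cumulative hazard: at each distinct event time t,
    add d_t / n_t (events at t over the number still at risk at t)."""
    order = np.argsort(times)
    times = np.asarray(times)[order]
    events = np.asarray(events)[order]
    n_at_risk = len(times)
    cum_haz, out_t, out_h = 0.0, [], []
    for t in np.unique(times):
        here = times == t
        d = int(events[here].sum())     # events observed at time t
        if d > 0:
            cum_haz += d / n_at_risk
            out_t.append(float(t))
            out_h.append(cum_haz)
        n_at_risk -= int(here.sum())    # events and censorings leave the risk set
    return out_t, out_h

# Toy data: follow-up times, with 1 = event and 0 = censored.
ts, hs = nelson_aalen([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
print(ts, hs)  # hazard increments: 1/5 at t=2, then 1/4, then 1/2
```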

“In reporting the results of your modeling efforts, you need to be explicit about the methods used, the assumptions made, the limitations on your model’s range of application, potential sources of bias, and the method of validation […] Multivariable regression is plagued by the same problems univariate regression is heir to, plus many more of its own. […] If choosing the correct functional form of a model in a univariate case presents difficulties, consider that in the case of k variables, there are k linear terms (should we use logarithms? should we add polynomial terms?) and k(k − 1) first-order cross products of the form xᵢxₖ. Should we include any of the k(k − 1)(k − 2) second-order cross products? A common error is to attribute the strength of a relationship to the magnitude of the predictor’s regression coefficient […] Just scale the units in which the predictor is reported to see how erroneous such an assumption is. […] One of the main problems in multiple regression is multicollinearity, which is the correlation among predictors. Even relatively weak levels of multicollinearity are enough to generate instability in multiple regression models […]. A simple solution is to evaluate the correlation matrix M among predictors, and use this matrix to choose the predictors that are less correlated. […] Test M for each predictor, using the variance inflation factor (VIF) given by 1/(1 − R²), where R² is the multiple coefficient of determination of the predictor against all other predictors. If VIF is large for a given predictor (>8, say) delete this predictor and reestimate the model. […] Dropping collinear variables from the analysis can result in a substantial loss of power”.
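The VIF recipe in the quote is straightforward to sketch directly in numpy: regress each predictor on the others and compute 1/(1 − R²). The data are simulated, and the cutoff of 8 is the book's rule of thumb:

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of predictor j: regress column j on
    all the other columns (with intercept) and return 1 / (1 - R^2)."""
    y = X[:, j]
    Z = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    r2 = 1.0 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(3)
n = 500
a = rng.normal(size=n)
b = a + 0.1 * rng.normal(size=n)   # nearly collinear with a
c = rng.normal(size=n)             # unrelated to both
X = np.column_stack([a, b, c])

vifs = [vif(X, j) for j in range(3)]
print(vifs)  # a and b far above the cutoff of 8; c close to 1
```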

“It can be difficult to predict the equilibrium point for a supply-and-demand model, because producers change their price in response to demand and consumers change their demand in response to price. Failing to account for endogenous variables can lead to biased estimates of the regression coefficients.
Endogeneity can arise not only as a result of omitted variables, but of measurement error, autocorrelated errors, simultaneity, and sample selection errors. One solution is to make use of instrumental variables that should satisfy two conditions: 1. They should be correlated with the endogenous explanatory variables, conditional on the other covariates. 2. They should not be correlated with the error term in the explanatory equation, that is, they should not suffer from the same problem as the original predictor.
Instrumental variables are commonly used to estimate causal effects in contexts in which controlled experiments are not possible, for example in estimating the effects of past and projected government policies.”
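The two conditions above can be illustrated with the simplest instrumental-variable estimator, which with a single instrument z reduces to β̂ᵢᵥ = cov(z, y)/cov(z, x). The simulation below is a hedged sketch (all names and parameter values are mine, not the book's): an unobserved confounder biases ordinary least squares, while the instrument recovers the true effect.

```python
# Hedged sketch of the single-instrument IV estimator:
# beta_IV = cov(z, y) / cov(z, x). Simulated data; everything is illustrative.
import random

random.seed(0)
n = 5000
beta = 2.0  # true causal effect of x on y

z, x, y = [], [], []
for _ in range(n):
    zi = random.gauss(0, 1)                   # instrument: moves x, not y directly
    u = random.gauss(0, 1)                    # unobserved confounder -> endogeneity
    xi = zi + u + random.gauss(0, 1)
    yi = beta * xi + u + random.gauss(0, 1)   # u also enters y, so plain OLS is biased
    z.append(zi); x.append(xi); y.append(yi)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)

beta_ols = cov(x, y) / cov(x, x)  # biased upward by the confounder
beta_iv = cov(z, y) / cov(z, x)   # consistent when the two IV conditions hold
print(f"OLS: {beta_ols:.2f}, IV: {beta_iv:.2f}, true: {beta}")
```

With several instruments or covariates one would use two-stage least squares instead, but the identification logic is the same.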

“[T]he following errors are frequently associated with factor analysis:

- Applying it to datasets with too few cases in relation to the number of variables analyzed […], without noticing that correlation coefficients have very wide confidence intervals in small samples.
- Using oblique rotation to get a number of factors bigger or smaller than the number of factors obtained in the initial extraction by principal components, as a way to show the validity of a questionnaire. For example, obtaining only one factor by principal components and using the oblique rotation to justify that there were two differentiated factors, even when the two factors were correlated and the variance explained by the second factor was very small.
- Confusion between the total variance explained by a factor and the variance explained in the reduced factorial space. In this way a researcher interpreted that a given group of factors explaining 70% of the variance before rotation could explain 100% of the variance after rotation.”

“Poisson regression is appropriate when the dependent variable is a count, as is the case with the arrival of individuals in an emergency room. It is also applicable to the spatial distributions of tornadoes and of clusters of galaxies. To be applicable, the events underlying the outcomes must be independent […] A strong assumption of the Poisson regression model is that the mean and variance are equal (equidispersion). When the variance of a sample exceeds the mean, the data are said to be overdispersed. Fitting the Poisson model to overdispersed data can lead to misinterpretation of coefficients due to poor estimates of standard errors. Naturally occurring count data are often overdispersed due to correlated errors in time or space, or other forms of nonindependence of the observations. One solution is to fit a Poisson model as if the data satisfy the assumptions, but adjust the model-based standard errors usually employed. Another solution is to estimate a negative binomial model, which allows for scalar overdispersion.”
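Overdispersion is easy to see in simulation. The sketch below (parameters illustrative) generates counts as a gamma-Poisson mixture, which is exactly the mechanism behind the negative binomial model mentioned above, and then checks the equidispersion assumption by comparing sample mean and variance.

```python
# Sketch: overdispersed counts as a gamma-Poisson mixture (i.e. negative
# binomial), then a check of the equidispersion assumption. Illustrative only.
import math
import random

random.seed(1)

def poisson(lam):
    # Knuth's multiplication method; fine for the small rates used here
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

n = 3000
# each observation gets its own gamma-distributed rate -> extra variance
counts = [poisson(random.gammavariate(2.0, 2.5)) for _ in range(n)]

mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / (n - 1)
print(f"mean = {mean:.2f}, variance = {var:.2f}")  # variance clearly exceeds mean
```

For genuinely Poisson data the two numbers would be close; here the variance is several times the mean, which is the signal to adjust standard errors or switch to a negative binomial fit.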

“When multiple observations are collected for each principal sampling unit, we refer to the collected information as panel data, correlated data, or repeated measures. […] The dependency of observations violates one of the tenets of regression analysis: that observations are supposed to be independent and identically distributed or IID. Several concerns arise when observations are not independent. First, the effective number of observations (that is, the effective amount of information) is less than the physical number of observations […]. Second, any model that fails to specifically address [the] correlation is incorrect […]. Third, although the correct specification of the correlation will yield the most efficient estimator, that specification is not the only one to yield a consistent estimator.”
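The first point, that correlated observations carry less information than their raw count suggests, can be quantified with the standard design-effect approximation (my addition, not from the quoted text): with m observations per cluster and within-cluster correlation ρ, the effective sample size is roughly n / (1 + (m − 1)ρ).

```python
# Design-effect approximation for the effective number of observations under
# within-cluster correlation. The formula is the standard rule of thumb, not
# taken from the quoted text; the numbers below are illustrative.
def effective_n(n_total, cluster_size, rho):
    return n_total / (1 + (cluster_size - 1) * rho)

# 100 clusters of 10 observations each, modest within-cluster correlation:
print(round(effective_n(1000, 10, 0.2)))  # 357 -- far fewer than 1000
```

Even a modest ρ of 0.2 shrinks 1,000 physical observations to roughly 357 effective ones, which is why ignoring the correlation understates standard errors.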

“The basic issue in deciding whether to utilize a fixed- or random-effects model is whether the sampling units (for which multiple observations are collected) represent the collection of most or all of the entities for which inference will be drawn. If so, the fixed-effects estimator is to be preferred. On the other hand, if those same sampling units represent a random sample from a larger population for which we wish to make inferences, then the random-effects estimator is more appropriate. […] Fixed- and random-effects models address unobserved heterogeneity. The random-effects model assumes that the panel-level effects are randomly distributed. The fixed-effects model assumes a constant disturbance that is a special case of the random-effects model. If the random-effects assumption is correct, then the random-effects estimator is more efficient than the fixed-effects estimator. If the random-effects assumption does not hold […], then the random effects model is not consistent. To help decide whether the fixed- or random-effects model is more appropriate, use the Durbin–Wu–Hausman test comparing coefficients from each model. […] Although fixed-effects estimators and random-effects estimators are referred to as subject-specific estimators, the GEEs available through PROC GENMOD in SAS or xtgee in Stata are called population-averaged estimators. This label refers to the interpretation of the fitted regression coefficients. Subject-specific estimators are interpreted in terms of an effect for a given panel, whereas population-averaged estimators are interpreted in terms of an effect averaged over panels.”
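A minimal sketch of the fixed-effects idea on simulated panel data (all parameter values are mine, chosen for illustration): the "within" estimator demeans y and x inside each panel, which sweeps out the unobserved panel-level effect that biases pooled OLS.

```python
# Sketch of the fixed-effects ("within") estimator on simulated panel data:
# demean y and x within each panel, then run OLS on the demeaned values.
# Numbers and names are illustrative.
import random

random.seed(7)
beta = 1.5            # true coefficient
panels, T = 50, 10

x, y, pid = [], [], []
for i in range(panels):
    alpha_i = random.gauss(0, 2)               # unobserved panel-level effect
    for _ in range(T):
        xi = alpha_i + random.gauss(0, 1)      # x correlated with the effect
        yi = beta * xi + alpha_i + random.gauss(0, 1)
        x.append(xi); y.append(yi); pid.append(i)

def demean_within(values):
    sums, counts = {}, {}
    for i, v in zip(pid, values):
        sums[i] = sums.get(i, 0.0) + v
        counts[i] = counts.get(i, 0) + 1
    return [v - sums[i] / counts[i] for i, v in zip(pid, values)]

xd, yd = demean_within(x), demean_within(y)
beta_fe = sum(a * b for a, b in zip(xd, yd)) / sum(a * a for a in xd)

# pooled OLS for contrast: absorbs the panel effect into the error -> biased
xm, ym = sum(x) / len(x), sum(y) / len(y)
beta_pooled = sum((a - xm) * (b - ym) for a, b in zip(x, y)) / sum((a - xm) ** 2 for a in x)
print(f"within: {beta_fe:.2f}, pooled: {beta_pooled:.2f}, true: {beta}")
```

Because the simulated panel effect is correlated with x, the pooled estimate drifts well above the true coefficient while the within estimate stays close to it; this is the kind of discrepancy the Durbin-Wu-Hausman test formalizes.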

“A favorite example in comparing subject-specific and population-averaged estimators is to consider the difference in interpretation of regression coefficients for a binary outcome model on whether a child will exhibit symptoms of respiratory illness. The predictor of interest is whether or not the child’s mother smokes. Thus, we have repeated observations on children and their mothers. If we were to fit a subject-specific model, we would interpret the coefficient on smoking as the change in likelihood of respiratory illness as a result of the mother switching from not smoking to smoking. On the other hand, the interpretation of the coefficient in a population-averaged model is the likelihood of respiratory illness for the average child with a nonsmoking mother compared to the likelihood for the average child with a smoking mother. Both models offer equally valid interpretations. The interpretation of interest should drive model selection; some studies ultimately will lead to fitting both types of models. […] In addition to model-based variance estimators, fixed-effects models and GEEs [Generalized Estimating Equation models] also admit modified sandwich variance estimators. SAS calls this the empirical variance estimator. Stata refers to it as the Robust Cluster estimator. Whatever the name, the most desirable property of the variance estimator is that it yields inference for the regression coefficients that is robust to misspecification of the correlation structure. […] Specification of GEEs should include careful consideration of reasonable correlation structure so that the resulting estimator is as efficient as possible. To protect against misspecification of the correlation structure, one should base inference on the modified sandwich variance estimator. This is the default estimator in SAS, but the user must specify it in Stata.”

“There are three main approaches to [model] validation:

1. Independent verification (obtained by waiting until the future arrives or through the use of surrogate variables).
2. Splitting the sample (using one part for calibration, the other for verification).
3. Resampling (taking repeated samples from the original sample and refitting the model each time).
Goodness of fit is no guarantee of predictive success. […] Splitting the sample into two parts, one for estimating the model parameters, the other for verification, is particularly appropriate for validating time series models in which the emphasis is on prediction or reconstruction. If the observations form a time series, the more recent observations should be reserved for validation purposes. Otherwise, the data used for validation should be drawn at random from the entire sample. Unfortunately, when we split the sample and use only a portion of it, the resulting estimates will be less precise. […] The proportion to be set aside for validation purposes will depend upon the loss function. If both the goodness-of-fit error in the calibration sample and the prediction error in the validation sample are based on mean-squared error, Picard and Berk [1990] report that we can minimize their sum by using between a quarter and a third of the sample for validation purposes.”
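The splitting rule described above can be sketched in a few lines (function name and fractions are illustrative): for time-series data, hold out the most recent observations; otherwise, draw the validation set at random from the full sample.

```python
# Sketch of the sample-splitting rule: for time series, reserve the most
# recent ~quarter to third of observations for validation; otherwise draw
# the validation set at random. Names and defaults are illustrative.
import random

def split_sample(data, frac=0.3, time_series=False, seed=None):
    n_val = max(1, round(frac * len(data)))
    if time_series:
        # hold out the most recent observations
        return data[:-n_val], data[-n_val:]
    rng = random.Random(seed)
    val_idx = set(rng.sample(range(len(data)), n_val))
    calibration = [v for i, v in enumerate(data) if i not in val_idx]
    validation = [v for i, v in enumerate(data) if i in val_idx]
    return calibration, validation

series = list(range(100))  # pretend observations in time order
calib, valid = split_sample(series, frac=0.3, time_series=True)
print(len(calib), len(valid), valid[0])  # 70 30 70
```

The 30% default reflects the Picard and Berk quarter-to-third suggestion quoted above; the right fraction in practice depends on the loss function.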

## Organic Chemistry (II)

I have included some observations from the second half of the book below, as well as some links to topics covered.

“[E]nzymes are used routinely to catalyse reactions in the research laboratory, and for a variety of industrial processes involving pharmaceuticals, agrochemicals, and biofuels. In the past, enzymes had to be extracted from natural sources — a process that was both expensive and slow. But nowadays, genetic engineering can incorporate the gene for a key enzyme into the DNA of fast growing microbial cells, allowing the enzyme to be obtained more quickly and in far greater yield. Genetic engineering has also made it possible to modify the amino acids making up an enzyme. Such modified enzymes can prove more effective as catalysts, accept a wider range of substrates, and survive harsher reaction conditions. […] New enzymes are constantly being discovered in the natural world as well as in the laboratory. Fungi and bacteria are particularly rich in enzymes that allow them to degrade organic compounds. It is estimated that a typical bacterial cell contains about 3,000 enzymes, whereas a fungal cell contains 6,000. Considering the variety of bacterial and fungal species in existence, this represents a huge reservoir of new enzymes, and it is estimated that only 3 per cent of them have been investigated so far.”

“One of the most important applications of organic chemistry involves the design and synthesis of pharmaceutical agents — a topic that is defined as medicinal chemistry. […] In the 19th century, chemists isolated chemical components from known herbs and extracts. Their aim was to identify a single chemical that was responsible for the extract’s pharmacological effects — the active principle. […] It was not long before chemists synthesized analogues of active principles. Analogues are structures which have been modified slightly from the original active principle. Such modifications can often improve activity or reduce side effects. This led to the concept of the lead compound — a compound with a useful pharmacological activity that could act as the starting point for further research. […] The first half of the 20th century culminated in the discovery of effective antimicrobial agents. […] The 1960s can be viewed as the birth of rational drug design. During that period there were important advances in the design of effective anti-ulcer agents, anti-asthmatics, and beta-blockers for the treatment of high blood pressure. Much of this was based on trying to understand how drugs work at the molecular level and proposing theories about why some compounds were active and some were not.”

“[R]ational drug design was boosted enormously towards the end of the century by advances in both biology and chemistry. The sequencing of the human genome led to the identification of previously unknown proteins that could serve as potential drug targets. […] Advances in automated, small-scale testing procedures (high-throughput screening) also allowed the rapid testing of potential drugs. In chemistry, advances were made in X-ray crystallography and NMR spectroscopy, allowing scientists to study the structure of drugs and their mechanisms of action. Powerful molecular modelling software packages were developed that allowed researchers to study how a drug binds to a protein binding site. […] the development of automated synthetic methods has vastly increased the number of compounds that can be synthesized in a given time period. Companies can now produce thousands of compounds that can be stored and tested for pharmacological activity. Such stores have been called chemical libraries and are routinely tested to identify compounds capable of binding with a specific protein target. These advances have boosted medicinal chemistry research over the last twenty years in virtually every area of medicine.”

“Drugs interact with molecular targets in the body such as proteins and nucleic acids. However, the vast majority of clinically useful drugs interact with proteins, especially receptors, enzymes, and transport proteins […] Enzymes are […] important drug targets. Drugs that bind to the active site and prevent the enzyme acting as a catalyst are known as enzyme inhibitors. […] Enzymes are located inside cells, and so enzyme inhibitors have to cross cell membranes in order to reach them—an important consideration in drug design. […] Transport proteins are targets for a number of therapeutically important drugs. For example, a group of antidepressants known as selective serotonin reuptake inhibitors prevent serotonin being transported into neurons by transport proteins.”

“The main pharmacokinetic factors are absorption, distribution, metabolism, and excretion. Absorption relates to how much of an orally administered drug survives the digestive enzymes and crosses the gut wall to reach the bloodstream. Once there, the drug is carried to the liver where a certain percentage of it is metabolized by metabolic enzymes. This is known as the first-pass effect. The ‘survivors’ are then distributed round the body by the blood supply, but this is an uneven process. The tissues and organs with the richest supply of blood vessels receive the greatest proportion of the drug. Some drugs may get ‘trapped’ or sidetracked. For example, fatty drugs tend to get absorbed in fat tissue and fail to reach their target. The kidneys are chiefly responsible for the excretion of drugs and their metabolites.”

“Having identified a lead compound, it is important to establish which features of the compound are important for activity. This, in turn, can give a better understanding of how the compound binds to its molecular target. Most drugs are significantly smaller than molecular targets such as proteins. This means that the drug binds to quite a small region of the protein — a region known as the binding site […]. Within this binding site, there are binding regions that can form different types of intermolecular interactions such as van der Waals interactions, hydrogen bonds, and ionic interactions. If a drug has functional groups and substituents capable of interacting with those binding regions, then binding can take place. A lead compound may have several groups that are capable of forming intermolecular interactions, but not all of them are necessarily needed. One way of identifying the important binding groups is to crystallize the target protein with the drug bound to the binding site. X-ray crystallography then produces a picture of the complex which allows identification of binding interactions. However, it is not always possible to crystallize target proteins and so a different approach is needed. This involves synthesizing analogues of the lead compound where groups are modified or removed. Comparing the activity of each analogue with the lead compound can then determine whether a particular group is important or not. This is known as an SAR study, where SAR stands for structure–activity relationships. Once the important binding groups have been identified, the pharmacophore for the lead compound can be defined. This specifies the important binding groups and their relative position in the molecule.”

“One way of identifying the active conformation of a flexible lead compound is to synthesize rigid analogues where the binding groups are locked into defined positions. This is known as rigidification or conformational restriction. The pharmacophore will then be represented by the most active analogue. […] A large number of rotatable bonds is likely to have an adverse effect on drug activity. This is because a flexible molecule can adopt a large number of conformations, and only one of these shapes corresponds to the active conformation. […] In contrast, a totally rigid molecule containing the required pharmacophore will bind the first time it enters the binding site, resulting in greater activity. […] It is also important to optimize a drug’s pharmacokinetic properties such that it can reach its target in the body. Strategies include altering the drug’s hydrophilic/hydrophobic properties to improve absorption, and the addition of substituents that block metabolism at specific parts of the molecule. […] The drug candidate must [in general] have useful activity and selectivity, with minimal side effects. It must have good pharmacokinetic properties, lack toxicity, and preferably have no interactions with other drugs that might be taken by a patient. Finally, it is important that it can be synthesized as cheaply as possible”.

“Most drugs that have reached clinical trials for the treatment of Alzheimer’s disease have failed. Between 2002 and 2012, 244 novel compounds were tested in 414 clinical trials, but only one drug gained approval. This represents a failure rate of 99.6 per cent as against a failure rate of 81 per cent for anti-cancer drugs.”

“It takes about ten years and £160 million to develop a new pesticide […] The volume of global sales increased 47 per cent in the ten-year period between 2002 and 2012, while, in 2012, total sales amounted to £31 billion. […] In many respects, agrochemical research is similar to pharmaceutical research. The aim is to find pesticides that are toxic to ‘pests’, but relatively harmless to humans and beneficial life forms. The strategies used to achieve this goal are also similar. Selectivity can be achieved by designing agents that interact with molecular targets that are present in pests, but not other species. Another approach is to take advantage of any metabolic reactions that are unique to pests. An inactive prodrug could then be designed that is metabolized to a toxic compound in the pest, but remains harmless in other species. Finally, it might be possible to take advantage of pharmacokinetic differences between pests and other species, such that a pesticide reaches its target more easily in the pest. […] Insecticides are being developed that act on a range of different targets as a means of tackling resistance. If resistance should arise to an insecticide acting on one particular target, then one can switch to using an insecticide that acts on a different target. […] Several insecticides act as insect growth regulators (IGRs) and target the moulting process rather than the nervous system. In general, IGRs take longer to kill insects but are thought to cause less detrimental effects to beneficial insects. […] Herbicides control weeds that would otherwise compete with crops for water and soil nutrients. More is spent on herbicides than any other class of pesticide […] The synthetic agent 2,4-D […] was synthesized by ICI in 1940 as part of research carried out on biological weapons […] It was first used commercially in 1946 and proved highly successful in eradicating weeds in cereal grass crops such as wheat, maize, and rice. […] The compound […] is still the most widely used herbicide in the world.”

“The type of conjugated system present in a molecule determines the specific wavelength of light absorbed. In general, the more extended the conjugation, the higher the wavelength absorbed. For example, β-carotene […] is the molecule responsible for the orange colour of carrots. It has a conjugated system involving eleven double bonds, and absorbs light in the blue region of the spectrum. It appears red because the reflected light lacks the blue component. Zeaxanthin is very similar in structure to β-carotene, and is responsible for the yellow colour of corn. […] Lycopene absorbs blue-green light and is responsible for the red colour of tomatoes, rose hips, and berries. Chlorophyll absorbs red light and is coloured green. […] Scented molecules interact with olfactory receptors in the nose. […] there are around 400 different olfactory protein receptors in humans […] The natural aroma of a rose is due mainly to 2-phenylethanol, geraniol, and citronellol.”

“Over the last fifty years, synthetic materials have largely replaced natural materials such as wood, leather, wool, and cotton. Plastics and polymers are perhaps the most visible sign of how organic chemistry has changed society. […] It is estimated that production of global plastics was 288 million tons in 2012 […] Polymerization involves linking molecular building blocks called monomers into long molecular strands called polymers […]. By varying the nature of the monomer, a huge range of different polymers can be synthesized with widely differing properties. The idea of linking small molecular building blocks into polymers is not a new one. Nature has been at it for millions of years using amino acid building blocks to make proteins, and nucleotide building blocks to make nucleic acids […] The raw materials for plastics come mainly from oil, which is a finite resource. Therefore, it makes sense to recycle or depolymerize plastics to recover that resource. Virtually all plastics can be recycled, but it is not necessarily economically feasible to do so. Traditional recycling of polyesters, polycarbonates, and polystyrene tends to produce inferior plastics that are suitable only for low-quality goods.”

November 11, 2017

## Quotes

i. “Much of the skill in doing science resides in knowing where in the hierarchy you are looking – and, as a consequence, what is relevant and what is not.” (Philip Ball – Molecules: A Very Short Introduction)

ii. “…statistical software will no more make one a statistician than a scalpel will turn one into a neurosurgeon. Allowing these tools to do our thinking is a sure recipe for disaster.” (Philip Good & James Hardin, Common Errors in Statistics (and how to avoid them))

iii. “Just as 95% of research efforts are devoted to data collection, 95% of the time remaining should be spent on ensuring that the data collected warrant analysis.” (-ll-)

iv. “One reason why many statistical models are incomplete is that they do not specify the sources of randomness generating variability among agents, i.e., they do not specify why otherwise observationally identical people make different choices and have different outcomes given the same choice.” (James J. Heckman, -ll-)

v. “If a thing is not worth doing, it is not worth doing well.” (J. W. Tukey, -ll-)

vi. “Hypocrisy is the lubricant of society.” (David Hull)

vii. “Every time I fire a linguist, the performance of our speech recognition system goes up.” (Fred Jelinek)

viii. “For most of my life, one of the persons most baffled by my own work was myself.” (Benoît Mandelbrot)

ix. “I’m afraid love is just a word.” (Harry Mulisch)

x. “The worst thing about death is that you once were, and now you are not.” (José Saramago)

xi. “Sometimes the most remarkable things seem commonplace. I mean, when you think about it, jet travel is pretty freaking remarkable. You get in a plane, it defies the gravity of an entire planet by exploiting a loophole with air pressure, and it flies across distances that would take months or years to cross by any means of travel that has been significant for more than a century or three. You hurtle above the earth at enough speed to kill you instantly should you bump into something, and you can only breathe because someone built you a really good tin can that has seams tight enough to hold in a decent amount of air. Hundreds of millions of man-hours of work and struggle and research, blood, sweat, tears, and lives have gone into the history of air travel, and it has totally revolutionized the face of our planet and societies.
But get on any flight in the country, and I absolutely promise you that you will find someone who, in the face of all that incredible achievement, will be willing to complain about the drinks. The drinks, people.” (Jim Butcher, Summer Knight)

xii. “The best way to keep yourself from doing something grossly self-destructive and stupid is to avoid the temptation to do it. For example, it is far easier to fend off inappropriate amorous desires if one runs screaming from the room every time a pretty girl comes in.” (Jim Butcher, Proven Guilty)

xiii. “One certain effect of war is to diminish freedom of expression. Patriotism becomes the order of the day, and those who question the war are seen as traitors, to be silenced and imprisoned.” (Howard Zinn)

xiv. “While inexact models may mislead, attempting to allow for every contingency a priori is impractical. Thus models must be built by an iterative feedback process in which an initial parsimonious model may be modified when diagnostic checks applied to residuals indicate the need.” (G. E. P. Box)

xv. “In our analysis of complex systems (like the brain and language) we must avoid the trap of trying to find master keys. Because of the mechanisms by which complex systems structure themselves, single principles provide inadequate descriptions. We should rather be sensitive to complex and self-organizing interactions and appreciate the play of patterns that perpetually transforms the system itself as well as the environment in which it operates.” (Paul Cilliers)

xvi. “The nature of the chemical bond is the problem at the heart of all chemistry.” (Bryce Crawford)

xvii. “When there’s a will to fail, obstacles can be found.” (John McCarthy)

xviii. “We understand human mental processes only slightly better than a fish understands swimming.” (-ll-)

xix. “He who refuses to do arithmetic is doomed to talk nonsense.” (-ll-)

xx. “The trouble with men is that they have limited minds. That’s the trouble with women, too.” (Joanna Russ)

## Organic Chemistry (I)

This book’s a bit longer than most ‘A very short introduction to…’ publications, and it’s quite dense at times and includes a lot of interesting stuff. It took me a while to finish it, as I put it away a while back when I hit some of the more demanding content, but I did pick it up later and I really enjoyed most of the coverage. In the end I decided that I wouldn’t be doing the book justice if I were to limit my coverage of it to just one post, so this will be only the first of two posts of coverage of this book, covering roughly the first half of it.

As usual I have included in my post both some observations from the book (…and added a few links to these quotes where I figured they might be helpful) as well as some wiki links to topics discussed in the book.

“Organic chemistry is a branch of chemistry that studies carbon-based compounds in terms of their structure, properties, and synthesis. In contrast, inorganic chemistry covers the chemistry of all the other elements in the periodic table […] carbon-based compounds are crucial to the chemistry of life. [However] organic chemistry has come to be defined as the chemistry of carbon-based compounds, whether they originate from a living system or not. […] To date, 16 million compounds have been synthesized in organic chemistry laboratories across the world, with novel compounds being synthesized every day. […] The list of commodities that rely on organic chemistry include plastics, synthetic fabrics, perfumes, colourings, sweeteners, synthetic rubbers, and many other items that we use every day.”

“For a neutral carbon atom, there are six electrons occupying the space around the nucleus […] The electrons in the outer shell are defined as the valence electrons and these determine the chemical properties of the atom. The valence electrons are easily ‘accessible’ compared to the two electrons in the first shell. […] There is great significance in carbon being in the middle of the periodic table. Elements which are close to the left-hand side of the periodic table can lose their valence electrons to form positive ions. […] Elements on the right-hand side of the table can gain electrons to form negatively charged ions. […] The impetus for elements to form ions is the stability that is gained by having a full outer shell of electrons. […] Ion formation is feasible for elements situated to the left or the right of the periodic table, but it is less feasible for elements in the middle of the table. For carbon to gain a full outer shell of electrons, it would have to lose or gain four valence electrons, but this would require far too much energy. Therefore, carbon achieves a stable, full outer shell of electrons by another method. It shares electrons with other elements to form bonds. Carbon excels in this and can be considered chemistry’s ultimate elemental socialite. […] Carbon’s ability to form covalent bonds with other carbon atoms is one of the principal reasons why so many organic molecules are possible. Carbon atoms can be linked together in an almost limitless way to form a mind-blowing variety of carbon skeletons. […] carbon can form a bond to hydrogen, but it can also form bonds to atoms such as nitrogen, phosphorus, oxygen, sulphur, fluorine, chlorine, bromine, and iodine. As a result, organic molecules can contain a variety of different elements. Further variety can arise because it is possible for carbon to form double bonds or triple bonds to a variety of other atoms. The most common double bonds are formed between carbon and oxygen, carbon and nitrogen, or between two carbon atoms. […] The most common triple bonds are found between carbon and nitrogen, or between two carbon atoms.”

“[C]hirality has huge importance. The two enantiomers of a chiral molecule behave differently when they interact with other chiral molecules, and this has important consequences in the chemistry of life. As an analogy, consider your left and right hands. These are asymmetric in shape and are non-superimposable mirror images. Similarly, a pair of gloves are non-superimposable mirror images. A left hand will fit snugly into a left-hand glove, but not into a right-hand glove. In the molecular world, a similar thing occurs. The proteins in our bodies are chiral molecules which can distinguish between the enantiomers of other molecules. For example, enzymes can distinguish between the two enantiomers of a chiral compound and catalyse a reaction with one of the enantiomers but not the other.”

“A key concept in organic chemistry is the functional group. A functional group is essentially a distinctive arrangement of atoms and bonds. […] Functional groups react in particular ways, and so it is possible to predict how a molecule might react based on the functional groups that are present. […] it is impossible to build a molecule atom by atom. Instead, target molecules are built by linking up smaller molecules. […] The organic chemist needs to have a good understanding of the reactions that are possible between different functional groups when choosing the molecular building blocks to be used for a synthesis. […] There are many […] reasons for carrying out FGTs [functional group transformations], especially when synthesizing complex molecules. For example, a starting material or a synthetic intermediate may lack a functional group at a key position of the molecular structure. Several reactions may then be required to introduce that functional group. On other occasions, a functional group may be added to a particular position then removed at a later stage. One reason for adding such a functional group would be to block an unwanted reaction at that position of the molecule. Another common situation is where a reactive functional group is converted to a less reactive functional group such that it does not interfere with a subsequent reaction. Later on, the original functional group is restored by another functional group transformation. This is known as a protection/deprotection strategy. The more complex the target molecule, the greater the synthetic challenge. Complexity is related to the number of rings, functional groups, substituents, and chiral centres that are present. […] The more reactions that are involved in a synthetic route, the lower the overall yield. […] retrosynthesis is a strategy by which organic chemists design a synthesis before carrying it out in practice. It is called retrosynthesis because the design process involves studying the target structure and working backwards to identify how that molecule could be synthesized from simpler starting materials. […] a key stage in retrosynthesis is identifying a bond that can be ‘disconnected’ to create those simpler molecules.”

“[V]ery few reactions produce the spectacular visual and audible effects observed in chemistry demonstrations. More typically, reactions involve mixing together two colourless solutions to produce another colourless solution. Temperature changes are a bit more informative. […] However, not all reactions generate heat, and monitoring the temperature is not a reliable way of telling whether the reaction has gone to completion or not. A better approach is to take small samples of the reaction solution at various times and to test these by chromatography or spectroscopy. […] If a reaction is taking place very slowly, different reaction conditions could be tried to speed it up. This could involve heating the reaction, carrying out the reaction under pressure, stirring the contents vigorously, ensuring that the reaction is carried out in a dry atmosphere, using a different solvent, using a catalyst, or using one of the reagents in excess. […] There are a large number of variables that can affect how efficiently reactions occur, and organic chemists in industry are often employed to develop the ideal conditions for a specific reaction. This is an area of organic chemistry known as chemical development. […] Once a reaction has been carried out, it is necessary to isolate and purify the reaction product. This often proves more time-consuming than carrying out the reaction itself. Ideally, one would remove the solvent used in the reaction and be left with the product. However, in most reactions this is not possible as other compounds are likely to be present in the reaction mixture. […] it is usually necessary to carry out procedures that will separate and isolate the desired product from these other compounds. This is known as ‘working up’ the reaction.”

“Proteins are large molecules (macromolecules) which serve a myriad of purposes, and are essentially polymers constructed from molecular building blocks called amino acids […]. In humans, there are twenty different amino acids having the same ‘head group’, consisting of a carboxylic acid and an amine attached to the same carbon atom […] The amino acids are linked up by the carboxylic acid of one amino acid reacting with the amine group of another to form an amide link. Since a protein is being produced, the amide bond is called a peptide bond, and the final protein consists of a polypeptide chain (or backbone) with different side chains ‘hanging off’ the chain […]. The sequence of amino acids present in the polypeptide sequence is known as the primary structure. Once formed, a protein folds into a specific 3D shape […] Nucleic acids […] are another form of biopolymer, and are formed from molecular building blocks called nucleotides. These link up to form a polymer chain where the backbone consists of alternating sugar and phosphate groups. There are two forms of nucleic acid — deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). In DNA, the sugar is deoxyribose, whereas the sugar in RNA is ribose. Each sugar ring has a nucleic acid base attached to it. For DNA, there are four different nucleic acid bases called adenine (A), thymine (T), cytosine (C), and guanine (G) […]. These bases play a crucial role in the overall structure and function of nucleic acids. […] DNA is actually made up of two DNA strands […] where the sugar-phosphate backbones are intertwined to form a double helix. The nucleic acid bases point into the centre of the helix, and each nucleic acid base ‘pairs up’ with a nucleic acid base on the opposite strand through hydrogen bonding. The base pairing is specifically between adenine and thymine, or between cytosine and guanine.
This means that one polymer strand is complementary to the other, a feature that is crucial to DNA’s function as the storage molecule for genetic information. […]  [E]ach strand […] act as the template for the creation of a new strand to produce two identical ‘daughter’ DNA double helices […] [A] genetic alphabet of four letters (A, T, G, C) […] code for twenty amino acids. […] [A]n amino acid is coded, not by one nucleotide, but by a set of three. The number of possible triplet combinations using four ‘letters’ is more than enough to encode all the amino acids.”
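
The triplet-code arithmetic in that last passage is easy to verify: with four bases and codons three letters long, there are 4³ = 64 possible codons, comfortably more than the twenty amino acids that need encoding. A minimal sketch (base symbols only, no biology library assumed):

```python
from itertools import product

# The four DNA bases; a codon is an ordered triplet of bases.
bases = "ATGC"
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]

print(len(codons))        # 4**3 = 64 possible codons
print(len(codons) >= 20)  # more than enough to encode 20 amino acids
```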

“Proteins have a variety of functions. Some proteins, such as collagen, keratin, and elastin, have a structural role. Others catalyse life’s chemical reactions and are called enzymes. They have a complex 3D shape, which includes a cavity called the active site […]. This is where the enzyme binds the molecules (substrates) that undergo the enzyme-catalysed reaction. […] A substrate has to have the correct shape to fit an enzyme’s active site, but it also needs binding groups to interact with that site […]. These interactions hold the substrate in the active site long enough for a reaction to occur, and typically involve hydrogen bonds, as well as van der Waals and ionic interactions. When a substrate binds, the enzyme normally undergoes an induced fit. In other words, the shape of the active site changes slightly to accommodate the substrate, and to hold it as tightly as possible. […] Once a substrate is bound to the active site, amino acids in the active site catalyse the subsequent reaction.”

“Proteins called receptors are involved in chemical communication between cells and respond to chemical messengers called neurotransmitters if they are released from nerves, or hormones if they are released by glands. Most receptors are embedded in the cell membrane, with part of their structure exposed on the outer surface of the cell membrane, and another part exposed on the inner surface. On the outer surface they contain a binding site that binds the molecular messenger. An induced fit then takes place that activates the receptor. This is very similar to what happens when a substrate binds to an enzyme […] The induced fit is crucial to the mechanism by which a receptor conveys a message into the cell — a process known as signal transduction. By changing shape, the protein initiates a series of molecular events that influences the internal chemistry within the cell. For example, some receptors are part of multiprotein complexes called ion channels. When the receptor changes shape, it causes the overall ion channel to change shape. This opens up a central pore allowing ions to flow across the cell membrane. The ion concentration within the cell is altered, and that affects chemical reactions within the cell, which ultimately lead to observable results such as muscle contraction. Not all receptors are membrane-bound. For example, steroid receptors are located within the cell. This means that steroid hormones need to cross the cell membrane in order to reach their target receptors. Transport proteins are also embedded in cell membranes and are responsible for transporting polar molecules such as amino acids into the cell. They are also important in controlling nerve action since they allow nerves to capture released neurotransmitters, such that they have a limited period of action.”

“RNA […] is crucial to protein synthesis (translation). There are three forms of RNA — messenger RNA (mRNA), transfer RNA (tRNA), and ribosomal RNA (rRNA). mRNA carries the genetic code for a particular protein from DNA to the site of protein production. Essentially, mRNA is a single-strand copy of a specific section of DNA. The process of copying that information is known as transcription. tRNA decodes the triplet code on mRNA by acting as a molecular adaptor. At one end of tRNA, there is a set of three bases (the anticodon) that can base pair to a set of three bases on mRNA (the codon). An amino acid is linked to the other end of the tRNA and the type of amino acid present is related to the anticodon that is present. When tRNA with the correct anticodon base pairs to the codon on mRNA, it brings the amino acid encoded by that codon. rRNA is a major constituent of a structure called a ribosome, which acts as the factory for protein production. The ribosome binds mRNA then coordinates and catalyses the translation process.”

## A few diabetes papers of interest

“The study examined long-term IHD-specific mortality in a Finnish population-based cohort of patients with early-onset (0–14 years) and late-onset (15–29 years) T1D (n = 17,306). […] Follow-up started from the time of diagnosis of T1D and ended either at the time of death or at the end of 2011. […] ICD codes used to define patients as having T1D were 2500B–2508B, E10.0–E10.9, or O24.0. […] The median duration of diabetes was 24.4 (interquartile range 17.6–32.2) years. Over a 41-year study period totaling 433,782 person-years of follow-up, IHD accounted for 27.6% of the total 1,729 deaths. Specifically, IHD was identified as the cause of death in 478 patients, in whom IHD was the primary cause of death in 303 and a contributory cause in 175. […] Within the early-onset cohort, the average crude mortality rate in women was 33.3% lower than in men (86.3 [95% CI 65.2–112.1] vs. 128.2 [104.2–156.1] per 100,000 person-years, respectively, P = 0.02). When adjusted for duration of diabetes and the year of diabetes diagnosis, the mortality RR between women and men of 0.64 was only of borderline significance (P = 0.05) […]. In the late-onset cohort, crude mortality in women was, on average, only one-half that of men (117.2 [92.0–147.1] vs. 239.7 [210.9–271.4] per 100,000 person-years, respectively, P < 0.0001) […]. An RR of 0.43 remained highly significant after adjustment for duration of diabetes and year of diabetes diagnosis. Every year of duration of diabetes increased the risk 10–13%”
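
The crude rates quoted above are just deaths divided by person-years, scaled per 100,000; the "one-half that of men" claim can be checked directly from the quoted late-onset point estimates (a sketch; the rate values are taken from the quote, the helper function is mine):

```python
# Crude mortality rate = deaths / person-years, scaled per 100,000 person-years.
def crude_rate_per_100k(deaths, person_years):
    return deaths / person_years * 100_000

# Late-onset cohort point estimates from the quote:
# women 117.2 vs men 239.7 per 100,000 person-years.
rate_women = 117.2
rate_men = 239.7
rate_ratio = rate_women / rate_men
print(round(rate_ratio, 2))  # ~0.49, i.e. roughly one-half that of men
```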

“The number of deaths from IHD in the patients with T1D were compared with the number of deaths from IHD in the background population, and the SMRs were calculated. For the total cohort (early and late onset pooled), the SMR was 7.2 (95% CI 6.4–8.0) […]. In contrast to the crude mortality rates, the SMRs were higher in women (21.6 [17.2–27.0]) than in men (5.8 [5.1–6.6]). When stratified by the age at onset of diabetes, the SMR was considerably higher in patients with early onset (16.9 [13.5–20.9]) than in those with late onset (5.9 [5.2–6.8]). In both the late- and the early-onset cohorts, there was a striking difference in the SMRs between women and men, and this was especially evident in the early-onset cohort where the SMR for women was 52.8 (36.3–74.5) compared with 12.1 (9.2–15.8) for men. This higher risk of death from IHD compared with the background population was evident in all women, regardless of age. However, the most pronounced effect was seen in women in the early-onset cohort <40 years of age, who were 83 times more likely to die of IHD than the age-matched women in the background population. This compares with a 37 times higher risk of death from IHD in women aged >40 years. The corresponding SMRs for men aged <40 and ≥40 years were 19.4 and 8.5, respectively.”
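
For readers unfamiliar with SMRs: the standardized mortality ratio is observed deaths in the cohort divided by the deaths expected if background-population mortality rates applied to the cohort's person-years. A minimal sketch of the calculation (the numbers below are illustrative only, not taken from the study):

```python
# Expected deaths: apply background-population mortality rates to the
# cohort's person-years within each age stratum, then sum.
def expected_deaths(strata):
    # strata: iterable of (person_years, background_rate_per_person_year)
    return sum(py * rate for py, rate in strata)

# SMR = observed deaths / expected deaths.
def smr(observed, strata):
    return observed / expected_deaths(strata)

# Illustrative numbers only (not from the paper): two age bands.
strata = [(200_000, 0.00005), (100_000, 0.00020)]
print(round(smr(observed=60, strata=strata), 1))  # 60 observed / 30 expected = 2.0
```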

“Overall, the 40-year cumulative mortality for IHD was 8.8% (95% CI 7.9–9.7%) in all patients […] The 40-year cumulative IHD mortality in the early-onset cohort was 6.3% (4.8–7.8%) for men and 4.5% (3.1–5.9%) for women (P = 0.009 by log-rank test) […]. In the late-onset cohort, the corresponding cumulative mortality rates were 16.6% (14.3–18.7%) in men and 8.5% (6.5–10.4%) in women (P < 0.0001 by log-rank test)”

“The major findings of the current study are that women with early-onset T1D are exceptionally vulnerable to dying from IHD, which is especially evident in those receiving a T1D diagnosis during the prepubertal and pubertal years. Crude mortality rates were similar for women compared with men, highlighting the loss of cardioprotection in women. […] Although men of all ages have greater crude mortality rates than women regardless of the age at onset of T1D, the current study shows that mortality from IHD attributable to diabetes is much more pronounced in women than in men. […] it is conceivable that one of the underlying reasons for the loss of female sex as a protective factor against the development of CVD in the setting of diabetes may be the loss of ovarian hormones. Indeed, women with T1D have been shown to have reduced levels of plasma estradiol compared with age-matched nondiabetic women (23) possibly because of idiopathic ovarian failure or dysregulation of the hypothalamic-pituitary-ovarian axis.”

“One of the novelties of the present study is that the risk of death from IHD highly depends on the age at onset of T1D. The data show that the SMR was considerably higher in early-onset (0–14 years) than in late-onset (15–29 years) T1D in both sexes. […] the risk of dying from IHD is high in both women and men receiving a diagnosis of T1D at a young age.”

“The term “microalbuminuria” (MA) originated in 1964 when Professor Harry Keen first used it to signify a small amount of albumin in the urine of patients with type 1 diabetes (1). […] Whereas early research focused on the relevance of MA as a risk factor for diabetic kidney disease, research over the past 2 decades has shifted to examine whether MA is a true risk factor. To appreciate fully the contribution of MA to overall cardiorenal risk, it is important to distinguish between a risk factor and risk marker. A risk marker is a variable that identifies a pathophysiological state, such as inflammation or infection, and is not necessarily involved, directly or causally, in the genesis of a specified outcome (e.g., association of a cardiovascular [CV] event with fever, high-sensitivity C-reactive protein [hs-CRP], or MA). Conversely, a risk factor is involved clearly and consistently with the cause of a specified event (e.g., a CV event associated with persistently elevated blood pressure or elevated levels of LDL). Both a risk marker and a risk factor can predict an adverse outcome, but only one lies within the causal pathway of a disease. Moreover, a reduction (or alteration in a beneficial direction) of a risk factor (i.e., achievement of blood pressure goal) generally translates into a reduction of adverse outcomes, such as CV events; this is not necessarily true for a risk marker.”

“The data sources included in this article were all PubMed-referenced articles in English-language peer-reviewed journals since 1964. Studies selected had to have a minimum follow-up of 1 year; include at least 100 participants; be either a randomized trial, a systematic review, a meta-analysis, or a large observational cohort study in patients with any type of diabetes; or be trials of high CV risk that included at least 50% of patients with diabetes. All studies had to assess changes in MA tied to CV or CKD outcomes and not purely reflect changes in MA related to blood pressure, unless they were mechanistic studies. On the basis of these inclusion criteria, 31 studies qualified and provide the data used for this review.”

“Early studies in patients with diabetes supported the concept that as MA increases to higher levels, the risk of CKD progression and CV risk also increases […]. Moreover, evidence from epidemiological studies in patients with diabetes suggested that the magnitude of urine albumin excretion should be viewed as a continuum of CV risk, with the lower the albumin excretion, the lower the CV risk (15,16). However, MA values can vary daily up to 100% (11). These large biological variations are a result of a variety of conditions, with a central core tied to inflammation associated with factors ranging from increased blood pressure variability, high blood glucose levels, high LDL cholesterol, and high uric acid levels to high sodium ingestion, smoking, and exercise (17) […]. Additionally, any febrile illness, regardless of etiology, will increase urine albumin excretion (18). Taken together, these data support the concept that MA is highly variable and that values over a short time period (i.e., 3–6 months) are meaningless in predicting any CV or kidney disease outcome.”

“Initial studies to understand the mechanisms of MA examined changes in glomerular membrane permeability as a key determinant in patients with diabetes […]. Many factors affect the genesis and level of MA, most of which are linked to inflammatory conditions […]. A good evidence base, however, supports the concept that MA directly reflects the amount of inflammation and vascular “leakiness” present in patients with diabetes (16,18,19).

More recent studies have found a number of other factors that affect glomerular permeability by modifying cytokines that affect permeability. Increased amounts of glycated albumin reduce glomerular nephrin and increase vascular endothelial growth factor (20). Additionally, increases in sodium intake (21) as well as intraglomerular pressure secondary to high protein intake or poorly controlled blood pressure (22,23) increase glomerular permeability in diabetes and, hence, MA levels.

In individuals with diabetes, albumin is glycated and associated with the generation of reactive oxygen species. In addition, many other factors such as advanced glycation end products, reactive oxygen species, and other cellular toxins contribute to vascular injury. Once such injury occurs, the effect of pressor hormones, such as angiotensin II, is magnified, resulting in a faster progression of vascular injury. The end result is direct injury to the vascular smooth muscle cells, endothelial cells, and visceral epithelial cells (podocytes) of the glomerular capillary wall membrane as well as to the proximal tubular cells and podocyte basement membrane of the nephron (20,24,25). All these contribute to the development of MA. […] better glycemic control is associated with far lower levels of inflammatory markers (31).”

“MA is accepted as a CV risk marker for myocardial infarction and stroke, regardless of diabetes status. […] there is good evidence in those with type 2 diabetes that the presence of MA >100 mg/day is associated with higher CV events and greater likelihood of kidney disease development (6). Evidence for this association comes from many studies and meta-analyses […] a meta-analysis by Perkovic et al. (37) demonstrated a dose-response relationship between the level of albuminuria and CV risk. In this meta-analysis, individuals with MA were at 50% greater risk of coronary heart disease (risk ratio 1.47 [95% CI 1.30–1.66]) than those without. Those with macroalbuminuria (i.e., >300 mg/day) had more than a twofold risk for coronary heart disease (risk ratio 2.17 [95% CI 1.87–2.52]) (37). Despite these data indicating a higher CV risk in patients with MA regardless of diabetes status and other CV risk factors, there is no consensus that the addition of MA to conventional CV risk stratification for the general population (e.g., Framingham or Reynolds scoring systems) is of any clinical value, and that includes patients with diabetes (38).”

“Given that MA was evaluated in a post hoc manner in almost all interventional studies, it is likely that the reduction in MA simply reflects the effects of either renin-angiotensin system (RAS) blockade on endothelial function or significant blood pressure reduction rather than the MA itself being implicated as a CV disease risk factor (18). […] associations of lowering MA with angiotensin-converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs) do not prove a direct benefit on CV event lowering associated with MA reduction in diabetes. […] Four long-term, appropriately powered trials demonstrated an inverse relationship between reductions in MA and primary event rates for CV events […]. Taken together, these studies support the concept that MA is a risk marker in diabetes and is consistent with data of other inflammatory markers, such as hs-CRP [here’s a relevant link – US], such that the higher the level, the higher the risk (15,39,42). The importance of MA as a CV risk marker is exemplified further by another meta-analysis that showed that MA has a similar magnitude of CV risk as hs-CRP and is a better predictor of CV events (43). Thus, the data supporting MA as a risk marker for CV events are relatively consistent, clearly indicate that an association exists, and help to identify the presence of underlying inflammatory states, regardless of etiology.”

“In people with early stage nephropathy (i.e., stage 2 or 3a [GFR 45–89 mL/min/1.73 m2]) and MA, there is no clear benefit on slowing GFR decline by reducing MA with drugs that block the RAS independent of lowering blood pressure (16). This is exemplified by many trials […]. Thus, blood pressure lowering is the key goal for all patients with early stage nephropathy associated with normoalbuminuria or MA. […] When albuminuria levels are in the very high or macroalbuminuria range (i.e., >300 mg/day), it is accepted that the patient has CKD and is likely to progress ultimately to ESRD, unless they die of a CV event (39,52). However, only one prospective randomized trial evaluated the role of early intervention to reduce blood pressure with an ACE inhibitor versus a calcium channel blocker in CKD progression by assessing change in MA and creatinine clearance in people with type 2 diabetes (Appropriate Blood Pressure Control in Diabetes [ABCD] trial) (23). After >7 years of follow-up, there was no relationship between changes in MA and CKD progression. Moreover, there was regression to the mean of MA.”

“Many observational studies used development of MA as indicating the presence of early stage CKD. Early studies by the individual groups of Mogensen and Parving demonstrated a relationship between increases in MA and progression to nephropathy in type 1 diabetes. These groups also showed that use of ACE inhibitors, blood pressure reduction, and glucose control reduced MA (9,58,59). However, more recent studies in both type 1 and type 2 diabetes demonstrated that only a subgroup of patients progress from MA to >300 mg/day albuminuria, and this subgroup accounts for those destined to progress to ESRD (29,32,60–63). Thus, the presence of MA alone is not predictive of CKD progression. […] some patients with type 2 diabetes progress to ESRD without ever having developed albuminuria levels of ≥300 mg/day (67). […] Taken together, data from outcome trials, meta-analyses, and observations demonstrate that MA [Micro-Albuminuria] alone is not synonymous with the presence of clearly defined CKD [Chronic Kidney Disease] in diabetes, although it is used as part of the criteria for the diagnosis of CKD in the most recent CKD classification and staging (71). Note that only a subgroup of ∼25–30% of people with diabetes who also have MA will likely progress to more advanced stages of CKD. Predictors of progression to ESRD, apart from family history and many years of poor glycemic and blood pressure control, are still not well defined. Although there are some genetic markers, such as CUBN and APOL1, their use in practice is not well established.”

“In the context of the data presented in this article, MA should be viewed as a risk marker associated with an increase in CV risk and for kidney disease, but its presence alone does not indicate established kidney disease, especially if the eGFR is well above 60 mL/min/1.73 m2. Increases in MA, with blood pressure and other CV risk factors controlled, are likely but not proven to portend a poor prognosis for CKD progression over time. Achieving target blood pressure (<140/80 mmHg) and target HbA1c (<7%) should be priorities in treating patients with MA. Recent guidelines from both the American Diabetes Association and the National Kidney Foundation provide a strong recommendation for using agents that block the RAS, such as ACE inhibitors and ARBs, as part of the regimen for those with albuminuria levels >300 mg/day but not MA (73). […] maximal antialbuminuric effects will [however] not be achieved with these agents unless a low-sodium diet is strictly followed.”

“The SEARCH for Diabetes in Youth (SEARCH) study was initiated in 2000, with funding from the Centers for Disease Control and Prevention and support from the National Institute of Diabetes and Digestive and Kidney Diseases, to address major knowledge gaps in the understanding of childhood diabetes. SEARCH is being conducted at five sites across the U.S. and represents the largest, most diverse study of diabetes among U.S. youth. An active registry of youth diagnosed with diabetes at age <20 years allows the assessment of prevalence (in 2001 and 2009), annual incidence (since 2002), and trends by age, race/ethnicity, sex, and diabetes type. Prevalence increased significantly from 2001 to 2009 for both type 1 and type 2 diabetes in most age, sex, and race/ethnic groups. SEARCH has also established a longitudinal cohort to assess the natural history and risk factors for acute and chronic diabetes-related complications as well as the quality of care and quality of life of persons with diabetes from diagnosis into young adulthood. […] This review summarizes the study methods, describes key registry and cohort findings and their clinical and public health implications, and discusses future directions.”

“SEARCH includes a registry and a cohort study […]. The registry study identifies incident cases each year since 2002 through the present with ∼5.5 million children <20 years of age (∼6% of the U.S. population <20 years) under surveillance annually. Approximately 3.5 million children <20 years of age were under surveillance in 2001 at the six SEARCH recruitment centers, with approximately the same number at the five centers under surveillance in 2009.”

“The prevalence of all types of diabetes was 1.8/1,000 youth in 2001 and was 2.2/1,000 youth in 2009, which translated to at least 154,000 children/youth in the U.S. with diabetes in 2001 (5) and at least 192,000 in 2009 (6). Overall, between 2001 and 2009, prevalence of type 1 diabetes in youth increased by 21.1% (95% CI 15.6–27.0), with similar increases for boys and girls and in most racial/ethnic and age groups (2) […]. The prevalence of type 2 diabetes also increased significantly over the same time period by 30.5% (95% CI 17.3–45.1), with increases observed in both sexes, 10–14- and 15–19-year-olds, and among Hispanic and non-Hispanic white and African American youth (2). These data on changes in type 2 are consistent with smaller U.S. studies (7–11).”
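
As a quick check on the overall figures quoted: going from 1.8 to 2.2 per 1,000 youth is roughly a 22% relative increase overall (the paper's 21.1% and 30.5% figures are type-specific, so they differ from this pooled number). A sketch of the arithmetic:

```python
# Relative change in prevalence between two time points.
def relative_increase(old, new):
    return (new - old) / old

# Overall prevalence per 1,000 youth, from the quote: 1.8 (2001) -> 2.2 (2009).
pct = relative_increase(1.8, 2.2) * 100
print(round(pct, 1))  # ~22.2% overall increase
```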

“The incidence of diabetes […] in 2002 to 2003 was 24.6/100,000/year (12), representing ∼15,000 new patients every year with type 1 diabetes and 3,700 with type 2 diabetes, increasing to 18,436 newly diagnosed type 1 and 5,089 with type 2 diabetes in 2008 to 2009 (13). Among non-Hispanic white youth, the incidence of type 1 diabetes increased by 2.7% (95% CI 1.2–4.3) annually between 2002 and 2009. Significant increases were observed among all age groups except the youngest age group (0–4 years) (14). […] The underlying factors responsible for this increase have not yet been identified.”

“Over 50% of youth are hospitalized at diabetes onset, and ∼30% of children newly diagnosed with diabetes present with diabetic ketoacidosis (DKA) (19). Prevalence of DKA at diagnosis was three times higher among youth with type 1 diabetes (29.4%) compared with youth with type 2 diabetes (9.7%) and was lowest in Asian/Pacific Islanders (16.2%) and highest among Hispanics (27.0%).”

“A significant proportion of youth with diabetes, particularly those with type 2 diabetes, have very poor glycemic control […]: 17% of youth with type 1 diabetes and 27% of youth with type 2 diabetes had A1C levels ≥9.5% (≥80 mmol/mol). Minority youth were significantly more likely to have higher A1C levels compared with non-Hispanic white youth, regardless of diabetes type. […] Optimal care is an important component of successful long-term management for youth with diabetes. While there are high levels of adherence for some diabetes care indicators such as blood pressure checks (95%), urinary protein tests (83%), and lipid assessments (88%), approximately one-third of youth had no documentation of eye or A1C values at appropriate intervals and therefore were not meeting the American Diabetes Association (ADA)-recommended screening for diabetic control and complications (40). Participants ≥18 years old, particularly those with type 2 diabetes, and minority youth with type 1 diabetes had fewer tests of all kinds performed. […] Despite current treatment options, the prevalence of poor glycemic control is high, particularly among minority youth. Our initial findings suggest that a substantial number of youth with diabetes will develop serious, debilitating complications early in life, which is likely to have significant implications for their quality of life, as well as economic and health care implications.”

“Because recognition of the broader spectrum of diabetes in children and adolescents is recent, there are no gold-standard definitions for differentiating the types of diabetes in this population, either for research or clinical purposes or for public health surveillance. The ADA classification of diabetes as type 1 and type 2 does not include operational definitions for the specific etiologic markers of diabetes type, such as types and numbers of diabetes autoantibodies or measures of insulin resistance, hallmarks of type 1 and 2 diabetes, respectively (43). Moreover, obese adolescents with a clinical phenotype suggestive of type 2 diabetes can present with ketoacidosis (44) or have evidence of autoimmunity (45).”

“Using the ADA framework (43), we operationalized definitions of two main etiologic markers, autoimmunity and insulin sensitivity, to identify four etiologic subgroups based on the presence or absence of markers. Autoimmunity was based on presence of one or more diabetes autoantibodies (GAD65 and IA2). Insulin sensitivity was estimated using clinical variables (A1C, triglyceride level, and waist circumference) from a formula that was highly associated with estimated insulin sensitivity measured using a euglycemic-hyperinsulinemic clamp among youth with type 1 and 2 and normal control subjects (46). Participants were categorized as insulin resistant […] and insulin sensitive (47). Using this approach, 54.5% of SEARCH cases were classified as typical type 1 (autoimmune, insulin-sensitive) diabetes, while 15.9% were classified as typical type 2 (nonautoimmune, insulin-resistant) diabetes. Cases that were classified as autoimmune and insulin-resistant likely represent individuals with type 1 autoimmune diabetes and concomitant obesity, a phenotype becoming more prevalent as a result of the recent increase in the frequency of obesity, but is unlikely to be a distinct etiologic entity.”
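
The two binary etiologic markers described above naturally define a 2×2 classification scheme. A minimal sketch of that logic (the subgroup labels follow the quote, but the function and its interpretation notes are mine; the actual SEARCH insulin-sensitivity formula is not reproduced here):

```python
# Classify a diabetes case by the two SEARCH etiologic markers:
# autoimmunity (>=1 diabetes autoantibody) and insulin resistance.
def etiologic_subgroup(autoimmune: bool, insulin_resistant: bool) -> str:
    if autoimmune and not insulin_resistant:
        return "typical type 1 (autoimmune, insulin-sensitive)"
    if not autoimmune and insulin_resistant:
        return "typical type 2 (nonautoimmune, insulin-resistant)"
    if autoimmune and insulin_resistant:
        return "autoimmune and insulin-resistant (e.g. type 1 with obesity)"
    return "neither marker (needs further testing, e.g. for monogenic diabetes)"

print(etiologic_subgroup(True, False))
```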

“Ten percent of SEARCH participants had no evidence of either autoimmunity or insulin resistance and thus require additional testing, including additional measurements of diabetes-related autoantibodies (only two antibodies were measured in SEARCH) as well as testing for monogenic forms of diabetes to clarify etiology. Among antibody-negative youth, 8% of those tested had a mutation in one or more of the hepatocyte nuclear factor-1α (HNF-1α), glucokinase, and HNF-4α genes, an estimated monogenic diabetes population prevalence of at least 1.2% (48).”

The short answer is ‘yes, it does’. Some observations from the paper:

“Diabetic sensorimotor polyneuropathy (DSP) is a common complication of diabetes, affecting 28–55% of patients (1). A prospective Finnish study found evidence of probable or definite neuropathy in 8.3% of diabetic patients at the time of diagnosis, 16.7% after 5 years, and 41.9% after 10 years (2). Diabetes-related peripheral neuropathy results in serious morbidity, including chronic neuropathic pain, leg weakness and falls, sensory loss and foot ulceration, and amputation (3). Health care costs associated with diabetic neuropathy were estimated at $10.9 billion in the U.S. in 2003 (4). However, despite the high prevalence of diabetes and DSP, and the important public health implications, there is a lack of serum- or tissue-based biomarkers to diagnose and follow patients with DSP longitudinally. Moreover, numerous attempts at treatment have yielded negative results.”

“DSP is known to cause injury to both large-diameter, myelinated (Aα and Aβ) fibers and small-diameter, unmyelinated nerve (Aδ and C) fibers; however, the sequence of nerve fiber damage remains uncertain. While earlier reports seemed to indicate simultaneous loss of small- and large-diameter nerve fibers, with preserved small/large ratios (5), more recent studies have suggested the presence of early involvement of small-diameter Aδ and C fibers (6–11). Some suggest a temporal relationship of small-fiber impairment preceding that of large fibers. For example, impairment in the density of the small intraepidermal nerve fibers in symptomatic patients with impaired glucose tolerance (prediabetes) has been observed in the face of normal large-fiber function, as assessed by nerve conduction studies (NCSs) (9,10). In addition, surveys of patients with DSP have demonstrated an overwhelming predominance of sensory and autonomic symptoms, as compared with motor weakness. Again, this has been interpreted as indicative of preferential small-fiber dysfunction (12). Though longitudinal studies are limited, such studies have led to the current prevailing hypothesis for the natural history of DSP that measures of small-fiber morphology and function decline prior to those of large fibers. One implication of this hypothesis is that small-fiber testing could serve as an earlier, subclinical primary end point in clinical trials investigating interventions for DSP (13).

The hypothesis described above has been investigated exclusively in type 2 diabetic or prediabetic patients. Through the study of a cohort of healthy volunteers and type 1 diabetic subjects […], we had the opportunity to evaluate in cross-sectional analysis the relationship between measures of large-fiber function and small-fiber structure and function. Under the hypothesis that small-fiber abnormalities precede large-fiber dysfunction in the natural history of DSP, we sought to determine if: 1) the majority of subjects who meet criteria for large-fiber dysfunction have concurrent evidence of small-fiber dysfunction and 2) the subset of patients without DSP includes a spectrum with normal small-fiber tests (indicating lack of initiation of nerve injury) as well as abnormal small-fiber tests (indicating incipient DSP).”

“Overall, 57 of 131 (43.5%) type 1 diabetic patients met DSP criteria, and 74 of 131 (56.5%) did not meet DSP criteria. Abnormality of CCM [link] was present in 30 of 57 (52.6%) DSP patients and 6 of 74 (8.1%) type 1 diabetic patients without DSP. Abnormality of CDT [Cooling Detection Thresholds, relevant link] was present in 47 of 56 (83.9%) DSP patients and 17 of 73 (23.3%) without DSP. Abnormality of LDIflare [laser Doppler imaging of heat-evoked flare] was present in 30 of 57 (52.6%) DSP patients and 20 of 72 (27.8%) without DSP. Abnormality of HRV [Heart Rate Variability] was present in 18 of 45 (40.0%) DSP patients and 6 of 70 (8.6%) without DSP. […] sensitivity analysis […] revealed that abnormality of any one of the four small-fiber measures was present in 55 of 57 (96.5%) DSP patients […] and 39 of 74 (52.7%) type 1 diabetic patients without DSP. Similarly, abnormality of any two of the four small-fiber measures was present in 43 of 57 (75.4%) DSP patients […] and 9 of 74 (12.2%) without DSP. Finally, abnormality of either CDT or CCM (with these two tests selected based on their high reliability) was noted in 53 of 57 (93.0%) DSP patients and 21 of 74 (28.4%) patients without DSP […] When DSP was defined based on symptoms and signs plus abnormal sural SNAP [sensory nerve action potential] amplitude or conduction velocity, there were 68 of 131 patients who met DSP criteria and 63 of 131 who did not. Abnormality of any one of the four small-fiber measures was present in 63 of 68 (92.6%) DSP patients and 31 of 63 (49.2%) type 1 diabetic patients without DSP. […] Finally, if DSP was defined based on clinical symptoms and signs alone, with TCNS ≥5, there were 68 of 131 patients who met DSP criteria and 63 of 131 who did not. Abnormality of any one of the four small-fiber measures was present in 62 of 68 (91.2%) DSP patients and 32 of 63 (50.8%) type 1 diabetic patients without DSP.”
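The counts above translate directly into test characteristics. As a quick sanity check of the quoted percentages (a sketch, using only the numbers reported in the excerpt):

```python
# Recomputing sensitivity/specificity from the counts quoted above:
# e.g. "any one of the four small-fiber measures" was abnormal in
# 55/57 DSP patients and 39/74 patients without DSP.

def sens_spec(abnormal_with_dsp, n_dsp, abnormal_without_dsp, n_no_dsp):
    sensitivity = abnormal_with_dsp / n_dsp
    specificity = (n_no_dsp - abnormal_without_dsp) / n_no_dsp
    return sensitivity, specificity

any_one = sens_spec(55, 57, 39, 74)   # sensitivity ≈ 0.965, specificity ≈ 0.473
any_two = sens_spec(43, 57, 9, 74)    # sensitivity ≈ 0.754, specificity ≈ 0.878
```

Requiring two abnormal measures instead of one trades sensitivity for specificity, which is the usual pattern when tightening a composite criterion.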

“Qualitative analysis of contingency tables shows that the majority of patients with DSP have concurrent evidence of small-fiber dysfunction, and patients without DSP include a spectrum with normal small-fiber tests (indicating lack of initiation of nerve injury) as well as abnormal small-fiber tests. Evidence of isolated large-fiber injury was much less frequent […]. These findings suggest that small-fiber damage may herald the onset of DSP in type 1 diabetes. In addition, the above findings remained true when alternative definitions of DSP were explored in a sensitivity analysis. […] The second important finding was the linear relationships noted between small-fiber structure and function tests (CDT, CNFL, LDIflare, and HRV) […] and the number of NCS abnormalities (a marker of large-fiber function). This might indicate that once the process of large-fiber nerve injury in DSP has begun, damage to large and small nerve fibers occurs simultaneously.”

“Records from the Royal Prince Alfred Hospital Diabetes Clinical Database, established in 1986, were matched with the Australian National Death Index to establish mortality outcomes for all subjects until June 2011. Clinical and mortality outcomes in 354 patients with T2DM, age of onset between 15 and 30 years (T2DM15–30), were compared with T1DM in several ways but primarily with 470 patients with T1DM with a similar age of onset (T1DM15–30) to minimize the confounding effect of age on outcome.

RESULTS For a median observation period of 21.4 (interquartile range 14–30.7) and 23.4 (15.7–32.4) years for the T2DM and T1DM cohorts, respectively, 71 of 824 patients (8.6%) died. A significant mortality excess was noted in T2DM15–30 (11 vs. 6.8%, P = 0.03), with an increased hazard for death (hazard ratio 2.0 [95% CI 1.2–3.2], P = 0.003). Death for T2DM15–30 occurred after a significantly shorter disease duration (26.9 [18.1–36.0] vs. 36.5 [24.4–45.4] years, P = 0.01) and at a relatively young age. There were more cardiovascular deaths in T2DM15–30 (50 vs. 30%, P < 0.05). Despite equivalent glycemic control and shorter disease duration, the prevalence of albuminuria and less favorable cardiovascular risk factors were greater in the T2DM15–30 cohort, even soon after diabetes onset. Neuropathy scores and macrovascular complications were also increased in T2DM15–30 (P < 0.0001).

CONCLUSIONS Young-onset T2DM is the more lethal phenotype of diabetes and is associated with a greater mortality, more diabetes complications, and unfavorable cardiovascular disease risk factors when compared with T1DM.”

“Only a few previous studies have looked at comparative mortality in T1DM and T2DM onset in patients <30 years of age. In a Swedish study of patients with diabetes aged 15–34 years compared with a general population, the standardized mortality ratio was higher for the T2DM than for the T1DM cohort (2.9 vs. 1.8) (17). […] Recently, Dart et al. (19) examined survival in youth aged 1–18 years with T2DM versus T1DM. Kaplan-Meier analysis revealed a statistically significant lower survival probability for the youth with T2DM, although the number at risk was low after 10 years’ duration. Taken together, these findings are in keeping with the present observations and are supportive evidence for a higher mortality in young-onset T2DM than in T1DM. The majority of deaths appear to be from cardiovascular causes and significantly more so for young T2DM.”

“Although the age of onset of T1DM is usually in little doubt because of a more abrupt presentation, it is possible that the age of onset of T2DM was in fact earlier than recognized. With a previously published method for estimating time delay until diagnosis of T2DM (26) by plotting the prevalence of retinopathy against duration and extrapolating to a point of zero retinopathy, we found that there is no difference in the slope and intercept of this relationship between the T2DM and the T1DM cohorts […] delay in diagnosis is unlikely to be an explanation for the differences in observed outcome.”
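The extrapolation method described in that passage is simple enough to sketch: regress retinopathy prevalence on known diabetes duration and extrapolate the fitted line back to zero prevalence; the magnitude of the (negative) duration at that point estimates the undiagnosed interval. The data points below are invented for illustration only:

```python
# Sketch of the retinopathy-extrapolation method for estimating delay in
# T2DM diagnosis. Data are invented; the method, not the numbers, is the point.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = intercept + slope * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

durations  = [2, 5, 10, 15, 20]               # years since diagnosis (invented)
prevalence = [0.08, 0.14, 0.24, 0.34, 0.44]   # retinopathy prevalence (invented)

slope, intercept = fit_line(durations, prevalence)
# Prevalence hits zero at duration = -intercept/slope, a negative number;
# its magnitude estimates years of undiagnosed diabetes before diagnosis.
estimated_delay = intercept / slope           # 2.0 years with these numbers
```

The paper's argument is then that the T1DM and T2DM cohorts give indistinguishable slopes and intercepts, so differential diagnostic delay cannot explain the outcome gap.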

“Increased arterial stiffness independently predicts all-cause and CVD mortality (3), and higher pulse pressure predicts CVD mortality, incidence, and end-stage renal disease development among adults with type 1 diabetes (1,4,5). Several reports have shown that youth and adults with type 1 diabetes have elevated arterial stiffness, though the mechanisms are largely unknown (6). The etiology of advanced atherosclerosis in type 1 diabetes is likely multifactorial, involving metabolic, behavioral, and diabetes-specific cardiovascular (CV) risk factors. Aging, high blood pressure (BP), obesity, the metabolic syndrome (MetS), and type 2 diabetes are the main contributors of sustained increased arterial stiffness in adults (7,8). However, the natural history, the age-related progression, and the possible determinants of increased arterial stiffness in youth with type 1 diabetes have not been studied systematically. […] There are currently no data examining the impact of CV risk factors and their clustering in youth with type 1 diabetes on subsequent CVD morbidity and mortality […]. Thus, the aims of this report were: 1) to describe the progression of arterial stiffness, as measured by pulse wave velocity (PWV), over time, among youth with type 1 diabetes, and 2) to explore the association of CV risk factors and their clustering as MetS with PWV in this cohort.”

“Youth were age 14.5 years (SD 2.8) and had an average disease duration of 4.8 (3.8) years at baseline, 46.3% were female, and 87.6% were of NHW race/ethnicity. At baseline, 10.0% had high BP, 10.9% had a large waist circumference, 11.6% had HDL-c ≤40 mg/dL, 10.9% had a TG level ≥110 mg/dL, and 7.0% had at least two of the above CV risk factors (MetS). In addition, 10.3% had LDL-c ≥130 mg/dL, 72.0% had an HbA1c ≥7.5% (58 mmol/mol), and 9.2% had ACR ≥30 μg/mL. Follow-up measures were obtained on average at age 19.2 years, when the average duration of diabetes was 10.1 (3.9) years.”

“Over an average follow-up period of ∼5 years, there was a statistically significant increase of 0.7 m/s in PWV (from 5.2 to 5.9 m/s), representing an annual increase of 2.8% or 0.145 m/s. […] Based on our data, if this rate of change is stable over time, the estimated average PWV by the time these youth enter their third decade of life will be 11.3 m/s, which was shown to be associated with a threefold increased hazard for major CV events (26). There are no similar studies in youth to compare these findings. In adults, the rate of change in PWV was 0.081 m/s/year in nondiabetic normotensive patients, although it was higher in hypertensive adults (0.147 m/s/year) (7). We also showed that the presence of central adiposity and elevated BP at baseline, as well as clustering of at least two CV risk factors, was associated with significantly worse PWV over time, although these baseline factors did not significantly influence the rate of change in PWV over this period of time. Changes in CV risk factors, specifically increases in central adiposity, LDL-c levels, and worsening glucose control, were independently associated with worse PWV over time. […] Our inability to detect a difference in the rate of change in PWV in our youth with MetS (vs. those without MetS) may be due to several factors, including a combination of a relatively small sample size, short period of follow-up, and young age of the cohort (thus with lower baseline PWV levels).”
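The quoted rates reconcile with a little arithmetic (the ≈4.8-year implied spacing below is my own calculation from the reported figures, not a number stated in the paper):

```python
# Checking the annualized figures quoted above: PWV rose from 5.2 to
# 5.9 m/s, with a reported rate of 0.145 m/s/year (2.8% of baseline).
baseline_pwv = 5.2                        # m/s
followup_pwv = 5.9                        # m/s
annual_abs = 0.145                        # m/s per year, as reported
implied_years = (followup_pwv - baseline_pwv) / annual_abs  # ≈ 4.8 years
annual_pct = annual_abs / baseline_pwv    # ≈ 0.028, i.e. ~2.8% per year
```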

November 8, 2017

## Words

Most of the words below are words which I encountered while reading the Jim Butcher novels: Fool Moon, Grave Peril, Summer Knight, Death Masks, Blood Rites, Dead Beat, and Proven Guilty.

## Things I learn from my patients…

Here’s the link. Some sample quotes below:

“When attempting a self-circumcision do not use dry ice to numb the area… and when the dry ice sticks to the… a…. area, do not attempt to remove the ice with boiling water.”

“When your 97-year old mother trips and falls on the floor and doesn’t say anything or really seem to move at all, you should [definitely] wait 5-6 days before calling EMS. If she starts to feel cold (even though she hasn’t said that she’s cold), just cover her with blankets and surround her with space heaters. She’s probably just sleeping and will get up when she’s good and ready. Nevermind the smell and the roaches.”

“….the vagina is not the best place to store those pieces of broken glass you were collecting.”

“BASED ON A CASE TODAY…
don’t allow someone with a known poorly controlled seizure disorder to perform oral sex on you…”

“Only Santa will actually fit down a chimney. All the way down, anyway.
And Santa can get back out without the rescue squad.”

“Broken glass is not the ideal surgical tool for self-castration”

“If you come into the ER with a chief complaint of “Falling down because I’m drunk” don’t cause a scene 20 minutes later when we tell you you’re too drunk to sign yourself out AMA.”

“When transported by EMS for syncopal episode after drinking a case of beer with your buddies, don’t keep bothering the doctor about when you are going to be discharged. Especially when the reason you want to leave so bad is because you drive an 18-wheeler and you have to be in Texas by midnight. Your doc will begin inventing reasons to admit you.”

“If your family/doctor/government whatever has taken away your drivers license because you have frequent seizures and refuse to take your pheno, please use a riding lawn-mower as your primary means of transportation. Chances are, you won’t seize, hit a telephone pole, burn your leg and scalp on the mower as you fall off of it, and cause a power outage in your surrounding area.”

“If you happen to be driving drunk and feeling that you can’t stay awake anymore, you shouldn’t turn off your lights when you park in the middle of the interstate to take your nap. ”

“While yelling at the top of your lungs that you are having chest pain and you need a doctor to see you immediately, it is best to quit masterbating once said doctor enters the room to evaluate you. Your doctor really doesn’t want to see you do that.”

“Don’t stick things in your rectum. A good general rule. Should you break this rule be sure that you are not a 14 year old boy who has swiped your mom’s vibrator. Once the vibrator disappears and doesn’t come out for 3 days you will have to come to the ER and go for an EUA/removal. Trying to explain this string of events to your dad is significantly more akward than, say, explaining how you wrecked the car.”

“Sitting on the porch minding your own business is the #1 cause of knife wounds.”

“1) If you fall off a three-story high ladder, you should definitely drink a fifth of vodka in your buddy’s car on the way to the ED.
2) Walking in and announcing “I just fell three stories” will make the triage staff move almost as quickly as they move when someone says “I have free cookies.””

“If you’ve been stabbed in the head and blood is jetting out of your temporal artery taking a shower to wash off the blood before coming to the ER won’t help.”

“If you steal someone’s prescription pad, be aware that “Mofine” isn’t usually prescribed by the unit “pound” (as in “A pound of Mofine”)”

“If you have sex with a girl, and your frat brother tells you right after you come downstairs that she has herpes, pouring bleach all over your privates will not take care of ANY of your problems!”

“If you are 17 and very drunk and are brought to the ER with a face that looks like hamburger and an upper lip that needs to be put back together, please just say you got into a fight. We would prefer not to know that someone bet you \$20 that you couldn’t punch yourself unconscious (and you won).”

“Me: Do you have any health problems.
Pt: No.
Pt’s wife: He’s never been sick a day in his life.
Me: That looks like a heart bypass scar.
Pt: Yes, it is.
Pt’s wife: He got that after his diabetes gave him a heart attack.”

“When you cut off your penis to show your ex-girlfriend you won’t take her dumping you lying down, please tell us where we can locate said appendage BEFORE you try and puke up the answer.”

“No, young lady, being on top will not protect you from getting pregnant.”

“When a patient insists that her oral contraceptive is a CIA mind control drug and that she’s the reincarnation of Joan of Arc… don’t ring me. RING PSYCH!!”

“If you are planning to huff carburator cleaner try to get the stuff that’s just hydrocarbons. You’ll get a hydrocarbon pneumonitis but that’s about it. If you pick the stuff that has methanol in it you’ll get to experience the miracle of dialysis.”

“If you present with “M’ahmbroke” because you got drunk and fell, fracturing your distal humerus, beware the orthopedic consult. They’ll try to trick you into consenting to surgical repair. Luckily, you’re too clever to fall for that. While they’re busy reading the radiographs, remove your IV and head for the door. When they try to stop you, declare loudly and repeatedly “you ain’t cuttin m’ahmoff!” Negotiate for a cast instead.”

“If you’re coming in to the ED to see your “friend”, it would be wise if you knew what his last name was, and not to change your story about his injury from “gunshot wound” to “car accident” to “gash in leg” when you speak to myself and two other people.”

“If your mom really likes to party and never met a drug she didn’t like you probably shouldn’t leave her alone in your house. When you come back 4 hours later she will have gone through all your booze, pot, crack, meth, and I think PCP and then completely trashed your house while for some reason smearing feces all over your living room. Then she will be brought to my ER where she will fall only one drug short (opiates) of a perfect score on my tox screen.”

“A huge number of the patients I see don’t have a PMD because they “just” moved here (sometime within the last year). Here are some medical tips for your move [:]
-You can’t quit dialysis just because you moved. Your kidneys won’t work any better in Vegas than they did in LA.
-If you move here and run out of insulin (after 2 months) and call your doctor back in Jersey and tell him you feel bad and can’t quit puking (’cause you’re in DKA) he’ll tell you to go to the ER. I’ll tell you you’re an idiot.”

“self injecting boiling crisco into the urethra will not resolve an erectile dysfunction… no matter how many times your best friend assures you.”

“If you’re a 65+ male, if you start feeling chest pain, do not try to cure these pains over a period of one week by placing hot coins in various places on your chest roughly corresponding to the location of the pain (over sternum and left chest in the direction of the left shoulder), not even if you hate doctors and hospitals.”

“The statement “I’m not hearing voices. They’re just talking to themselves.” will still result in a psych hold.”

“If you have a 20 year history of uncontrolled diabetes and a severe case of peripheral neuropathy, it is probably not a good idea to sleep with your dog, even if it is only a chihuahua. It’s gotta be a bit of a shock to wake up and see that the dog has gnawed off the tip of your big toe while you were napping, and you couldn’t feel a thing.”

## Common Errors in Statistics… (II)

Some more observations from the book below:

“[A] multivariate test, can be more powerful than a test based on a single variable alone, providing the additional variables are relevant. Adding variables that are unlikely to have value in discriminating among the alternative hypotheses simply because they are included in the dataset can only result in a loss of power. Unfortunately, what works when making a comparison between two populations based on a single variable fails when we attempt a multivariate comparison. Unless the data are multivariate normal, Hotelling’s T², the multivariate analog of Student’s t, will not provide tests with the desired significance level. Only samples far larger than those we are likely to afford in practice are likely to yield multivariate results that are close to multivariate normal. […] [A]n exact significance level can [however] be obtained in the multivariate case regardless of the underlying distribution by making use of the permutation distribution of Hotelling’s T².”

“If you are testing against a one-sided alternative, for example, no difference versus improvement, then you require a one-tailed or one-sided test. If you are doing a head-to-head comparison — which alternative is best? — then a two-tailed test is required. […] A comparison of two experimental effects requires a statistical test on their difference […]. But in practice, this comparison is often based on an incorrect procedure involving two separate tests in which researchers conclude that effects differ when one effect is significant (p < 0.05) but the other is not (p > 0.05). Nieuwenhuis, Forstmann, and Wagenmakers [2011] reviewed 513 behavioral, systems, and cognitive neuroscience articles in five top-ranking journals and found that 78 used the correct procedure and 79 used the incorrect procedure. […] When the logic of a situation calls for demonstration of similarity rather than differences among responses to various treatments, then equivalence tests are often more relevant than tests with traditional no-effect null hypotheses […] Two distributions F and G, such that G[x] = F[x − δ], are said to be equivalent providing |δ| < Δ, where Δ is the smallest difference of clinical significance. To test for equivalence, we obtain a confidence interval for δ, rejecting equivalence only if this interval contains values in excess of |Δ|. The width of a confidence interval decreases as the sample size increases; thus, a very large sample may be required to demonstrate equivalence just as a very large sample may be required to demonstrate a clinically significant effect.”
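The equivalence rule described above reduces to a simple interval check once a confidence interval for δ is in hand. A minimal sketch (the interval itself is taken as given; computing it is a separate, standard step):

```python
# Equivalence testing as described above: with Δ the smallest difference
# of clinical significance, declare equivalence only if the whole
# confidence interval for δ lies strictly inside (-Δ, Δ).

def equivalent(ci_low: float, ci_high: float, delta_max: float) -> bool:
    """True if the confidence interval for δ lies entirely within ±Δ."""
    return -delta_max < ci_low and ci_high < delta_max
```

For example, with Δ = 0.5, an interval (−0.3, 0.4) supports equivalence, while (−0.3, 0.7) does not; and since interval width shrinks with sample size, demonstrating equivalence can require a very large sample, exactly as the quote says.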

“The most common test for comparing the means of two populations is based upon Student’s t. For Student’s t-test to provide significance levels that are exact rather than approximate, all the observations must be independent and, under the null hypothesis, all the observations must come from identical normal distributions. Even if the distribution is not normal, the significance level of the t-test is almost exact for sample sizes greater than 12; for most of the distributions one encounters in practice, the significance level of the t-test is usually within a percent or so of the correct value for sample sizes between 6 and 12. For testing against nonnormal alternatives, more powerful tests than the t-test exist. For example, a permutation test replacing the original observations with their normal scores is more powerful than the t-test […]. Permutation tests are derived by looking at the distribution of values the test statistic would take for each of the possible assignments of treatments to subjects. For example, if in an experiment two treatments were assigned at random to six subjects so that three subjects got one treatment and three the other, there would have been a total of 20 possible assignments of treatments to subjects. To determine a p-value, we compute for the data in hand each of the 20 possible values the test statistic might have taken. We then compare the actual value of the test statistic with these 20 values. If our test statistic corresponds to the most extreme value, we say that p = 1/20 = 0.05 (or 1/10 = 0.10 if this is a two-tailed permutation test). Against specific normal alternatives, this two-sample permutation test provides a most powerful unbiased test of the distribution-free hypothesis that the centers of the two distributions are the same […].
Violation of assumptions can affect not only the significance level of a test but the power of the test […] For example, although the significance level of the t-test is robust to departures from normality, the power of the t-test is not.”
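The six-subject example above can be made concrete in a few lines. The data below are invented; the enumeration of all C(6,3) = 20 assignments and the resulting p = 1/20 are exactly the calculation the quote describes:

```python
# Two-sample permutation test for the 6-subject example quoted above:
# 3 subjects per treatment gives C(6,3) = 20 possible label assignments.
from itertools import combinations

observations = [1.2, 1.5, 1.9, 3.1, 3.4, 3.8]  # hypothetical responses
treated_idx = (3, 4, 5)                         # the actual assignment

def stat(idx):
    """Test statistic: sum of the treated group's responses."""
    return sum(observations[i] for i in idx)

observed = stat(treated_idx)
all_assignments = list(combinations(range(6), 3))   # 20 assignments
# One-tailed p-value: fraction of assignments at least as extreme.
p_value = sum(stat(a) >= observed for a in all_assignments) / len(all_assignments)
# Here the observed assignment is the most extreme, so p = 1/20 = 0.05.
```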

“Group randomized trials (GRTs) in public health research typically use a small number of randomized groups with a relatively large number of participants per group. Typically, some naturally occurring groups are targeted: work sites, schools, clinics, neighborhoods, even entire towns or states. A group can be assigned to either the intervention or control arm but not both; thus, the group is nested within the treatment. This contrasts with the approach used in multicenter clinical trials, in which individuals within groups (treatment centers) may be assigned to any treatment. GRTs are characterized by a positive correlation of outcomes within a group and by the small number of groups. Feng et al. [2001] report a positive intraclass correlation (ICC) between the individuals’ target-behavior outcomes within the same group. […] The variance inflation factor (VIF) as a result of such commonalities is 1 + (n − 1)σ. […] Although σ in GRTs is usually quite small, the VIFs could still be quite large because VIF is a function of the product of the correlation and group size n. […] To be appropriate, an analysis method of GRTs needs to acknowledge both the ICC and the relatively small number of groups.”
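The variance-inflation formula in the quote (where σ denotes the ICC) shows why even a tiny within-group correlation matters when groups are large:

```python
# Variance inflation factor for a group-randomized trial, as quoted above:
# VIF = 1 + (n - 1) * ICC, with n the group size.

def vif(group_size: int, icc: float) -> float:
    return 1 + (group_size - 1) * icc

# An ICC of just 0.01 with 500 participants per group inflates variance
# nearly sixfold:
inflation = vif(500, 0.01)   # 1 + 499 * 0.01 = 5.99
```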

“Recent simulations reveal that the classic test based on Pearson correlation is almost distribution free [Good, 2009]. Still, too often we treat a test of the correlation between two variables X and Y as if it were a test of their independence. X and Y can have a zero correlation coefficient, yet be totally dependent (for example, Y = X²). Even when the expected value of Y is independent of the expected value of X, the variance of Y might be directly proportional to the variance of X.”
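The Y = X² example is worth seeing numerically: with X symmetric about zero, cov(X, X²) = E[X³] = 0, so the Pearson correlation vanishes even though Y is completely determined by X:

```python
# Zero Pearson correlation with total dependence: the Y = X² example above.

xs = [-2, -1, 0, 1, 2]          # symmetric about zero
ys = [x * x for x in xs]        # Y is a deterministic function of X

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

r = pearson(xs, ys)             # exactly 0 for this symmetric grid
```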

“[O]ne of the most common statistical errors is to assume that because an effect is not statistically significant it does not exist. One of the most common errors in using the analysis of variance is to assume that because a factor such as sex does not yield a significant p-value that we may eliminate it from the model. […] The process of eliminating nonsignificant factors one by one from an analysis of variance means that we are performing a series of tests rather than a single test; thus, the actual significance level is larger than the declared significance level.”

“The greatest error associated with the use of statistical procedures is to make the assumption that one single statistical methodology can suffice for all applications. From time to time, a new statistical procedure will be introduced or an old one revived along with the assertion that at last the definitive solution has been found. […] Every methodology [however] has a proper domain of application and another set of applications for which it fails. Every methodology has its drawbacks and its advantages, its assumptions and its sources of error.”

“[T]o use the bootstrap or any other statistical methodology effectively, one has to be aware of its limitations. The bootstrap is of value in any situation in which the sample can serve as a surrogate for the population. If the sample is not representative of the population because the sample is small or biased, not selected at random, or its constituents are not independent of one another, then the bootstrap will fail. […] When using Bayesian methods[:] Do not use an arbitrary prior. Never report a p-value. Incorporate potential losses in the decision. Report the Bayes’ factor. […] In performing a meta-analysis, we need to distinguish between observational studies and randomized trials. Confounding and selection bias can easily distort the findings from observational studies. […] Publication and selection bias also plague the meta-analysis of completely randomized trials. […] One can not incorporate in a meta-analysis what one is not aware of. […] Similarly, the decision as to which studies to incorporate can dramatically affect the results. Meta-analyses of the same issue may reach opposite conclusions […] Where there are substantial differences between the different studies incorporated in a meta-analysis (their subjects or their environments), or substantial quantitative differences in the results from the different trials, a single overall summary estimate of treatment benefit has little practical applicability […]. Any analysis that ignores this heterogeneity is clinically misleading and scientifically naive […]. Heterogeneity should be scrutinized, with an attempt to explain it […] Bayesian methods can be effective in meta-analyses […]. In such situations, the parameters of various trials are considered to be random samples from a distribution of trial parameters. The parameters of this higher-level distribution are called hyperparameters, and they also have distributions. The model is called hierarchical. 
The extent to which the various trials reinforce each other is determined by the data. If the trials are very similar, the variation of the hyperparameters will be small, and the analysis will be very close to a classical meta-analysis. If the trials do not reinforce each other, the conclusions of the hierarchical Bayesian analysis will show a very high variance in the results. A hierarchical Bayesian analysis avoids the necessity of a prior decision as to whether the trials can be combined; the extent of the combination is determined purely by the data. This does not come for free; in contrast to the meta-analyses discussed above, all the original data (or at least the sufficient statistics) must be available for inclusion in the hierarchical model. The Bayesian method is also vulnerable to […] selection bias”.

“For small samples of three to five observations, summary statistics are virtually meaningless. Reproduce the actual observations; this is easier to do and more informative. Though the arithmetic mean or average is in common use for summarizing measurements, it can be very misleading. […] When the arithmetic mean is meaningful, it is usually equal to or close to the median. Consider reporting the median in the first place. The geometric mean is more appropriate than the arithmetic in three sets of circumstances: 1. When losses or gains can best be expressed as a percentage rather than a fixed value. 2. When rapid growth is involved, as is the case with bacterial and viral populations. 3. When the data span several orders of magnitude, as with the concentration of pollutants. […] Most populations are actually mixtures of populations. If multiple modes are observed in samples greater than 25 in size, the number of modes should be reported. […] The terms dispersion, precision, and accuracy are often confused. Dispersion refers to the variation within a sample or a population. Standard measures of dispersion include the variance, the mean absolute deviation, the interquartile range, and the range. Precision refers to how close several estimates based upon successive samples will come to one another, whereas accuracy refers to how close an estimate based on a sample will come to the population parameter it is estimating.”
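Circumstance 3 above (data spanning several orders of magnitude) is where the two means diverge most dramatically. A minimal sketch with invented concentration data:

```python
# Arithmetic vs. geometric mean for data spanning orders of magnitude,
# e.g. pollutant concentrations. Values are invented for illustration.
import math

concentrations = [0.1, 1.0, 10.0, 100.0]   # spans 10^-1 to 10^2

arithmetic = sum(concentrations) / len(concentrations)   # 27.775,
# dominated entirely by the single largest value.
geometric = math.exp(sum(math.log(c) for c in concentrations)
                     / len(concentrations))              # 10^0.5 ≈ 3.162,
# the "typical" order of magnitude of the data.
```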

“One of the most egregious errors in statistics, one encouraged, if not insisted upon by the editors of journals in the biological and social sciences, is the use of the notation “Mean ± Standard Error” to report the results of a set of observations. The standard error is a useful measure of population dispersion if the observations are continuous measurements that come from a normal or Gaussian distribution. […] But if the observations come from a nonsymmetric distribution such as an exponential or a Poisson, or a truncated distribution such as the uniform, or a mixture of populations, we cannot draw any such inference. Recall that the standard error equals the standard deviation divided by the square root of the sample size […] As the standard error depends on the squares of individual observations, it is particularly sensitive to outliers. A few extreme or outlying observations will have a dramatic impact on its value. If you can not be sure your observations come from a normal distribution, then consider reporting your results either in the form of a histogram […] or a Box and Whiskers plot […] If the underlying distribution is not symmetric, the use of the ± SE notation can be deceptive as it suggests a nonexistent symmetry. […] When the estimator is other than the mean, we cannot count on the Central Limit Theorem to ensure a symmetric sampling distribution. We recommend that you use the bootstrap whenever you report an estimate of a ratio or dispersion. […] If you possess some prior knowledge of the shape of the population distribution, you should take advantage of that knowledge by using a parametric bootstrap […]. The parametric bootstrap is particularly recommended for use in determining the precision of percentiles in the tails (P20, P10, P90, and so forth).”
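For an estimator other than the mean, the bootstrap recommendation above is straightforward to implement. The sketch below is a plain percentile bootstrap for a ratio of paired means; the data and replication count are invented, and a parametric bootstrap would instead resample from a distribution fitted to the data:

```python
# Minimal nonparametric (percentile) bootstrap for a ratio of paired means,
# the kind of estimator for which ±SE reporting is unreliable. Data invented.
import random

random.seed(0)
num = [12.0, 15.0, 11.0, 14.0, 13.0, 16.0]   # paired numerator measurements
den = [3.0, 4.0, 3.5, 4.5, 4.0, 5.0]         # paired denominator measurements

def ratio(a, b):
    return (sum(a) / len(a)) / (sum(b) / len(b))

boots = []
for _ in range(2000):
    # resample pairs with replacement, keeping pairing intact
    idx = [random.randrange(len(num)) for _ in range(len(num))]
    boots.append(ratio([num[i] for i in idx], [den[i] for i in idx]))
boots.sort()
ci = (boots[int(0.025 * len(boots))], boots[int(0.975 * len(boots))])
```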

“A common error is to misinterpret the confidence interval as a statement about the unknown parameter. It is not true that the probability that a parameter is included in a 95% confidence interval is 95%. What is true is that if we derive a large number of 95% confidence intervals, we can expect the true value of the parameter to be included in the computed intervals 95% of the time. (That is, the true values will be included if the assumptions on which the tests and confidence intervals are based are satisfied 100% of the time.) Like the p-value, the upper and lower confidence limits of a particular confidence interval are random variables, for they depend upon the sample that is drawn. […] In interpreting a confidence interval based on a test of significance, it is essential to realize that the center of the interval is no more likely than any other value, and the confidence to be placed in the interval is no greater than the confidence we have in the experimental design and statistical test it is based upon.”
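
The coverage interpretation is easy to check by simulation (made-up parameters, normal-approximation intervals): the parameter stays fixed while the interval endpoints vary from sample to sample, and roughly 95% of the computed intervals cover it.

```python
import random
import statistics

random.seed(0)

TRUE_MEAN, SIGMA, N, TRIALS = 10.0, 2.0, 30, 2000
z = 1.96  # normal-approximation 95% interval

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    # It is the endpoints (m ± z*se) that are random, not TRUE_MEAN.
    if m - z * se <= TRUE_MEAN <= m + z * se:
        covered += 1

print(f"coverage: {covered / TRIALS:.3f}")  # close to 0.95
```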

“How accurate our estimates are and how consistent they will be from sample to sample will depend upon the nature of the error terms. If none of the many factors that contribute to the value of ε make more than a small contribution to the total, then ε will have a Gaussian distribution. If the {εi} are independent and normally distributed (Gaussian), then the ordinary least-squares estimates of the coefficients produced by most statistical software will be unbiased and have minimum variance. These desirable properties, indeed the ability to obtain coefficient values that are of use in practical applications, will not be present if the wrong model has been adopted. They will not be present if successive observations are dependent. The values of the coefficients produced by the software will not be of use if the associated losses depend on some function of the observations other than the sum of the squares of the differences between what is observed and what is predicted. In many practical problems, one is more concerned with minimizing the sum of the absolute values of the differences or with minimizing the maximum prediction error. Finally, if the error terms come from a distribution that is far from Gaussian, a distribution that is truncated, flattened or asymmetric, the p-values and precision estimates produced by the software may be far from correct.”

“I have attended far too many biology conferences at which speakers have used a significant linear regression of one variable on another as “proof” of a “linear” relationship or first-order behavior. […] The unfortunate fact, which should not be forgotten, is that if EY = a f[X], where f is a monotonically increasing function of X, then any attempt to fit the equation Y = bg[X], where g is also a monotonically increasing function of X, will result in a value of b that is significantly different from zero. The “trick,” […] is in selecting an appropriate (cause-and-effect-based) functional form g to begin with. Regression methods and expensive software will not find the correct form for you.”
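
A quick simulation illustrates the warning (all numbers invented): data are generated from EY = 3√X, then fitted with the wrong monotone form, a straight line Y = a + bX. The slope still comes out overwhelmingly 'significant', which is exactly why a significant fit proves nothing about the functional form.

```python
import math
import random

random.seed(0)

# True model: EY = 3*sqrt(X) — monotone increasing, but not linear.
xs = [random.uniform(1, 100) for _ in range(200)]
ys = [3 * math.sqrt(x) + random.gauss(0, 1) for x in xs]

# Fit the wrong monotone form, Y = a + b*X, by ordinary least squares,
# and compute the usual t statistic for the slope.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
a = my - b * mx
rss = sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))
t = b / math.sqrt(rss / (n - 2) / sxx)

print(f"slope b = {b:.3f}, t = {t:.1f}")  # t far beyond any significance cutoff
```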

## Acute Coronary Syndromes

A few quotes from the lecture, as well as some links to related stuff:

“You might say: Why doesn’t coronary stenting prevent heart attacks? You got an 80 % blockage causing some angina and you stent it, why doesn’t that prevent a heart attack? And the answer is very curious. The plaques that are most likely to rupture are mild. They’re typically less than 50 %. They have a thin fibrous cap, a lot of lipid, and they rupture during stress. This has been the real confusion for my specialty over the last 30 years, starting to realize that, you know, when you get angina we find the blockage and we fix it and your angina’s better, but the lesions that were gonna cause next week’s heart attack often are not the lesion we fixed, but there’s 25 other moderate plaques in the coronary tree and one of them is heating up and it’s vulnerable. […] ACS, the whole thing here is the idea of a vulnerable plaque rupture. And it’s often not a severe narrowing.” (3-5 minutes in)

[One of the plaque rupture triggers of relevance is inflammatory cytokines…] “What’s a good example of that? Influenza. Right, influenza releases things like, IL-6 and other cytokines. What do they do? Well, they make you shake and shiver and feel like your muscles are dying. They also dissolve plaques. […] If you take a town like Ann Arbor and vaccinate everybody for influenza, we reduce heart attacks by a lot … 20-30 % during flu season.” (~11-12 minutes in)

“What happens to your systolic function as you get older? Any ideas? I’m happy to tell you it stays strong. […] What happens to diastole? […] As your myocardial cells die, a few die every day, […] those cells get replaced by fibrous tissue. So an aging heart becomes gradually stiffer [this is apparently termed ‘presbycardia’]. It beats well because the cells that are alive can overcome the fibrosis and squeeze, but it doesn’t relax as well. So left ventricular end-diastolic pressure goes up. Older patients are much more likely to develop heart failure [in the ACS setting] because they already have impaired diastole from […] presbycardia.” (~1.14-1.15)


## A few diabetes papers of interest

“Fatigue is a classical symptom of hyperglycemia, but the relationship between chronic fatigue and diabetes has not been systematically studied. […] glucose control [in diabetics] is often suboptimal with persistent episodes of hyperglycemia that may result in sustained fatigue. Fatigue may also persist in diabetic patients because it is associated with the presence of a chronic disease, as has been demonstrated in patients with rheumatoid arthritis and various neuromuscular disorders (2,3).

It is important to distinguish between acute and chronic fatigue, because chronic fatigue, defined as severe fatigue that persists for at least 6 months, leads to substantial impairments in patients’ daily functioning (4,5). In contrast, acute fatigue can largely vary during the day and generally does not cause functional impairments.

Literature provides limited evidence for higher levels of fatigue in diabetic patients (6,7), but its chronicity, impact, and determinants are unknown. In various chronic diseases, it has been proven useful to distinguish between precipitating and perpetuating factors of chronic fatigue (3,8). Illness-related factors trigger acute fatigue, while other factors, often cognitions and behaviors, cause fatigue to persist. Sleep disturbances, low self-efficacy concerning fatigue, reduced physical activity, and a strong focus on fatigue are examples of these fatigue-perpetuating factors (8–10). An episode of hyperglycemia or hypoglycemia could trigger acute fatigue for diabetic patients (11,12). However, variations in blood glucose levels might also contribute to chronic fatigue, because these variations continuously occur.

The current study had two aims. First, we investigated the prevalence and impact of chronic fatigue in a large sample of type 1 diabetic (T1DM) patients and compared the results to a group of age- and sex-matched population-based controls. Secondly, we searched for potential determinants of chronic fatigue in T1DM.”

“A significantly higher percentage of T1DM patients were chronically fatigued (40%; 95% CI 34–47%) than matched controls (7%; 95% CI 3–10%). Mean fatigue severity was also significantly higher in T1DM patients (31 ± 14) compared with matched controls (17 ± 9; P < 0.001). T1DM patients with a comorbidity_mr [a comorbidity affecting patients’ daily functioning, based on medical records – US] or clinically relevant depressive symptoms [based on scores on the Beck Depression Inventory for Primary Care – US] were significantly more often chronically fatigued than patients without a comorbidity_mr (55 vs. 36%; P = 0.014) or without clinically relevant depressive symptoms (88 vs. 31%; P < 0.001). Patients who reported neuropathy, nephropathy, or cardiovascular disease as complications of diabetes were more often chronically fatigued […] Chronically fatigued T1DM patients were significantly more impaired compared with nonchronically fatigued T1DM patients on all aspects of daily functioning […]. Fatigue was the most troublesome symptom of the 34 assessed diabetes-related symptoms. The five most troublesome symptoms were overall sense of fatigue, lack of energy, increasing fatigue in the course of the day, fatigue in the morning when getting up, and sleepiness or drowsiness”.

“This study establishes that chronic fatigue is highly prevalent and clinically relevant in T1DM patients. While current blood glucose level was only weakly associated with chronic fatigue, cognitive behavioral factors were by far the strongest potential determinants.”

“Another study found that type 2 diabetic, but not T1DM, patients had higher levels of fatigue compared with healthy controls (7). This apparent discrepancy may be explained by the relatively small sample size of this latter study, potential selection bias (patients were not randomly selected), and the use of a different fatigue questionnaire.”

“Not only was chronic fatigue highly prevalent, fatigue also had a large impact on T1DM patients. Chronically fatigued T1DM patients had more functional impairments than nonchronically fatigued patients, and T1DM patients considered fatigue as the most burdensome diabetes-related symptom.

Contrary to what was expected, there was at best a weak relationship between blood glucose level and chronic fatigue. Chronically fatigued T1DM patients spent slightly less time in hypoglycemia, but average glucose levels, glucose variability, hyperglycemia, or HbA1c were not related to chronic fatigue. In type 2 diabetes mellitus also, no relationship was found between fatigue and HbA1c (7).”

“Regarding demographic characteristics, current health status, diabetes-related factors, and fatigue-related cognitions and behaviors as potential determinants of chronic fatigue, we found that sleeping problems, physical activity, self-efficacy concerning fatigue, age, depression, and pain were significantly associated with chronic fatigue in T1DM. Although depression was strongly related, it could not completely explain the presence of chronic fatigue (38), as 31% was chronically fatigued without having clinically relevant depressive symptoms.”

Estimates like these make it seem likely that a substantial proportion of type 1 diabetics over time develop other health problems which, if undiagnosed or unaddressed, might cause fatigue, and in my opinion this could be a much more important cause than direct metabolic effects such as hyperglycemia, or chronic inflammation. If that were the case, however, you’d expect to see a substantial sex difference, as autoimmune syndromes are in general much more likely to hit females than males. I’m not completely sure how to interpret a few of the reported results, but to me the sex differences in this study don’t look anywhere near ‘large enough’ to support such an explanatory model. Another big problem is that fatigue seems to be more common in young patients, which is odd: most long-term complications display significant (positive) duration dependence, and when diabetes is a component of an autoimmune syndrome the diabetes tends to develop first, with other diseases hitting later, usually in middle age. Duration and age are strongly correlated, and negative duration dependence in a diabetes-complication setting is a surprising and unusual finding that badly needs an explanation; in my opinion it may be the sign of a poor disease model. It would make more sense for disease-related fatigue to present late rather than early, and I don’t really know what to make of that negative age gradient. ‘More studies needed’ (preferably by people familiar with those autoimmune syndromes..), etc…

“It is well known that diabetic nephropathy is the leading cause of end-stage renal disease (ESRD) in many regions, including the U.S. (1). Type 1 diabetes accounts for >45,000 cases of ESRD per year (2), and the incidence may be higher than in people with type 2 diabetes (3). Despite this, there are few population-based data available regarding the prevalence and incidence of ESRD in people with type 1 diabetes in the U.S. (4). A declining incidence of ESRD has been suggested by findings of lower incidence with increasing calendar year of diagnosis and in comparison with older reports in some studies in Europe and the U.S. (5–8). This is consistent with better diabetes management tools becoming available and increased renoprotective efforts, including the greater use of ACE inhibitors and angiotensin type II receptor blockers, over the past two to three decades (9). Conversely, no reduction in the incidence of ESRD across enrollment cohorts was found in a recent clinic-based study (9). Further, an increase in ESRD has been suggested for older but not younger people (9). Recent improvements in diabetes care have been suggested to delay rather than prevent the development of renal disease in people with type 1 diabetes (4).

A decrease in the prevalence of proliferative retinopathy by increasing calendar year of type 1 diabetes diagnosis was previously reported in the Wisconsin Epidemiologic Study of Diabetic Retinopathy (WESDR) cohort (10); therefore, we sought to determine if a similar pattern of decline in ESRD would be evident over 25 years of follow-up. Further, we investigated factors that may mediate a possible decline in ESRD as well as other factors associated with incident ESRD over time.”

“At baseline, 99% of WESDR cohort members were white and 51% were male. Individuals were 3–79 years of age (mean 29) with diabetes duration of 0–59 years (mean 15), diagnosed between 1922 and 1980. Four percent of individuals used three or more daily insulin injections and none used an insulin pump. Mean HbA1c was 10.1% (87 mmol/mol). Only 16% were using an antihypertensive medication, none was using an ACE inhibitor, and 3% reported a history of renal transplant or dialysis (ESRD). At 25 years, 514 individuals participated (52% of original cohort at baseline, n = 996) and 367 were deceased (37% of baseline). Mean HbA1c was much lower than at baseline (7.5%, 58 mmol/mol), the decline likely due to the improvements in diabetes care, with 80% of participants using intensive insulin management (three or more daily insulin injections or insulin pump). The decline in HbA1c was steady, becoming slightly steeper following the results of the DCCT (25). Overall, at the 25-year follow-up, 47% had proliferative retinopathy, 53% used aspirin daily, and 54% reported taking antihypertensive medications, with the majority (87%) using an ACE inhibitor. Thirteen percent reported a history of ESRD.”

“Prevalence of ESRD was negligible until 15 years of diabetes duration and then steadily increased with 5, 8, 10, 13, and 14% reporting ESRD by 15–19, 20–24, 25–29, 30–34, and 35+ years of diabetes duration, respectively. […] After 15 years of diagnosis, prevalence of ESRD increased with duration in people diagnosed from 1960 to 1980, with the lowest increase in people with the most recent diagnosis. People diagnosed from 1922 to 1959 had consistent rather than increasing levels of ESRD with duration of 20+ years. If not for their greater mortality (at the 25-year follow-up, 48% of the deceased had been diagnosed prior to 1960), an increase with duration may have also been observed.

From baseline, the unadjusted cumulative 25-year incidence of ESRD was 17.9% (95% CI 14.3–21.5) in males, 10.3% (7.4–13.2) in females, and 14.2% (11.9–16.5) overall. For those diagnosed in 1970–1980, the cumulative incidence at 14, 20, and 25 years of follow-up (or ∼15–25, 20–30, and 25–35 years diabetes duration) was 5.2, 7.9, and 9.3%, respectively. At 14, 20, and 25 years of follow-up (or 35, 40, and 45 up to 65+ years diabetes duration), the cumulative incidence in those diagnosed during 1922–1969 was 13.6, 16.3, and 18.8%, respectively, consistent with the greater prevalence observed for these diagnosis periods at longer duration of diabetes.”

“The unadjusted hazard of ESRD was reduced by 70% among those diagnosed in 1970–1980 as compared with those in 1922–1969 (HR 0.29 [95% CI 0.19–0.44]). Duration (by 10%) and HbA1c (by an additional 10%) partially mediated this association […] Blood pressure and antihypertensive medication use each further attenuated the association. When fully adjusted for these and [other risk factors included in the model], period of diagnosis was no longer significant (HR 0.89 [0.55–1.45]). Sensitivity analyses for the hazard of incident ESRD or death due to renal disease showed similar findings […] The most parsimonious model included diabetes duration, HbA1c, age, sex, systolic and diastolic blood pressure, and history of antihypertensive medication […]. A 32% increased risk for incident ESRD was found per increasing year of diabetes duration at 0–15 years (HR 1.32 per year [95% CI 1.16–1.51]). The hazard plateaued (1.01 per year [0.98–1.05]) after 15 years of duration of diabetes. Hazard of ESRD increased with increasing HbA1c (1.28 per 1% or 10.9 mmol/mol increase [1.14–1.45]) and blood pressure (1.51 per 10 mmHg increase in systolic pressure [1.35–1.68]; 1.12 per 5 mmHg increase in diastolic pressure [1.01–1.23]). Use of antihypertensive medications increased the hazard of incident ESRD nearly fivefold [this finding is almost certainly due to confounding by indication, as also noted by the authors later on in the paper – US], and males had approximately two times the risk as compared with females. […] Having proliferative retinopathy was strongly associated with increased risk (HR 5.91 [3.00–11.6]) and attenuated the association between sex and ESRD.”

“The current investigation […] sought to provide much-needed information on the prevalence and incidence of ESRD and associated risk specific to people with type 1 diabetes. Consistent with a few previous studies (5,7,8), we observed decreased prevalence and incidence of ESRD among individuals with type 1 diabetes diagnosed in the 1970s compared with prior to 1970. The Epidemiology of Diabetes Complications (EDC) Study, another large cohort of people with type 1 diabetes followed over a long period of time, reported cumulative incidence rates of 2–6% for those diagnosed after 1970 and with similar duration (7), comparable to our findings. Slightly higher cumulative incidence (7–13%) reported from older studies at slightly lower duration also supports a decrease in incidence of ESRD (28–30). Cumulative incidences through 30 years in European cohorts were even lower (3.3% in Sweden [6] and 7.8% in Finland [5]), compared with the 9.3% noted for those diagnosed during 1970–1980 in the WESDR cohort. The lower incidence could be associated with nationally organized care, especially in Sweden where a nationwide intensive diabetes management treatment program was implemented at least a decade earlier than recommendations for intensive care followed from the results of the DCCT in the U.S.”

“We noted an increased risk of incident ESRD in the first 15 years of diabetes not evident at longer durations. This pattern, also demonstrated by others, could be due to a greater earlier risk among people most genetically susceptible, as only a subset of individuals with type 1 diabetes will develop renal disease (27,28). The risk plateau associated with greater durations of diabetes and lower risk associated with increasing age may also reflect more death at longer durations and older ages. […] Because age and duration are highly correlated, we observed a positive association between age and ESRD only in univariate analyses, without adjustment for duration. The lack of adjustment for diabetes duration may have, in part, explained the increasing incidence of ESRD shown with age for some people in a recent investigation (9). Adjustment for both age and duration was found appropriate after testing for collinearity in the current analysis.”

“In conclusion, this U.S. population-based report showed a lower prevalence and incidence of ESRD among those more recently diagnosed, explained by improvements in glycemic and blood pressure control over the last several decades. Even lower rates may be expected for those diagnosed during the current era of diabetes care. Intensive diabetes management, especially for glycemic control, remains important even in long-standing diabetes as potentially delaying the development of ESRD.”

“The prevalence of type 2 diabetes in youth is increasing worldwide, coinciding with the rising obesity epidemic (1,2). […] Diabetes is associated with both microvascular and macrovascular complications. The evolution of these complications has been well described in type 1 diabetes (6) and in adult type 2 diabetes (7), wherein significant complications typically manifest 15–20 years after diagnosis (8). Because type 2 diabetes is a relatively new disease in children (first described in the 1980s), long-term outcome data on complications are scant, and risk factors for the development of complications are incompletely understood. The available literature suggests that development of complications in youth with type 2 diabetes may be more rapid than in adults, thus afflicting individuals at the height of their individual and social productivity (9). […] A small but notable proportion of type 2 diabetes is associated with a polymorphism of hepatic nuclear factor (HNF)-1α, a transcription factor expressed in many tissues […] It is not yet known what effect the HNF-1α polymorphism has on the risk of complications associated with diabetes.”

“The main objective of the current study was to describe the time course and risk factors for microvascular complications (nephropathy, retinopathy, and neuropathy) and macrovascular complications (cardiac, cerebrovascular, and peripheral vascular diseases) in a large cohort of youth [diagnosed with type 2 diabetes] who have been carefully followed for >20 years and to compare this evolution with that of youth with type 1 diabetes. We also compared vascular complications in the youth with type 2 diabetes with nondiabetic control youth. Finally, we addressed the impact of HNF-1α G319S on the evolution of complications in young patients with type 2 diabetes.”

“All prevalent cases of type 2 diabetes and type 1 diabetes (control group 1) seen between January 1986 and March 2007 in the DER-CA for youth aged 1–18 years were included. […] The final type 2 diabetes cohort included 342 youth, and the type 1 diabetes control group included 1,011. The no diabetes control cohort comprised 1,710 youth matched to the type 2 diabetes cohort from the repository […] Compared with the youth with type 1 diabetes, the youth with type 2 diabetes were, on average, older at the time of diagnosis and more likely to be female. They were more likely to have a higher BMIz, live in a rural area, have a low SES, and have albuminuria at diagnosis. […] one-half of the type 2 diabetes group was either a heterozygote (GS) or a homozygote (SS) for the HNF-1α polymorphism […] At the time of the last available follow-up in the DER-CA, the youth with diabetes were, on average, between 15 and 16 years of age. […] The median follow-up times in the repository were 4.4 (range 0–27.4) years for youth with type 2 diabetes, 6.7 (0–28.2) years for youth with type 1 diabetes, and 6.0 (0–29.9) years for nondiabetic control youth.”

“After controlling for low SES, sex, and BMIz, the risk associated with type 2 versus type 1 diabetes of any complication was an HR of 1.47 (1.02–2.12, P = 0.04). […] In the univariate analysis, youth with type 2 diabetes were at significantly higher risk of developing any vascular (HR 6.15 [4.26–8.87], P < 0.0001), microvascular (6.26 [4.32–9.10], P < 0.0001), or macrovascular (4.44 [1.71–11.52], P < 0.0001) disease compared with control youth without diabetes. In addition, the youth with type 2 diabetes had an increased risk of ophthalmologic (19.49 [9.75–39.00], P < 0.0001), renal (16.13 [7.66–33.99], P < 0.0001), and neurologic (2.93 [1.79–4.80], P ≤ 0.001) disease. There were few cardiovascular, cerebrovascular, and peripheral vascular disease events in all groups (five or fewer events per group). Despite this, there was still a statistically significant higher risk of peripheral vascular disease in the type 2 diabetes group (6.25 [1.68–23.28], P = 0.006).”

“Differences in renal and neurologic complications between the two diabetes groups began to occur before 5 years postdiagnosis, whereas differences in ophthalmologic complications began 10 years postdiagnosis. […] Both cardiovascular and cerebrovascular complications were rare in both groups, but peripheral vascular complications began to occur 15 years after diagnosis in the type 2 diabetes group […] The presence of HNF-1α G319S polymorphism in youth with type 2 diabetes was found to be protective of complications. […] Overall, major complications were rare in the type 1 diabetes group, but they occurred in 1.1% of the type 2 diabetes cohort at 10 years, in 26.0% at 15 years, and in 47.9% at 20 years after diagnosis (P < 0.001) […] youth with type 2 diabetes have a higher risk of any complication than youth with type 1 diabetes and nondiabetic control youth. […] The time to both renal and neurologic complications was significantly shorter in youth with type 2 diabetes than in control youth, whereas differences were not significant with respect to ophthalmologic and cardiovascular complications between cohorts. […] The current study is consistent with the literature, which has shown high rates of cardiovascular risk factors in youth with type 2 diabetes. However, despite the high prevalence of risk, this study reports low rates of clinical events. Because the median follow-up time was between 5 and 8 years, it is possible that a longer follow-up period would be required to correctly evaluate macrovascular outcomes in young adults. Also possible is that diagnoses of mild disease are not being made because of a low index of suspicion in 20- and 30-year-old patients.”

“In conclusion, youth with type 2 diabetes have an increased risk of complications early in the course of their disease. Microvascular complications and cardiovascular risk factors are highly prevalent, whereas macrovascular complications are rare in young adulthood. HbA1c is an important modifiable risk factor; thus, optimizing glycemic control should remain an important goal of therapy.”

“We prospectively investigated the association of HbA1c at baseline and during follow-up with CHD risk among 17,510 African American and 12,592 white patients with type 2 diabetes. […] During a mean follow-up of 6.0 years, 7,258 incident CHD cases were identified. The multivariable-adjusted hazard ratios of CHD associated with different levels of HbA1c at baseline (<6.0 [reference group], 6.0–6.9, 7.0–7.9, 8.0–8.9, 9.0–9.9, 10.0–10.9, and ≥11.0%) were 1.00, 1.07 (95% CI 0.97–1.18), 1.16 (1.04–1.31), 1.15 (1.01–1.32), 1.26 (1.09–1.45), 1.27 (1.09–1.48), and 1.24 (1.10–1.40) (P trend = 0.002) for African Americans and 1.00, 1.04 (0.94–1.14), 1.15 (1.03–1.28), 1.29 (1.13–1.46), 1.41 (1.22–1.62), 1.34 (1.14–1.57), and 1.44 (1.26–1.65) (P trend <0.001) for white patients, respectively. The graded association of HbA1c during follow-up with CHD risk was observed among both African American and white diabetic patients (all P trend <0.001). Each one percentage increase of HbA1c was associated with a greater increase in CHD risk in white versus African American diabetic patients. When stratified by sex, age, smoking status, use of glucose-lowering agents, and income, this graded association of HbA1c with CHD was still present. […] The current study in a low-income population suggests a graded positive association between HbA1c at baseline and during follow-up with the risk of CHD among both African American and white diabetic patients with low socioeconomic status.”

A few more observations from the conclusions:

“Diabetic patients experience high mortality from cardiovascular causes (2). Observational studies have confirmed the continuous and positive association between glycemic control and the risk of cardiovascular disease among diabetic patients (4,5). But the findings from RCTs are sometimes uncertain. Three large RCTs (79) designed primarily to determine whether targeting different glucose levels can reduce the risk of cardiovascular events in patients with type 2 diabetes failed to confirm the benefit. Several reasons for the inconsistency of these studies can be considered. First, small sample sizes, short follow-up duration, and few CHD cases in some RCTs may limit the statistical power. Second, most epidemiological studies only assess a single baseline measurement of HbA1c with CHD risk, which may produce potential bias. The recent analysis of 10 years of posttrial follow-up of the UKPDS showed continued reductions for myocardial infarction and death from all causes despite an early loss of glycemic differences (10). The scientific evidence from RCTs was not sufficient to generate strong recommendations for clinical practice. Thus, consensus groups (AHA, ACC, and ADA) have provided a conservative endorsement (class IIb recommendation, level of evidence A) for the cardiovascular benefits of glycemic control (11). In the absence of conclusive evidence from RCTs, observational epidemiological studies might provide useful information to clarify the relationship between glycemia and CHD risk. In the current study with 30,102 participants with diabetes and 7,258 incident CHD cases during a mean follow-up of 6.0 years, we found a graded positive association by various HbA1c intervals of clinical relevance or by using HbA1c as a continuous variable at baseline and during follow-up with CHD risk among both African American and white diabetic patients. 
Each one percentage increase in baseline and follow-up HbA1c was associated with a 2 and 5% increased risk of CHD in African American and 6 and 11% in white diabetic patients. Each one percentage increase of HbA1c was associated with a greater increase in CHD risk in white versus African American diabetic patients.”

“Blood viscosity (BV) is the force that counteracts the free sliding of the blood layers within the circulation and depends on the internal cohesion between the molecules and the cells. Abnormally high BV can have several negative effects: the heart is overloaded to pump blood in the vascular bed, and the blood itself, more viscous, can damage the vessel wall. Furthermore, according to Poiseuille’s law (1), BV is inversely related to flow and might therefore reduce the delivery of insulin and glucose to peripheral tissues, leading to insulin resistance or diabetes (2–5).

It is generally accepted that BV is increased in diabetic patients (6–8). Although the reasons for this alteration are still under investigation, it is believed that the increase in osmolarity causes increased capillary permeability and, consequently, increased hematocrit and viscosity (9). It has also been suggested that the osmotic diuresis, consequence of hyperglycemia, could contribute to reduce plasma volume and increase hematocrit (10).

Cross-sectional studies have also supported a link between BV, hematocrit, and insulin resistance (11–17). Recently, a large prospective study has demonstrated that BV and hematocrit are risk factors for type 2 diabetes. Subjects in the highest quartile of BV were >60% more likely to develop diabetes than their counterparts in the lowest quartile (18). This finding confirms previous observations obtained in smaller or selected populations, in which the association between hemoglobin or hematocrit and occurrence of type 2 diabetes was investigated (19–22).

These observations suggest that the elevation in BV may be very early, well before the onset of diabetes, but definite data in subjects with normal glucose or prediabetes are missing. In the current study, we evaluated the relationship between BV and blood glucose in subjects with normal glucose or prediabetes in order to verify whether alterations in viscosity are appreciable in these subjects and at which blood glucose concentration they appear.”
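The Poiseuille's law mentioned in the quote relates flow to viscosity through the Hagen–Poiseuille equation, Q = πΔPr⁴/(8ηL), so at a fixed pressure gradient, flow falls in inverse proportion to viscosity. A minimal sketch; the numerical values below are illustrative placeholders, not physiological measurements:

```python
import math

def poiseuille_flow(delta_p, radius, viscosity, length):
    """Volumetric flow rate through a cylindrical tube (Hagen-Poiseuille law).

    delta_p  : pressure difference across the tube (Pa)
    radius   : tube radius (m)
    viscosity: dynamic viscosity of the fluid (Pa*s)
    length   : tube length (m)
    """
    return math.pi * delta_p * radius ** 4 / (8 * viscosity * length)

# Doubling viscosity halves the flow, all else equal:
q_normal = poiseuille_flow(delta_p=100.0, radius=1e-3, viscosity=3.5e-3, length=0.1)
q_viscous = poiseuille_flow(delta_p=100.0, radius=1e-3, viscosity=7.0e-3, length=0.1)
print(q_normal / q_viscous)  # → 2.0
```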

“According to blood glucose levels, participants were divided into three groups: group A, blood glucose <90 mg/dL; group B, blood glucose between 90 and 99 mg/dL; and group C, blood glucose between 100 and 125 mg/dL. […] Hematocrit (P < 0.05) and BV (P between 0.01 and 0.001) were significantly higher in subjects with prediabetes and in those with blood glucose ranging from 90 to 99 mg/dL compared with subjects with blood glucose <90 mg/dL. […] The current study shows, for the first time, a direct relationship between BV and blood glucose in nondiabetic subjects. It also suggests that, even within glucose values considered completely normal, individuals with higher blood glucose levels have increases in BV comparable with those observed in subjects with prediabetes. […] Overall, changes in viscosity in diabetic patients are accepted as common and as a result of the disease. However, the relationship between blood glucose, diabetes, and viscosity may be much more complex. […] the main finding of the study is that BV significantly increases already at high-normal blood glucose levels, independently of other common determinants of hemorheology. Intervention studies are needed to verify whether changes in BV can influence the development of type 2 diabetes.”

“Type 1 diabetes and multiple sclerosis (MS) are organ-specific inflammatory diseases, which result from an autoimmune attack against either pancreatic β-cells or the central nervous system; a combined appearance has been described repeatedly (1–3). For children and adolescents below the age of 21 years, the prevalence of type 1 diabetes in Germany and Austria is ∼19.4 cases per 100,000 population, and for MS it is 7–10 per 100,000 population (4–6). A Danish cohort study revealed a three times higher risk for the development of MS in patients with type 1 diabetes (7). Further, an Italian study conducted in Sardinia showed a five times higher risk for the development of type 1 diabetes in MS patients (8,9). An American study on female adults in whom diabetes developed before the age of 21 years yielded an up to 20 times higher risk for the development of MS (10).

These findings support the hypothesis of clustering between type 1 diabetes and MS. The pathogenesis behind this association is still unclear, but T-cell cross-reactivity was discussed as well as shared disease associations due to the HLA-DRB1-DQB1 gene loci […] The aim of this study was to evaluate the prevalence of MS in a diabetic population and to look for possible factors related to the co-occurrence of MS in children and adolescents with type 1 diabetes using a large multicenter survey from the Diabetes Patienten Verlaufsdokumentation (DPV) database.”

“We used a large database of pediatric and adolescent type 1 diabetic patients to analyze the RR of MS co-occurrence. The DPV database includes ∼98% of the pediatric diabetic population in Germany and Austria below the age of 21 years. In children and adolescents, the RR for MS in type 1 diabetes was estimated to be three to almost five times higher in comparison with the healthy population.”

November 2, 2017

## Common Errors in Statistics…

“Pressed by management or the need for funding, too many research workers have no choice but to go forward with data analysis despite having insufficient statistical training. Alas, though a semester or two of undergraduate statistics may develop familiarity with the names of some statistical methods, it is not enough to be aware of all the circumstances under which these methods may be applicable.

The purpose of the present text is to provide a mathematically rigorous but readily understandable foundation for statistical procedures. Here are such basic concepts in statistics as null and alternative hypotheses, p-value, significance level, and power. Assisted by reprints from the statistical literature, we reexamine sample selection, linear regression, the analysis of variance, maximum likelihood, Bayes’ Theorem, meta-analysis and the bootstrap. New to this edition are sections on fraud and on the potential sources of error to be found in epidemiological and case-control studies.

Examples of good and bad statistical methodology are drawn from agronomy, astronomy, bacteriology, chemistry, criminology, data mining, epidemiology, hydrology, immunology, law, medical devices, medicine, neurology, observational studies, oncology, pricing, quality control, seismology, sociology, time series, and toxicology. […] Lest the statisticians among you believe this book is too introductory, we point out the existence of hundreds of citations in statistical literature calling for the comprehensive treatment we have provided. Regardless of past training or current specialization, this book will serve as a useful reference; you will find applications for the information contained herein whether you are a practicing statistician or a well-trained scientist who just happens to apply statistics in the pursuit of other science.”

I’ve been reading this book, and I really like it so far. A lot of the stuff included is review, but there are of course also some new ideas here and there (for example, I’d never heard about Stein’s paradox before), and given how much stuff you need to remember and keep in mind in order not to make silly mistakes when analyzing data or interpreting the results of statistical analyses, occasional reviews of these things are probably a very good idea.

I have added some more observations from the first 100 pages or so below:

“Test only relevant null hypotheses. The null hypothesis has taken on an almost mythic role in contemporary statistics. Obsession with the null (more accurately spelled and pronounced nil) has been allowed to shape the direction of our research. […] Virtually any quantifiable hypothesis can be converted into null form. There is no excuse and no need to be content with a meaningless nil. […] we need to have an alternative hypothesis or alternatives firmly in mind when we set up a test. Too often in published research, such alternative hypotheses remain unspecified or, worse, are specified only after the data are in hand. We must specify our alternatives before we commence an analysis, preferably at the same time we design our study. Are our alternatives one-sided or two-sided? If we are comparing several populations at the same time, are their means ordered or unordered? The form of the alternative will determine the statistical procedures we use and the significance levels we obtain. […] The critical values and significance levels are quite different for one-tailed and two-tailed tests and, all too often, the wrong test has been employed in published work. McKinney et al. [1989] reviewed some 70-plus articles that appeared in six medical journals. In over half of these articles, Fisher’s exact test was applied improperly. Either a one-tailed test had been used when a two-tailed test was called for or the authors of the paper simply had not bothered to state which test they had used. […] the F-ratio and the chi-square are what are termed omnibus tests, designed to be sensitive to all possible alternatives. As such, they are not particularly sensitive to ordered alternatives such as “more fertilizer equals more growth” or “more aspirin equals faster relief of headache.” Tests for such ordered responses at k distinct treatment levels should properly use the Pitman correlation.”
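The one-tailed vs. two-tailed distinction flagged by McKinney et al. can be made concrete with Fisher's exact test itself. A minimal pure-Python sketch computing both p-values from the hypergeometric distribution (the 2×2 table is invented for illustration; real analyses would use a statistics library):

```python
from math import comb

def fisher_exact(a, b, c, d):
    """One-tailed ('greater') and two-tailed p-values for a 2x2 table
    [[a, b], [c, d]], computed from the hypergeometric distribution."""
    n1, n2, m1 = a + b, c + d, a + c        # row sums and first column sum
    total = comb(n1 + n2, m1)

    def prob(k):                            # P(first cell = k) given the margins
        return comb(n1, k) * comb(n2, m1 - k) / total

    support = range(max(0, m1 - n2), min(n1, m1) + 1)
    p_obs = prob(a)
    one_tailed = sum(prob(k) for k in support if k >= a)
    # Two-tailed: sum over all tables at least as extreme (no more probable).
    two_tailed = sum(p for p in map(prob, support) if p <= p_obs * (1 + 1e-9))
    return one_tailed, two_tailed

one, two = fisher_exact(8, 2, 1, 5)
print(one, two)  # the two-tailed p-value is larger
```

Using a one-tailed p-value where a two-tailed test is called for roughly halves the apparent p-value, which is exactly the kind of error the quote describes.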

“Before we initiate data collection, we must have a firm idea of what we will measure and how we will measure it. A good response variable

• Is easy to record […]
• Can be measured objectively on a generally accepted scale.
• Is measured in appropriate units.
• Takes values over a sufficiently large range that discriminates well.
• Is well defined. […]
• Has constant variance over the range used in the experiment (Bishop and Talbot, 2001).”

“A second fundamental principle is also applicable to both experiments and surveys: Collect exact values whenever possible. Worry about grouping them in intervals or discrete categories later.”

“Sample size must be determined for each experiment; there is no universally correct value. We need to understand and make use of the relationships among effect size, sample size, significance level, power, and the precision of our measuring instruments. Increase the precision (and hold all other parameters fixed) and we can decrease the required number of observations. Decreases in any or all of the intrinsic and extrinsic sources of variation will also result in a decrease in the required number. […] The smallest effect size of practical interest may be determined through consultation with one or more domain experts. The smaller this value, the greater the number of observations that will be required. […] Strictly speaking, the significance level and power should be chosen so as to minimize the overall cost of any project, balancing the cost of sampling with the costs expected from Type I and Type II errors. […] When determining sample size for data drawn from the binomial or any other discrete distribution, one should always display the power curve. […] As a result of inspecting the power curve by eye, you may come up with a less-expensive solution than your software. […] If the data do not come from a well-tabulated distribution, then one might use a bootstrap to estimate the power and significance level. […] Many researchers today rely on menu-driven software to do their power and sample-size calculations. Most such software comes with default settings […] — settings that are readily altered, if, that is, investigators bother to take the time.”
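The relationships among effect size, significance level, power, and sample size can be sketched with the standard normal-approximation formula for a two-sample comparison of means, n ≈ 2((z₁₋α/₂ + z₁₋β)/d)² per group, where d is the effect size in standard-deviation units. This is a textbook approximation, not a formula taken from this book:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80, two_sided=True):
    """Approximate sample size per group for a two-sample comparison of means
    (normal approximation; effect_size = delta / sigma, i.e. Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2) if two_sided else z.inv_cdf(1 - alpha)
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Halving the smallest effect size of practical interest roughly quadruples n:
print(n_per_group(0.5))   # a medium effect
print(n_per_group(0.25))  # half that effect size
```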

“The relative ease with which a program like Stata […] can produce a sample size may blind us to the fact that the number of subjects with which we begin a study may bear little or no relation to the number with which we conclude it. […] Potential subjects can and do refuse to participate. […] Worse, they may agree to participate initially, then drop out at the last minute […]. They may move without a forwarding address before a scheduled follow-up, or may simply not bother to show up for an appointment. […] The key to a successful research program is to plan for such drop-outs in advance and to start the trials with some multiple of the number required to achieve a given power and significance level. […] it is the sample you end with, not the sample you begin with, that determines the power of your tests. […] An analysis of those who did not respond to a survey or a treatment can sometimes be as or more informative than the survey itself. […] Be sure to incorporate in your sample design and in your budget provisions for sampling nonresponders.”

“[A] randomly selected sample may not be representative of the population as a whole. For example, if a minority comprises less than 10% of a population, then a jury of 12 persons selected at random from that population will fail to contain a single member of that minority at least 28% of the time.”
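The jury figure is easy to verify: treating the 12 selections as independent draws from a large population (sampling with replacement as an approximation), the probability of drawing no minority member is (1 − 0.10)¹²:

```python
# Probability that a 12-person jury drawn at random contains no member of a
# minority making up 10% of the population (independent-draws approximation):
p_none = (1 - 0.10) ** 12
print(round(p_none, 4))  # → 0.2824, i.e. at least 28% of the time
```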

“The proper starting point for the selection of the best method of estimation is with the objectives of our study: What is the purpose of our estimate? If our estimate is θ* and the actual value of the unknown parameter is θ, what losses will we be subject to? It is difficult to understand the popularity of the method of maximum likelihood and other estimation procedures that do not take these losses into consideration. The majority of losses will be monotonically nondecreasing in nature, that is, the further apart the estimate θ* and the true value θ, the larger our losses are likely to be. Typical forms of the loss function are the absolute deviation |θ* − θ|, the square deviation (θ* − θ)², and the jump, that is, no loss if |θ* − θ| < i, and a big loss otherwise. Or the loss function may resemble the square deviation but take the form of a step function increasing in discrete increments. Desirable estimators are impartial, consistent, efficient, robust, and minimum loss. […] Interval estimates are to be preferred to point estimates; they are less open to challenge for they convey information about the estimate’s precision.”

“Estimators should be consistent, that is, the larger the sample, the greater the probability the resultant estimate will be close to the true population value. […] [A] consistent estimator […] is to be preferred to another if the first consistent estimator can provide the same degree of accuracy with fewer observations. To simplify comparisons, most statisticians focus on the asymptotic relative efficiency (ARE), defined as the limit with increasing sample size of the ratio of the number of observations required for each of two consistent statistical procedures to achieve the same degree of accuracy. […] Estimators that are perfectly satisfactory for use with symmetric, normally distributed populations may not be as desirable when the data come from nonsymmetric or heavy-tailed populations, or when there is a substantial risk of contamination with extreme values. When estimating measures of central location, one way to create a more robust estimator is to trim the sample of its minimum and maximum values […]. As information is thrown away, trimmed estimators are [however] less efficient. […] Many semiparametric estimators are not only robust but provide for high ARE with respect to their parametric counterparts. […] The accuracy of an estimate […] and the associated losses will vary from sample to sample. A minimum loss estimator is one that minimizes the losses when the losses are averaged over the set of all possible samples. Thus, its form depends upon all of the following: the loss function, the population from which the sample is drawn, and the population characteristic that is being estimated. An estimate that is optimal in one situation may only exacerbate losses in another. […] It is easy to envision situations in which we are less concerned with the average loss than with the maximum possible loss we may incur by using a particular estimation procedure. An estimate that minimizes the maximum possible loss is termed a mini–max estimator.”
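The trimming idea can be sketched in a few lines; the sample values below are invented to show the effect of a single contaminating extreme value:

```python
def trimmed_mean(xs, k=1):
    """Mean after discarding the k smallest and k largest values --
    a simple robust estimator of central location."""
    xs = sorted(xs)
    trimmed = xs[k:len(xs) - k]
    return sum(trimmed) / len(trimmed)

sample = [9.8, 10.1, 10.0, 9.9, 10.2, 98.0]   # one gross contamination value
plain = sum(sample) / len(sample)
robust = trimmed_mean(sample)
print(plain)   # dragged far from 10 by the outlier
print(robust)  # stays close to 10
```

The ordinary mean is pulled far from the bulk of the data by one contaminated observation, while the trimmed mean barely moves, at the cost of the efficiency loss the quote mentions.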

“In survival studies and reliability analyses, we follow each subject and/or experiment unit until either some event occurs or the experiment is terminated; the latter observation is referred to as censored. The principal sources of error are the following:

• Lack of independence within a sample
• Lack of independence of censoring
• Too many censored values
• Wrong test employed”

“Lack of independence within a sample is often caused by the existence of an implicit factor in the data. For example, if we are measuring survival times for cancer patients, diet may be correlated with survival times. If we do not collect data on the implicit factor(s) (diet in this case), and the implicit factor has an effect on survival times, then we no longer have a sample from a single population. Rather, we have a sample that is a mixture drawn from several populations, one for each level of the implicit factor, each with a different survival distribution. Implicit factors can also affect censoring times, by affecting the probability that a subject will be withdrawn from the study or lost to follow-up. […] Stratification can be used to control for an implicit factor. […] This is similar to using blocking in analysis of variance. […] If the pattern of censoring is not independent of the survival times, then survival estimates may be too high (if subjects who are more ill tend to be withdrawn from the study), or too low (if subjects who will survive longer tend to drop out of the study and are lost to follow-up). If a loss or withdrawal of one subject could increase the probability of loss or withdrawal of other subjects, this would also lead to lack of independence between censoring and the subjects. […] A study may end up with many censored values as a result of having large numbers of subjects withdrawn or lost to follow-up, or from having the study end while many subjects are still alive. Large numbers of censored values decrease the equivalent number of subjects exposed (at risk) at later times, reducing the effective sample sizes. […] Survival tests perform better when the censoring is not too heavy, and, in particular, when the pattern of censoring is similar across the different groups.”

“Kaplan–Meier survival analysis (KMSA) is the appropriate starting point [in the type 2 censoring setting]. KMSA can estimate survival functions even in the presence of censored cases and requires minimal assumptions. If covariates other than time are thought to be important in determining duration to outcome, results reported by KMSA will represent misleading averages, obscuring important differences in groups formed by the covariates (e.g., men vs. women). Since this is often the case, methods that incorporate covariates, such as event-history models and Cox regression, may be preferred. For small samples, the permutation distributions of the Gehan–Breslow, Mantel–Cox, and Tarone–Ware survival test statistics and not the chi-square distribution should be used to compute p-values. If the hazard or survival functions are not parallel, then none of the three tests […] will be particularly good at detecting differences between the survival functions.”
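The Kaplan–Meier product-limit estimator itself is simple enough to sketch directly. A minimal implementation handling ties and right-censoring; the example data are invented:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate of the survival function.

    times  : observed times (event or censoring)
    events : 1 if the event occurred at that time, 0 if censored
    Returns a list of (time, estimated survival probability) at event times.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        n_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            s *= 1 - deaths / n_at_risk   # product-limit step at this time
            curve.append((t, s))
        n_at_risk -= n_t
        i += n_t
    return curve

# Times 1, 2+, 3, 5, 5+ in the usual notation ('+' marks a censored case):
print(kaplan_meier([1, 2, 3, 5, 5], [1, 0, 1, 1, 0]))
```

Note how the censored observation at time 2 produces no step in the curve but still reduces the number at risk for later event times, which is exactly why heavy censoring shrinks the effective sample size.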

## Blitz match with commentary: Eric Hansen vs Magnus Carlsen

A very enjoyable match recording by Grandmaster Eric Hansen:

## Molecules

This book is almost exclusively devoted to covering biochemistry topics. When the coverage is decent I find biochemistry reasonably interesting – for example I really liked Beer, Björk & Beardall’s photosynthesis book – and the coverage here was okay, but not more than that. I think that Ball was trying to cover a bit too much ground, or perhaps that there was really too much ground to cover for it to even make sense to try to write a book on this particular topic in a series like this. I learned a lot though.

As usual I’ve added some quotes from the coverage below, as well as some additional links to topics/concepts/people/etc. covered in the book.

“Most atoms on their own are highly reactive – they have a predisposition to join up with other atoms. Molecules are collectives of atoms, firmly welded together into assemblies that may contain anything up to many millions of them. […] By molecules, we generally mean assemblies of a discrete, countable number of atoms. […] Some pure elements adopt molecular forms; others do not. As a rough rule of thumb, metals are non-molecular […] whereas non-metals are molecular. […] molecules are the smallest units of meaning in chemistry. It is through molecules, not atoms, that one can tell stories in the sub-microscopic world. They are the words; atoms are just the letters. […] most words are distinct aggregates of several letters arranged in a particular order. We often find that longer words convey subtler and more finely nuanced meanings. And in molecules, as in words, the order in which the component parts are put together matters: ‘save’ and ‘vase’ do not mean the same thing.”

“There are something like 60,000 different varieties of protein molecule in human cells, each conducting a highly specialized task. It would generally be impossible to guess what this task is merely by looking at a protein. They are undistinguished in appearance, mostly globular in shape […] and composed primarily of carbon, hydrogen, nitrogen, oxygen, and a little sulphur. […] There are twenty varieties of amino acids in natural proteins. In the chain, one amino acid is linked to the next via a covalent bond called a peptide bond. Both molecules shed a few extraneous atoms to make this linkage, and the remainder – another link in the chain – is called a residue. The chain itself is termed a polypeptide. Any string of amino acid residues is a polypeptide. […] In a protein the order of amino acids along the chain – the sequence – is not arbitrary. It is selected […] to ensure that the chain will collapse and curl up in water into the precisely determined globular form of the protein, with all parts of the chain in the right place. This shape can be destroyed by warming the protein, a process called denaturation. But many proteins will fold up again spontaneously into the same globular structure when cooled. In other words, the chain has a kind of memory of its folded shape. The details of this folding process are still not fully understood – it is, in fact, one of the central unsolved puzzles of molecular biology. […] proteins are made not in the [cell] nucleus but in a different compartment called the endoplasmic reticulum […]. The gene is transcribed first into a molecule related to DNA, called RNA (ribonucleic acid). The RNA molecules travel from the nucleus to the endoplasmic reticulum, where they are translated to proteins. The proteins are then shipped off to where they are needed.”

“[M]icrofibrils aggregate together in various ways. For example, they can gather in a staggered arrangement to form thick strands called banded fibrils. […] Banded fibrils constitute the connective tissues between cells – they are the cables that hold our flesh together. Bone consists of collagen banded fibrils sprinkled with tiny crystals of the mineral hydroxyapatite, which is basically calcium phosphate. Because of the high protein content of bone, it is flexible and resilient as well as hard. […] In contrast to the disorderly tangle of connective tissue, the eye’s cornea contains collagen fibrils packed side by side in an orderly manner. These fibrils are too small to scatter light, and so the material is virtually transparent. The basic design principle – one that recurs often in nature – is that, by tinkering with the chemical composition and, most importantly, the hierarchical arrangement of the same basic molecules, it is possible to extract several different kinds of material properties. […] cross-links determine the strength of the material: hair and fingernail are more highly cross-linked than skin. Curly or frizzy hair can be straightened by breaking some of [the] sulphur cross-links to make the hairs more pliable. […] Many of the body’s structural fabrics are proteins. Unlike enzymes, structural proteins do not have to conduct any delicate chemistry, but must simply be (for instance) tough, or flexible, or waterproof. In principle many other materials besides proteins would suffice; and indeed, plants use cellulose (a sugar-based polymer) to make their tissues.”

“In many ways, it is metabolism and not replication that provides the best working definition of life. Evolutionary biologists would say that we exist in order to reproduce – but we are not, even the most amorous of us, trying to reproduce all the time. Yet, if we stop metabolizing, even for a minute or two, we are done for. […] Whether waking or asleep, our bodies stay close to a healthy temperature of 37 °C. There is only one way of doing this: our cells are constantly pumping out heat, a by-product of metabolism. Heat is not really the point here – it is simply unavoidable, because all conversion of energy from one form to another squanders some of it this way. Our metabolic processes are primarily about making molecules. Cells cannot survive without constantly reinventing themselves: making new amino acids for proteins, new lipids for membranes, new nucleic acids so that they can divide.”

“In the body, combustion takes place in a tightly controlled, graded sequence of steps, and some chemical energy is drawn off and stored at each stage. […] A power station burns coal, oil, or gas […]. Burning is just a means to an end. The heat is used to turn water into steam; the pressure of the steam drives turbines; the turbines spin and send wire coils whirling in the arms of great magnets, which induces an electrical current in the wire. Energy is passed on, from chemical to heat to mechanical to electrical. And every plant has a barrage of regulatory and safety mechanisms. There are manual checks on pressure gauges and on the structural integrity of moving parts. Automatic sensors make the measurements. Failsafe devices avert catastrophic failure. Energy generation in the cell is every bit as complicated. […] The cell seems to have thought of everything, and has protein devices for fine-tuning it all.”

“ATP is the key to the maintenance of cellular integrity and organization, and so the cell puts a great deal of effort into making as much of it as possible from each molecule of glucose that it burns. About 40 per cent of the energy released by the combustion of food is conserved in ATP molecules. ATP is rich in energy because it is like a coiled spring. It contains three phosphate groups, linked like so many train carriages. Each of these phosphate groups has a negative charge; this means that they repel one another. But because they are joined by chemical bonds, they cannot escape one another […]. Straining to get away, the phosphates pull an energetically powerful punch. […] The links between phosphates can be snipped in a reaction that involves water […] called hydrolysis (‘splitting with water’). Each time a bond is hydrolysed, energy is released. Setting free the outermost phosphate converts ATP to adenosine diphosphate (ADP); cleave the second phosphate and it becomes adenosine monophosphate (AMP). Both severances release comparable amounts of energy.”

“Burning sugar is a two-stage process, beginning with its transformation to a molecule called pyruvate in a process known as glycolysis […]. This involves a sequence of ten enzyme-catalysed steps. The first five of these split glucose in half […], powered by the consumption of ATP molecules: two of them are ‘decharged’ to ADP for every glucose molecule split. But the conversion of the fragments to pyruvate […] permits ATP to be recouped from ADP. Four ATP molecules are made this way, so that there is an overall gain of two ATP molecules per glucose molecule consumed. Thus glycolysis charges the cell’s batteries. Pyruvate then normally enters the second stage of the combustion process: the citric acid cycle, which requires oxygen. But if oxygen is scarce – that is, under anaerobic conditions – a contingency plan is enacted whereby pyruvate is instead converted to the molecule lactate. […] The first thing a mitochondrion does is convert pyruvate enzymatically to a molecule called acetyl coenzyme A (CoA). The breakdown of fatty acids and glycerides from fats also eventually generates acetyl CoA. The [citric acid] cycle is a sequence of eight enzyme-catalysed reactions that transform acetyl CoA first to citric acid and then to various other molecules, ending with […] oxaloacetate. This end is a new beginning, for oxaloacetate reacts with acetyl CoA to make citric acid. In some of the steps of the cycle, carbon dioxide is generated as a by-product. It dissolves in the bloodstream and is carried off to the lungs to be exhaled. Thus in effect the carbon in the original glucose molecules is syphoned off into the end product carbon dioxide, completing the combustion process. […] Also syphoned off from the cycle are electrons – crudely speaking, the citric acid cycle sends an electrical current to a different part of the mitochondrion. These electrons are used to convert oxygen molecules and positively charged hydrogen ions to water – an energy-releasing process. 
The energy is captured and used to make ATP in abundance.”

“While mammalian cells have fuel-burning factories in the form of mitochondria, the solar-power centres in the cells of plant leaves are compartments called chloroplasts […] chloroplast takes carbon dioxide and water, and from them constructs […] sugar. […] In the first part of photosynthesis, light is used to convert NADP to an electron carrier (NADPH) and to transform ADP to ATP. This is effectively a charging-up process that primes the chloroplast for glucose synthesis. In the second part, ATP and NADPH are used to turn carbon dioxide into sugar, in a cyclic sequence of steps called the Calvin–Benson cycle […] There are several similarities between the processes of aerobic metabolism and photosynthesis. Both consist of two distinct sub-processes with separate evolutionary origins: a linear sequence of reactions coupled to a cyclic sequence that regenerates the molecules they both need. The bridge between glycolysis and the citric acid cycle is the electron-ferrying NAD molecule; the two sub-processes of photosynthesis are bridged by the cycling of an almost identical molecule, NAD phosphate (NADP).”

“Despite the variety of messages that hormones convey, the mechanism by which the signal is passed from a receptor protein at the cell surface to the cell’s interior is the same in almost all cases. It involves a sequence of molecular interactions in which molecules transform one another down a relay chain. In cell biology this is called signal transduction. At the same time as relaying the message, these interactions amplify the signal so that the docking of a single hormone molecule to a receptor creates a big response inside the cell. […] The receptor proteins span the entire width of the membrane; the hormone-binding site protrudes on the outer surface, while the base of the receptor emerges from the inner surface […]. When the receptor binds its target hormone, a shape change is transmitted to the lower face of the protein, which enables it to act as an enzyme. […] The participants of all these processes [G protein, guanosine diphosphate and -triphosphate, adenylate cyclase… – figured it didn’t matter if I left out a few details – US…] are stuck to the cell wall. But cAMP floats freely in the cell’s cytoplasm, and is able to carry the signal into the cell interior. It is called a ‘second messenger’, since it is the agent that relays the signal of the ‘first messenger’ (the hormone) into the community of the cell. Cyclic AMP becomes attached to protein molecules called protein kinases, whereupon they in turn become activated as enzymes. Most protein kinases switch other enzymes on and off by attaching phosphate groups to them – a reaction called phosphorylation. […] The process might sound rather complicated, but it is really nothing more than a molecular relay. The signal is passed from the hormone to its receptor, then to the G protein, on to an enzyme and thence to the second messenger, and further on to a protein kinase, and so forth. 
The G-protein mechanism of signal transduction was discovered in the 1970s by Alfred Gilman and Martin Rodbell, for which they received the 1994 Nobel Prize for medicine. It represents one of the most widespread means of getting a message across a cell membrane. […] it is not just hormonal signalling that makes use of the G-protein mechanism. Our senses of vision and smell, which also involve the transmission of signals, employ the same switching process.”

“Although axon signals are electrical, they differ from those in the metal wires of electronic circuitry. The axon is basically a tubular cell membrane decorated along its length with channels that let sodium and potassium ions in and out. Some of these ion channels are permanently open; others are ‘gated’, opening or closing in response to electrical signals. And some are not really channels at all but pumps, which actively transport sodium ions out of the cell and potassium ions in. These sodium-potassium pumps can move ions […] powered by ATP. […] Drugs that relieve pain typically engage with inhibitory receptors. Morphine, the main active ingredient of opium, binds to so-called opioid receptors in the spinal cord, which inhibit the transmission of pain signals to the brain. There are also opioid receptors in the brain itself, which is why morphine and related opiate drugs have a mental as well as a somatic effect. These receptors in the brain are the binding sites of peptide molecules called endorphins, which the brain produces in response to pain. Some of these are themselves extremely powerful painkillers. […] Not all pain-relieving drugs (analgesics) work by blocking the pain signal. Some prevent the signal from ever being sent. Pain signals are initiated by peptides called prostaglandins, which are manufactured and released by distressed cells. Aspirin (acetylsalicylic acid) latches onto and inhibits one of the enzymes responsible for prostaglandin synthesis, cutting off the cry of pain at its source. Unfortunately, prostaglandins are also responsible for making the mucus that protects the stomach lining […], so one of the side effects of aspirin is the risk of ulcer formation.”

“Shape changes […] are common when a receptor binds its target. If binding alone is the objective, a big shape change is not terribly desirable, since the internal rearrangements of the receptor make heavy weather of the binding event and may make it harder to achieve. This is why many supramolecular hosts are designed so that they are ‘pre-organized’ to receive their guests, minimizing the shape change caused by binding.”

“The way that a protein chain folds up is determined by its amino-acid sequence […] so the ‘information’ for making a protein is uniquely specified by this sequence. DNA encodes this information using […] groups of three bases [to] represent each amino acid. This is the genetic code.* How a particular protein sequence determines the way its chain folds is not yet fully understood. […] Nevertheless, the principle of information flow in the cell is clear. DNA is a manual of information about proteins. We can think of each chromosome as a separate chapter, each gene as a word in that chapter (they are very long words!), and each sequential group of three bases in the gene as a character in the word. Proteins are translations of the words into another language, whose characters are amino acids. In general, only when the genetic language is translated can we understand what it means.”

“It is thought that only about 2–3 per cent of the entire human genome codes for proteins. […] Some people object to genetic engineering on the grounds that it is ethically wrong to tamper with the fundamental material of life – DNA – whether it is in bacteria, humans, tomatoes, or sheep. One can understand such objections, and it would be arrogant to dismiss them as unscientific. Nevertheless, they do sit uneasily with what we now know about the molecular basis of life. The idea that our genetic make-up is sacrosanct looks hard to sustain once we appreciate how contingent, not to say arbitrary, that make-up is. Our genomes are mostly parasite-riddled junk, full of the detritus of over three billion years of evolution.”

Roald Hoffmann.
Molecular solid.
Covalent bond.
Visible spectrum.
X-ray crystallography.
Electron microscope.
Valence (chemistry).
John Dalton.
Isomer.
Lysozyme.
Organic chemistry.
Synthetic dye industry/Alizarin.
Paul Ehrlich (staining).
Retrosynthetic analysis. [I would have added a link to ‘rational synthesis as well here if there’d been a good article on that topic, but I wasn’t able to find one. Anyway: “Organic chemists call [the] kind of procedure […] in which a starting molecule is converted systematically, bit by bit, to the desired product […] a rational synthesis.”]
Paclitaxel synthesis.
Protein.
Enzyme.
Tryptophan synthase.
Ubiquitin.
Amino acid.
Protein folding.
Peptide bond.
Hydrogen bond.
Nucleotide.
Chromosome.
Structural gene. Regulatory gene.
Operon.
Gregor Mendel.
Mitochondrial DNA.
RNA world.
Ribozyme.
Artificial gene synthesis.
Keratin.
Silk.
Vulcanization.
Aramid.
Microtubule.
Tubulin.
Carbon nanotube.
Amylase/pepsin/glycogen/insulin.
Cytochrome c oxidase.
ATP synthase.
Haemoglobin.
Thylakoid membrane.
Chlorophyll.
Liposome.
TNT.
Motor protein. Dynein. Kinesin.
Sarcomere.
Sliding filament theory of muscle action.
Photoisomerization.
Supramolecular chemistry.
Hormone. Endocrine system.
Neurotransmitter.
Ionophore.
DNA.
Mutation.
Intron. Exon.
Transposon.
Molecular electronics.

October 30, 2017

## Child psychology

I was not impressed with this book, but as mentioned in the short review it was ‘not completely devoid of observations of interest’.

Before I start my proper coverage of the book, here are some related ‘observations’ from a different book I recently read, Bellwether:

““First we’re all going to play a game. Bethany, it’s Brittany’s birthday.” She attempted a game involving balloons with pink Barbies on them and then gave up and let Brittany open her presents. “Open Sandy’s first,” Gina said, handing her the book.
“No, Caitlin, these are Brittany’s presents.”
Brittany ripped the paper off Toads and Diamonds and looked at it blankly.
“That was my favorite fairy tale when I was little,” I said. “It’s about a girl who meets a good fairy, only she doesn’t know it because the fairy’s in disguise—” but Brittany had already tossed it aside and was ripping open a Barbie doll in a glittery dress.
“Totally Hair Barbie!” she shrieked.
“Mine,” Peyton said, and made a grab that left Brittany holding nothing but Barbie’s arm.
“She broke Totally Hair Barbie!” Brittany wailed.
Peyton’s mother stood up and said calmly, “Peyton, I think you need a time-out.”
I thought Peyton needed a good swat, or at least to have Totally Hair Barbie taken away from her and given back to Brittany, but instead her mother led her to the door of Gina’s bedroom. “You can come out when you’re in control of your feelings,” she said to Peyton, who looked like she was in control to me.
“I can’t believe you’re still using time-outs,” Chelsea’s mother said. “Everybody’s using holding now.”
“You hold the child immobile on your lap until the negative behavior stops. It produces a feeling of interceptive safety.”
“Really,” I said, looking toward the bedroom door. I would have hated trying to hold Peyton against her will.
“Holding’s been totally abandoned,” Lindsay’s mother said. “We use EE.”
“EE?” I said.
“Esteem Enhancement,” Lindsay’s mother said. “EE addresses the positive peripheral behavior no matter how negative the primary behavior is.”
“Positive peripheral behavior?” Gina said dubiously.
“When Peyton took the Barbie away from Brittany just now,” Lindsay’s mother said, obviously delighted to explain, “you would have said, ‘My, Peyton, what an assertive grip you have.’”

[A little while later, during the same party:]

“My, Peyton,” Lindsay’s mother said, “what a creative thing to do with your frozen yogurt.””

Okay, on to the coverage of the book. I haven’t covered it in much detail, but I have included some observations of interest below.

“[O]ptimal development of grammar (knowledge about language structure) and phonology (knowledge about the sound elements in words) depends on the brain experiencing sufficient linguistic input. So quantity of language matters. The quality of the language used with young children is also important. The easiest way to extend the quality of language is with interactions around books. […] Natural conversations, focused on real events in the here and now, are those which are critical for optimal development. Despite this evidence, just talking to young children is still not valued strongly in many environments. Some studies find that over 60 per cent of utterances to young children are ‘empty language’ — phrases such as ‘stop that’, ‘don’t go there’, and ‘leave that alone’. […] studies of children who experience high levels of such ‘restricted language’ reveal a negative impact on later cognitive, social, and academic development.”

“[Neural] plasticity is largely achieved by the brain growing connections between brain cells that are already there. Any environmental input will cause new connections to form. At the same time, connections that are not used much will be pruned. […] the consistency of what is experienced will be important in determining which connections are pruned and which are retained. […] Brains whose biology makes them less efficient in particular and measurable aspects of processing seem to be at risk in specific areas of development. For example, when auditory processing is less efficient, this can carry a risk of later language impairment.”

“Joint attention has […] been suggested to be the basis of ‘natural pedagogy’ — a social learning system for imparting cultural knowledge. Once attention is shared by adult and infant on an object, an interaction around that object can begin. That interaction usually passes knowledge from carer to child. This is an example of responsive contingency in action — the infant shows an interest in something, the carer responds, and there is an interaction which enables learning. Taking the child’s focus of attention as the starting point for the interaction is very important for effective learning. Of course, skilled carers can also engineer situations in which babies or children will become interested in certain objects. This is the basis of effective play-centred learning. Novel toys or objects are always interesting.”

“Some research suggests that the pitch and amplitude (loudness) of a baby’s cry has been developed by evolution to prompt immediate action by adults. Babies’ cries appear to be designed to be maximally stressful to hear.”

“[T]he important factors in becoming a ‘preferred attachment figure’ are proximity and consistency.”

“[A]dults modify their actions in important ways when they interact with infants. These modifications appear to facilitate learning. ‘Infant-directed action’ is characterized by greater enthusiasm, closer proximity to the infant, greater repetitiveness, and longer gaze to the face than interactions with another adult. Infant-directed action also uses simplified actions with more turn-taking. […] carers tend to use a special tone of voice to talk to babies. This is more sing-song and attention-grabbing than normal conversational speech, and is called ‘infant-directed speech’ [IDS] or ‘Parentese’. All adults and children naturally adopt this special tone when talking to a baby, and babies prefer to listen to Parentese. […] IDS […] heightens pitch, exaggerates the length of words, and uses extra stress, exaggerating the rhythmic or prosodic aspects of speech. […] the heightened prosody increases the salience of acoustic cues to where words begin and end. […] So as well as capturing attention, IDS is emphasizing key linguistic cues that help language acquisition. […] The infant brain seems to cope with the ‘learning problem’ of which sounds matter by initially being sensitive to all the sound elements used by the different world languages. Via acoustic learning during the first year of life, the brain then specializes in the sounds that matter for the particular languages that it is being exposed to.”

“While crawling makes it difficult to carry objects with you on your travels, learning to walk enables babies to carry things. Indeed, walking babies spend most of their time selecting objects and taking them to show their carer, spending on average 30–40 minutes per waking hour interacting with objects. […] Self-generated movement is seen as critical for child development. […] most falling is adaptive, as it helps infants to gain expertise. Indeed, studies show that newly walking infants fall on average 17 times per hour. From the perspective of child psychology, the importance of ‘motor milestones’ like crawling and walking is that they enable greater agency (self-initiated and self-chosen behaviour) on the part of the baby.”

“Statistical learning enables the brain to learn the statistical structure of any event or object. […] Statistical structure is learned in all sensory modalities simultaneously. For example, as the child learns about birds, the child will learn that light body weight, having feathers, having wings, having a beak, singing, and flying, all go together. Each bird that the child sees may be different, but each bird will share the features of flying, having feathers, having wings, and so on. […] The connections that form between the different brain cells that are activated by hearing, seeing, and feeling birds will be repeatedly strengthened for these shared features, thereby creating a multi-modal neural network for that particular concept. The development of this network will be dependent on everyday experiences, and the networks will be richer if the experiences are more varied. This principle of learning supports the use of multi-modal instruction and active experience in nursery and primary school. […] knowledge about concepts is distributed across the entire brain. It is not stored separately in a kind of conceptual ‘dictionary’ or distinct knowledge system. Multi-modal experiences strengthen learning across the whole brain. Accordingly, multisensory learning is the most effective kind of learning for young children.”

“Babies learn words most quickly when an adult both points to and names a new item.”

“…direct teaching of scientific reasoning skills helps children to reason logically independently of their pre-existing beliefs. This is more difficult than it sounds, as pre-existing beliefs exert strong effects. […] in many social situations we are advantaged if we reason on the basis of our pre-existing beliefs. This is one reason that stereotypes form”. [Do remember on a related note that stereotype accuracy is one of the largest and most replicable effects in all of social psychology – US].

“Some gestures have almost universal meaning, like waving goodbye. Babies begin using gestures like this quite early on. Between 10 and 18 months of age, gestures become frequent and are used extensively for communication. […] After around 18 months, the use of gesture starts declining, as vocalization becomes more and more dominant in communication. […] By [that time], most children are entering the two-word stage, when they become able to combine words. […] At this age, children often use a word that they know to refer to many different entities whose names are not yet known. They might use the word ‘bee’ for insects that are not bees, or the word ‘dog’ to refer to horses and cows. Experiments have shown that this is not a semantic confusion. Toddlers do not think that horses and cows are a type of dog. Rather, they have limited language capacities, and so they stretch their limited vocabularies to communicate as flexibly as possible. […] there is a lot of similarity across cultures at the two-word stage regarding which words are combined. Young children combine words to draw attention to objects (‘See doggie!’), to indicate ownership (‘My shoe’), to point out properties of objects (‘Big doggie’), to indicate plurality (‘Two cookie’), and to indicate recurrence (‘Other cookie’). […] It is only as children learn grammar that some divergence is found across languages. This is probably because different languages have different grammatical formats for combining words. […] grammatical learning emerges naturally from extensive language experience (of the utterances of others) and from language use (the novel utterances of the child, which are re-formulated by conversational partners if they are grammatically incorrect).”

“The social and communicative functions of language, and children’s understanding of them, are captured by pragmatics. […] pragmatic aspects of conversation include taking turns, and making sure that the other person has sufficient knowledge of the events being discussed to follow what you are saying. […] To learn about pragmatics, children need to go beyond the literal meaning of the words and make inferences about communicative intent. A conversation is successful when a child has recognized the type of social situation and applied the appropriate formula. […] Children with autism, who have difficulties with social cognition and in reading the mental states of others, find learning the pragmatics of conversation particularly difficult. […] Children with autism often show profound delays in social understanding and do not ‘get’ many social norms. These children may behave quite inappropriately in social settings […] Children with autism may also show very delayed understanding of emotions and of intentions. However, this does not make them anti-social, rather it makes them relatively ineffective at being pro-social.”

“When children have siblings, there are usually developmental advantages for social cognition and psychological understanding. […] Discussing the causes of disputes appears to be particularly important for developing social understanding. Young children need opportunities to ask questions, argue with explanations, and reflect on why other people behave in the way that they do. […] Families that do not talk about the intentions and emotions of others and that do not explicitly discuss social norms will create children with reduced social understanding.”

“[C]hildren, like adults, are more likely to act in pro-social ways to ingroup members. […] Social learning of cultural ‘ingroups’ appears to develop early in children as part of general socio-moral development. […] being loyal to one’s ‘ingroup’ is likely to make the child more popular with the other members of that group. Being in a group thus requires the development of knowledge about how to be loyal, about conforming to pressure and about showing ingroup bias. For example, children may need to make fine judgements about who is more popular within the group, so that they can favour friends who are more likely to be popular with the rest of the group. […] even children as young as 6 years will show more positive responding to the transgression of social rules by ingroup members compared to outgroup members, particularly if they have relatively well-developed understanding of emotions and intentions.”

“Good language skills improve memory, because children with better language skills are able to construct narratively coherent and extended, temporally organized representations of experienced events.”

“Once children begin reading, […] letter-sound knowledge and ‘phonemic awareness’ (the ability to divide words into the single sound elements represented by letters) become the most important predictors of reading development. […] phonemic awareness largely develops as a consequence of being taught to read and write. Research shows that illiterate adults do not have phonemic awareness. […] brain imaging shows that learning to read ‘re-maps’ phonology in the brain. We begin to hear words as sequences of ‘phonemes’ only after we learn to read.”

## Quotes

i. “…when a gift is deserved, it is not a gift but a payment.” (Gene Wolfe, The shadow of the torturer)

ii. “All the greatest blessings are a source of anxiety, and at no time is fortune less wisely trusted than when it is best […] everything that comes to us from chance is unstable, and the higher it rises, the more liable it is to fall.” (Seneca the Younger, On the shortness of life)

iii. “Debunking bad science should be a constant obligation of the science community, even if it takes time away from serious research or seems to be a losing battle.” (Martin Gardner)

iv. “Happy is he that grows wise by other men’s harms.” (James Howell)

v. “The deadliest foe to virtue would be complete self-knowledge.” (F. H. Bradley)

vi. “A good book is never exhausted. It goes on whispering to you from the wall.” (Anatole Broyard)

vii. “The great writers of aphorisms read as if they had all known each other very well.” (Elias Canetti)

viii. “The story of your youth must not turn into a catalog of what became important in your later life. It must also contain the dissipation, the failure, and the waste.” (-ll-)

ix. “You keep taking note of whatever confirms your ideas — better to write down what refutes and weakens them!” (-ll-)

x. “Windbags can be right. Aphorists can be wrong. It is a tough world.” (James Fenton)

xi. “Science should be distinguished from technique and its scientific instrumentation, technology. Science is practised by scientists, and techniques by ‘engineers’ — a term that in our terminology includes physicians, lawyers, and teachers. If for the scientist knowledge and cognition are primary, it is action and construction that characterises the work of the engineer, though in fact his activity may be based on science. In history, technique often preceded science.” (Hans Freudenthal)

xii. “There are some books which cannot be adequately reviewed for twenty or thirty years after they come out.” (John Morley)

xiii. “Success depends on three things: who says it, what he says, how he says it; and of these three things, what he says is the least important.” (-ll-)

xiv. “Every uneducated person is a caricature of himself.” (Karl Wilhelm Friedrich Schlegel)

xv. “It is surely one of the strangest of our propensities to mark out those we love best for the worst usage; yet we do, all of us. We can take any freedom with a friend; we stand on no ceremony with a friend.” (Samuel Laman Blanchard)

xvi. “Everybody’s word is worth Nobody’s taking.” (-ll-)

xvii. “Credulity lives next door to Gossip.” (-ll-)

xviii. “As success converts treason into legitimacy, so belief converts fiction into fact” (-ll-)

xix. “In academia much bogus knowledge is tolerated in the name of academic freedom – which is like allowing for the sale of contaminated food in the name of free enterprise. I submit that such tolerance is suicidal: that the serious students must be protected against the “anything goes” crowd.” (Mario Bunge)

xx. “At all times pseudoprofound aphorisms have been more popular than rigorous arguments.” (-ll-)

## Words

Almost all of the words below are words which I encountered while reading the novels Flashman and the Tiger, Flashman in the Great Game, Bellwether, and Storm Front.

## A few diabetes papers of interest

“We performed a systematic review to identify which genetic variants predict response to diabetes medications.

RESEARCH DESIGN AND METHODS We performed a search of electronic databases (PubMed, EMBASE, and Cochrane Database) and a manual search to identify original, longitudinal studies of the effect of diabetes medications on incident diabetes, HbA1c, fasting glucose, and postprandial glucose in prediabetes or type 2 diabetes by genetic variation.

RESULTS Of 7,279 citations, we included 34 articles (N = 10,407) evaluating metformin (n = 14), sulfonylureas (n = 4), repaglinide (n = 8), pioglitazone (n = 3), rosiglitazone (n = 4), and acarbose (n = 4). […] Significant medication–gene interactions for glycemic outcomes included 1) metformin and the SLC22A1, SLC22A2, SLC47A1, PRKAB2, PRKAA2, PRKAA1, and STK11 loci; 2) sulfonylureas and the CYP2C9 and TCF7L2 loci; 3) repaglinide and the KCNJ11, SLC30A8, NEUROD1/BETA2, UCP2, and PAX4 loci; 4) pioglitazone and the PPARG2 and PTPRD loci; 5) rosiglitazone and the KCNQ1 and RBP4 loci; and 6) acarbose and the PPARA, HNF4A, LIPC, and PPARGC1A loci. Data were insufficient for meta-analysis.

CONCLUSIONS We found evidence of pharmacogenetic interactions for metformin, sulfonylureas, repaglinide, thiazolidinediones, and acarbose consistent with their pharmacokinetics and pharmacodynamics.”

“In this systematic review, we identified 34 articles on the pharmacogenetics of diabetes medications, with several reporting statistically significant interactions between genetic variants and medications for glycemic outcomes. Most pharmacogenetic interactions were only evaluated in a single study, did not use a control group, and/or did not report enough information to judge internal validity. However, our results do suggest specific, biologically plausible, gene–medication interactions, and we recommend confirmation of the biologically plausible interactions as a priority, including those for drug transporters, metabolizers, and targets of action. […] Given the number of comparisons reported in the included studies and the lack of accounting for multiple comparisons in approximately 53% of studies, many of the reported findings may [however] be false positives.”
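The false-positive worry raised here can be made concrete with a small sketch (my own illustration, not taken from the review): with many unadjusted gene–medication tests at the conventional 0.05 threshold, weak p-values all pass, while a Bonferroni correction (dividing the threshold by the number of tests) lets only genuinely small p-values survive. The 20 hypothetical p-values below are invented for illustration.

```python
# Minimal sketch of why unadjusted multiple comparisons inflate
# false positives, using a Bonferroni correction for contrast.

def bonferroni_significant(p_values, alpha=0.05):
    """Flag p-values that remain significant after dividing the
    significance threshold by the number of tests performed."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# 20 hypothetical gene-drug interaction tests; only the first
# reflects a genuinely strong signal.
pvals = [0.001] + [0.04] * 19

print(sum(p < 0.05 for p in pvals))        # naive count: all 20 pass
print(sum(bonferroni_significant(pvals)))  # corrected: only 1 survives
```

With 20 tests the corrected threshold is 0.05/20 = 0.0025, so only the p = 0.001 result stands, which is the review authors’ point about studies that did not adjust.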

“This issue of Diabetes Care includes three economic analyses. The first describes the incremental costs of diabetes over a lifetime and highlights how interventions to prevent diabetes may reduce lifetime costs (1). The second demonstrates that although an expensive, intensive lifestyle intervention for type 2 diabetes does not reduce adverse cardiovascular outcomes over 10 years, it significantly reduces the costs of non-intervention−related medical care (2). The third demonstrates that although the use of the International Association of the Diabetes and Pregnancy Study Groups (IADPSG) criteria for the screening and diagnosis of gestational diabetes mellitus (GDM) results in a threefold increase in the number of people labeled as having GDM, it reduces the risk of maternal and neonatal adverse health outcomes and reduces costs (3). The first report highlights the enormous potential value of intervening in adults at high risk for type 2 diabetes to prevent its development. The second illustrates the importance of measuring economic outcomes in addition to standard clinical outcomes to fully assess the value of new treatments. The third demonstrates the importance of rigorously weighing the costs of screening and treatment against the costs of health outcomes when evaluating new approaches to care.”

“The costs of diabetes monitoring and treatment accrue as a function of the duration of diabetes, so adults who are younger at diagnosis are more likely to survive to develop the late, expensive complications of diabetes, thus they incur higher lifetime costs attributable to diabetes. Zhuo et al. report that people with diabetes diagnosed at age 40 spend approximately \$125,000 more for medical care over their lifetimes than people without diabetes. For people diagnosed with diabetes at age 50, the discounted lifetime excess medical spending is approximately \$91,000; for those diagnosed at age 60, it is approximately \$54,000; and for those diagnosed at age 65, it is approximately \$36,000 (1).

These results are very consistent with results reported by the Diabetes Prevention Program (DPP) Research Group, which assessed the cost-effectiveness of diabetes prevention. […] In the simulated lifetime economic analysis [included in that study] the lifestyle intervention was more cost-effective in younger participants than in older participants (5). By delaying the onset of type 2 diabetes, the lifestyle intervention delayed or prevented the need for diabetes monitoring and treatment, surveillance of diabetic microvascular and neuropathic complications, and treatment of the late, expensive complications and comorbidities of diabetes, including end-stage renal disease and cardiovascular disease (5). Although this finding was controversial at the end of the randomized, controlled clinical trial, all but 1 of 12 economic analyses published by 10 research groups in nine countries have demonstrated that lifestyle intervention for the prevention of type 2 diabetes is very cost-effective, if not cost-saving, compared with a placebo intervention (6).

Empiric, within-trial economic analyses of the DPP have now demonstrated that the incremental costs of the lifestyle intervention are almost entirely offset by reductions in the costs of medical care outside the study, especially the cost of self-monitoring supplies, prescription medications, and outpatient and inpatient care (7). Over 10 years, the DPP intensive lifestyle intervention cost only ∼\$13,000 per quality-adjusted life-year gained when the analysis used an intent-to-treat approach (7) and was even more cost-effective when the analysis assessed outcomes and costs among adherent participants (8).”
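The “\$13,000 per quality-adjusted life-year gained” figure is an incremental cost-effectiveness ratio (ICER): the extra cost of the intervention divided by the extra QALYs it produces. A toy sketch of the calculation, with made-up inputs (not the DPP’s actual cost and QALY data):

```python
# Hypothetical ICER sketch; the numbers below are invented for
# illustration, not taken from the DPP analyses cited above.

def icer(cost_intervention, cost_comparator, qaly_intervention, qaly_comparator):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_intervention - cost_comparator) / (qaly_intervention - qaly_comparator)

# E.g. an intervention costing $2,600 more that yields 0.2 extra QALYs
print(icer(12600, 10000, 7.2, 7.0))  # ≈ 13,000 per QALY gained
```

The within-trial offsets described above work by shrinking the numerator: if the intervention avoids enough downstream medication and inpatient costs, the incremental cost falls and the ICER falls with it.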

“The American Diabetes Association has reported that although institutional care (hospital, nursing home, and hospice care) still accounts for 52% of annual per capita health care expenditures for people with diabetes, outpatient medications and supplies now account for 30% of expenditures (9). Between 2007 and 2012, annual per capita expenditures for inpatient care increased by 2%, while expenditures for medications and supplies increased by 51% (9). As the costs of diabetes medications and supplies continue to increase, it will be even more important to consider cost savings arising from the less frequent use of medications when evaluating the benefits of nonpharmacologic interventions.”

iii. The Lifetime Cost of Diabetes and Its Implications for Diabetes Prevention. (This is the Zhuo et al. paper mentioned above.)

“We aggregated annual medical expenditures from the age of diabetes diagnosis to death to determine lifetime medical expenditure. Annual medical expenditures were estimated by sex, age at diagnosis, and diabetes duration using data from 2006–2009 Medical Expenditure Panel Surveys, which were linked to data from 2005–2008 National Health Interview Surveys. We combined survival data from published studies with the estimated annual expenditures to calculate lifetime spending. We then compared lifetime spending for people with diabetes with that for those without diabetes. Future spending was discounted at 3% annually. […] The discounted excess lifetime medical spending for people with diabetes was \$124,600 (\$211,400 if not discounted), \$91,200 (\$135,600), \$53,800 (\$70,200), and \$35,900 (\$43,900) when diagnosed with diabetes at ages 40, 50, 60, and 65 years, respectively. Younger age at diagnosis and female sex were associated with higher levels of lifetime excess medical spending attributed to diabetes.

CONCLUSIONS Having diabetes is associated with substantially higher lifetime medical expenditures despite being associated with reduced life expectancy. If prevention costs can be kept sufficiently low, diabetes prevention may lead to a reduction in long-term medical costs.”
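The 3% annual discounting in the methods can be sketched in a few lines: each future year’s excess spending is divided by (1.03)^t before summing, which is why the discounted lifetime figures above are so much smaller than the undiscounted ones. The flat \$3,700/year stream below is an invented illustration (the paper’s actual expenditure profile varies with age and survival), not its data:

```python
# Sketch of discounting a stream of annual excess medical costs
# back to the year of diagnosis at 3% per year. Illustrative only.

def discounted_lifetime_cost(annual_excess, rate=0.03):
    """Sum annual excess costs, discounting year t by (1 + rate)^t."""
    return sum(cost / (1 + rate) ** t
               for t, cost in enumerate(annual_excess))

# Hypothetical: a flat $3,700/year excess over 34 years of disease
flat = [3700.0] * 34
print(round(discounted_lifetime_cost(flat)))  # discounted total
print(round(sum(flat)))                       # undiscounted total
```

The gap between the two totals mirrors the paper’s paired figures (e.g. \$124,600 discounted vs. \$211,400 undiscounted for diagnosis at age 40): the longer the horizon, the more discounting compresses the total.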

The selection criteria employed in this paper are not perfect; they excluded all individuals below the age of 30 “because they likely had type 1 diabetes”, which although true is only ‘mostly true’. Some of those individuals had(/have) type 2, but if you’re evaluating prevention schemes it probably makes sense to err on the side of caution (better to miss some type 2 patients than to include some type 1s), assuming the timing of the intervention is not too important. This gets more complicated if prevention schemes are more likely to have large and persistent effects in young people. However, I don’t think that’s the case; as a counterpoint, drug adherence studies often seem to find that young people are less motivated to adhere to their treatment schedules than their older counterparts (who might have more advanced disease and so are more likely to achieve symptomatic relief by adhering to treatments).

A few more observations from the paper:

“The prevalence of participants with diabetes in the study population was 7.4%, of whom 54% were diagnosed between the ages of 45 and 64 years. The mean age at diagnosis was 55 years, and the mean length of time since diagnosis was 9.4 years (39% of participants with diabetes had been diagnosed for ≤5 years, 32% for 6–15 years, and 27% for ≥16 years). […] The observed annual medical spending for people with diabetes was \$13,966—more than twice that for people without diabetes.”

“Regardless of diabetes status, the survival-adjusted annual medical spending decreased after age 60 years, primarily because of a decreasing probability of survival. Because the probability of survival decreased more rapidly in people with diabetes than in those without, corresponding spending declined as people died and no longer accrued medical costs. For example, among men diagnosed with diabetes at age 40 years, 34% were expected to survive to age 80 years; among men of the same age who never developed diabetes, 55% were expected to survive to age 80 years. The expected annual expenditure for a person diagnosed with diabetes at age 40 years declined from \$8,500 per year at age 40 years to \$3,400 at age 80 years, whereas the expenses for a comparable person without diabetes declined from \$3,900 to \$3,200 over that same interval. […] People diagnosed with diabetes at age 40 years lived with the disease for an average of 34 years after diagnosis. Those diagnosed when older lived fewer years and, therefore, lost fewer years of life. […] The annual excess medical spending attributed to diabetes […] was smaller among people who were diagnosed at older ages. For men diagnosed at age 40 years, annual medical spending was \$3,700 higher than that of similar men without diabetes; spending was \$2,900 higher for those diagnosed at age 50 years; \$2,200 higher for those diagnosed at age 60 years; and \$2,000 higher for those diagnosed at age 65 years. Among women diagnosed with diabetes, the excess annual medical spending was consistently higher than for men of the same age at diagnosis.”

“Regardless of age at diagnosis, people with diabetes spent considerably more on health care after age 65 years than their nondiabetic counterparts. Health care spending attributed to diabetes after age 65 years ranged from \$23,900 to \$40,900, depending on sex and age at diagnosis. […] Of the total excess lifetime medical spending among an average diabetic patient diagnosed at age 50 years, prescription medications and inpatient care accounted for 44% and 35% of costs, respectively. Outpatient care and other medical care accounted for 17% and 4% of costs, respectively.”

“Our findings differed from those of studies of the lifetime costs of other chronic conditions. For instance, smokers have a lower average lifetime medical cost than nonsmokers (29) because of their shorter life spans. Smokers have a life expectancy about 10 years less than those who do not smoke (30); life expectancy is 16 years less for those who develop smoking-induced cancers (31). As a result, smoking cessation leads to increased lifetime spending (32). Studies of the lifetime costs for an obese person relative to a person with normal body weight show mixed results: estimated excess lifetime medical costs for people with obesity range from \$3,790 less to \$39,000 more than costs for those who are nonobese (33,34). […] obesity, when considered alone, results in much lower annual excess medical costs than diabetes (–\$940 to \$1,150 for obesity vs. \$2,000 to \$4,700 for diabetes) when compared with costs for people who are nonobese (33,34).”

“This study examines factors associated with all-cause mortality after cardiovascular complications (myocardial infarction [MI] and stroke) in patients with type 1 diabetes. In particular, we aim to determine whether a previous history of severe hypoglycemia is associated with increased mortality after a cardiovascular event in type 1 diabetic patients.

Hypoglycemia is the most common and dangerous acute complication of type 1 diabetes and can be life threatening if not promptly treated (1). The average individual with type 1 diabetes experiences about two episodes of symptomatic hypoglycemia per week, with an annual prevalence of 30–40% for hypoglycemic episodes requiring assistance for recovery (2). We define severe hypoglycemia to be an episode of hypoglycemia that requires hospitalization in this study. […] Patients with type 1 diabetes are more susceptible to hypoglycemia than those with type 2 diabetes, and therefore it is potentially of greater relevance if severe hypoglycemia is associated with mortality (6).”

“This study uses a large linked data set comprising health records from the Swedish National Diabetes Register (NDR), which were linked to administrative records on hospitalization, prescriptions, and national death records. […] [The] study is based on data from four sources: 1) risk factor data from the Swedish NDR […], 2) hospital records of inpatient episodes from the National Inpatients Register (IPR) […], 3) death records […], and 4) prescription data records […]. A study comparing registered diagnoses in the IPR with information in medical records found positive predictive values of IPR diagnoses were 85–95% for most diagnoses (8). In terms of NDR coverage, a recent study found that 91% of those aged 18–34 years and with type 1 diabetes in the Prescribed Drug Register could be matched with those in the NDR for 2007–2009 (9).”

“The outcome of the study was all-cause mortality after a major cardiovascular complication (MI or stroke). Our sample for analysis included patients with type 1 diabetes who visited a clinic after 2002 and experienced a major cardiovascular complication after this clinic visit. […] We define type 1 diabetes as diabetes diagnosed under the age of 30 years, being reported as being treated with insulin only at some clinic visit, and when alive, having had at least one prescription for insulin filled per year between 2006 and 2010 […], and not having filled a prescription for metformin at any point between July 2005 and December 2010 (under the assumption that metformin users were more likely to be type 2 diabetes patients).”

“Explanatory variables included in both models were type of complication (MI or stroke), age at complication, duration of diabetes, sex, smoking status, HbA1c, BMI, systolic blood pressure, diastolic blood pressure, chronic kidney disease status based on estimated glomerular filtration rate, microalbuminuria and macroalbuminuria status, HDL, LDL, total–to–HDL cholesterol ratio, triglycerides, lipid medication status, clinic visits within the year prior to the CVD event, and prior hospitalization events: hypoglycemia, hyperglycemia, MI, stroke, heart failure, AF, amputation, PVD, ESRD, IHD/unstable angina, PCI, and CABG. The last known value for each clinical risk factor, prior to the cardiovascular complication, was used for analysis. […] Initially, all explanatory variables were included and excluded if the variable was not statistically significant at a 5% level (P < 0.05) via stepwise backward elimination.” [Aaaaaaargh! – US. These guys are doing a lot of things right, but this is not one of them. Just to mention this one more time: “Generally, hypothesis testing is a very poor basis for model selection […] There is no statistical theory that supports the notion that hypothesis testing with a fixed α level is a basis for model selection.” (Burnham & Anderson)]
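For readers unfamiliar with the alternative Burnham & Anderson have in mind: rather than dropping terms one at a time on p-values, one specifies a small set of candidate models up front and compares them by an information criterion such as AIC. A toy sketch with made-up log-likelihoods and model names (not the paper's data):

```python
# Minimal sketch of information-theoretic model comparison (AIC), the
# approach Burnham & Anderson advocate over stepwise p-value elimination.
# All model names and log-likelihoods below are hypothetical.

def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2*ln(L). Lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical candidate models: (maximized log-likelihood, parameter count).
candidates = {
    "full model":        (-480.2, 25),
    "clinical subset":   (-484.9, 12),
    "hypoglycemia only": (-510.7, 3),
}

ranked = sorted(candidates.items(), key=lambda kv: aic(*kv[1]))
best_aic = aic(*ranked[0][1])
for name, (ll, k) in ranked:
    # Delta-AIC relative to the best candidate summarizes relative support.
    print(f"{name}: AIC={aic(ll, k):.1f}, dAIC={aic(ll, k) - best_aic:.1f}")
```

Note that the "clinical subset" wins here despite its lower log-likelihood, because AIC penalizes the extra parameters of the full model — the trade-off that a fixed-α testing scheme does not formalize.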

“Patients who had prior hypoglycemic events had an estimated HR for mortality of 1.79 (95% CI 1.37–2.35) in the first 28 days after a CVD event and an estimated HR of 1.25 (95% CI 1.02–1.53) of mortality after 28 days post CVD event in the backward regression model. The univariate analysis showed a similar result compared with the backward regression model, with prior hypoglycemic events having an estimated HR for mortality of 1.79 (95% CI 1.38–2.32) and 1.35 (95% CI 1.11–1.65) in the logistic and Cox regressions, respectively. Even when all explanatory factors were included in the models […], the mortality increase associated with a prior severe hypoglycemic event was still significant, and the P values and SE are similar when compared with the backward stepwise regression. Similarly, when explanatory factors were included individually, the mortality increase associated with a prior severe hypoglycemic event was also still significant.” [Again, this sort of testing scheme is probably not a good approach to getting at a good explanatory model, but it’s what they did – US]

“The 5-year cumulative estimated mortality risks for those without complications after MI and stroke were 40.1% (95% CI 35.2–45.1) and 30.4% (95% CI 26.3–34.6), respectively. Patients with prior heart failure were at the highest estimated 5-year cumulative mortality risk, with those who suffered an MI and stroke having a 56.0% (95% CI 47.5–64.5) and 44.0% (95% CI 35.8–52.2) 5-year cumulative mortality risk, respectively. Patients who had a prior severe hypoglycemic event and suffered an MI had an estimated 5-year cumulative mortality risk at age 60 years of 52.4% (95% CI 45.3–59.5), and those who suffered a stroke had a 5-year cumulative mortality risk of 39.8% (95% CI 33.4–46.3). Patients at age 60 years who suffer a major CVD complication have over twofold risk of 5-year mortality compared with the general type 1 diabetic Swedish population, who had an estimated 5-year mortality risk of 13.8% (95% CI 12.0–16.1).”
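A rough way to see how a hazard ratio of this magnitude maps onto cumulative risk: under proportional hazards, survival scales as S(t)^HR. This back-of-envelope version ignores the covariate adjustment in the paper's models, so it will not reproduce the adjusted figures exactly; it only shows the mechanics:

```python
# Back-of-envelope sketch: under proportional hazards, a hazard ratio HR
# transforms baseline survival S(t) into S(t)**HR. Unadjusted illustration only.

def risk_under_hr(baseline_risk, hazard_ratio):
    """Cumulative risk implied by applying an HR to a baseline cumulative risk."""
    baseline_survival = 1.0 - baseline_risk
    return 1.0 - baseline_survival ** hazard_ratio

# Baseline 5-year mortality after MI with no complications: 40.1%;
# HR 1.25 is the paper's post-28-day estimate for prior severe hypoglycemia.
print(f"{risk_under_hr(0.401, 1.25):.1%}")  # prints 47.3%
```

The naive figure lands in the same neighbourhood as the paper's adjusted 52.4%, which is about what one would expect given the additional covariates in the fitted model.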

“We found evidence that prior severe hypoglycemia is associated with reduced survival after a major CVD event but no evidence that prior severe hypoglycemia is associated with an increased risk of a subsequent CVD event.

Compared with the general type 1 diabetic Swedish population, a major CVD complication increased 5-year mortality risk at age 60 years by >25% and 15% in patients with an MI and stroke, respectively. Patients with a history of a hypoglycemic event had an even higher mortality after a major CVD event, with approximately an additional 10% being dead at the 5-year mark. This risk was comparable with that in those with late-stage kidney disease. This information is useful in determining the prognosis of patients after a major cardiovascular event and highlights the need to include this as a risk factor in simulation models (18) that are used to improve decision making (19).”

“This is the first study that has found some evidence of a dose-response relationship, where patients who experienced two or more severe hypoglycemic events had higher mortality after a cardiovascular event compared with those who experienced one severe hypoglycemic event. A lack of statistical power prevented us from investigating this further when we tried to stratify by number of prior severe hypoglycemic events in our regression models. There was no evidence of a dose-response relationship between repeated episodes of severe hypoglycemia and vascular outcomes or death in previous type 2 diabetes studies (5).”

“Careful regulation of insulin dosing, dietary intake, and activity levels is essential for optimal glycemic control in individuals with type 1 diabetes. However, even with optimal treatment many children with type 1 diabetes have blood glucose levels in the hyperglycemic range for more than half the day and in the hypoglycemic range for an hour or more each day (1). Brain cells may be especially sensitive to aberrant blood glucose levels, as glucose is the brain’s principal substrate for its energy needs.

Research in animal models has shown that white matter (WM) may be especially sensitive to dysglycemia-associated insult in diabetes (2–4). […] Early childhood is a period of rapid myelination and brain development (6) and of increased sensitivity to insults affecting the brain (6,7). Hence, study of the developing brain is particularly important in type 1 diabetes.”

“WM structure can be measured with diffusion tensor imaging (DTI), a method based on magnetic resonance imaging (MRI) that uses the movement of water molecules to characterize WM brain structure (8,9). Results are commonly reported in terms of mathematical scalars (representing vectors in vector space) such as fractional anisotropy (FA), axial diffusivity (AD), and radial diffusivity (RD). FA reflects the degree of diffusion anisotropy of water (how diffusion varies along the three axes) within a voxel (three-dimensional pixel) and is determined by fiber diameter and density, myelination, and intravoxel fiber-tract coherence (increases in which would increase FA), as well as extracellular diffusion and interaxonal spacing (increases in which would decrease FA) (10). AD, a measure of water diffusivity along the main axis of diffusion within a voxel, is thought to reflect fiber coherence and structure of axonal membranes (increases in which would increase AD), as well as microtubules, neurofilaments, and axonal branching (increases in which would decrease AD) (11,12). RD, the mean of the diffusivities perpendicular to the vector with the largest eigenvalue, is thought to represent degree of myelination (13,14) (more myelin would decrease RD values) and axonal “leakiness” (which would increase RD). Often, however, a combination of these WM characteristics results in opposing contributions to the final observed FA/AD/RD value, and thus DTI scalars should not be interpreted globally as “good” or “bad” (15). Rather, these scalars can show between-group differences and relationships between WM structure and clinical variables and are suggestive of underlying histology. Definitive conclusions about histology of WM can only be derived from direct microscopic examination of biological tissue.”
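The scalar definitions in the paragraph above can be written down directly from the three eigenvalues of the diffusion tensor in each voxel. A sketch with illustrative eigenvalues (in units of 10⁻³ mm²/s; the values are not from the paper):

```python
import math

def dti_scalars(l1, l2, l3):
    """Return (FA, AD, RD) from sorted tensor eigenvalues l1 >= l2 >= l3."""
    md = (l1 + l2 + l3) / 3.0          # mean diffusivity
    ad = l1                            # axial diffusivity: diffusion along the main axis
    rd = (l2 + l3) / 2.0               # radial diffusivity: mean of the two perpendicular axes
    num = math.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    fa = math.sqrt(1.5) * num / den    # fractional anisotropy, 0 (isotropic) to 1
    return fa, ad, rd

# Illustrative eigenvalues for a coherent white-matter voxel:
fa, ad, rd = dti_scalars(1.7, 0.4, 0.3)
print(f"FA={fa:.2f}, AD={ad}, RD={rd:.2f}")  # FA=0.76, AD=1.7, RD=0.35
```

For perfectly isotropic diffusion (all eigenvalues equal) FA is 0, which is why FA is read as a summary of how directionally constrained the water diffusion is — with all the interpretive caveats the quoted passage lists.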

“Children (ages 4 to <10 years) with type 1 diabetes (n = 127) and age-matched nondiabetic control subjects (n = 67) had diffusion weighted magnetic resonance imaging scans in this multisite neuroimaging study. Participants with type 1 diabetes were assessed for HbA1c history and lifetime adverse events; glucose levels were monitored using a continuous glucose monitor (CGM) device; and cognition was assessed with standardized measures.

RESULTS: Between-group analysis showed that children with type 1 diabetes had significantly reduced axial diffusivity (AD) in widespread brain regions compared with control subjects. Within the type 1 diabetes group, earlier onset of diabetes was associated with increased radial diffusivity (RD) and longer duration was associated with reduced AD, reduced RD, and increased fractional anisotropy (FA). In addition, HbA1c values were significantly negatively associated with FA values and were positively associated with RD values in widespread brain regions. Significant associations of AD, RD, and FA were found for CGM measures of hyperglycemia and glucose variability but not for hypoglycemia. Finally, we observed a significant association between WM structure and cognitive ability in children with type 1 diabetes but not in control subjects. […] These results suggest vulnerability of the developing brain in young children to effects of type 1 diabetes associated with chronic hyperglycemia and glucose variability.”

“The profile of reduced overall AD in type 1 diabetes observed here suggests possible axonal damage associated with diabetes (30). Reduced AD was associated with duration of type 1 diabetes suggesting that longer exposure to diabetes worsens the insult to WM structure. However, measures of hyperglycemia and glucose variability were either not associated or were positively associated with AD values, suggesting that these measures did not contribute to the observed decreased AD in the type 1 diabetes group. A possible explanation for these observations is that several biological processes influence WM structure in type 1 diabetes. Some processes may be related to insulin insufficiency or C-peptide levels independent of glucose levels (31,32) and may affect WM coherence (and reduce AD values as observed in the between-group results). Other processes related to hyperglycemia and glucose variability may target myelin (resulting in reduced FA and increased RD) as well as reduced axonal branching (both would result in increased AD values). Alternatively, these seemingly conflicting AD observations may be due to a dominant effect of age, which could overshadow effects from dysglycemia.

Early age of onset is one of the most replicable risk factors for cognitive impairments in type 1 diabetes (33,34). It has been hypothesized that young children are especially vulnerable to brain insults resulting from episodes of chronic hyperglycemia, hypoglycemia, and acute hypoglycemic complications of type 1 diabetes (seizures and severe hypoglycemic episodes). In addition, fear of hypoglycemia often results in caregivers maintaining relatively higher blood glucose to avoid lows altogether (1), especially in very young children. However, our study suggests that this approach of aggressive hypoglycemia avoidance resulting in hyperglycemia may not be optimal and may be detrimental to WM structure in young children.

Neuronal damage (reflected in altered WM structure) may affect neuronal signal transfer and, thus, cognition (35). Cognitive domains commonly reported to be affected in children with type 1 diabetes include general intellectual ability, visuospatial abilities, attention, memory, processing speed, and executive function (36–38). In our sample, even though the duration of illness was relatively short (2.9 years on average), there were modest but significant cognitive differences between children with type 1 diabetes and control subjects (24).”

“In summary, we present results from the largest study to date investigating WM structure in very young children with type 1 diabetes. We observed significant and widespread brain differences in the WM microstructure of children with type 1 diabetes compared with nondiabetic control subjects and significant associations between WM structure and measures of hyperglycemia, glucose variability, and cognitive ability in the type 1 diabetic population.”

“Polyneuropathy is a common complication in diabetes. The prevalence of neuropathy in patients with diabetes is ∼30%. During the course of the disease, up to 50% of the patients will eventually develop neuropathy (1). Its clinical features are characterized by numbness, tingling, or burning sensations and typically extend in a distinct stocking and glove pattern. Prevention plays a key role since poor glucose control is a major risk factor in the development of diabetic polyneuropathy (DPN) (1,2).

There is no clear definition for the onset of painful diabetic neuropathy. Different hypotheses have been formulated.

Hyperglycemia in diabetes can lead to osmotic swelling of the nerves, related to increased glucose conversion into sorbitol by the enzyme aldose reductase (2,3). High sorbitol concentrations might also directly cause axonal degeneration and demyelination (2). Furthermore, stiffening and thickening of ligamental structures and the plantar fascia make underlying structures more prone to biomechanical compression (4–6). A thicker and stiffer retinaculum might restrict movements and lead to alterations of the nerve in the tarsal tunnel.

Both swelling of the nerve and changes in the tarsal tunnel might lead to nerve damage through compression.

Furthermore, vascular changes may diminish endoneural blood flow and oxygen distribution. Decreased blood supply in the (compressed) nerve might lead to ischemic damage as well as impaired nerve regeneration.

“Several studies suggest that surgical decompression of nerves at narrow anatomic sites, e.g., the tarsal tunnel, is beneficial and has a positive effect on pain, sensitivity, balance, long-term risk of ulcers and amputations, and quality of life (3,7–10). Since the effect of decompression of the tibial nerve in patients with DPN has not been proven with a randomized clinical trial, its contribution as treatment for patients with painful DPN is still controversial. […] In this study, we compare the mean CSA and any changes in shape of the tibial nerve before and after decompression of the tarsal tunnel using ultrasound in order to test the hypothesis that the tarsal tunnel leads to compression of the tibial nerve in patients with DPN.”

“This study, with a large sample size and standardized sonographic imaging procedure with a good reliability, is the first randomized controlled trial that evaluates the effect of decompression of the tibial nerve on the CSA. Although no effect on CSA after surgery was found, this study using ultrasound demonstrates a larger and swollen tibial nerve and thicker flexor retinaculum at the ankle in patients with DPN compared with healthy control subjects.”

I would have been interested to know if there were any observable changes in symptom relief measures post-surgery, even if such variables are less ‘objective’ than measures like CSA (less objective, but perhaps more relevant to the patient…), but the authors did not look at those kinds of variables.

“Nonalcoholic fatty liver disease (NAFLD) has reached epidemic proportions worldwide (1). Up to 30% of adults in the U.S. and Europe have NAFLD, and the prevalence of this disease is much higher in people with diabetes (1,2). Indeed, the prevalence of NAFLD on ultrasonography ranges from ∼50 to 70% in patients with type 2 diabetes (3–5) and ∼40 to 50% in patients with type 1 diabetes (6,7). Notably, patients with diabetes and NAFLD are also more likely to develop more advanced forms of NAFLD that may result in end-stage liver disease (8). However, accumulating evidence indicates that NAFLD is associated not only with liver-related morbidity and mortality but also with an increased risk of developing cardiovascular disease (CVD) and other serious extrahepatic complications (8–10).”

“Increasing evidence indicates that NAFLD is strongly associated with an increased risk of CKD [chronic kidney disease, US] in people with and without diabetes (11). Indeed, we have previously shown that NAFLD is associated with an increased prevalence of CKD in patients with both type 1 and type 2 diabetes (15–17), and that NAFLD independently predicts the development of incident CKD in patients with type 2 diabetes (18). However, many of the risk factors for CKD are different in patients with type 1 and type 2 diabetes, and to date, it is uncertain whether NAFLD is an independent risk factor for incident CKD in type 1 diabetes or whether measurement of NAFLD improves risk prediction for CKD, taking account of traditional risk factors for CKD.

Therefore, the aim of the current study was to investigate 1) whether NAFLD is associated with an increased incidence of CKD and 2) whether measurement of NAFLD improves risk prediction for CKD, adjusting for traditional risk factors, in type 1 diabetic patients.”

“Using a retrospective, longitudinal cohort study design, we have initially identified from our electronic database all Caucasian type 1 diabetic outpatients with preserved kidney function (i.e., estimated glomerular filtration rate [eGFR] ≥60 mL/min/1.73 m2) and with no macroalbuminuria (n = 563), who regularly attended our adult diabetes clinic between 1999 and 2001. Type 1 diabetes was diagnosed by the typical presentation of disease, the absolute dependence on insulin treatment for survival, the presence of undetectable fasting C-peptide concentrations, and the presence of anti–islet cell autoantibodies. […] Overall, 261 type 1 diabetic outpatients were included in the final analysis and were tested for the development of incident CKD during the follow-up period […] All participants were periodically seen (every 3–6 months) for routine medical examinations of glycemic control and chronic complications of diabetes. No participants were lost to follow-up. […] For this study, the development of incident CKD was defined as occurrence of eGFR <60 mL/min/1.73 m2 and/or macroalbuminuria (21). Both of these outcome measures were confirmed in all participants on at least two consecutive occasions (within 3–6 months after the first examination).”

“At baseline, the mean eGFRMDRD was 92 ± 23 mL/min/1.73 m2 (median 87.9 [IQR 74–104]), or eGFREPI was 98.6 ± 19 mL/min/1.73 m2 (median 99.7 [84–112]). Most patients (n = 234; 89.7%) had normal albuminuria, whereas 27 patients (10.3%) had microalbuminuria. NAFLD was present in 131 patients (50.2%). […] At baseline, patients who developed CKD at follow-up were older, more likely to be female and obese, and had a longer duration of diabetes than those who did not. These patients also had higher values of systolic blood pressure, A1C, triglycerides, serum GGT, and urinary ACR and lower values of eGFRMDRD and eGFREPI. Moreover, there was a higher percentage of patients with hypertension, metabolic syndrome, microalbuminuria, and some degree of diabetic retinopathy in patients who developed CKD at follow-up compared with those remaining free from CKD. The proportion using antihypertensive drugs (that always included the use of ACE inhibitors or angiotensin receptor blockers) was higher in those who progressed to CKD. Notably, […] this patient group also had a substantially higher frequency of NAFLD on ultrasonography.”
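The paper does not print the estimating equations, but the eGFRMDRD figures quoted are presumably from the standard 4-variable MDRD study equation (the IDMS-traceable form with the 175 coefficient, serum creatinine in mg/dL); a hedged sketch, from the published formula rather than from the paper itself:

```python
# Hedged sketch of the 4-variable MDRD study equation for estimated GFR.
# IDMS-traceable coefficient 175; serum creatinine (scr_mg_dl) in mg/dL.
# This is the standard published form, not reproduced from the paper.

def egfr_mdrd(scr_mg_dl, age_years, female=False, black=False):
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr  # mL/min/1.73 m^2

# Illustrative patient: creatinine 1.0 mg/dL, age 50, male, non-black.
print(f"{egfr_mdrd(1.0, 50):.0f} mL/min/1.73 m^2")
```

The CKD-EPI equation (eGFREPI in the quote) is a piecewise refinement of the same idea and tends to give somewhat higher estimates at normal kidney function, consistent with the baseline means quoted above (92 vs. 98.6).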

“During follow-up (mean duration 5.2 ± 1.7 years, range 2–10), 61 patients developed CKD using the MDRD study equation to estimate eGFR (i.e., ∼4.5% of participants progressed every year to eGFR <60 mL/min/1.73 m2 or macroalbuminuria). Of these, 28 developed an eGFRMDRD <60 mL/min/1.73 m2 with abnormal albuminuria (micro- or macroalbuminuria), 21 developed a reduced eGFRMDRD with normal albuminuria (but 9 of them had some degree of diabetic retinopathy at baseline), and 12 developed macroalbuminuria alone. None of them developed kidney failure requiring chronic dialysis. […] The annual eGFRMDRD decline for the whole cohort was 2.68 ± 3.5 mL/min/1.73 m2 per year. […] NAFLD patients had a greater annual decline in eGFRMDRD than those without NAFLD at baseline (3.28 ± 3.8 vs. 2.10 ± 3.0 mL/min/1.73 m2 per year, P < 0.005). Similarly, the frequency of a renal functional decline (arbitrarily defined as ≥25% loss of baseline eGFRMDRD) was greater among those with NAFLD than among those without the disease (26 vs. 11%, P = 0.005). […] Interestingly, BMI was not significantly associated with CKD.”
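The quoted "∼4.5% of participants progressed every year" is easy to check against the raw counts (61 incident CKD cases among 261 patients over a mean 5.2 years of follow-up):

```python
# Quick arithmetic check of the quoted annual progression rate:
# cumulative incidence divided by mean follow-up time.

cases, n, mean_followup_years = 61, 261, 5.2
annual_rate = cases / n / mean_followup_years
print(f"{annual_rate:.1%}")  # prints 4.5%
```

This simple cases-per-person-per-year figure matches the paper's stated rate; a formal incidence rate would use person-years at risk, but with no loss to follow-up the two are close.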

“Our novel findings indicate that NAFLD is strongly associated with an increased incidence of CKD during a mean follow-up of 5 years and that measurement of NAFLD improves risk prediction for CKD, independently of traditional risk factors (age, sex, diabetes duration, A1C, hypertension, baseline eGFR, and microalbuminuria [i.e., the last two factors being the strongest known risk factors for CKD]), in type 1 diabetic adults. Additionally, although NAFLD was strongly associated with obesity, obesity (or increased BMI) did not explain the association between NAFLD and CKD. […] The annual cumulative incidence rate of CKD in our cohort of patients (i.e., ∼4.5% per year) was essentially comparable to that previously described in other European populations with type 1 diabetes and similar baseline characteristics (∼2.5–9% of patients who progressed every year to CKD) (25,26). In line with previously published information (25–28), we also found that hypertension, microalbuminuria, and lower eGFR at baseline were strong predictors of incident CKD in type 1 diabetic patients.”

“There is a pressing and unmet need to determine whether NAFLD is associated with a higher risk of CKD in people with type 1 diabetes. It has only recently been recognized that NAFLD represents an important burden of disease for type 2 diabetic patients (11,17,18), but the magnitude of the problem of NAFLD and its association with risk of CKD in type 1 diabetes is presently poorly recognized. Although there is clear evidence that NAFLD is closely associated with a higher prevalence of CKD both in those without diabetes (11) and in those with type 1 and type 2 diabetes (15–17), only four prospective studies have examined the association between NAFLD and risk of incident CKD (18,29–31), and only one of these studies was published in patients with type 2 diabetes (18). […] The underlying mechanisms responsible for the observed association between NAFLD and CKD are not well understood. […] The possible clinical implication for these findings is that type 1 diabetic patients with NAFLD may benefit from more intensive surveillance or early treatment interventions to decrease the risk for CKD. Currently, there is no approved treatment for NAFLD. However, NAFLD and CKD share numerous cardiometabolic risk factors, and treatment strategies for NAFLD and CKD should be similar and aimed primarily at modifying the associated cardiometabolic risk factors.”

October 25, 2017