Occupational Epidemiology (II)

Some more observations from the book below.

“RD [Retinal detachment] is the separation of the neurosensory retina from the underlying retinal pigment epithelium.1 RD is often preceded by posterior vitreous detachment — the separation of the posterior vitreous from the retina as a result of vitreous degeneration and shrinkage2 — which gives rise to the sudden appearance of floaters and flashes. Late symptoms of RD may include visual field defects (shadows, curtains) or even blindness. The success rate of RD surgery has been reported to be over 90%;3 however, a loss of visual acuity is frequently reported by patients, particularly if the macula is involved.4 Since the natural history of RD can be influenced by early diagnosis, patients experiencing symptoms of posterior vitreous detachment are advised to undergo an ophthalmic examination.5 […] Studies of the incidence of RD give estimates ranging from 6.3 to 17.9 cases per 100 000 person-years.6 […] Age is a well-known risk factor for RD. In most studies the peak incidence was recorded among subjects in their seventh decade of life. A secondary peak at a younger age (20–30 years) has been identified […] attributed to RD among highly myopic patients.6 Indeed, depending on the severity,
myopia is associated with a four- to ten-fold increase in risk of RD.7 [Diabetics with retinopathy are also at increased risk of RD, US] […] While secondary prevention of RD is current practice, no effective primary prevention strategy is available at present. The idea is widespread among practitioners that RD is not preventable, probably the consequence of our historically poor understanding of the aetiology of RD. For instance, on the website of the Mayo Clinic — one of the top-ranked hospitals for ophthalmology in the US — it is possible to read that ‘There’s no way to prevent retinal detachment’.9”

“Intraocular pressure […] is influenced by physical activity. Dynamic exercise causes an acute reduction in intraocular pressure, whereas physical fitness is associated with a lower baseline value.29 Conversely, a sudden rise in intraocular pressure has been reported during the Valsalva manoeuvre.30-32 […] Occupational physical activity may […] cause both short- and long-term variations in intraocular pressure. On the one hand, physically demanding jobs may contribute to decreased baseline levels by increasing physical fitness but, on the other hand, lifting tasks may cause an important acute increase in pressure. Moreover, the eye of a manual worker who performs repeated lifting tasks involving the Valsalva manoeuvre may undergo several dramatic changes in intraocular pressure within a single working shift. […] A case-control study was carried out to test the hypothesis that repeated lifting tasks involving the Valsalva manoeuvre could be a risk factor for RD. […] heavy lifting was a strong risk factor for RD (OR 4.4, 95% CI 1.6–13). Intriguingly, body mass index (BMI) also showed a clear association with RD (top quartile: OR 6.8, 95% CI 1.6–29). […] Based on their findings, the authors concluded that heavy occupational lifting (involving the Valsalva manoeuvre) may be a relevant risk factor for RD in myopics.”
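The odds ratios quoted above are straightforward to reconstruct from a 2×2 table. A minimal Python sketch follows; the cell counts are invented for illustration (chosen so the crude OR comes out at 4.4), and the published interval came from a regression model, so a crude Wald interval like this one need not match it exactly:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald confidence interval from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 20/10 cases exposed/unexposed to heavy lifting,
# 30/66 controls exposed/unexposed.
or_, lo, hi = odds_ratio_ci(20, 10, 30, 66)
```

Note how quickly the interval widens when any cell is small, which is why occupational case-control studies of rare outcomes like RD often report wide CIs such as 1.6–13.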

“The proportion of the world’s population over 60 is forecast to double from 11.6% in 2012 to 21.8% in 2050.1 […] the International Labour Organization notes that, worldwide, just 40% of the working age population has legal pension coverage, and only 26% of the working population is effectively covered by old-age pension schemes. […] in less developed regions, labour force participation in those over 65 is much higher than in more developed regions.8 […] Longer working lives increase cumulative exposures, as well as increasing the time since exposure — important when there is a long latency period between exposure and resultant disease. Further, some exposures may have a greater effect when they occur to older workers, e.g. carcinogens that are promoters rather than initiators. […] Older workers tend to have more chronic health conditions. […] Older workers have fewer injuries, but take longer to recover. […] For some ‘knowledge workers’, like physicians, even a relatively minor cognitive decline […] might compromise their competence. […]  Most past studies have treated age as merely a confounding variable and rarely, if ever, have considered it an effect modifier. […]  Jex and colleagues24 argue that conceptually we should treat age as the variable of interest so that other variables are viewed as moderating the impact of age. […] The single best improvement to epidemiological research on ageing workers is to conduct longitudinal studies, including follow-up of workers into retirement. Cross-sectional designs almost certainly incur the healthy survivor effect, since unhealthy workers may retire early.25 […] Analyses should distinguish ageing per se, genetic factors, work exposures, and lifestyle in order to understand their relative and combined effects on health.”
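The confounder-versus-effect-modifier distinction the authors press for can be made concrete with stratum-specific rate ratios. A toy Python sketch, with all person-year counts invented for illustration:

```python
def rate_ratio(cases_exp, py_exp, cases_unexp, py_unexp):
    """Incidence rate ratio: exposed rate divided by unexposed rate,
    each rate being cases per person-year."""
    return (cases_exp / py_exp) / (cases_unexp / py_unexp)

# Invented person-year data for one exposure, stratified by age:
rr_young = rate_ratio(30, 10_000, 20, 10_000)   # younger workers
rr_older = rate_ratio(90, 10_000, 20, 10_000)   # older workers

# Unequal stratum-specific rate ratios mean age modifies the effect
# of the exposure; a single age-adjusted summary RR (treating age as
# a mere confounder) would average the strata and hide the pattern.
```

If the two stratum RRs were similar, adjusting age away as a confounder would be harmless; here the exposure hits older workers three times harder, which is exactly the kind of signal the chapter argues past studies have missed.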

“Musculoskeletal disorders have long been recognized as an important source of morbidity and disability in many occupational populations.1,2 Most musculoskeletal disorders, for most people, are characterized by recurrent episodes of pain that vary in severity and in their consequences for work. Most episodes subside uneventfully within days or weeks, often without any intervention, though about half of people continue to experience some pain and functional limitations after 12 months.3,4 In working populations, musculoskeletal disorders may lead to a spell of sickness absence. Sickness absence is increasingly used as a health parameter of interest when studying the consequences of functional limitations due to disease in occupational groups. Since duration of sickness absence contributes substantially to the indirect costs of illness, interventions increasingly address return to work (RTW).5 […] The Clinical Standards Advisory Group in the United Kingdom reported RTW within 2 weeks for 75% of all low back pain (LBP) absence episodes and suggested that approximately 50% of all work days lost due to back pain in the working population are from the 85% of people who are off work for less than 7 days.6”

“Any RTW curve over time can be described with a mathematical Weibull function.15 This Weibull function is characterized by a scale parameter λ and a shape parameter k. The scale parameter λ is a function of different covariates that include the intervention effect, preferably expressed as hazard ratio (HR) between the intervention group and the reference group in a Cox’s proportional hazards regression model. The shape parameter k reflects the relative increase or decrease in survival time, thus expressing how much the RTW rate will decrease with prolonged sick leave. […] a HR as measure of effect can be introduced as a covariate in the scale parameter λ in the Weibull model and the difference in areas under the curve between the intervention model and the basic model will give the improvement in sickness absence days due to the intervention. By introducing different times of starting the intervention among those workers still on sick leave, the impact of timing of enrolment can be evaluated. Subsequently, the estimated changes in total sickness absence days can be expressed in a benefit/cost ratio (BC ratio), where benefits are the costs saved due to a reduction in sickness absence and costs are the expenditures relating to the intervention.15”
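The machinery described here is easy to sketch. Under proportional hazards, the intervention's HR scales the Weibull cumulative hazard, and the difference in areas under the two survival curves gives the absence days saved per worker. Every number below (λ, k, HR, the cost figures) is invented purely for illustration:

```python
import math

def weibull_surv(t, lam, k, hr=1.0):
    """Fraction of workers still on sick leave at day t.
    Under proportional hazards, a hazard ratio hr scales the
    Weibull cumulative hazard (t / lam) ** k."""
    return math.exp(-hr * (t / lam) ** k)

def absence_days(lam, k, hr=1.0, horizon=365.0, step=0.5):
    """Expected sickness absence days per worker over the horizon:
    the area under the survival curve (trapezoid rule)."""
    n = int(horizon / step)
    vals = [weibull_surv(i * step, lam, k, hr) for i in range(n + 1)]
    return step * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# Invented parameters: k < 1 means the RTW rate falls off with
# prolonged sick leave, as the book describes.
lam, k = 30.0, 0.8
hr = 1.4                       # intervention speeds up RTW (hr > 1)
days_ref = absence_days(lam, k)
days_int = absence_days(lam, k, hr)
saved = days_ref - days_int

# Benefit/cost ratio: days saved valued at an assumed 200 per day of
# absence, against an assumed intervention cost of 250 per worker.
bc_ratio = (saved * 200.0) / 250.0
```

Shifting the intervention's start day (i.e. applying `hr` only from some day t₀ onwards, among those still on leave at t₀) is the natural extension for studying the timing-of-enrolment question raised in the next excerpt.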

“A crucial factor in understanding why interventions are effective or not is the timing of the enrolment of workers on sick leave into the intervention. The RTW pattern over time […] has important consequences for appropriate timing of the best window for effective clinical and occupational interventions. The evidence presented by Palmer and colleagues clearly suggests that [in the context of LBP] a stepped care approach is required. In the first step of rapid RTW, most workers will return to work even without specific interventions. Simple, short interventions involving effective coordination and cooperation between primary health care and the workplace will be sufficient to help the majority of workers to achieve an early RTW. In the second step, more expensive, structured interventions are reserved for those who are having difficulties returning, typically between 4 weeks and 3 months. However, to date there is little evidence on the optimal timing of such interventions for workers on sick leave due to LBP.14,15 […] the cost-benefits of a structured RTW intervention among workers on sick leave will be determined by the effectiveness of the intervention, the natural speed of RTW in the target population, the timing of the enrolment of workers into the intervention, and the costs of both the intervention and of a day of sickness absence. […] The cost-effectiveness of a RTW intervention will be determined by the effectiveness of the intervention, the costs of the intervention and of a day of sickness absence, the natural course of RTW in the target population, the timing of the enrolment of workers into the RTW intervention, and the time lag before the intervention takes effect. The latter three factors are seldom taken into consideration in systematic reviews and guidelines for management of RTW, although their impact may easily be as important  as classical measures of effectiveness, such as effect size or HR.”

“In order to obtain information of the highest quality and utility, surveillance schemes have to be designed, set up, and managed with the same methodological rigour as high-calibre prospective cohort studies. Whether surveillance schemes are voluntary or not, considerable effort has to be invested to ensure a satisfactory and sufficient denominator, the best numerator quality, and the most complete ascertainment. Although the force of statute is relied upon in some surveillance schemes, even in these the initial and continuing motivation of the reporters (usually physicians) is paramount. […] There is a surveillance ‘pyramid’ within which the patient’s own perception is at the base, the GP is at a higher level, and the clinical specialist is close to the apex. The source of the surveillance reports affects the numerator because case severity and case mix differ according to the level in the pyramid.19 Although incidence rate estimates may be expected to be lower at the higher levels in the surveillance pyramid this is not necessarily always the case. […] Although surveillance undertaken by physicians who specialize in the organ system concerned or in occupational disease (or in both aspects) may be considered to be the medical ‘gold standard’ it can suffer from a more limited patient catchment because of various referral filters. Surveillance by GPs will capture numerator cases as close to the base of the pyramid as possible, but may suffer from greater diagnostic variation than surveillance by specialists. Limiting recruitment to GPs with a special interest, and some training, in occupational medicine is a compromise between the two levels.20”

“When surveillance is part of a statutory or other compulsory scheme then incident case identification is a continuous and ongoing process. However, when surveillance is voluntary, for a research objective, it may be preferable to sample over shorter, randomly selected intervals, so as to reduce the demands associated with the data collection and ‘reporting fatigue’. Evidence so far suggests that sampling over shorter time intervals results in higher incidence estimates than continuous sampling.21 […] Although reporting fatigue is an important consideration in tempering conclusions drawn from […] multilevel models, it is possible to take account of this potential bias in various ways. For example, when evaluating interventions, temporal trends in outcomes resulting from other exposures can be used to control for fatigue.23,24 The phenomenon of reporting fatigue may be characterized by an ‘excess of zeroes’ beyond what is expected of a Poisson distribution and this effect can be quantified.27 […] There are several considerations in determining incidence from surveillance data. It is possible to calculate an incidence rate based on the general population, on the population of working age, or on the total working population,19 since these denominator bases are generally readily available, but such rates are not the most useful in determining risk. Therefore, incidence rates are usually calculated in respect of specific occupations or industries.22 […] Ideally, incidence rates should be expressed in relation to quantitative estimates of exposure but most surveillance schemes would require additional data collection as special exercises to achieve this aim.” [for much more on these topics, see also M’ikanatha & Iskander’s book.]
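The "excess of zeroes" diagnostic mentioned above is simple to compute: compare the observed fraction of zero-reports with the zero probability of a Poisson distribution having the same mean. A crude sketch (the monthly counts are invented; a real analysis would fit a zero-inflated Poisson model rather than eyeball the gap):

```python
import math

def excess_zeros(counts):
    """Observed zero fraction vs. the zero probability of a Poisson
    distribution with the same mean; a clearly positive gap suggests
    zero-inflation, e.g. from reporting fatigue."""
    n = len(counts)
    mean = sum(counts) / n
    observed = sum(1 for c in counts if c == 0) / n
    expected = math.exp(-mean)        # Poisson P(X = 0)
    return observed, expected, observed - expected

# Invented monthly case reports from a single reporter:
reports = [0, 0, 0, 0, 0, 0, 0, 2, 3, 0, 1, 0]
obs, exp_zero, gap = excess_zeros(reports)
```

Here three quarters of the months are zero-reports, while a Poisson with the same mean would predict roughly 61%; the surplus is the kind of signature that reporting fatigue leaves in surveillance data.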

“Estimates of lung cancer risk attributable to occupational exposures vary considerably by geographical area and depend on study design, especially on the exposure assessment method, but may account for around 5–20% of cancers among men, but less (<5%) among women;2 among workers exposed to (suspected) lung carcinogens, the percentage will be higher. […] most exposure to known lung carcinogens originates from occupational settings and will affect millions of workers worldwide.  Although it has been established that these agents are carcinogenic, only limited evidence is available about the risks encountered at much lower levels in the general population. […] One of the major challenges in community-based occupational epidemiological studies has been valid assessment of the occupational exposures experienced by the population at large. Contrary to the detailed information usually available for an industrial population (e.g. in a retrospective cohort study in a large chemical company) that often allows for quantitative exposure estimation, community-based studies […] have to rely on less precise and less valid estimates. The choice of method of exposure assessment to be applied in an epidemiological study depends on the study design, but it boils down to choosing between acquiring self-reported exposure, expert-based individual exposure assessment, or linking self-reported job histories with job-exposure matrices (JEMs) developed by experts. […] JEMs have been around for more than three decades.14 Their main distinction from either self-reported or expert-based exposure assessment methods is that exposures are no longer assigned at the individual subject level but at job or task level. As a result, JEMs make no distinction in assigned exposure between individuals performing the same job, or even between individuals performing a similar job in different companies. 
[…] With the great majority of occupational exposures having a rather low prevalence (<10%) in the general population it is […] extremely important that JEMs are developed aiming at a highly specific exposure assessment so that only jobs with a high likelihood (prevalence) and intensity of exposure are considered to be exposed. Aiming at a high sensitivity would be disastrous because a high sensitivity would lead to an enormous number of individuals being assigned an exposure while actually being unexposed […] Combinations of the methods just described exist as well”.
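The warning against aiming for high sensitivity follows directly from the positive predictive value of a JEM at low exposure prevalence. A small sketch in Python (the accuracy and prevalence figures are invented for illustration, not taken from the book):

```python
def ppv(sens, spec, prev):
    """Probability that a job the JEM flags as exposed truly is:
    true positives over all positives."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

# At 5% exposure prevalence in the general population:
ppv_sensitive = ppv(0.90, 0.80, 0.05)   # sensitive but unspecific JEM
ppv_specific  = ppv(0.50, 0.99, 0.05)   # specific but insensitive JEM

# The sensitive JEM labels mostly unexposed people as exposed,
# diluting any exposure-response relationship towards the null;
# the specific JEM misses half the truly exposed but the jobs it
# does flag are mostly genuinely exposed.
```

With these numbers the "sensitive" matrix yields a PPV under 20%, while the "specific" one exceeds 70% — exactly why JEM developers accept missing exposed jobs in exchange for believing the ones they do flag.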

“Community-based studies, by definition, address a wider range of types of exposure and a much wider range of encountered exposure levels (e.g. relatively high exposures in primary production but often lower in downstream use, or among indirectly exposed individuals). A limitation of single community-based studies is often the relatively low number of exposed individuals. Pooling across studies might therefore be beneficial. […] Pooling projects need careful planning and coordination, because the original studies were conducted for different purposes, at different time periods, using different questionnaires. This heterogeneity is sometimes perceived as a disadvantage but also implies variations that can be studied and thereby provide important insights. Every pooling project has its own dynamics but there are several general challenges that most pooling projects confront. Creating common variables for all studies can stretch from simple re-naming of variables […] or recoding of units […] to the re-categorization of national educational systems […] into years of formal education. Another challenge is to harmonize the different classification systems of, for example, diseases (e.g. International Classification of Disease (ICD)-9 versus ICD-10), occupations […], and industries […]. This requires experts in these respective fields as well as considerable time and money. Harmonization of data may mean losing some information; for example, ISCO-68 contains more detail than ISCO-88, which makes it possible to recode ISCO-68 to ISCO-88 with only a little loss of detail, but it is not possible to recode ISCO-88 to ISCO-68 without losing one or two digits in the job code. […] Making the most of the data may imply that not all studies will qualify for all analyses. For example, if a study did not collect data regarding lung cancer cell type, it can contribute to the overall analyses but not to the cell type-specific analyses. 
It is important to remember that the quality of the original data is critical; poor data do not become better by pooling.”


December 6, 2017 Posted by | Books, Cancer/oncology, Demographics, Epidemiology, Health Economics, Medicine, Ophthalmology, Statistics

The history of astronomy

It’s been a while since I read this book, and I was for a while strongly considering not blogging it at all. In the end I figured I ought to cover it after all in at least a little bit of detail, though when I made the decision to cover the book here I also decided not to cover it in nearly as much detail as I usually cover the books in this series.

Below some random observations from the book which I found sufficiently interesting to add here.

“The Almagest is a magisterial work that provided geometrical models and related tables by which the movements of the Sun, Moon, and the five lesser planets could be calculated for the indefinite future. […] Its catalogue contains over 1,000 fixed stars arranged in 48 constellations, giving the longitude, latitude, and apparent brightness of each. […] the Almagest would dominate astronomy like a colossus for 14 centuries […] In the universities of the later Middle Ages, students would be taught Aristotle in philosophy and a simplified Ptolemy in astronomy. From Aristotle they would learn the basic truth that the heavens rotate uniformly about the central Earth. From the simplified Ptolemy they would learn of epicycles and eccentrics that violated this basic truth by generating orbits whose centre was not the Earth; and those expert enough to penetrate deeper into the Ptolemaic models would encounter equant theories that violated the (yet more basic) truth that heavenly motion is uniform. […] with the models of the Almagest – whose parameters would be refined over the centuries to come – the astronomer, and the astrologer, could compute the future positions of the planets with economy and reasonable accuracy. There were anomalies – the Moon, for example, would vary its apparent size dramatically in the Ptolemaic model but does not do so in reality, and Venus and Mercury were kept close to the Sun in the sky by a crude ad hoc device – but as a geometrical compendium of how to grind out planetary tables, the Almagest worked, and that was what mattered.”

“The revival of astronomy – and astrology – among the Latins was stimulated around the end of the first millennium when the astrolabe entered the West from Islamic Spain. Astrology in those days had a [‘]rational[’] basis rooted in the Aristotelian analogy between the microcosm – the individual living body – and the macrocosm, the cosmos as a whole. Medical students were taught how to track the planets, so that they would know when the time was favourable for treating the corresponding organs in their patients.” [Aaargh! – US]

“The invention of printing in the 15th century had many consequences, none more significant than the stimulus it gave to the mathematical sciences. All scribes, being human, made occasional errors in preparing a copy of a manuscript. These errors would often be transmitted to copies of the copy. But if the works were literary and the later copyists attended to the meaning of the text, they might recognize and correct many of the errors introduced by their predecessors. Such control could rarely be exercised by copyists required to reproduce texts with significant numbers of mathematical symbols. As a result, a formidable challenge faced the medieval student of a mathematical or astronomical treatise, for it was available to him only in a manuscript copy that had inevitably become corrupt in transmission. After the introduction of printing, all this changed.”

“Copernicus, like his predecessors, had been content to work with observations handed down from the past, making new ones only when unavoidable and using instruments that left much to be desired. Tycho [Brahe], whose work marks the watershed between observational astronomy ancient and modern, saw accuracy of observation as the foundation of all good theorizing. He dreamed of having an observatory where he could pursue the research and development of precision instrumentation, and where a skilled team of assistants would test the instruments even as they were compiling a treasury of observations. Exploiting his contacts at the highest level, Tycho persuaded King Frederick II of Denmark to grant him the fiefdom of the island of Hven, and there, between 1576 and 1580, he constructed Uraniborg (‘Heavenly Castle’), the first scientific research institution of the modern era. […] Tycho was the first of the modern observers, and in his catalogue of 777 stars the positions of the brightest are accurate to a minute or so of arc; but he himself was probably most proud of his cosmology, which Galileo was not alone in seeing as a retrograde compromise. Tycho appreciated the advantages of heliocentric planetary models, but he was also conscious of the objections […]. In particular, his inability to detect annual parallax even with his superb instrumentation implied that the Copernican excuse, that the stars were too far away for annual parallax to be detected, was now implausible in the extreme. The stars, he calculated, would have to be at least 700 times further away than Saturn for him to have failed for this reason, and such a vast, purposeless empty space between the planets and the stars made no sense. He therefore looked for a cosmology that would have the geometrical advantages of the heliocentric models but would retain the Earth as the body physically at rest at the centre of the cosmos.
The solution seems obvious in hindsight: make the Sun (and Moon) orbit the central Earth, and make the five planets into satellites of the Sun.”
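Tycho's distance argument is simple trigonometry on a 1 AU baseline. The sketch below uses modern values (a detection limit of about 1 arcminute, Saturn at roughly 9.5 AU); Tycho worked from different assumed planetary distances, so this reproduces only the order of magnitude of his "at least 700 times further away than Saturn", not his exact figure:

```python
import math

def min_distance_au(parallax_limit_arcmin):
    """Smallest distance (in AU) at which the annual parallax of a
    star would fall below the given detection limit, for an orbital
    baseline of 1 AU."""
    limit_rad = math.radians(parallax_limit_arcmin / 60)
    return 1.0 / math.tan(limit_rad)

d_au = min_distance_au(1.0)   # ~1 arcminute: Tycho's instrument limit
ratio = d_au / 9.5            # modern mean Saturn distance, ~9.5 AU
```

With modern values the stars must lie some 3,400 AU out, a few hundred Saturn-distances away; to Tycho that gulf of apparently purposeless empty space was a decisive argument against Copernicus.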

“Until the invention of the telescope, each generation of astronomers had looked at much the same sky as their predecessors. If they knew more, it was chiefly because they had more books to read, more records to mine. […] Galileo could say of his predecessors, ‘If they had seen what we see, they would have judged as we judge’; and ever since his time, the astronomers of each generation have had an automatic advantage over their predecessors, because they possess apparatus that allows them access to objects unseen, unknown, and therefore unstudied in the past. […] astronomers [for a long time] found themselves in a situation where, as telescopes improved, the two coordinates of a star’s position on the heavenly sphere were being measured with ever increasing accuracy, whereas little was known of the star’s third coordinate, distance, except that its scale was enormous. Even the assumption that the nearest stars were the brightest was […rightly, US] being called into question, as the number of known proper motions increased and it emerged that not all the fastest-moving stars were bright.”

“We know little of how Newton’s thinking developed between 1679 and the visit from Halley in 1684, except for a confused exchange of letters between Newton and the Astronomer Royal, John Flamsteed […] the visit from the suitably deferential and tactful Halley encouraged Newton to promise him written proof that elliptical orbits would result from an inverse-square force of attraction residing in the Sun. The drafts grew and grew, and eventually resulted in The Mathematical Principles of Natural Philosophy (1687), better known in its abbreviated Latin title of the Principia. […] All three of Kepler’s laws (the second in ‘area’ form), which had been derived by their author from observations, with the help of a highly dubious dynamics, were now shown to be consequences of rectilinear motion under an inverse-square force. […] As the drafts of Principia multiplied, so too did the number of phenomena that at last found their explanation. The tides resulted from the difference between the effects on the land and on the seas of the attraction of Sun and Moon. The spinning Earth bulged at the equator and was flattened at the poles, and so was not strictly spherical; as a result, the attraction of Sun and Moon caused the Earth’s axis to wobble and so generated the precession of the equinoxes first noticed by Hipparchus. […] Newton was able to use the observed motions of the moons of Earth, Jupiter, and Saturn to calculate the masses of the parent planets, and he found that Jupiter and Saturn were huge compared to Earth – and, in all probability, to Mercury, Venus, and Mars.”

December 5, 2017 Posted by | Astronomy, Books, History, Mathematics, Physics

Occupational Epidemiology (I)

Below some observations from the first chapters of the book, which I called ‘very decent’ on goodreads.

“Coal workers were amongst the first occupational groups to be systematically studied in well-designed epidemiological research programmes. As a result, the causes and spectrum of non-malignant respiratory disease among coal workers have been rigorously explored and characterized.1,2 While respirable silica (quartz) in mining has long been accepted as a cause of lung disease, the important contributing role of coal mine dust was questioned until the middle of the twentieth century.3 Occupational exposure to coal mine dust has now been shown unequivocally to cause excess mortality and morbidity from non-malignant respiratory disease, including coal workers’ pneumoconiosis (CWP) and chronic obstructive pulmonary disease (COPD). The presence of respirable quartz, often a component of coal mine dust, contributes to disease incidence and severity, increasing the risk of morbidity and mortality in exposed workers.”

“Coal is classified into three major coal ranks: lignite, bituminous, and anthracite from lowest to highest carbon content and heating value. […] In the US, the Bureau of Mines and the Public Health Service actively studied anthracite and bituminous coal mines and miners throughout the mid-1900s.3 These studies showed significant disease among workers with minimal silica exposure, suggesting that coal dust itself was toxic; however, these results were suppressed and not widely distributed. It was not until the 1960s that a popular movement of striking coal miners and their advocates demanded legislation to prevent, study, and compensate miners for respiratory diseases caused by coal dust exposure. […] CWP [Coal Workers’ Pneumoconiosis] is an interstitial lung disease resulting from the accumulation of coal mine dust in miners’ lungs and the tissue reaction to its presence. […] It is classified […] as simple or complicated; the latter is also known as progressive massive fibrosis (PMF) […] PMF is a progressive, debilitating disease which is predictive of disability and mortality […] A causal exposure-response relationship has been established between cumulative coal mine dust exposure and risk of developing both CWP and PMF,27-31 and with mortality from pneumoconiosis and PMF.23-26, 30 Incidence, the stage of CWP, and progression to PMF, as well as mortality, are positively associated with increasing proportion of respirable silica in the coal mine dust32 and higher coal rank. […] Not only do coal workers experience occupational mortality from CWP and PMF,12, 23-26 they also have excess mortality from COPD compared to the general population.
Cross-sectional and longitudinal studies […] have demonstrated an exposure-response relationship between cumulative coal mine dust exposure and chronic bronchitis,36-40 respiratory symptoms,41 and pulmonary function even in the presence of normal radiographic findings.42 The relationship between the rate of decline of lung function and coal mine dust exposure is not linear, the greatest reduction occurring in the first few years of exposure.43”

“Like most occupational cohort studies, those of coal workers are affected by the healthy worker effect. A strength of the PFR and NCS studies is the ability to use internal analysis (i.e. comparing workers by exposure level) which controls for selection bias at hire, one component of the effect.59 However, internal analyses may not fully control for ongoing selection bias if symptoms of adverse health effects are related to exposure (referred to as the healthy worker survivor effect) […] Work status is a key component of the healthy worker survivor effect, as are length of time since entering the industry and employment duration.61 Both the PFR and NCS studies have consistently found higher rates of symptoms and disease among former miners compared with current miners, consistent with a healthy worker survivor effect.62,63”

“Coal mining is rapidly expanding in the developing world. From 2007 to 2010 coal production declined in the US by 6% and Europe by 10% but increased in Eurasia by 9%, in Africa by 3%, and in Asia and Oceania by 19%.71 China saw a dramatic increase of 39% from 2007 to 2011. There have been few epidemiological studies published that characterize the disease burden among coal workers during this expansion but, in one study conducted among miners in Liaoning Province, China, rates of CWP were high.72 There are an estimated six million underground miners in China at present;73 hence even low disease rates will cause a high burden of illness and excess premature mortality.”
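The closing point is just arithmetic on a large denominator: with six million underground miners, even rates that look small imply very large case counts. The prevalences below are arbitrary illustrations, not estimates from the book:

```python
miners = 6_000_000  # estimated underground miners in China (per the quote)

# Expected case counts at a few illustrative disease prevalences:
cases = {p: round(miners * p) for p in (0.001, 0.005, 0.02)}
# Even a prevalence of 0.5% corresponds to 30,000 affected miners.
```
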

“Colonization with S. aureus may occur on mucous membranes of the respiratory or intestinal tract, or on other body surfaces, and is usually asymptomatic. Nasal colonization with S. aureus in the human population occurs among around 30% of individuals. Methicillin-resistant S. aureus (MRSA) are strains that have developed resistance to beta-lactam antibiotics […] and, as a result, may cause difficult-to-treat infections in humans. Nasal colonization with MRSA in the general population is low; the highest rate reported in a population-based survey was 1.5%.2,3 Infections with MRSA are associated with treatment failure and increased severity of disease.4,5 […] In 2004 a case of, at that time non-typeable, MRSA was reported in a 6-month-old girl admitted to a hospital in the Netherlands. […] Later on, this strain and some related strains appeared strongly associated with livestock production, and were labelled livestock-associated MRSA (LA-MRSA) and are nowadays referred to as MRSA ST398. […] It is common knowledge that the use of antimicrobial agents in humans, animals, and plants promotes the selection and spread of antimicrobial-resistant bacteria and resistance genes through genetic mutations and gene transfer.15 Antimicrobial agents are widely used in veterinary medicine and modern food animal production depends on the use of large amounts of antimicrobials for disease control. Use of antimicrobials probably played an important role in the emergence of MRSA ST398.”

“MRSA was rarely isolated from animals before 2000. […] Since 2005 onwards, LA-MRSA has been increasingly frequently reported in different food production animals, including cattle, pigs, and poultry […] The MRSA case illustrates the rapid emergence, and transmission from animals to humans, of a new strain of resistant micro-organisms from an animal reservoir, creating risks for different occupational groups. […] High animal-to-human transmission of ST398 has been reported in pig farming, leading to an elevated prevalence of nasal MRSA carriage ranging from a few per cent in Ireland up to 86% in German pig farmers […]. One study showed a clear association between the prevalence of MRSA carriage among participants from farms with MRSA colonized pigs (50%) versus 3% on farms without colonized pigs […] MRSA prevalence is low among animals from alternative breeding systems with low use of antimicrobials, also leading to low carriage rates in farmers.71 […] Veterinarians are […] frequently in direct contact with livestock, and are clearly at elevated risk of LA-MRSA carriage when compared to the general population. […] Of all LA-MRSA carrying individuals, a fraction appear to be persistent carriers. […] Few studies have examined transmission from humans to humans. Generally, studies among family members of livestock farmers show a considerably lower prevalence than among the farmers with more intense animal contact. […] Individuals who are ST398 carriers in the general population usually have direct animal contact.43,44 On the other hand, the emergence of ST398 isolates without known risk factors for acquisition and without a link to livestock has been reported.45 In addition, a human-specific ST398 clone has recently been identified and thus the spread of LA-MRSA from occupational populations to the general population cannot be ruled out.46 Transmission dynamics, especially between humans not directly exposed to animals, remain unclear and might be changing.”
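The 50% versus 3% carriage contrast above translates into a very large odds ratio. A minimal sketch in Python — the group sizes are hypothetical (the excerpt reports only percentages), so the interval is purely illustrative:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf (log-based) 95% CI from a 2x2 table:
    a = exposed carriers, b = exposed non-carriers,
    c = unexposed carriers, d = unexposed non-carriers."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts matching the reported 50% vs 3% carriage:
# 100 workers on MRSA-positive farms (50 carriers),
# 100 workers on MRSA-negative farms (3 carriers).
or_, lo, hi = odds_ratio_ci(50, 50, 3, 97)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

Even with these modest made-up sample sizes the lower confidence bound stays far above 1, which is why a 50%-vs-3% contrast is so hard to explain away.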

“Enterobacteriaceae that produce ESBLs are an emerging concern in public health. ESBLs inactivate beta-lactam antimicrobials by hydrolysis and therefore cause resistance to various beta-lactam antimicrobials, including penicillins and cephalosporins.54 […] The genes encoding for ESBLs are often located on plasmids which can be transferred between different bacterial species. Also, coexistence with other types of antimicrobial resistance occurs. In humans, infections with ESBL-producing Enterobacteriaceae are associated with increased burden of disease and costs.58 A variety of ESBLs have been identified in bacteria derived from food-producing animals worldwide. The occurrence of different ESBL types depends on the animal species and the geographical area. […] High use of antimicrobials and inappropriate use of cephalosporins in livestock production are considered to be associated with the emergence and high prevalence of ESBL-producers in the animals.59-60 Food-producing animals can serve as a reservoir for ESBL producing Enterobacteriaceae and ESBL genes. […] recent findings suggest that transmission from animals to humans may occur through (in)direct contact with livestock during work. This may thus pose an occupational health risk for farmers and potentially for other humans with regular contact with this working population. […] Compared to MRSA, the dynamics of ESBLs seem more complex. […] The variety of potential ESBL transmission routes makes it complex to determine the role of direct contact with livestock as an occupational risk for ESBL carriage. However, the increasing occurrence of ESBLs in livestock worldwide and the emerging insight into transmission through direct contact suggests that farmers have a higher risk of becoming a carrier of ESBLs. Until now, there have not been sufficient data available to quantify the relevant importance of this route of transmission.”

“Welders die more often from pneumonia than do their social class peers. This much has been revealed by successive analyses of occupational mortality for England and Wales. The pattern can now be traced back more than seven decades. During 1930–32, 285 deaths were observed with 171 expected;3 in 1949–53, 70 deaths versus 31 expected;4 in 1959–63, 101 deaths as compared with 54.9 expected;5 and in 1970–72, 66 deaths with 42.0 expected.6 […] The finding that risks decline after retirement is an argument against confounding by lifestyle variables such as smoking, as is the specificity of effect to lobar rather than bronchopneumonia. […] Analyses of death certificates […] support a case for a hazard that is reversible when exposure stops. […] In line with the mortality data, hospitalized pneumonia [has also] prove[n] to be more common among welders and other workers with exposure to metal fume than in workers from non-exposed jobs. Moreover, risks were confined to exposures in the previous 12 months […] Recently, inhalation experiments have confirmed that welding fume can promote bacterial growth in animals. […] A coherent body of evidence thus indicates that metal fume is a hazard for pneumonia. […] Presently, knowledge is lacking on the exposure-response relationship and what constitutes a ‘safe’ or ‘unsafe’ level or pattern of exposure to metal fume. […]  The pattern of epidemiological evidence […] is generally compatible with a hazard from iron in metal fume. Iron could promote infective risk in at least one of two ways: by acting as a growth nutrient for microorganisms, or as a cause of free radical injury. 
[…] the Joint Committee on Vaccination and Immunisation, on behalf of the Department of Health in England, decided in November 2011 to recommend that ‘welders who have not received the pneumococcal polysaccharide vaccine (PPV23) previously should be offered a single dose of 0.5ml of PPV23 vaccine’ and that ‘employers should ensure that provision is in place for workers to receive PPV23’.”
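The observed-versus-expected death counts quoted above are the ingredients of a standardized mortality ratio (SMR = O/E). A short sketch computing SMRs for the four periods, with Byar's approximation for the Poisson 95% CI (my choice of interval, not the book's):

```python
import math

def smr_ci(observed, expected, z=1.96):
    """SMR = O/E, with Byar's approximation for the Poisson 95% CI."""
    smr = observed / expected
    lo = observed * (1 - 1/(9*observed) - z/(3*math.sqrt(observed)))**3 / expected
    o1 = observed + 1
    hi = o1 * (1 - 1/(9*o1) + z/(3*math.sqrt(o1)))**3 / expected
    return smr, lo, hi

# Observed vs expected pneumonia deaths among welders (England & Wales):
for period, o, e in [("1930-32", 285, 171), ("1949-53", 70, 31),
                     ("1959-63", 101, 54.9), ("1970-72", 66, 42.0)]:
    smr, lo, hi = smr_ci(o, e)
    print(f"{period}: SMR = {100*smr:.0f} (95% CI {100*lo:.0f}-{100*hi:.0f})")
```

Each period's lower confidence bound sits well above 100, consistent with the book's point that the excess has persisted across seven decades of occupational mortality analyses.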

December 2, 2017 Posted by | Books, Epidemiology, Infectious disease, Medicine | Leave a comment


i. “The party that negotiates in haste is often at a disadvantage.” (Howard Raiffa)

ii. “Advice: don’t embarrass your bargaining partner by forcing him or her to make all the concessions.” (-ll-)

iii. “Disputants often fare poorly when they each act greedily and deceptively.” (-ll-)

iv. “Each man does seek his own interest, but, unfortunately, not according to the dictates of reason.” (Kenneth Waltz)

v. “Whatever is said after I’m gone is irrelevant.” (Jimmy Savile)

vi. “Trust is an important lubricant of a social system. It is extremely efficient; it saves a lot of trouble to have a fair degree of reliance on other people’s word. Unfortunately this is not a commodity which can be bought very easily. If you have to buy it, you already have some doubts about what you have bought.” (Kenneth Arrow)

vii. “… an author never does more damage to his readers than when he hides a difficulty.” (Évariste Galois)

viii. “A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail” (Vladimir Voevodsky)

ix. “Suppose you want to teach the “cat” concept to a very young child. Do you explain that a cat is a relatively small, primarily carnivorous mammal with retractible claws, a distinctive sonic output, etc.? I’ll bet not. You probably show the kid a lot of different cats, saying “kitty” each time, until it gets the idea. To put it more generally, generalizations are best made by abstraction from experience. They should come one at a time; too many at once overload the circuits.” (Ralph P. Boas Jr.)

x. “Every author has several motivations for writing, and authors of technical books always have, as one motivation, the personal need to understand; that is, they write because they want to learn, or to understand a phenomenon, or to think through a set of ideas.” (Albert Wymore)

xi. “Great mathematics is achieved by solving difficult problems not by fabricating elaborate theories in search of a problem.” (Harold Davenport)

xii. “Is science really gaining in its assault on the totality of the unsolved? As science learns one answer, it is characteristically true that it also learns several new questions. It is as though science were working in a great forest of ignorance, making an ever larger circular clearing within which, not to insist on the pun, things are clear… But as that circle becomes larger and larger, the circumference of contact with ignorance also gets longer and longer. Science learns more and more. But there is an ultimate sense in which it does not gain; for the volume of the appreciated but not understood keeps getting larger. We keep, in science, getting a more and more sophisticated view of our essential ignorance.” (Warren Weaver)

xiii. “When things get too complicated, it sometimes makes sense to stop and wonder: Have I asked the right question?” (Enrico Bombieri)

xiv. “The mean and variance are unambiguously determined by the distribution, but a distribution is, of course, not determined by its mean and variance: A number of different distributions have the same mean and the same variance.” (Richard von Mises)
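Von Mises's point is easy to verify numerically: a normal, a uniform, and a symmetric two-point distribution can all be given mean 0 and variance 1. A quick sketch:

```python
import math
import random
import statistics

random.seed(0)
n = 100_000

# Three different distributions, each constructed to have mean 0 and variance 1.
normal = [random.gauss(0, 1) for _ in range(n)]
uniform = [random.uniform(-math.sqrt(3), math.sqrt(3)) for _ in range(n)]  # Var of U(-a, a) is a^2/3
twopoint = [random.choice([-1, 1]) for _ in range(n)]

for name, xs in [("normal", normal), ("uniform", uniform), ("two-point", twopoint)]:
    print(f"{name:9s} mean={statistics.fmean(xs):+.3f} var={statistics.pvariance(xs):.3f}")
```

All three samples report mean ≈ 0 and variance ≈ 1, yet the underlying distributions are obviously very different — exactly the asymmetry von Mises describes.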

xv. “Algorithms existed for at least five thousand years, but people did not know that they were algorithmizing. Then came Turing (and Post and Church and Markov and others) and formalized the notion.” (Doron Zeilberger)

xvi. “When a problem seems intractable, it is often a good idea to try to study “toy” versions of it in the hope that as the toys become increasingly larger and more sophisticated, they would metamorphose, in the limit, to the real thing.” (-ll-)

xvii. “The kind of mathematics foisted on children in schools is not meaningful, fun, or even very useful. This does not mean that an individual child cannot turn it into a valuable and enjoyable personal game. For some the game is scoring grades; for others it is outwitting the teacher and the system. For many, school math is enjoyable in its repetitiveness, precisely because it is so mindless and dissociated that it provides a shelter from having to think about what is going on in the classroom. But all this proves is the ingenuity of children. It is not a justification for school math to say that despite its intrinsic dullness, inventive children can find excitement and meaning in it.” (Seymour Papert)

xviii. “The optimist believes that this is the best of all possible worlds, and the pessimist fears that this might be the case.” (Ivar Ekeland)

xix. “An equilibrium is not always an optimum; it might not even be good. This may be the most important discovery of game theory.” (-ll-)

xx. “It’s not all that rare for people to suffer from a self-hating monologue. Any good theories about what’s going on there?”

“If there’s things you don’t like about your life, you can blame yourself, or you can blame others. If you blame others and you’re of low status, you’ll be told to cut that out and start blaming yourself. If you blame yourself and you can’t solve the problems, self-hate is the result.” (Nancy Lebovitz & ‘The Nybbler’)

December 1, 2017 Posted by | Mathematics, Quotes/aphorisms, Science, Statistics | 4 Comments

Concussion and Sequelae of Minor Head Trauma

Some related links:

PECARN Pediatric Head Injury/Trauma Algorithm.
Canadian CT Head Injury/Trauma Rule.
ACEP – Traumatic Brain Injury (Mild – Adult).
AANS – concussion.
Guidelines for the Management of Severe Traumatic Brain Injury – 4th edition.
Return-to-play guidelines.
Second-impact syndrome.
Repetitive Head Injury Syndrome (medscape).
Traumatic Brain Injury & Concussion (CDC).

December 1, 2017 Posted by | Lectures, Medicine, Neurology | Leave a comment

A few diabetes papers of interest

i. Mechanisms and Management of Diabetic Painful Distal Symmetrical Polyneuropathy.

“Although a number of the diabetic neuropathies may result in painful symptomatology, this review focuses on the most common: chronic sensorimotor distal symmetrical polyneuropathy (DSPN). It is estimated that 15–20% of diabetic patients may have painful DSPN, but not all of these will require therapy. […] Although the exact pathophysiological processes that result in diabetic neuropathic pain remain enigmatic, both peripheral and central mechanisms have been implicated, and extend from altered channel function in peripheral nerve through enhanced spinal processing and changes in many higher centers. A number of pharmacological agents have proven efficacy in painful DSPN, but all are prone to side effects, and none impact the underlying pathophysiological abnormalities because they are only symptomatic therapy. The two first-line therapies approved by regulatory authorities for painful neuropathy are duloxetine and pregabalin. […] All patients with DSPN are at increased risk of foot ulceration and require foot care, education, and if possible, regular podiatry assessment.”

“The neuropathies are the most common long-term microvascular complications of diabetes and affect those with both type 1 and type 2 diabetes, with up to 50% of older type 2 diabetic patients having evidence of a distal neuropathy (1). These neuropathies are characterized by a progressive loss of nerve fibers affecting both the autonomic and somatic divisions of the nervous system. The clinical features of the diabetic neuropathies vary immensely, and only a minority are associated with pain. The major portion of this review will be dedicated to the most common painful neuropathy, chronic sensorimotor distal symmetrical polyneuropathy (DSPN). This neuropathy has major detrimental effects on its sufferers, conferring an increased risk of foot ulceration and Charcot neuroarthropathy as well as being associated with increased mortality (1).

In addition to DSPN, other rarer neuropathies may also be associated with painful symptoms including acute painful neuropathy that often follows periods of unstable glycemic control, mononeuropathies (e.g., cranial nerve palsies), radiculopathies, and entrapment neuropathies (e.g., carpal tunnel syndrome). By far the most common presentation of diabetic polyneuropathy (over 90%) is typical DSPN or chronic DSPN. […] DSPN results in insensitivity of the feet that predisposes to foot ulceration (1) and/or neuropathic pain (painful DSPN), which can be disabling. […] The onset of DSPN is usually gradual or insidious and is heralded by sensory symptoms that start in the toes and then progress proximally to involve the feet and legs in a stocking distribution. When the disease is well established in the lower limbs in more severe cases, there is upper limb involvement, with a similar progression proximally starting in the fingers. As the disease advances further, motor manifestations, such as wasting of the small muscles of the hands and limb weakness, become apparent. In some cases, there may be sensory loss that the patient may not be aware of, and the first presentation may be a foot ulcer. Approximately 50% of patients with DSPN experience neuropathic symptoms in the lower limbs including uncomfortable tingling (dysesthesia), pain (burning; shooting or “electric-shock like”; lancinating or “knife-like”; “crawling”, or aching etc., in character), evoked pain (allodynia, hyperesthesia), or unusual sensations (such as a feeling of swelling of the feet or severe coldness of the legs when clearly the lower limbs look and feel fine, odd sensations on walking likened to “walking on pebbles” or “walking on hot sand,” etc.). There may be marked pain on walking that may limit exercise and lead to weight gain. Painful DSPN is characteristically more severe at night and often interferes with normal sleep (3). 
It also has a major impact on the ability to function normally (both mental and physical functioning, e.g., ability to maintain work, mood, and quality of life [QoL]) (3,4). […] The unremitting nature of the pain can be distressing, resulting in mood disorders including depression and anxiety (4). The natural history of painful DSPN has not been well studied […]. However, it is generally believed that painful symptoms may persist over the years (5), occasionally becoming less prominent as the sensory loss worsens (6).”

“There have been relatively few epidemiological studies that have specifically examined the prevalence of painful DSPN, which range from 10–26% (7–9). In a recent study of a large cohort of diabetic patients receiving community-based health care in northwest England (n = 15,692), painful DSPN assessed using neuropathy symptom and disability scores was found in 21% (7). In one population-based study from Liverpool, U.K., the prevalence of painful DSPN assessed by a structured questionnaire and examination was estimated at 16% (8). Notably, it was found that 12.5% of these patients had never reported their symptoms to their doctor and 39% had never received treatment for their pain (8), indicating that there may be considerable underdiagnosis and undertreatment of painful neuropathic symptoms compared with other aspects of diabetes management such as statin therapy and management of hypertension. Risk factors for DSPN per se have been extensively studied, and it is clear that apart from poor glycemic control, cardiovascular risk factors play a prominent role (10): risk factors for painful DSPN are less well known.”

“A broad spectrum of presentations may occur in patients with DSPN, ranging from one extreme of the patient with very severe painful symptoms but few signs, to the other when patients may present with a foot ulcer having lost all sensation without ever having any painful or uncomfortable symptoms […] it is well recognized that the severity of symptoms may not relate to the severity of the deficit on clinical examination (1). […] Because DSPN is a diagnosis of exclusion, a careful clinical history and a peripheral neurological and vascular examination of the lower limbs are essential to exclude other causes of neuropathic pain and leg/foot pain such as peripheral vascular disease, arthritis, malignancy, alcohol abuse, spinal canal stenosis, etc. […] Patients with asymmetrical symptoms and/or signs (such as loss of an ankle jerk in one leg only), rapid progression of symptoms, or predominance of motor symptoms and signs should be carefully assessed for other causes of the findings.”

“The fact that diabetes induces neuropathy and that in a proportion of patients this is accompanied by pain despite the loss of input and numbness, suggests that marked changes occur in the processes of pain signaling in the peripheral and central nervous system. Neuropathic pain is characterized by ongoing pain together with exaggerated responses to painful and nonpainful stimuli, hyperalgesia, and allodynia. […] the changes seen suggest altered peripheral signaling and central compensatory changes perhaps driven by the loss of input. […] Very clear evidence points to the key role of changes in ion channels as a consequence of nerve damage and their roles in the disordered activity and transduction in damaged and intact fibers (50). Sodium channels depolarize neurons and generate an action potential. Following damage to peripheral nerves, the normal distribution of these channels along a nerve is disrupted by the neuroma and “ectopic” activity results from the accumulation of sodium channels at or around the site of injury. Other changes in the distribution and levels of these channels are seen and impact upon the pattern of neuronal excitability in the nerve. Inherited pain disorders arise from mutated sodium channels […] and polymorphisms in this channel impact on the level of pain in patients, indicating that inherited differences in channel function might explain some of the variability in pain between patients with DSPN (53). […] Where sodium channels act to generate action potentials, potassium channels serve as the molecular brakes of excitable cells, playing an important role in modulating neuronal hyperexcitability. The drug retigabine, a potassium channel opener acting on the KV7 (M-current) channel, blunts behavioral hypersensitivity in neuropathic rats (56) and also inhibits C and Aδ-mediated responses in dorsal horn neurons in both naïve and neuropathic rats (57), but has yet to reach the clinic as an analgesic”.

“Aδ and C fibers terminate primarily in the superficial laminae of the dorsal horn where the large majority of neurons are nociceptive specific […]. Some of these neurons gain low threshold inputs after neuropathy and these cells project predominantly to limbic brain areas […] spinal cord neurons provide parallel outputs to the affective and sensory areas of the brain. Changes induced in these neurons by repeated noxious inputs underpin central sensitization where the resultant hyperexcitability of neurons leads to greater responses to all subsequent inputs — innocuous and noxious — expanded receptive fields and enhanced outputs to higher levels of the brain […] As a consequence of these changes in the sending of nociceptive information within the peripheral nerve and then the spinal cord, the information sent to the brain becomes amplified so that pain ratings become higher. Alongside this, the persistent input into the limbic brain areas such as the amygdala are likely to be causal in the comorbidities that patients often report due to ongoing painful inputs disrupting normal function and generating fear, depression, and sleep problems […]. Of course, many patients report that their pains are worse at night, which may be due to nocturnal changes in these central pain processing areas. […] overall, the mechanisms of pain in diabetic neuropathy extend from altered channel function in peripheral nerves through enhanced spinal processing and finally to changes in many higher centers”.

“Pharmacological treatment of painful DSPN is not entirely satisfactory because currently available drugs are often ineffective and complicated by adverse events. Tricyclic compounds (TCAs) have been used as first-line agents for many years, but their use is limited by frequent side effects that may be central or anticholinergic, including dry mouth, constipation, sweating, blurred vision, sedation, and orthostatic hypotension (with the risk of falls particularly in elderly patients). […] Higher doses have been associated with an increased risk of sudden cardiac death, and caution should be taken in any patient with a history of cardiovascular disease (65). […] The selective serotonin noradrenalin reuptake inhibitors (SNRI) duloxetine and venlafaxine have been used for the management of painful DSPN (65). […] there have been several clinical trials involving pregabalin in painful DSPN, and these showed clear efficacy in management of painful DSPN (69). […] The side effects include dizziness, somnolence, peripheral edema, headache, and weight gain.”

“A major deficiency in the area of the treatment of neuropathic pain in diabetes is the relative lack of comparative or combination studies. Virtually all previous trials have been of active agents against placebo, whereas there is a need for more studies that compare a given drug with an active comparator and indeed lower-dose combination treatments (64). […] The European Federation of Neurological Societies proposed that first-line treatments might comprise TCAs, SNRIs, gabapentin, or pregabalin (71). The U.K. National Institute for Health and Care Excellence guidelines on the management of neuropathic pain in nonspecialist settings proposed that duloxetine should be the first-line treatment with amitriptyline as an alternative, and pregabalin as a second-line treatment for painful DSPN (72). […] this recommendation of duloxetine as the first-line therapy was not based on efficacy but rather cost-effectiveness. More recently, the American Academy of Neurology recommended that pregabalin is “established as effective and should be offered for relief of [painful DSPN] (Level A evidence)” (73), whereas venlafaxine, duloxetine, amitriptyline, gabapentin, valproate, opioids, and capsaicin were considered to be “probably effective and should be considered for treatment of painful DSPN (Level B evidence)” (63). […] this recommendation was primarily based on achievement of greater than 80% completion rate of clinical trials, which in turn may be influenced by the length of the trials. […] the International Consensus Panel on Diabetic Neuropathy recommended TCAs, duloxetine, pregabalin, and gabapentin as first-line agents having carefully reviewed all the available literature regarding the pharmacological treatment of painful DSPN (65), the final drug choice tailored to the particular patient based on demographic profile and comorbidities.
[…] The initial selection of a particular first-line treatment will be influenced by the assessment of contraindications, evaluation of comorbidities […], and cost (65). […] caution is advised to start at lower than recommended doses and titrate gradually.”

ii. Sex Differences in All-Cause and Cardiovascular Mortality, Hospitalization for Individuals With and Without Diabetes, and Patients With Diabetes Diagnosed Early and Late.

“A challenge with type 2 diabetes is the late diagnosis of the disease because many individuals who meet the criteria are often asymptomatic. Approximately 183 million people, or half of those who have diabetes, are unaware they have the disease (1). Furthermore, type 2 diabetes can be present for 9 to 12 years before being diagnosed and, as a result, complications are often present at the time of diagnosis (3). […] Cardiovascular disease (CVD) is the most common comorbidity associated with diabetes, and with 50% of those with diabetes dying of CVD it is the most common cause of death (1). […] Newfoundland and Labrador has the highest age-standardized prevalence of diabetes in Canada (2), and the age-standardized mortality and hospitalization rates for CVD, AMI, and stroke are some of the highest in the country (21,22). A better understanding of mortality and hospitalizations associated with diabetes for males and females is important to support diabetes prevention and management. Therefore, the objectives of this study were to compare the risk of all-cause, CVD, AMI, and stroke mortality and hospitalizations for males and females with and without diabetes and those with early and late diagnoses of diabetes. […] We conducted a population-based retrospective cohort study including 73,783 individuals aged 25 years or older in Newfoundland and Labrador, Canada (15,152 with diabetes; 9,517 with late diagnoses). […] mean age at baseline was 60.1 years (SD, 14.3 years). […] Diabetes was classified as being diagnosed “early” and “late” depending on when diabetes-related comorbidities developed. Individuals early in the disease course would not have any diabetes-related comorbidities at the time of their case dates. On the contrary, a late-diagnosed diabetes patient would have comorbidities related to diabetes at the time of diagnosis.”

“For males, 20.5% (n = 7,751) had diabetes, whereas 20.6% (n = 7,401) of females had diabetes. […] Males and females with diabetes were more likely to die, to be younger at death, to have a shorter survival time, and to be admitted to the hospital than males and females without diabetes (P < 0.01). When admitted to the hospital, individuals with diabetes stayed longer than individuals without diabetes […] Both males and females with late diagnoses were significantly older at the time of diagnosis than those with early diagnoses […]. Males and females with late diagnoses of diabetes were more likely to be deceased at the end of the study period compared with those with early diagnoses […]. Those with early diagnoses were younger at death compared with those with late diagnoses (P < 0.01); however, median survival time for both males and females with early diagnoses was significantly longer than that of those with late diagnoses (P < 0.01). During the study period, males and females with late diabetes diagnoses were more likely to be hospitalized (P < 0.01) and have a longer length of hospital stay compared with those with early diagnoses (P < 0.01).”

“[T]he hospitalization results show that an early diagnosis […] increase the risk of all-cause, CVD, and AMI hospitalizations compared with individuals without diabetes. After adjusting for covariates, males with late diabetes diagnoses had an increased risk of all-cause and CVD mortality and hospitalizations compared with males without diabetes. Similar findings were found for females. A late diabetes diagnosis was positively associated with CVD mortality (HR 6.54 [95% CI 4.80–8.91]) and CVD hospitalizations (5.22 [4.31–6.33]) for females, and the risk was significantly higher compared with their male counterparts (3.44 [2.47–4.79] and 3.33 [2.80–3.95]).”
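A rough way to check the paper's claim that the female hazard ratio exceeds the male one is to compare the two HRs via their confidence intervals, recovering each log-HR's standard error from its CI (the Altman–Bland approach). This is my back-of-envelope check, not the paper's own analysis:

```python
import math

def ratio_of_ratios(hr1, lo1, hi1, hr2, lo2, hi2, z=1.96):
    """Altman-Bland comparison of two ratio estimates via their 95% CIs.
    Returns the ratio of the two HRs with an approximate 95% CI."""
    se1 = (math.log(hi1) - math.log(lo1)) / (2 * z)  # SE of log(hr1)
    se2 = (math.log(hi2) - math.log(lo2)) / (2 * z)  # SE of log(hr2)
    diff = math.log(hr1) - math.log(hr2)
    se = math.sqrt(se1**2 + se2**2)
    return math.exp(diff), math.exp(diff - z*se), math.exp(diff + z*se)

# CVD mortality, late diagnosis:
# females HR 6.54 (95% CI 4.80-8.91) vs males HR 3.44 (95% CI 2.47-4.79)
rr, lo, hi = ratio_of_ratios(6.54, 4.80, 8.91, 3.44, 2.47, 4.79)
print(f"ratio of HRs = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The interval for the ratio of HRs excludes 1, which is consistent with the paper's statement that the female risk was significantly higher than the male risk.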

iii. Effect of Type 1 Diabetes on Carotid Structure and Function in Adolescents and Young Adults.

I may have discussed some of the results of this study before, but a search of the blog told me that I have not covered the study itself. I thought it couldn’t hurt to add a link and a few highlights here.

“Type 1 diabetes mellitus causes increased carotid intima-media thickness (IMT) in adults. We evaluated IMT in young subjects with type 1 diabetes. […] Participants with type 1 diabetes (N = 402) were matched to controls (N = 206) by age, sex, and race or ethnicity. Anthropometric and laboratory values, blood pressure, and IMT were measured.”

“Youth with type 1 diabetes had thicker bulb IMT, which remained significantly different after adjustment for demographics and cardiovascular risk factors. […] Because the rate of progression of IMT in healthy subjects (mean age, 40 years) in the Bogalusa Heart study was 0.017–0.020 mm/year (4), our difference of 0.016 mm suggests that our type 1 diabetic subjects had a vascular age 1 year advanced from their chronological age. […] adjustment for HbA1c ablated the case-control difference in IMT, suggesting that the thicker carotid IMT in the subjects with diabetes could be attributed to diabetes-related hyperglycemia.”

“In the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) study, progression of IMT over the course of 6 years was faster in subjects with type 1 diabetes, yielding a thicker final IMT in cases (5). There was no difference in IMT at baseline. However, DCCT/EDIC did not image the bulb, which is likely the earliest site of thickening according to the Bogalusa Heart Study […] Our analyses reinforce the importance of imaging the carotid bulb, often the site of earliest detectible subclinical atherosclerosis in youth. The DCCT/EDIC study demonstrated that the intensive treatment group had a slower progression of IMT (5) and that mean HbA1c levels explained most of the differences in IMT progression between treatment groups (12). One longitudinal study of youth found children with type 1 diabetes who had progression of IMT over the course of 2 years had higher HbA1c (13). Our data emphasize the role of diabetes-related hyperglycemia in increasing IMT in youth with type 1 diabetes. […] In summary, our study provides novel evidence that carotid thickness is increased in youth with type 1 diabetes compared with healthy controls and that this difference is not accounted for by traditional cardiovascular risk factors. Better control of diabetes-related hyperglycemia may be needed to reduce future cardiovascular disease.”

iv. Factors Associated With Microalbuminuria in 7,549 Children and Adolescents With Type 1 Diabetes in the T1D Exchange Clinic Registry.

“Elevated urinary albumin excretion is an early sign of diabetic kidney disease (DKD). The American Diabetes Association (ADA) recommends screening for microalbuminuria (MA) annually in people with type 1 diabetes after 10 years of age and 5 years of diabetes duration, with a diagnosis of MA requiring two of three tests to be abnormal (1). Early diagnosis of MA is important because effective treatments exist to limit the progression of DKD (1). However, although reduced rates of MA have been reported over the past few decades in some (2–4) but not all (5,6) studies, it has been suggested that the development of proteinuria has not been prevented but, rather, has been delayed by ∼10 years and that further improvements in care are needed (7).

Limited data exist on the frequency of a clinical diagnosis of MA in the pediatric population with type 1 diabetes in the U.S. Our aim was to use the data from the T1D Exchange clinic registry to assess factors associated with MA in 7,549 children and adolescents with type 1 diabetes.”

“The analysis cohort included 7,549 participants, with mean age of 13.8 ± 3.5 years (range 2 to 19), mean age at type 1 diabetes onset of 6.9 ± 3.9 years, and mean diabetes duration of 6.5 ± 3.7 years; 49% were female. The racial/ethnic distribution was 78% non-Hispanic white, 6% non-Hispanic black, 10% Hispanic, and 5% other. The average of all HbA1c levels (for up to the past 13 years) was 8.4 ± 1.3% (69 ± 13.7 mmol/mol) […]. MA was present in 329 of 7,549 (4.4%) participants, with a higher frequency associated with longer diabetes duration, higher mean glycosylated hemoglobin (HbA1c) level, older age, female sex, higher diastolic blood pressure (BP), and lower BMI […] increasing age [was] mainly associated with an increase in the frequency of MA when HbA1c was ≥9.5% (≥80 mmol/mol). […] MA was uncommon (<2%) among participants with HbA1c <7.5% (<58 mmol/mol). Of those with MA, only 36% were receiving ACEI/ARB treatment. […] Our results provide strong support for prior literature in emphasizing the importance of good glycemic and BP control, particularly as diabetes duration increases, in order to reduce the risk of DKD.”

v. Secular Changes in the Age-Specific Prevalence of Diabetes Among U.S. Adults: 1988–2010.

“This study included 22,586 adults sampled in three periods of the National Health and Nutrition Examination Survey (1988–1994, 1999–2004, and 2005–2010). Diabetes was defined as having self-reported diagnosed diabetes or having a fasting plasma glucose level ≥126 mg/dL or HbA1c ≥6.5% (48 mmol/mol). […] The number of adults with diabetes increased by 75% from 1988–1994 to 2005–2010. After adjusting for sex, race/ethnicity, and education level, the prevalence of diabetes increased over the two decades across all age-groups. Younger adults (20–34 years of age) had the lowest absolute increase in diabetes prevalence of 1.0%, followed by middle-aged adults (35–64) at 2.7% and older adults (≥65) at 10.0% (all P < 0.001). Comparing 2005–2010 with 1988–1994, the adjusted prevalence ratios (PRs) by age-group were 2.3, 1.3, and 1.5 for younger, middle-aged, and older adults, respectively (all P < 0.05). After additional adjustment for body mass index (BMI), waist-to-height ratio (WHtR), or waist circumference (WC), the adjusted PR remained statistically significant only for adults ≥65 years of age.

CONCLUSIONS During the past two decades, the prevalence of diabetes increased across all age-groups, but adults ≥65 years of age experienced the largest increase in absolute change. Obesity, as measured by BMI, WHtR, or WC, was strongly associated with the increase in diabetes prevalence, especially in adults <65.”

“The crude prevalence of diabetes changed from 8.4% (95% CI 7.7–9.1%) in 1988–1994 to 12.1% (11.3–13.1%) in 2005–2010, with a relative increase of 44.8% (28.3–61.3%) between the two survey periods. There was less change of prevalence of undiagnosed diabetes (P = 0.053). […] The estimated number (in millions) of adults with diabetes grew from 14.9 (95% CI 13.3–16.4) in 1988–1994 to 26.1 (23.8–28.3) in 2005–2010, resulting in an increase of 11.2 prevalent cases (a 75.5% [52.1–98.9%] increase). Younger adults contributed 5.5% (2.5–8.4%), middle-aged adults contributed 52.9% (43.4–62.3%), and older adults contributed 41.7% (31.9–51.4%) of the increased number of cases. In each survey time period, the number of adults with diabetes increased with age until ∼60–69 years; thereafter, it decreased […] the largest increase of cases occurred in middle-aged and older adults.”
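The headline arithmetic in the quoted abstract is easy to verify. A quick sanity check in Python, using only the rounded figures quoted above (the small discrepancies against the reported 75.5% and 44.8% come from rounding in the excerpt's inputs):

```python
# Rough consistency check of the quoted NHANES figures; all numbers are
# taken from the excerpt above, confidence intervals ignored.
cases_1988 = 14.9e6   # adults with diabetes, 1988-1994
cases_2010 = 26.1e6   # adults with diabetes, 2005-2010

increase = cases_2010 - cases_1988
pct_increase = 100 * increase / cases_1988
print(f"absolute increase: {increase / 1e6:.1f} million")
print(f"relative increase: {pct_increase:.1f}%")   # ~75%, as reported

# Crude prevalence change, 8.4% -> 12.1%
rel = 100 * (12.1 - 8.4) / 8.4
print(f"relative change in crude prevalence: {rel:.1f}%")  # ~44%, as reported
```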

vi. The Expression of Inflammatory Genes Is Upregulated in Peripheral Blood of Patients With Type 1 Diabetes.

“Although much effort has been devoted toward discoveries with respect to gene expression profiling in human T1D in the last decade (1–5), previous studies had serious limitations. Microarray-based gene expression profiling is a powerful discovery platform, but the results must be validated by an alternative technique such as real-time RT-PCR. Unfortunately, few of the previous microarray studies on T1D have been followed by a validation study. Furthermore, most previous gene expression studies had small sample sizes (<100 subjects in each group) that are not adequate for the human population given the expectation of large expression variations among individual subjects. Finally, the selection of appropriate reference genes for normalization of quantitative real-time PCR has a major impact on data quality. Most of the previous studies have used only a single reference gene for normalization. Ideally, gene transcription studies using real-time PCR should begin with the selection of an appropriate set of reference genes to obtain more reliable results (6–8).

We have previously carried out extensive microarray analysis and identified >100 genes with significantly differential expression between T1D patients and control subjects. Most of these genes have important immunological functions and were found to be upregulated in autoantibody-positive subjects, suggesting their potential use as predictive markers and involvement in T1D development (2). In this study, real-time RT-PCR was performed to validate a subset of the differentially expressed genes in a large sample set of 928 T1D patients and 922 control subjects. In addition to the verification of the gene expression associated with T1D, we also identified genes with significant expression changes in T1D patients with diabetes complications.

“Of the 18 genes analyzed here, eight genes […] had higher expression and three genes […] had lower expression in T1D patients compared with control subjects, indicating that genes involved in inflammation, immune regulation, and antigen processing and presentation are significantly altered in PBMCs from T1D patients. Furthermore, one adhesion molecule […] and three inflammatory genes mainly expressed by myeloid cells […] were significantly higher in T1D patients with complications (odds ratio [OR] 1.3–2.6, adjusted P value = 0.005–10⁻⁸), especially those patients with neuropathy (OR 4.8–7.9, adjusted P value <0.005). […] These findings suggest that inflammatory mediators secreted mainly by myeloid cells are implicated in T1D and its complications.”

vii. Overexpression of Hemopexin in the Diabetic Eye – A new pathogenic candidate for diabetic macular edema.

“Diabetic retinopathy remains the leading cause of preventable blindness among working-age individuals in developed countries (1). Whereas proliferative diabetic retinopathy (PDR) is the commonest sight-threatening lesion in type 1 diabetes, diabetic macular edema (DME) is the primary cause of poor visual acuity in type 2 diabetes. Because of the high prevalence of type 2 diabetes, DME is the main cause of visual impairment in diabetic patients (2). When clinically significant DME appears, laser photocoagulation is currently indicated. However, the optimal window for laser treatment has frequently passed by the time treatment is given and, moreover, photocoagulation is not uniformly successful in halting visual decline. In addition, photocoagulation is not without side effects, with visual field loss and impairment of either adaptation or color vision being the most frequent. Intravitreal corticosteroids have been successfully used in eyes with persistent DME and loss of vision after the failure of conventional treatment. However, reinjections are commonly needed, and there are substantial adverse effects such as infection, glaucoma, and cataract formation. Intravitreal anti–vascular endothelial growth factor (VEGF) agents have also been found to improve visual acuity and decrease retinal thickness in DME, even in nonresponders to conventional treatment (3). However, apart from local side effects such as endophthalmitis and retinal detachment, the response to treatment of DME by VEGF blockade is not prolonged and is subject to significant variability. For all these reasons, new pharmacological treatments based on the understanding of the pathophysiological mechanisms of DME are needed.”

“Vascular leakage due to the breakdown of the blood-retinal barrier (BRB) is the main event involved in the pathogenesis of DME (4). However, little is known regarding the molecules primarily involved in this event. By means of a proteomic analysis, we have found that hemopexin was significantly increased in the vitreous fluid of patients with DME in comparison with PDR and nondiabetic control subjects (5). Hemopexin is the best characterized permeability factor in steroid-sensitive nephrotic syndrome (6,7). […] T cell–associated cytokines like tumor necrosis factor-α are able to enhance hemopexin production in mesangial cells in vitro, and this effect is prevented by corticosteroids (8). However, whether hemopexin also acts as a permeability factor in the BRB and its potential response to corticosteroids remains to be elucidated. […] the aims of the current study were 1) to compare hemopexin and hemopexin receptor (LDL receptor–related protein [LRP1]) levels in retina and in vitreous fluid from diabetic and nondiabetic patients, 2) to evaluate the effect of hemopexin on the permeability of outer and inner BRB in cell cultures, and 3) to determine whether anti-hemopexin antibodies and dexamethasone were able to prevent an eventual hemopexin-induced hyperpermeability.”

“In the current study, we […] confirmed our previous results obtained by a proteomic approach showing that hemopexin is higher in the vitreous fluid of diabetic patients with DME in comparison with diabetic patients with PDR and nondiabetic subjects. In addition, we provide the first evidence that hemopexin is overexpressed in diabetic eye. Furthermore, we have shown that hemopexin leads to the disruption of RPE [retinal pigment epithelium] cells, thus increasing permeability, and that this effect is prevented by dexamethasone. […] Our findings suggest that hemopexin can be considered a new candidate in the pathogenesis of DME and a new therapeutic target.”

viii. Relationship Between Overweight and Obesity With Hospitalization for Heart Failure in 20,985 Patients With Type 1 Diabetes.

“We studied patients with type 1 diabetes included in the Swedish National Diabetes Registry during 1998–2003, and they were followed up until hospitalization for HF, death, or 31 December 2009. Cox regression was used to estimate relative risks. […] Type 1 diabetes is defined in the NDR as receiving treatment with insulin only and onset at age 30 years or younger. These characteristics previously have been validated as accurate in 97% of cases (11). […] In a sample of 20,985 type 1 diabetic patients (mean age, 38.6 years; mean BMI, 25.0 kg/m2), 635 patients […] (3%) were admitted for a primary or secondary diagnosis of HF during a median follow-up of 9 years, with an incidence of 3.38 events per 1,000 patient-years (95% CI, 3.12–3.65). […] Cox regression adjusting for age, sex, diabetes duration, smoking, HbA1c, systolic and diastolic blood pressures, and baseline and intercurrent comorbidities (including myocardial infarction) showed a significant relationship between BMI and hospitalization for HF (P < 0.0001). In reference to patients in the BMI 20–25 kg/m2 category, hazard ratios (HRs) were as follows: HR 1.22 (95% CI, 0.83–1.78) for BMI <20 kg/m2; HR 0.94 (95% CI, 0.78–1.12) for BMI 25–30 kg/m2; HR 1.55 (95% CI, 1.20–1.99) for BMI 30–35 kg/m2; and HR 2.90 (95% CI, 1.92–4.37) for BMI ≥35 kg/m2.

CONCLUSIONS Obesity, particularly severe obesity, is strongly associated with hospitalization for HF in patients with type 1 diabetes, whereas no similar relation was present in overweight and low body weight.”

“In contrast to type 2 diabetes, obesity is not implicated as a causal factor in type 1 diabetes and maintaining normal weight is accordingly less of a focus in clinical practice of patients with type 1 diabetes. Because most patients with type 2 diabetes are overweight or obese and glucose levels can normalize in some patients after weight reduction, this is usually an important part of integrated diabetes care. Our findings indicate that given the substantial risk of cardiovascular disease in type 1 diabetic patients, it is crucial for clinicians to also address weight issues in type 1 diabetes. Because many patients are normal weight when diabetes is diagnosed, careful monitoring of weight with a view to maintaining normal weight is probably more essential than previously thought. Although overweight was not associated with an increased risk of HF, higher BMI levels probably increase the risk of future obesity. Our finding that 71% of patients with BMI >35 kg/m2 were women is potentially important, although this should be tested in other populations given that it could be a random finding. If not random, especially because the proportion was much higher than in the entire cohort (45%), then it may indicate that severe obesity is a greater problem in women than in men with type 1 diabetes.”

November 30, 2017 Posted by | Cardiology, Diabetes, Genetics, Nephrology, Neurology, Ophthalmology, Pharmacology, Studies


Most of the words below are ones I encountered while reading the Jim Butcher books White Night, Small Favour, Turn Coat, and Changes.

Propitiate. Misericord. Skirling. Idiom. Cadge. Hapless. Roil. Kibble. Viridian. Kine. Shill. Steeple. Décolletage. Kukri. Rondure. Wee. Contrail. Servitor. Pastern. Fetlock.

Coterie. Crochet. Fibrillate. Knead. Divot. Avail. Tamale. Abalone. Cupola. Tuyere. Simulacrum. Bristle. Guff. Shimmy. Prow. Warble. Cannery. Twirl. Winch. Wheelhouse.

Teriyaki. Widdershins. Kibble. Slobber. Surcease. Amble. Invocation. Gasket. Chorale. Rivulet. Choker. Grimoire. Caduceus. Fussbudget. Pate. Scrunchie. Shamble. Ficus. Deposition. Grue.

Aliquot. Nape. Emanation. Atavistic. Menhir. Scrimshaw. Burble. Pauldron. Ornate. Stolid. Wry. Stamen. Ductwork. Speleothem. Philtrum. Hassock. Incipit. Planish. Rheology. Sinter.


November 29, 2017 Posted by | Books, Language


A few quotes from the book and some related links below. Here’s my very short goodreads review of the book.


“The main naturally occurring radionuclides of primordial origin are uranium-235, uranium-238, thorium-232, their decay products, and potassium-40. The average abundance of uranium, thorium, and potassium in the terrestrial crust is 2.6 parts per million, 10 parts per million, and 1% respectively. Uranium and thorium produce other radionuclides via neutron- and alpha-induced reactions, particularly deeply underground, where uranium and thorium have a high concentration. […] A weak source of natural radioactivity derives from nuclear reactions of primary and secondary cosmic rays with the atmosphere and the lithosphere, respectively. […] Accretion of extraterrestrial material, intensively exposed to cosmic rays in space, represents a minute contribution to the total inventory of radionuclides in the terrestrial environment. […] Natural radioactivity is [thus] mainly produced by uranium, thorium, and potassium. The total heat content of the Earth, which derives from this radioactivity, is 12.6 × 10²⁴ MJ (one megajoule = 1 million joules), with the crust’s heat content standing at 5.4 × 10²¹ MJ. For comparison, this is significantly more than the 6.4 × 10¹³ MJ globally consumed for electricity generation during 2011. This energy is dissipated, either gradually or abruptly, towards the external layers of the planet, but only a small fraction can be utilized. The amount of energy available depends on the Earth’s geological dynamics, which regulates the transfer of heat to the surface of our planet. The total power dissipated by the Earth is 42 TW (one TW = 1 trillion watts): 8 TW from the crust, 32.3 TW from the mantle, 1.7 TW from the core. This amount of power is small compared to the 174,000 TW arriving to the Earth from the Sun.”

“Charged particles such as protons, beta and alpha particles, or heavier ions that bombard human tissue dissipate their energy locally, interacting with the atoms via the electromagnetic force. This interaction ejects electrons from the atoms, creating a track of electron–ion pairs, or ionization track. The energy that ions lose per unit path, as they move through matter, increases with the square of their charge and decreases linearly with their energy […] The energy deposited in the tissues and organs of your body by ionizing radiation is defined as the absorbed dose and is measured in gray. A dose of one gray corresponds to the energy of one joule deposited in one kilogram of tissue. The biological damage wrought by a given amount of energy deposited depends on the kind of ionizing radiation involved. The equivalent dose, measured in sievert, is the product of the dose and a factor w related to the effective damage induced into the living matter by the deposit of energy by specific rays or particles. For X-rays, gamma rays, and beta particles, a gray corresponds to a sievert; for neutrons, a dose of one gray corresponds to an equivalent dose of 5 to 20 sievert, and the factor w is equal to 5–20 (depending on the neutron energy). For protons and alpha particles, w is equal to 5 and 20, respectively. There is also another weighting factor taking into account the radiosensitivity of different organs and tissues of the body, to evaluate the so-called effective dose. Sometimes the dose is still quoted in rem, the old unit, with 100 rem corresponding to one sievert.”
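The gray-to-sievert conversion described above is just multiplication by a radiation weighting factor. A minimal sketch, with the factors taken from the ranges quoted in the excerpt (the neutron value is a midpoint I picked for illustration, not a regulatory table):

```python
# Absorbed dose (Gy) -> equivalent dose (Sv) via the weighting factor w
# quoted in the excerpt.  The neutron entry is an assumed midpoint of the
# quoted 5-20 range; a real calculation would use energy-dependent values.
RADIATION_WEIGHT = {
    "xray": 1.0,      # X-rays, gamma rays, beta particles: 1 Gy -> 1 Sv
    "gamma": 1.0,
    "beta": 1.0,
    "proton": 5.0,    # per the excerpt
    "alpha": 20.0,    # per the excerpt
    "neutron": 10.0,  # assumed midpoint of the quoted 5-20 range
}

def equivalent_dose_sv(absorbed_dose_gy: float, radiation: str) -> float:
    """Equivalent dose (Sv) = absorbed dose (Gy) x radiation weighting factor."""
    return absorbed_dose_gy * RADIATION_WEIGHT[radiation]

print(equivalent_dose_sv(0.001, "gamma"))  # 1 mGy of gamma -> 0.001 Sv
print(equivalent_dose_sv(0.001, "alpha"))  # same energy as alpha -> 0.02 Sv
```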

“Neutrons emitted during fission reactions have a relatively high velocity. When still in Rome, Fermi had discovered that fast neutrons needed to be slowed down to increase the probability of their reaction with uranium. The fission reaction occurs with uranium-235. Uranium-238, the most common isotope of the element, merely absorbs the slow neutrons. Neutrons slow down when they are scattered by nuclei with a similar mass. The process is analogous to the interaction between two billiard balls in a head-on collision, in which the incoming ball stops and transfers all its kinetic energy to the second one. ‘Moderators’, such as graphite and water, can be used to slow neutrons down. […] When Fermi calculated whether a chain reaction could be sustained in a homogeneous mixture of uranium and graphite, he got a negative answer. That was because most neutrons produced by the fission of uranium-235 were absorbed by uranium-238 before inducing further fissions. The right approach, as suggested by Szilárd, was to use separated blocks of uranium and graphite. Fast neutrons produced by the splitting of uranium-235 in the uranium block would slow down, in the graphite block, and then produce fission again in the next uranium block. […] A minimum mass – the critical mass – is required to sustain the chain reaction; furthermore, the material must have a certain geometry. The fissile nuclides, capable of sustaining a chain reaction of nuclear fission with low-energy neutrons, are uranium-235 […], uranium-233, and plutonium-239. The last two don’t occur in nature but can be produced artificially by irradiating with neutrons thorium-232 and uranium-238, respectively – via a reaction called neutron capture. Uranium-238 (99.27%) is fissionable, but not fissile. In a nuclear weapon, the chain reaction occurs very rapidly, releasing the energy in a burst.”
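The billiard-ball picture of moderation can be made quantitative. A standard reactor-physics estimate (not given in the book itself) uses the mean logarithmic energy loss per elastic collision, ξ, to count how many collisions a ~2 MeV fission neutron needs to reach thermal energy:

```python
import math

# Mean logarithmic energy decrement per elastic collision for a nucleus
# of mass number A (A = 1 for hydrogen, A = 12 for carbon in graphite):
#     alpha = ((A - 1) / (A + 1)) ** 2
#     xi    = 1 + alpha * ln(alpha) / (1 - alpha)   (xi = 1 when A = 1)
# Collisions needed to slow from E0 to E:  n = ln(E0 / E) / xi

def collisions_to_thermalize(A, e0_ev=2e6, e_ev=0.025):
    if A == 1:
        xi = 1.0                          # hydrogen: exact limit
    else:
        alpha = ((A - 1) / (A + 1)) ** 2
        xi = 1 + alpha * math.log(alpha) / (1 - alpha)
    return math.log(e0_ev / e_ev) / xi

print(round(collisions_to_thermalize(1)))   # hydrogen (water):   ~18
print(round(collisions_to_thermalize(12)))  # carbon (graphite): ~115
```

This is why light moderators work: a neutron thermalizes in roughly 18 collisions with hydrogen but needs over a hundred with carbon, which transfers far less energy per collision.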

“The basic components of nuclear power reactors, fuel, moderator, and control rods, are the same as in the first system built by Fermi, but the design of today’s reactors includes additional components such as a pressure vessel, containing the reactor core and the moderator, a containment vessel, and redundant and diverse safety systems. Recent technological advances in material developments, electronics, and information technology have further improved their reliability and performance. […] The moderator to slow down fast neutrons is sometimes still the graphite used by Fermi, but water, including ‘heavy water’ – in which the water molecule has a deuterium atom instead of a hydrogen atom – is more widely used. Control rods contain a neutron-absorbing material, such as boron or a combination of indium, silver, and cadmium. To remove the heat generated in the reactor core, a coolant – either a liquid or a gas – is circulating through the reactor core, transferring the heat to a heat exchanger or directly to a turbine. Water can be used as both coolant and moderator. In the case of boiling water reactors (BWRs), the steam is produced in the pressure vessel. In the case of pressurized water reactors (PWRs), the steam generator, which is the secondary side of the heat exchanger, uses the heat produced by the nuclear reactor to make steam for the turbines. The containment vessel is a one-metre-thick concrete and steel structure that shields the reactor.”

“Nuclear energy contributed 2,518 TWh of the world’s electricity in 2011, about 14% of the global supply. As of February 2012, there are 435 nuclear power plants operating in 31 countries worldwide, corresponding to a total installed capacity of 368,267 MW (electrical). There are 63 power plants under construction in 13 countries, with a capacity of 61,032 MW (electrical).”

“Since the first nuclear fusion, more than 60 years ago, many have argued that we need at least 30 years to develop a working fusion reactor, and this figure has stayed the same throughout those years.”

“[I]onizing radiation is […] used to improve many properties of food and other agricultural products. For example, gamma rays and electron beams are used to sterilize seeds, flour, and spices. They can also inhibit sprouting and destroy pathogenic bacteria in meat and fish, increasing the shelf life of food. […] More than 60 countries allow the irradiation of more than 50 kinds of foodstuffs, with 500,000 tons of food irradiated every year. About 200 cobalt-60 sources and more than 10 electron accelerators are dedicated to food irradiation worldwide. […] With the help of radiation, breeders can increase genetic diversity to make the selection process faster. The spontaneous mutation rate (number of mutations per gene, for each generation) is in the range 10⁻⁸–10⁻⁵. Radiation can increase this mutation rate to 10⁻⁵–10⁻². […] Long-lived cosmogenic radionuclides provide unique methods to evaluate the ‘age’ of groundwaters, defined as the mean subsurface residence time after the isolation of the water from the atmosphere. […] Scientists can date groundwater more than a million years old, through chlorine-36, produced in the atmosphere by cosmic-ray reactions with argon.”
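The groundwater-dating idea at the end of the excerpt rests on simple radioactive decay. A sketch using chlorine-36 (half-life about 301,000 years); the ratios in the example are made-up illustrative values:

```python
import math

# Groundwater 'age' from decay of a cosmogenic radionuclide: the 36Cl/Cl
# ratio at recharge decays with a half-life of ~301,000 years after the
# water is isolated from the atmosphere.
HALF_LIFE_36CL = 301_000  # years

def decay_age(initial_ratio: float, measured_ratio: float) -> float:
    """Years needed for initial_ratio to decay to measured_ratio."""
    return HALF_LIFE_36CL * math.log2(initial_ratio / measured_ratio)

# A sample whose 36Cl/Cl ratio has fallen to 1/16 of its recharge value
# is four half-lives old, i.e. ~1.2 million years:
print(decay_age(1.0, 1.0 / 16))  # -> 1204000.0
```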

“Radionuclide imaging was developed in the 1950s using special systems to detect the emitted gamma rays. The gamma-ray detectors, called gamma cameras, use flat crystal planes, coupled to photomultiplier tubes, which send the digitized signals to a computer for image reconstruction. Images show the distribution of the radioactive tracer in the organs and tissues of interest. This method is based on the introduction of low-level radioactive chemicals into the body. […] More than 100 diagnostic tests based on radiopharmaceuticals are used to examine bones and organs such as lungs, intestines, thyroids, kidneys, the liver, and gallbladder. They exploit the fact that our organs preferentially absorb different chemical compounds. […] Many radiopharmaceuticals are based on technetium-99m (an excited state of technetium-99 – the ‘m’ stands for ‘metastable’ […]). This radionuclide is used for the imaging and functional examination of the heart, brain, thyroid, liver, and other organs. Technetium-99m is extracted from molybdenum-99, which has a much longer half-life and is therefore more transportable. It is used in 80% of the procedures, amounting to about 40,000 per day, carried out in nuclear medicine. Other radiopharmaceuticals include short-lived gamma-emitters such as cobalt-57, cobalt-58, gallium-67, indium-111, iodine-123, and thallium-201. […] Methods routinely used in medicine, such as X-ray radiography and CAT, are increasingly used in industrial applications, particularly in non-destructive testing of containers, pipes, and walls, to locate defects in welds and other critical parts of the structure.”

“Today, cancer treatment with radiation is generally based on the use of external radiation beams that can target the tumour in the body. Cancer cells are particularly sensitive to damage by ionizing radiation and their growth can be controlled or, in some cases, stopped. High-energy X-rays produced by a linear accelerator […] are used in most cancer therapy centres, replacing the gamma rays produced from cobalt-60. The LINAC produces photons of variable energy bombarding a target with a beam of electrons accelerated by microwaves. The beam of photons can be modified to conform to the shape of the tumour, which is irradiated from different angles. The main problem with X-rays and gamma rays is that the dose they deposit in the human tissue decreases exponentially with depth. A considerable fraction of the dose is delivered to the surrounding tissues before the radiation hits the tumour, increasing the risk of secondary tumours. Hence, deep-seated tumours must be bombarded from many directions to receive the right dose, while minimizing the unwanted dose to the healthy tissues. […] The problem of delivering the needed dose to a deep tumour with high precision can be solved using collimated beams of high-energy ions, such as protons and carbon. […] Contrary to X-rays and gamma rays, all ions of a given energy have a certain range, delivering most of the dose after they have slowed down, just before stopping. The ion energy can be tuned to deliver most of the dose to the tumour, minimizing the impact on healthy tissues. The ion beam, which does not broaden during the penetration, can follow the shape of the tumour with millimetre precision. Ions with higher atomic number, such as carbon, have a stronger biological effect on the tumour cells, so the dose can be reduced. Ion therapy facilities are [however] still very expensive – in the range of hundreds of millions of pounds – and difficult to operate.”
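The contrast drawn above, roughly exponential fall-off for photons versus a Bragg peak for ions, can be illustrated with a toy depth-dose curve. The attenuation coefficient below is an assumed illustrative value, and the model ignores the build-up region of real photon beams:

```python
import math

# Toy model of photon depth-dose: relative dose falls off exponentially
# with depth.  mu is an assumed effective attenuation coefficient for a
# high-energy photon beam, not clinical data.
mu = 0.05  # per cm (illustrative)

def photon_relative_dose(depth_cm: float) -> float:
    return math.exp(-mu * depth_cm)

for d in (0, 5, 10, 15):
    print(f"{d:2d} cm: {photon_relative_dose(d):.2f}")
# Healthy tissue in front of a tumour at 15 cm receives more dose than the
# tumour itself -- hence multi-angle irradiation, or ion beams whose dose
# peaks near the end of their range.
```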

“About 50 million years ago, a global cooling trend took our planet from the tropical conditions at the beginning of the Tertiary to the ice ages of the Quaternary, when the Arctic ice cap developed. The temperature decrease was accompanied by a decrease in atmospheric CO2 from 2,000 to 300 parts per million. The cooling was probably caused by a reduced greenhouse effect and also by changes in ocean circulation due to plate tectonics. The drop in temperature was not constant as there were some brief periods of sudden warming. Ocean deep-water temperatures dropped from 12°C, 50 million years ago, to 6°C, 30 million years ago, according to archives in deep-sea sediments (today, deep-sea waters are about 2°C). […] During the last 2 million years, the mean duration of the glacial periods was about 26,000 years, while that of the warm periods – interglacials – was about 27,000 years. Between 2.6 and 1.1 million years ago, a full cycle of glacial advance and retreat lasted about 41,000 years. During the past 1.2 million years, this cycle has lasted 100,000 years. Stable and radioactive isotopes play a crucial role in the reconstruction of the climatic history of our planet”.


CUORE (Cryogenic Underground Observatory for Rare Events).
Lawrence Livermore National Laboratory.
Marie Curie. Pierre Curie. Henri Becquerel. Wilhelm Röntgen. Joseph Thomson. Ernest Rutherford. Hans Geiger. Ernest Marsden. Niels Bohr.
Ruhmkorff coil.
Pitchblende (uraninite).
Polonium. Becquerel.
Alpha decay. Beta decay. Gamma radiation.
Plum pudding model.
Robert Boyle. John Dalton. Dmitri Mendeleev. Frederick Soddy. James Chadwick. Enrico Fermi. Lise Meitner. Otto Frisch.
Periodic Table.
Exponential decay. Decay chain.
Particle accelerator. Cockcroft-Walton generator. Van de Graaff generator.
Barn (unit).
Nuclear fission.
Manhattan Project.
Chernobyl disaster. Fukushima Daiichi nuclear disaster.
Electron volt.
Thermoluminescent dosimeter.
Silicon diode detector.
Enhanced geothermal system.
Chicago Pile Number 1. Experimental Breeder Reactor 1. Obninsk Nuclear Power Plant.
Natural nuclear fission reactor.
Gas-cooled reactor.
Generation I reactors. Generation II reactor. Generation III reactor. Generation IV reactor.
Nuclear fuel cycle.
Accelerator-driven subcritical reactor.
Thorium-based nuclear power.
Small, sealed, transportable, autonomous reactor.
Fusion power. P-p (proton-proton) chain reaction. CNO cycle. Tokamak. ITER (International Thermonuclear Experimental Reactor).
Sterile insect technique.
Phase-contrast X-ray imaging. Computed tomography (CT). SPECT (Single-photon emission computed tomography). PET (positron emission tomography).
Boron neutron capture therapy.
Radiocarbon dating. Bomb pulse.
Radioactive tracer.
Radithor. The Radiendocrinator.
Radioisotope heater unit. Radioisotope thermoelectric generator. Seebeck effect.
Accelerator mass spectrometry.
Atomic bombings of Hiroshima and Nagasaki. Treaty on the Non-Proliferation of Nuclear Weapons. IAEA.
Nuclear terrorism.
Swiss light source. Synchrotron.
Chronology of the universe. Stellar evolution. S-process. R-process. Red giant. Supernova. White dwarf.
Victor Hess. Domenico Pacini. Cosmic ray.
Allende meteorite.
Age of the Earth. History of Earth. Geomagnetic reversal. Uranium-lead dating. Clair Cameron Patterson.
Glacials and interglacials.
Taung child. Lucy. Ardi. Ardipithecus kadabba. Acheulean tools. Java Man. Ötzi.
Argon-argon dating. Fission track dating.

November 28, 2017 Posted by | Archaeology, Astronomy, Biology, Books, Cancer/oncology, Chemistry, Engineering, Geology, History, Medicine, Physics


A decent book. Below some quotes and links.

“[A]ll mass spectrometers have three essential components — an ion source, a mass filter, and some sort of detector […] Mass spectrometers need to achieve high vacuum to allow the uninterrupted transmission of ions through the instrument. However, even high-vacuum systems contain residual gas molecules which can impede the passage of ions. Ions that collide with residual gas molecules lose energy and will appear at the detector at slightly lower mass than expected. This tailing to lower mass is minimized by improving the vacuum as much as possible, but it cannot be avoided entirely. The ability to resolve a small isotope peak adjacent to a large peak is called ‘abundance sensitivity’. A single magnetic sector TIMS has abundance sensitivity of about 1 ppm per mass unit at uranium masses. So, at mass 234, 1 ion in 1,000,000 will actually be 235U not 234U, and this will limit our ability to quantify the rare 234U isotope. […] AMS [accelerator mass spectrometry] instruments use very high voltages to achieve high abundance sensitivity. […] As I write this chapter, the human population of the world has recently exceeded seven billion. […] one carbon atom in 10¹² is mass 14. So, detecting 14C is far more difficult than identifying a single person on Earth, and somewhat comparable to identifying an individual leaf in the Amazon rain forest. Such is the power of isotope ratio mass spectrometry.”
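The population comparison in the excerpt can be checked in a couple of lines: at roughly one part in 10¹², how many 14C atoms sit in a single gram of modern carbon?

```python
# Scale check for the comparison quoted above: roughly one carbon atom
# in 10**12 is carbon-14.  How many 14C atoms are in one gram of carbon?
AVOGADRO = 6.022e23
MOLAR_MASS_C = 12.011   # g/mol
RATIO_14C = 1e-12       # 14C/C, order of magnitude as quoted

atoms_per_gram = AVOGADRO / MOLAR_MASS_C    # ~5.0e22 carbon atoms
atoms_14c = atoms_per_gram * RATIO_14C      # ~5.0e10 atoms of 14C
print(f"{atoms_14c:.1e} 14C atoms per gram")
# Several 14C atoms per gram for every person on Earth (~7e9 people):
# picking them out really is harder than finding one person among billions.
```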

14C is produced in the Earth’s atmosphere by the interaction between nitrogen and cosmic ray neutrons that releases a free proton turning 147N into 146C in a process that we call an ‘n-p’ reaction […] Because the process is driven by cosmic ray bombardment, we call 14C a ‘cosmogenic’ isotope. The half-life of 14C is about 5,000 years, so we know that all the 14C on Earth is either cosmogenic or has been created by mankind through nuclear reactors and bombs — no ‘primordial’ 14C remains because any that originally existed has long since decayed. 14C is not the only cosmogenic isotope; 16O in the atmosphere interacts with cosmic radiation to produce the isotope 10Be (beryllium). […] The process by which a high energy cosmic ray particle removes several nucleons is called ‘spallation’. 10Be production from 16O is not restricted to the atmosphere but also occurs when cosmic rays impact rock surfaces. […] when cosmic rays hit a rock surface they don’t bounce off but penetrate the top 2 or 3 metres (m) — the actual ‘attenuation’ depth will vary for particles of different energy. Most of the Earth’s crust is made of silicate minerals based on bonds between oxygen and silicon. So, the same spallation process that produces 10Be in the atmosphere also occurs in rock surfaces. […] If we know the flux of cosmic rays impacting a surface, the rate of production of the cosmogenic isotopes with depth below the rock surface, and the rate of radioactive decay, it should be possible to convert the number of cosmogenic atoms into an exposure age. […] Rocks on Earth which are shielded from much of the cosmic radiation have much lower levels of isotopes like 10Be than have meteorites which, before they arrive on Earth, are exposed to the full force of cosmic radiation. […] polar scientists have used cores drilled through ice sheets in Antarctica and Greenland to compare 10Be at different depths and thereby reconstruct 10Be production through time. 
The 14C and 10Be records are closely correlated, indicating the common response to changes in the cosmic ray flux.”
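The claim that no primordial 14C survives is easy to check. A minimal sketch, using the modern 5,730-year half-life (the book rounds to about 5,000):

```python
# Surviving fraction of 14C after t years: N(t)/N0 = 0.5 ** (t / half_life).
half_life = 5730.0   # years (modern value; the book rounds to ~5,000)

def surviving_fraction(t_years):
    return 0.5 ** (t_years / half_life)

print(surviving_fraction(5730))    # one half-life: 0.5
print(surviving_fraction(57300))   # ten half-lives: ~0.001
print(surviving_fraction(4.5e9))   # age of the Earth: underflows to 0.0
```

After 4.5 billion years the surviving fraction is 2 to the power of roughly minus 785,000, far below anything a float (or the universe) can represent, which is the quantitative content of "no primordial 14C remains".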

“[O]nce we have credible cosmogenic isotope production rates, […] there are two classes of applications, which we can call ‘exposure’ and ‘burial’ methodologies. Exposure studies simply measure the accumulation of the cosmogenic nuclide. Such studies are simplest when the cosmogenic nuclide is a stable isotope like 3He or 21Ne. These will just accumulate continuously as the sample is exposed to cosmic radiation. Slightly more complicated are cosmogenic isotopes that are radioactive […]. These isotopes accumulate through exposure but will also be destroyed by radioactive decay. Eventually, the isotopes achieve the condition known as ‘secular equilibrium’ where production and decay are balanced and no chronological information can be extracted. Secular equilibrium is achieved after three to four half-lives […] Imagine a boulder that has been transported from its place of origin to another place within a glacier — what we call a glacial erratic. While the boulder was deeply covered in ice, it would not have been exposed to cosmic radiation. Its cosmogenic isotopes will only have accumulated since the ice melted. So a cosmogenic isotope exposure age tells us the date at which the glacier retreated, and, by examining multiple erratics from different locations along the course of the glacier, allows us to construct a retreat history for the de-glaciation. […] Burial methodologies using cosmogenic isotopes work in situations where a rock was previously exposed to cosmic rays but is now located in a situation where it is shielded.”
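The accumulation-and-decay balance described above can be written down directly: for production rate P and decay constant λ the inventory is N(t) = (P/λ)(1 − e^(−λt)), which can be inverted for an exposure age as long as N is below the secular-equilibrium value P/λ. A sketch with an assumed, order-of-magnitude production rate (real rates are calibrated for latitude, altitude, and shielding):

```python
import math

# Exposure-age sketch for a radioactive cosmogenic nuclide (10Be here).
HALF_LIFE = 1.39e6                  # years, 10Be
LAM = math.log(2) / HALF_LIFE       # decay constant (1/yr)
P = 5.0                             # atoms per gram per year (assumed value)

def inventory(t_years):
    """Atoms/g after t years of exposure (no erosion, no prior burial)."""
    return (P / LAM) * (1.0 - math.exp(-LAM * t_years))

def exposure_age(n_atoms):
    """Invert inventory(); only valid below secular equilibrium P/LAM."""
    return -math.log(1.0 - n_atoms * LAM / P) / LAM

n = inventory(15000.0)              # a boulder exposed since deglaciation
print(f"{n:.0f} atoms/g -> exposure age {exposure_age(n):.0f} yr")
```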

“Cosmogenic isotopes are also being used extensively to recreate the seismic histories of tectonically active areas. Earthquakes occur when geological faults give way and rock masses move. A major earthquake is likely to expose new rock to the Earth’s surface. If the field geologist can identify rocks in a fault zone that (s)he is confident were brought to the surface in an earthquake, then a cosmogenic isotope exposure age would date the fault — providing, of course, that subsequent erosion can be ruled out or quantified. Precarious rocks are rock outcrops that could reasonably be expected to topple if subjected to a significant earthquake. Dating the exposed surface of precarious rocks with cosmogenic isotopes can reveal the amount of time that has elapsed since the last earthquake of a magnitude that would have toppled the rock. Constructing records of seismic history is not merely of academic interest; some of the world’s seismically active areas are also highly populated and developed.”

“One aspect of the natural decay series that acts in favour of the preservation of accurate age information is the fact that most of the intermediate isotopes are short-lived. For example, in both the U series the radon (Rn) isotopes, which might be expected to diffuse readily out of a mineral, have half-lives of only seconds or days, too short to allow significant losses. Some decay series isotopes though do have significantly long half-lives which offer the potential to be geochronometers in their own right. […] These techniques depend on the tendency of natural decay series to evolve towards a state of ‘secular equilibrium’ in which the activity of all species in the decay series is equal. […] at secular equilibrium, isotopes with long half-lives (i.e. small decay constants) will have large numbers of atoms whereas short-lived isotopes (high decay constants) will only constitute a relatively small number of atoms. Since decay constants vary by several orders of magnitude, so will the numbers of atoms of each isotope in the equilibrium decay series. […] Geochronological applications of natural decay series depend upon some process disrupting the natural decay series to introduce either a deficiency or an excess of an isotope in the series. The decay series will then gradually return to secular equilibrium and the geochronometer relies on measuring the extent to which equilibrium has been approached.”
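The point about atom numbers follows directly from equal activities: at secular equilibrium A = λN for every member of the chain, so the number of atoms scales with the half-life. Using the 238U and 234U half-lives as an example:

```python
# At secular equilibrium every member of a decay chain has the same
# activity A = lambda * N, so atom numbers scale with half-lives:
#   N1 / N2 = t_half_1 / t_half_2
t_half_238U = 4.468e9   # years
t_half_234U = 2.455e5   # years

atom_ratio = t_half_238U / t_half_234U
print(f"atoms of 238U per atom of 234U at equilibrium: {atom_ratio:,.0f}")
```

This is why the long-lived parent dominates the atom inventory by four orders of magnitude even though, counted in decays per second, the two isotopes are exactly matched.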

“The ‘ring of fire’ volcanoes around the margin of the Pacific Ocean are a manifestation of subduction in which the oldest parts of the Pacific Ocean crust are being returned to the mantle below. The oldest parts of the Pacific Ocean crust are about 150 million years (Ma) old, with anything older having already disappeared into the mantle via subduction zones. The Atlantic Ocean doesn’t have a ring of fire because it is a relatively young ocean which started to form about 60 Ma ago, and its oldest rocks are not yet ready to form subduction zones. Thus, while continental crust persists for billions of years, oceanic crust is a relatively transient (in terms of geological time) phenomenon at the Earth’s surface.”

“Mantle rocks typically contain minerals such as olivine, pyroxene, spinel, and garnet. Unlike say ice, which melts to form water, mixtures of minerals do not melt in the proportions in which they occur in the rock. Rather, they undergo partial melting in which some minerals […] melt preferentially leaving a solid residue enriched in refractory minerals […]. We know this from experimentally melting mantle-like rocks in the laboratory, but also because the basalts produced by melting of the mantle are closer in composition to Ca-rich (clino-) pyroxene than to the olivine-rich rocks that dominate the solid pieces (or xenoliths) of mantle that are sometimes transferred to the surface by certain types of volcanic eruptions. […] Thirty years ago geologists fiercely debated whether the mantle was homogeneous or heterogeneous; mantle isotope geochemistry hasn’t yet elucidated all the details but it has put to rest the initial conundrum: Earth’s mantle is compositionally heterogeneous.”


Frederick Soddy.
Rutherford–Bohr model.
Isotopes of hydrogen.
Radioactive decay. Types of decay. Alpha decay. Beta decay. Electron capture decay. Branching fraction. Gamma radiation. Spontaneous fission.
Radiocarbon dating.
Hessel de Vries.
Suess effect.
Bomb pulse.
Delta notation (non-wiki link).
Isotopic fractionation.
C3 carbon fixation. C4 carbon fixation.
Nitrogen-15 tracing.
Isotopes of strontium. Strontium isotope analysis.
Mass spectrometry.
Geiger counter.
Townsend avalanche.
Gas proportional counter.
Scintillation detector.
Liquid scintillation spectrometry. Photomultiplier tube.
Thallium-doped sodium iodide detectors. Semiconductor-based detectors.
Isotope separation (-enrichment).
Doubly labeled water.
Urea breath test.
Radiation oncology.
Targeted radionuclide therapy.
MIBG scan.
Single-photon emission computed tomography.
Positron emission tomography.
Inductively coupled plasma (ICP) mass spectrometry.
Secondary ion mass spectrometry.
Faraday cup (-detector).
Stadials and interstadials. Oxygen isotope ratio cycle.
Gain and phase model.
Milankovitch cycles.
Perihelion and aphelion. Precession.
Equilibrium Clumped-Isotope Effects in Doubly Substituted Isotopologues of Ethane (non-wiki link).
Age of the Earth.
Uranium–lead dating.
Cretaceous–Paleogene boundary.
Argon-argon dating.
Nuclear chain reaction. Critical mass.
Fukushima Daiichi nuclear disaster.
Natural nuclear fission reactor.
Continental crust. Oceanic crust. Basalt.
Core–mantle boundary.
Ocean Island Basalt.
Isochron dating.

November 23, 2017 | Biology, Books, Botany, Chemistry, Geology, Medicine, Physics

Promoting the unknown…

November 19, 2017 | Music

Materials… (II)

Some more quotes and links:

“Whether materials are stiff and strong, or hard or weak, is the territory of mechanics. […] the 19th century continuum theory of linear elasticity is still the basis of much of modern solid mechanics. A stiff material is one which does not deform much when a force acts on it. Stiffness is quite distinct from strength. A material may be stiff but weak, like a piece of dry spaghetti. If you pull it, it stretches only slightly […], but as you ramp up the force it soon breaks. To put this on a more scientific footing, so that we can compare different materials, we might devise a test in which we apply a force to stretch a bar of material and measure the increase in length. The fractional change in length is the strain; and the applied force divided by the cross-sectional area of the bar is the stress. To check that it is Hookean, we double the force and confirm that the strain has also doubled. To check that it is truly elastic, we remove the force and check that the bar returns to the same length that it started with. […] then we calculate the ratio of the stress to the strain. This ratio is the Young’s modulus of the material, a quantity which measures its stiffness. […] While we are measuring the change in length of the bar, we might also see if there is a change in its width. It is not unreasonable to think that as the bar stretches it also becomes narrower. The Poisson’s ratio of the material is defined as the ratio of the transverse strain to the longitudinal strain (without the minus sign).”
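The test procedure described translates directly into arithmetic (the numbers below are illustrative, not from the book):

```python
# Tension test (illustrative numbers, steel-like result):
# stress = F/A, strain = dL/L, E = stress/strain,
# Poisson's ratio = transverse strain / longitudinal strain (sign dropped).
force    = 2000.0    # N, axial load
area     = 1.0e-4    # m^2, cross-section of the bar (10 mm x 10 mm)
length   = 0.100     # m, gauge length
d_length = 1.0e-5    # m, measured extension
width    = 0.010     # m
d_width  = -3.0e-7   # m, the bar gets slightly narrower

stress  = force / area        # Pa
strain  = d_length / length   # dimensionless
E       = stress / strain     # Young's modulus, Pa
poisson = abs((d_width / width) / strain)

print(f"E = {E / 1e9:.0f} GPa, Poisson's ratio = {poisson:.2f}")
```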

“There was much argument between Cauchy and Lamé and others about whether there are two stiffness moduli or one. […] In fact, there are two stiffness moduli. One describes the resistance of a material to shearing and the other to compression. The shear modulus is the stiffness in distortion, for example in twisting. It captures the resistance of a material to changes of shape, with no accompanying change of volume. The compression modulus (usually called the bulk modulus) expresses the resistance to changes of volume (but not shape). This is what occurs as a cube of material is lowered deep into the sea, and is squeezed on all faces by the water pressure. The Young’s modulus [is] a combination of the more fundamental shear and bulk moduli, since stretching in one direction produces changes in both shape and volume. […] A factor of about 10,000 covers the useful range of Young’s modulus in engineering materials. The stiffness can be traced back to the forces acting between atoms and molecules in the solid state […]. Materials like diamond or tungsten with strong bonds are stiff in the bulk, while polymer materials with weak intermolecular forces have low stiffness.”
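For an isotropic material the two fundamental moduli determine Young's modulus and Poisson's ratio via the standard textbook elasticity relations (these formulas are not quoted in the book):

```python
# Standard isotropic elasticity relations:
#   E  = 9*K*G / (3*K + G)
#   nu = (3*K - 2*G) / (2*(3*K + G))
def young_and_poisson(K, G):
    E = 9.0 * K * G / (3.0 * K + G)
    nu = (3.0 * K - 2.0 * G) / (2.0 * (3.0 * K + G))
    return E, nu

# Steel-like illustrative values: K ~ 160 GPa, G ~ 80 GPa.
E, nu = young_and_poisson(160e9, 80e9)
print(f"E = {E / 1e9:.0f} GPa, nu = {nu:.2f}")
```

Stretching changes both shape and volume, which is exactly why E comes out as a mixture of G and K.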

“In pure compression, the concept of ‘strength’ has no meaning, since the material cannot fail or rupture. But materials can and do fail in tension or in shear. To judge how strong a material is we can go back for example to the simple tension arrangement we used for measuring stiffness, but this time make it into a torture test in which the specimen is put on the rack. […] We find […] that we reach a strain at which the material stops being elastic and is permanently stretched. We have reached the yield point, and beyond this we have damaged the material but it has not failed. After further yielding, the bar may fail by fracture […]. On the other hand, with a bar of cast iron, there comes a point where the bar breaks, noisily and without warning, and without yield. This is a failure by brittle fracture. The stress at which it breaks is the tensile strength of the material. For the ductile material, the stress at which plastic deformation starts is the tensile yield stress. Both are measures of strength. It is in metals that yield and plasticity are of the greatest significance and value. In working components, yield provides a safety margin between small-strain elasticity and catastrophic rupture. […] plastic deformation is [also] exploited in making things from metals like steel and aluminium. […] A useful feature of plastic deformation in metals is that plastic straining raises the yield stress, particularly at lower temperatures.”

“Brittle failure is not only noisy but often scary. Engineers keep well away from it. An elaborate theory of fracture mechanics has been built up to help them avoid it, and there are tough materials to hand which do not easily crack. […] Since small cracks and flaws are present in almost any engineering component […], the trick is not to avoid cracks but to avoid long cracks which exceed [a] critical length. […] In materials which can yield, the tip stress can be relieved by plastic deformation, and this is a potent toughening mechanism in some materials. […] The trick of compressing a material to suppress cracking is a powerful way to toughen materials.”
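The "critical length" idea is made quantitative by fracture mechanics. A sketch using the textbook stress-intensity criterion, with illustrative toughness values:

```python
import math

# Textbook linear-elastic fracture mechanics: a through-crack of half-length a
# becomes critical when K = sigma * sqrt(pi * a) reaches the fracture
# toughness K_Ic, so a_c = (K_Ic / sigma)**2 / pi.
def critical_half_length(k_ic, sigma):
    return (k_ic / sigma) ** 2 / math.pi

sigma = 100e6   # Pa, assumed working stress
# Illustrative toughness values: a tough structural steel vs. window glass.
for name, k_ic in [("steel", 50e6), ("glass", 0.8e6)]:   # Pa * sqrt(m)
    a_c = critical_half_length(k_ic, sigma)
    print(f"{name}: critical crack half-length ~ {a_c * 1e3:.3f} mm")
```

With these numbers the steel tolerates cracks thousands of times longer than the glass does at the same stress, which is the quantitative sense in which "tough materials do not easily crack".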

“Hardness is a property which materials scientists think of in a particular and practical way. It tells us how well a material resists being damaged or deformed by a sharp object. That is useful information and it can be obtained easily. […] Soft is sometimes the opposite of hard […] But a different kind of soft is squidgy. […] In the soft box, we find many everyday materials […]. Some soft materials such as adhesives and lubricants are of great importance in engineering. For all of them, the model of a stiff crystal lattice provides no guidance. There is usually no crystal. The units are polymer chains, or small droplets of liquids, or small solid particles, with weak forces acting between them, and little structural organization. Structures when they exist are fragile. Soft materials deform easily when forces act on them […]. They sit as a rule somewhere between rigid solids and simple liquids. Their mechanical behaviour is dominated by various kinds of plasticity.”

“In pure metals, the resistivity is extremely low […] and a factor of ten covers all of them. […] the low resistivity (or, put another way, the high conductivity) arises from the existence of a conduction band in the solid which is only partly filled. Electrons in the conduction band are mobile and drift in an applied electric field. This is the electric current. The electrons are subject to some scattering from lattice vibrations which impedes their motion and generates an intrinsic resistance. Scattering becomes more severe as the temperature rises and the amplitude of the lattice vibrations becomes greater, so that the resistivity of metals increases with temperature. Scattering is further increased by microstructural heterogeneities, such as grain boundaries, lattice distortions, and other defects, and by phases of different composition. So alloys have appreciably higher resistivities than their pure parent metals. Adding 5 per cent nickel to iron doubles the resistivity, although the resistivities of the two pure metals are similar. […] Resistivity depends fundamentally on band structure. […] Plastics and rubbers […] are usually insulators. […] Electronically conducting plastics would have many uses, and some materials [e.g. this one] are now known. […] The electrical resistivity of many metals falls to exactly zero as they are cooled to very low temperatures. The critical temperature at which this happens varies, but for pure metallic elements it always lies below 10 K. For a few alloys, it is a little higher. […] Superconducting windings provide stable and powerful magnetic fields for magnetic resonance imaging, and many industrial and scientific uses.”
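The additivity of the scattering sources described here is usually summarized as Matthiessen's rule: resistivity is approximately a temperature-dependent lattice term plus a temperature-independent defect and impurity term. A sketch with illustrative coefficients (not measured values):

```python
# Matthiessen's rule (approximate): rho(T) = rho_defect + rho_lattice(T),
# with a roughly linear lattice term at ordinary temperatures.
def resistivity(T, rho_defect, slope=6.8e-11):   # ohm*m, slope in ohm*m/K
    return rho_defect + slope * T

pure  = resistivity(293.0, rho_defect=1e-10)   # nearly pure metal
alloy = resistivity(293.0, rho_defect=8e-8)    # heavily alloyed metal
print(f"pure:  {pure:.1e} ohm*m")
print(f"alloy: {alloy:.1e} ohm*m")
```

The defect term dominates in the alloy, which is why alloying raises resistivity so much even when the parent metals conduct equally well.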

“A permanent magnet requires no power. Its magnetization has its origin in the motion of electrons in atoms and ions in the solid, but only a few materials have the favourable combination of quantum properties to give rise to useful ferromagnetism. […] Ferromagnetism disappears completely above the so-called Curie temperature. […] Below the Curie temperature, ferromagnetic alignment throughout the material can be established by imposing an external polarizing field to create a net magnetization. In this way a practical permanent magnet is made. The ideal permanent magnet has an intense magnetization (a strong field) which remains after the polarizing field is switched off. It can only be demagnetized by applying a strong polarizing field in the opposite direction: the size of this field is the coercivity of the magnet material. For a permanent magnet, it should be as high as possible. […] Permanent magnets are ubiquitous but more or less invisible components of umpteen devices. There are a hundred or so in every home […]. There are also important uses for ‘soft’ magnetic materials, in devices where we want the ferromagnetism to be temporary, not permanent. Soft magnets lose their magnetization after the polarizing field is removed […] They have low coercivity, approaching zero. When used in a transformer, such a soft ferromagnetic material links the input and output coils by magnetic induction. Ideally, the magnetization should reverse during every cycle of the alternating current to minimize energy losses and heating. […] Silicon transformer steels yielded large gains in efficiency in electrical power distribution when they were first introduced in the 1920s, and they remain pre-eminent.”

“At least 50 families of plastics are produced commercially today. […] These materials all consist of linear string molecules, most with simple carbon backbones, a few with carbon-oxygen backbones […] Plastics as a group are valuable because they are lightweight and work well in wet environments, and don’t go rusty. They are mostly unaffected by acids and salts. But they burn, and they don’t much like sunlight as the ultraviolet light can break the polymer backbone. Most commercial plastics are mixed with substances which make it harder for them to catch fire and which filter out the ultraviolet light. Above all, plastics are used because they can be formed and shaped so easily. The string molecule itself is held together by strong chemical bonds and is resilient, but the forces between the molecules are weak. So plastics melt at low temperatures to produce rather viscous liquids […]. And with modest heat and a little pressure, they can be injected into moulds to produce articles of almost any shape”.

“The downward cascade of high purity to adulterated materials in recycling is a kind of entropy effect: unmixing is thermodynamically hard work. But there is an energy-driven problem too. Most materials are thermodynamically unstable (or metastable) in their working environments and tend to revert to the substances from which they were made. This is well-known in the case of metals, and is the usual meaning of corrosion. The metals are more stable when combined with oxygen than uncombined. […] Broadly speaking, ceramic materials are more stable thermodynamically, since they already contain much oxygen in chemical combination. Even so, ceramics used in the open usually fall victim to some environmental predator. Often it is water that causes damage. Water steals sodium and potassium from glass surfaces by slow leaching. The surface shrinks and cracks, so the glass loses its transparency. […] Stones and bricks may succumb to the stresses of repeated freezing when wet; limestones decay also by the chemical action of sulfur and nitrogen gases in polluted rainwater. Even buried archaeological pots slowly react with water in a remorseless process similar to that of rock weathering.”

Ashby plot.
Alan Arnold Griffith.
Creep (deformation).
Amontons’ laws of friction.
Internal friction.
Liquid helium.
Conductor. Insulator. Semiconductor. P-type -ll-. N-type -ll-.
Hall–Héroult process.
Snell’s law.
Chromatic aberration.
Dispersion (optics).
Density functional theory.
Pilkington float process.
Ziegler–Natta catalyst.
Integrated circuit.
Negative-index metamaterial.
Titanium dioxide.
Hyperfine structure (/-interactions).
Diamond anvil cell.
Synthetic rubber.
Simon–Ehrlich wager.
Sankey diagram.

November 16, 2017 | Books, Chemistry, Engineering, Physics

A few diabetes papers of interest

i. Thirty Years of Research on the Dawn Phenomenon: Lessons to Optimize Blood Glucose Control in Diabetes.

“More than 30 years ago in Diabetes Care, Schmidt et al. (1) defined “dawn phenomenon,” the night-to-morning elevation of blood glucose (BG) before and, to a larger extent, after breakfast in subjects with type 1 diabetes (T1D). Shortly after, a similar observation was made in type 2 diabetes (T2D) (2), and the physiology of glucose homeostasis at night was studied in normal, nondiabetic subjects (3–5). Ever since the first description, the dawn phenomenon has been studied extensively with at least 187 articles published as of today (6). […] what have we learned from the last 30 years of research on the dawn phenomenon? What is the appropriate definition, the identified mechanism(s), the importance (if any), and the treatment of the dawn phenomenon in T1D and T2D?”

“Physiology of glucose homeostasis in normal, nondiabetic subjects indicates that BG and plasma insulin concentrations remain remarkably flat and constant overnight, with a modest, transient increase in insulin secretion just before dawn (3,4) to restrain hepatic glucose production (4) and prevent hyperglycemia. Thus, normal subjects do not exhibit the dawn phenomenon sensu strictiori because they secrete insulin to prevent it.

In T1D, the magnitude of BG elevation at dawn first reported was impressive and largely secondary to the decrease of plasma insulin concentration overnight (1), commonly observed with evening administration of NPH or lente insulins (8) (Fig. 1). Even in early studies with intravenous insulin by the “artificial pancreas” (Biostator) (2), plasma insulin decreased overnight because of progressive inactivation of insulin in the pump (9). This artifact exaggerated the dawn phenomenon, now defined as need for insulin to limit fasting hyperglycemia (2). When the overnight waning of insulin was prevented by continuous subcutaneous insulin infusion (CSII) […] or by the long-acting insulin analogs (LA-IAs) (8), it was possible to quantify the real magnitude of the dawn phenomenon — 15–25 mg/dL BG elevation from nocturnal nadir to before breakfast […]. Nocturnal spikes of growth hormone secretion are the most likely mechanism of the dawn phenomenon in T1D (13,14). The observation from early pioneering studies in T1D (10–12) that insulin sensitivity is higher after midnight until 3 a.m. as compared to the period 4–8 a.m. soon translated into use of more physiological replacement of basal insulin […] to reduce risk of nocturnal hypoglycemia while targeting fasting near-normoglycemia”.

“In T2D, identification of diurnal changes in BG goes back decades, but only quite recently has fasting hyperglycemia been attributed to a transient increase in hepatic glucose production (both glycogenolysis and gluconeogenesis) at dawn in the absence of compensatory insulin secretion (15–17). Monnier et al. (7) report on the overnight (interstitial) glucose concentration (IG), as measured by continuous ambulatory IG monitoring, in three groups of 248 subjects with T2D […] Importantly, the dawn phenomenon had an impact on mean daily IG and A1C (mean increase of 0.39% [4.3 mmol/mol]), which was independent of treatment. […] Two messages from the data of Monnier et al. (7) are important. First, the dawn phenomenon is confirmed as a frequent event across the heterogeneous population of T2D independent of (oral) treatment and studied in everyday life conditions, not only in the setting of specialized clinical research units. Second, the article reaffirms that the primary target of treatment in T2D is to reestablish near-normoglycemia before and after breakfast (i.e., to treat the dawn phenomenon) to lower mean daily BG and A1C (8). […] the dawn phenomenon induces hyperglycemia not only before, but, to a larger extent, after breakfast as well (7,18). Over the years, fasting (and postbreakfast) hyperglycemia in T2D worsens as a result of progressively impaired pancreatic β-cell function on the background of continued insulin resistance primarily at dawn (8,15–18) and independently of age (19). Because it is an early metabolic abnormality leading over time to the vicious circle of “hyperglycemia begets hyperglycemia” by glucotoxicity and lipotoxicity, the dawn phenomenon in T2D should be treated early and appropriately before A1C continues to increase (20).”
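The bracketed mmol/mol figures are IFCC units; the standard NGSP-to-IFCC master equation converts between the two A1C scales, and for differences (like the 0.39% increase quoted) only the slope matters:

```python
# NGSP (%) to IFCC (mmol/mol) conversion via the standard master equation:
#   IFCC = 10.929 * (NGSP - 2.15)
def ngsp_to_ifcc(a1c_percent):
    return 10.929 * (a1c_percent - 2.15)

def delta_ngsp_to_ifcc(delta_percent):
    # For differences the 2.15 offset cancels; only the slope matters.
    return 10.929 * delta_percent

print(round(ngsp_to_ifcc(7.0), 1))         # the 7.0% treatment target
print(round(delta_ngsp_to_ifcc(0.39), 1))  # the 0.39% mean increase quoted
```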

“Oral medications do not adequately control the dawn phenomenon, even when given in combination (7,18). […] The evening replacement of basal insulin, which abolishes the dawn phenomenon by restraining hepatic glucose production and lipolysis (21), is an effective treatment as it mimics the physiology of glucose homeostasis in normal, nondiabetic subjects (4). Early use of basal insulin in T2D is an add-on option treatment after failure of metformin to control A1C <7.0% (20). However, […] it would be wise to consider initiation of basal insulin […] before — not after — A1C has increased well beyond 7.0%, as usually it is done in practice currently.”

ii. Peripheral Neuropathy in Adolescents and Young Adults With Type 1 and Type 2 Diabetes From the SEARCH for Diabetes in Youth Follow-up Cohort.

“Diabetic peripheral neuropathy (DPN) is among the most distressing of all the chronic complications of diabetes and is a cause of significant disability and poor quality of life (4). Depending on the patient population and diagnostic criteria, the prevalence of DPN among adults with diabetes ranges from 30 to 70% (5–7). However, there are insufficient data on the prevalence and predictors of DPN among the pediatric population. Furthermore, early detection and good glycemic control have been proven to prevent or delay adverse outcomes associated with DPN (5,8,9). Near-normal control of blood glucose beginning as soon as possible after the onset of diabetes may delay the development of clinically significant nerve impairment (8,9). […] The American Diabetes Association (ADA) recommends screening for DPN in children and adolescents with type 2 diabetes at diagnosis and 5 years after diagnosis for those with type 1 diabetes, followed by annual evaluations thereafter, using simple clinical tests (10). Since subclinical signs of DPN may precede development of frank neuropathic symptoms, systematic, preemptive screening is required in order to identify DPN in its earliest stages.

There are various measures that can be used for the assessment of DPN. The Michigan Neuropathy Screening Instrument (MNSI) is a simple, sensitive, and specific tool for the screening of DPN (11). It was validated in large independent cohorts (12,13) and has been widely used in clinical trials and longitudinal cohort studies […] The aim of this pilot study was to provide preliminary estimates of the prevalence of and factors associated with DPN among children and adolescents with type 1 and type 2 diabetes.”

“A total of 399 youth (329 with type 1 and 70 with type 2 diabetes) participated in the pilot study. Youth with type 1 diabetes were younger (mean age 15.7 ± 4.3 years) and had a shorter duration of diabetes (mean duration 6.2 ± 0.9 years) compared with youth with type 2 diabetes (mean age 21.6 ± 4.1 years and mean duration 7.6 ± 1.8 years). Participants with type 2 diabetes had a higher BMI z score and waist circumference, were more likely to be smokers, and had higher blood pressure and lipid levels than youth with type 1 diabetes (all P < 0.001). A1C, however, did not significantly differ between the two groups (mean A1C 8.8 ± 1.8% [73 ± 2 mmol/mol] for type 1 diabetes and 8.5 ± 2.9% [72 ± 3 mmol/mol] for type 2 diabetes; P = 0.5) but was higher than that recommended by the ADA for this age-group (A1C ≤7.5%) (10). The prevalence of DPN (defined as the MNSIE score >2) was 8.2% among youth with type 1 diabetes and 25.7% among those with type 2 diabetes. […] Youth with DPN were older and had a longer duration of diabetes, greater central obesity (increased waist circumference), higher blood pressure, an atherogenic lipid profile (low HDL cholesterol and marginally high triglycerides), and microalbuminuria. A1C […] was not significantly different between those with and without DPN (9.0% ± 2.0 […] vs. 8.8% ± 2.1 […], P = 0.58). Although nearly 37% of youth with type 2 diabetes came from lower-income families with annual income below 25,000 USD (as opposed to 11% for type 1 diabetes), socioeconomic status was not significantly associated with DPN (P = 0.77).”

“In the unadjusted logistic regression model, the odds of having DPN was nearly four times higher among those with type 2 diabetes compared with youth with type 1 diabetes (odds ratio [OR] 3.8 [95% CI 1.9–7.5], P < 0.0001). This association was attenuated, but remained significant, after adjustment for age and sex (OR 2.3 [95% CI 1.1–5.0], P = 0.03). However, this association was no longer significant (OR 2.1 [95% CI 0.3–15.9], P = 0.47) when additional covariates […] were added to the model […] The loss of the association between diabetes type and DPN with addition of covariates in the fully adjusted model could be due to power loss, given the small number of youth with DPN in the sample, or indicative of stronger associations between these covariates and DPN such that conditioning on them eliminates the observed association between DPN and diabetes type.”

“The prevalence of DPN among type 1 diabetes youth in our pilot study is lower than that reported by Eppens et al. (15) among 1,433 Australian adolescents with type 1 diabetes assessed by thermal threshold testing and VPT (prevalence of DPN 27%; median age and duration 15.7 and 6.8 years, respectively). A much higher prevalence was also reported among Danish (62.5%) and Brazilian (46%) cohorts of type 1 diabetes youth (16,17) despite a younger age (mean age among Danish children 13.7 years and Brazilian cohort 12.9 years). The prevalence of DPN among youth with type 2 diabetes (26%) found in our study is comparable to that reported among the Australian cohort (21%) (15). The wide ranges in the prevalence estimates of DPN among the young cannot solely be attributed to the inherent racial/ethnic differences in this population but could potentially be due to the differing criteria and diagnostic tests used to define and characterize DPN.”

“In our study, the duration of diabetes was significantly longer among those with DPN, but A1C values did not differ significantly between the two groups, suggesting that a longer duration with its sustained impact on peripheral nerves is an important determinant of DPN. […] Cho et al. (22) reported an increase in the prevalence of DPN from 14 to 28% over 17 years among 819 Australian adolescents with type 1 diabetes aged 11–17 years at baseline, despite improvements in care and minor improvements in A1C (8.2–8.7%). The prospective Danish Study Group of Diabetes in Childhood also found no association between DPN (assessed by VPT) and glycemic control (23).”

“In conclusion, our pilot study found evidence that the prevalence of DPN in adolescents with type 2 diabetes approaches rates reported in adults with diabetes. Several CVD risk factors such as central obesity, elevated blood pressure, dyslipidemia, and microalbuminuria, previously identified as predictors of DPN among adults with diabetes, emerged as independent predictors of DPN in this young cohort and likely accounted for the increased prevalence of DPN in youth with type 2 diabetes.”

iii. Disturbed Eating Behavior and Omission of Insulin in Adolescents Receiving Intensified Insulin Treatment.

“Type 1 diabetes appears to be a risk factor for the development of disturbed eating behavior (DEB) (1,2). Estimates of the prevalence of DEB among individuals with type 1 diabetes range from 10 to 49% (3,4), depending on methodological issues such as the definition and measurement of DEB. Some studies only report the prevalence of full-threshold diagnoses of anorexia nervosa, bulimia nervosa, and eating disorders not otherwise specified, whereas others also include subclinical eating disorders (1). […] Although different terminology complicates the interpretation of prevalence rates across studies, the findings are sufficiently robust to indicate that there is a higher prevalence of DEB in type 1 diabetes compared with healthy controls. A meta-analysis reported a three-fold increase of bulimia nervosa, a two-fold increase of eating disorders not otherwise specified, and a two-fold increase of subclinical eating disorders in patients with type 1 diabetes compared with controls (2). No elevated rates of anorexia nervosa were found.”

“When DEB and type 1 diabetes co-occur, rates of morbidity and mortality are dramatically increased. A Danish study of comorbid type 1 diabetes and anorexia nervosa showed that the crude mortality rate at 10-year follow-up was 2.5% for type 1 diabetes and 6.5% for anorexia nervosa, but the rate increased to 34.8% when occurring together (the standardized mortality rates were 4.06, 8.86, and 14.5, respectively) (9). The presence of DEB in general also can severely impair metabolic control and advance the onset of long-term diabetes complications (4). Insulin reduction or omission is an efficient weight loss strategy uniquely available to patients with type 1 diabetes and has been reported in up to 37% of patients (10–12). Insulin restriction is associated with poorer metabolic control, and previous research has found that self-reported insulin restriction at baseline leads to a three-fold increased risk of mortality at 11-year follow-up (10).

Few population-based studies have specifically investigated the prevalence of and relationship between DEBs and insulin restriction. The generalizability of existing research remains limited by relatively small samples and a lack of males. Further, many studies have relied on generic measures of DEBs, which may not be appropriate for use in individuals with type 1 diabetes. The Diabetes Eating Problem Survey–Revised (DEPS-R) is a newly developed and diabetes-specific screening tool for DEBs. A recent study demonstrated satisfactory psychometric properties of the Norwegian version of the DEPS-R among children and adolescents with type 1 diabetes 11–19 years of age (13). […] This study aimed to assess the prevalence of DEBs and the frequency of insulin omission or restriction in young patients with type 1 diabetes, to compare the prevalence of DEB between males and females across different categories of weight and age, and to compare the clinical features of participants with and without DEBs and participants who restrict and do not restrict insulin. […] The final sample consisted of 770 […] children and adolescents with type 1 diabetes 11–19 years of age. There were 380 (49.4%) males and 390 (50.6%) females.”

“27.7% of female and 9% of male children and adolescents with type 1 diabetes receiving intensified insulin treatment scored above the predetermined cutoff on the DEPS-R, suggesting a level of disturbed eating that warrants further attention by treatment providers. […] Significant differences emerged across age and weight categories, and notable sex-specific trends were observed. […] For the youngest (11–13 years) and underweight (BMI <18.5) categories, the proportion of DEB was <10% for both sexes […]. Among females, the prevalence of DEB increased dramatically with age to ∼33% among 14 to 16 year olds and to nearly 50% among 17 to 19 year olds. Among males, the rate remained low at 7% for 14 to 16 year olds and doubled to ∼15% for 17 to 19 year olds.

A similar sex-specific pattern was detected across weight categories. Among females, the prevalence of DEB increased steadily and significantly from 9% among the underweight category to 23% for normal weight, 42% for overweight, and 53% for the obese categories, respectively. Among males, ∼6–7% of both the underweight and normal weight groups reported DEB, with rates increasing to ∼15% for both the overweight and obese groups. […] When separated by sex, females scoring above the cutoff on the DEPS-R had significantly higher HbA1c (9.2% [SD, 1.9]) than females scoring below the cutoff (8.4% [SD, 1.3]; P < 0.001). The same trend was observed among males (9.2% [SD, 1.6] vs. 8.4% [SD, 1.3]; P < 0.01). […] A total of 31.6% of the participants reported using less insulin and 6.9% reported skipping their insulin dose entirely at least occasionally after overeating. When assessing the sexes separately, we found that 36.8% of females reported restricting and 26.2% reported skipping insulin because of overeating. The rates for males were 9.4 and 4.5%, respectively.”

“The finding that DEBs are common in young patients with type 1 diabetes is in line with previous literature (2). However, because of different assessment methods and different definitions of DEB, direct comparison with other studies is complicated, especially because this is the first study to have used the DEPS-R in a prevalence study. However, two studies using the original DEPS have reported similar results, with 37.9% (23) and 53.8% (24) of the participants reporting engaging in unhealthy weight control practices. In our study, females scored significantly higher than males, which is not surprising given previous studies demonstrating an increased risk of development of DEB in nondiabetic females compared with males. In addition, the prevalence rates increased considerably with increasing age and weight. A relationship between eating pathology and older age and higher BMI also has been demonstrated in previous research conducted in both diabetic and nondiabetic adolescent populations.”

“Consistent with existent literature (10–12,27), we found a high frequency of insulin restriction. For example, Bryden et al. (11) assessed 113 males and females (aged 17–25 years) with type 1 diabetes and found that a total of 37% of the females (no males) reported a history of insulin omission or reduction for weight control purposes. Peveler et al. (12) investigated 87 females with type 1 diabetes aged 11–25 years, and 36% reported intentionally reducing or omitting their insulin doses to control their weight. Finally, Goebel-Fabbri et al. (10) examined 234 females 13–60 years of age and found that 30% reported insulin restriction. Similarly, 36.8% of the participants in our study reported reducing their insulin doses occasionally or more often after overeating.”

iv. Clinical Inertia in People With Type 2 Diabetes. A retrospective cohort study of more than 80,000 people.

“Despite good-quality evidence of tight glycemic control, particularly early in the disease trajectory (3), people with type 2 diabetes often do not reach recommended glycemic targets. Baseline characteristics in observational studies indicate that both insulin-experienced and insulin-naïve people may have mean HbA1c above the recommended target levels, reflecting the existence of patients with poor glycemic control in routine clinical care (810). […] U.K. data, based on an analysis reflecting previous NICE guidelines, show that it takes a mean of 7.7 years to initiate insulin after the start of the last OAD [oral antidiabetes drugs] (in people taking two or more OADs) and that mean HbA1c is ~10% (86 mmol/mol) at the time of insulin initiation (12). […] This failure to intensify treatment in a timely manner has been termed clinical inertia; however, data are lacking on clinical inertia in the diabetes-management pathway in a real-world primary care setting, and studies that have been carried out are, relatively speaking, small in scale (13,14). This retrospective cohort analysis investigates time to intensification of treatment in people with type 2 diabetes treated with OADs and the associated levels of glycemic control, and compares these findings with recommended treatment guidelines for diabetes.”

“We used the Clinical Practice Research Datalink (CPRD) database. This is the world’s largest computerized database, representing the primary care longitudinal records of >13 million patients from across the U.K. The CPRD is representative of the U.K. general population, with age and sex distributions comparable with those reported by the U.K. National Population Census (15). All information collected in the CPRD has been subjected to validation studies and been proven to contain consistent and high-quality data (16).”

“50,476 people taking one OAD, 25,600 people taking two OADs, and 5,677 people taking three OADs were analyzed. Mean baseline HbA1c (the most recent measurement within 6 months before starting OADs) was 8.4% (68 mmol/mol), 8.8% (73 mmol/mol), and 9.0% (75 mmol/mol) in people taking one, two, or three OADs, respectively. […] In people with HbA1c ≥7.0% (≥53 mmol/mol) taking one OAD, median time to intensification with an additional OAD was 2.9 years, whereas median time to intensification with insulin was >7.2 years. Median time to insulin intensification in people with HbA1c ≥7.0% (≥53 mmol/mol) taking two or three OADs was >7.2 and >7.1 years, respectively. In people with HbA1c ≥7.5% or ≥8.0% (≥58 or ≥64 mmol/mol) taking one OAD, median time to intensification with an additional OAD was 1.9 or 1.6 years, respectively; median time to intensification with insulin was >7.1 or >6.9 years, respectively. In those people with HbA1c ≥7.5% or ≥8.0% (≥58 or ≥64 mmol/mol) and taking two OADs, median time to insulin was >7.2 and >6.9 years, respectively; and in those people taking three OADs, median time to insulin intensification was >6.1 and >6.0 years, respectively.”

“By end of follow-up, treatment of 17.5% of people with HbA1c ≥7.0% (≥53 mmol/mol) taking three OADs was intensified with insulin, treatment of 20.6% of people with HbA1c ≥7.5% (≥58 mmol/mol) taking three OADs was intensified with insulin, and treatment of 22.0% of people with HbA1c ≥8.0% (≥64 mmol/mol) taking three OADs was intensified with insulin. There were minimal differences in the proportion of patients intensified between the groups. […] In people taking one OAD, the probability of an additional OAD or initiation of insulin was 23.9% after 1 year, increasing to 48.7% by end of follow-up; in people taking two OADs, the probability of an additional OAD or initiation of insulin was 11.4% after 1 year, increasing to 30.1% after 2 years; and in people taking three OADs, the probability of an additional OAD or initiation of insulin was 5.7% after 1 year, increasing to 12.0% by the end of follow-up […] Mean ± SD HbA1c in patients taking one OAD was 8.7 ± 1.6% in those intensified with an additional OAD (n = 14,605), 9.4 ± 2.3% (n = 1,228) in those intensified with insulin, and 8.7 ± 1.7% (n = 15,833) in those intensified with additional OAD or insulin. Mean HbA1c in patients taking two OADs was 8.8 ± 1.5% (n = 3,744), 9.8 ± 1.9% (n = 1,631), and 9.1 ± 1.7% (n = 5,405), respectively. In patients taking three OADs, mean HbA1c at intensification with insulin was 9.7 ± 1.6% (n = 514).”

“This analysis shows that there is a delay in intensifying treatment in people with type 2 diabetes with suboptimal glycemic control, with patients remaining in poor glycemic control for >7 years before intensification of treatment with insulin. In patients taking one, two, or three OADs, median time from initiation of treatment to intensification with an additional OAD for any patient exceeded the maximum follow-up time of 7.2–7.3 years, dependent on subcohort. […] Despite having HbA1c levels for which diabetes guidelines recommend treatment intensification, few people appeared to undergo intensification (4,6,7). The highest proportion of people with clinical inertia was for insulin initiation in people taking three OADs. Consequently, these people experienced prolonged periods in poor glycemic control, which is detrimental to long-term outcomes.”
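A note on those ">7.2 years" medians: they are censoring artifacts. In a time-to-event analysis, the median is the first time the survival (here, "not yet intensified") curve drops to 50%; if the curve never gets there before follow-up ends, the median can only be reported as greater than the maximum follow-up. A minimal Kaplan–Meier sketch on toy data (my numbers, not the CPRD cohort) illustrates both cases:

```python
# Minimal Kaplan-Meier sketch (toy data, not the CPRD cohort).
# The median time-to-intensification is the first time the survival
# curve falls to <= 0.5; if it never does within follow-up, it can
# only be reported as "> max follow-up", as in the study.

def km_median(times, events):
    """times: follow-up in years; events: 1 = intensified, 0 = censored.
    Assumes at most one subject per time point (enough for a sketch)."""
    data = sorted(zip(times, events))
    survival = 1.0
    at_risk = len(data)
    for t, e in data:
        if e:  # an intensification event at time t
            survival *= 1 - 1 / at_risk
            if survival <= 0.5:
                return t
        at_risk -= 1  # censored subjects just leave the risk set
    return None  # curve never reached 50% within follow-up

# Enough early events -> the median is observed
print(km_median([1, 2, 3, 4, 5], [1, 1, 0, 1, 0]))  # -> 4

# Mostly censored -> median exceeds follow-up; report "> 5 years"
print(km_median([1, 2, 3, 4, 5], [1, 0, 0, 0, 0]))  # -> None
```

Real analyses would use a library estimator (e.g. lifelines) that handles ties and confidence bands, but the censoring logic is the same.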

“Previous studies in U.K. general practice have shown similar findings. A retrospective study involving 14,824 people with type 2 diabetes from 154 general practice centers contributing to the Doctors Independent Network Database (DIN-LINK) between 1995 and 2005 observed that median time to insulin initiation for people prescribed multiple OADs was 7.7 years (95% CI 7.4–8.5 years); mean HbA1c before insulin was 9.85% (84 mmol/mol), which decreased by 1.34% (95% CI 1.24–1.44%) after therapy (12). A longitudinal observational study from health maintenance organization data in 3,891 patients with type 2 diabetes in the U.S. observed that, despite continued HbA1c levels >7% (>53 mmol/mol), people treated with sulfonylurea and metformin did not start insulin for almost 3 years (21). Another retrospective cohort study, using data from the Health Improvement Network database of 2,501 people with type 2 diabetes, estimated that only 25% of people started insulin within 1.8 years of multiple OAD failure, if followed for 5 years, and that 50% of people delayed starting insulin for almost 5 years after failure of glycemic control with multiple OADs (22). The U.K. cohort of a recent, 26-week observational study examining insulin initiation in clinical practice reported a large proportion of insulin-naïve people with HbA1c >9% (>75 mmol/mol) at baseline (64%); the mean HbA1c in the global cohort was 8.9% (74 mmol/mol) (10). Consequently, our analysis supports previous findings concerning clinical inertia in both U.K. and U.S. general practice and reflects little improvement in recent years, despite updated treatment guidelines recommending tight glycemic control.”

v. Small- and Large-Fiber Neuropathy After 40 Years of Type 1 Diabetes. Associations with glycemic control and advanced protein glycation: the Oslo Study.

“How hyperglycemia may cause damage to the nervous system is not fully understood. One consequence of hyperglycemia is the generation of advanced glycation end products (AGEs) that can form nonenzymatically between glucose, lipids, and amino groups. It is believed that AGEs are involved in the pathophysiology of neuropathy. AGEs tend to affect cellular function by altering protein function (11). One of the AGEs, N-ε-(carboxymethyl)lysine (CML), has been found in excessive amounts in the human diabetic peripheral nerve (12). High levels of methylglyoxal in serum have been found to be associated with painful peripheral neuropathy (13). In recent years, differentiation of affected nerves is possible by virtue of specific function tests to distinguish which fibers are damaged in diabetic polyneuropathy: large myelinated (Aα, Aβ), small thinly myelinated (Aδ), or small nonmyelinated (C) fibers. […] Our aims were to evaluate large- and small-nerve fiber function in long-term type 1 diabetes and to search for longitudinal associations with HbA1c and the AGEs CML and methylglyoxal-derived hydroimidazolone.”

“27 persons with type 1 diabetes of 40 ± 3 years duration underwent large-nerve fiber examinations, with nerve conduction studies at baseline and years 8, 17, and 27. Small-fiber functions were assessed by quantitative sensory thresholds (QST) and intraepidermal nerve fiber density (IENFD) at year 27. HbA1c was measured prospectively through 27 years. […] Fourteen patients (52%) reported sensory symptoms. Nine patients reported symptoms of a sensory neuropathy (reduced sensibility in feet or impaired balance), while three of these patients described pain. Five patients had symptoms compatible with carpal tunnel syndrome (pain or paresthesias within the innervation territory of the median nerve […]. An additional two had no symptoms but abnormal neurological tests with absent tendon reflexes and reduced sensibility. A total of 16 (59%) of the patients had symptoms or signs of neuropathy. […] No patient with symptoms of neuropathy had normal neurophysiological findings. […] Abnormal autonomic testing was observed in 7 (26%) of the patients and occurred together with neurophysiological signs of peripheral neuropathy. […] Twenty-two (81%) had small-fiber dysfunction by QST. Heat pain thresholds in the foot were associated with hydroimidazolone and HbA1c. IENFD was abnormal in 19 (70%) and significantly lower in diabetic patients than in age-matched control subjects (4.3 ± 2.3 vs. 11.2 ± 3.5 mm, P < 0.001). IENFD correlated negatively with HbA1c over 27 years (r = −0.4, P = 0.04) and CML (r = −0.5, P = 0.01). After adjustment for age, height, and BMI in a multiple linear regression model, CML was still independently associated with IENFD.”
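The reported P = 0.04 for the IENFD–HbA1c correlation (r = −0.4) is consistent with the usual t test for a Pearson correlation, assuming all 27 patients contributed to that estimate (my assumption; the excerpt does not say). A standard-library sketch, integrating the t density numerically rather than relying on scipy:

```python
import math

# Sanity-check the reported IENFD-HbA1c correlation (r = -0.4, P = 0.04),
# assuming n = 27 (my assumption). For a Pearson correlation,
# t = r * sqrt(n - 2) / sqrt(1 - r^2), with n - 2 degrees of freedom.

def two_sided_p(t, df, upper=60.0, steps=20000):
    """Two-sided p-value from the t distribution, by numerically
    integrating the density over the upper tail (trapezoid rule)."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    f = lambda x: c * (1 + x * x / df) ** (-(df + 1) / 2)
    a = abs(t)
    h = (upper - a) / steps
    tail = h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, steps)) + 0.5 * f(upper))
    return 2 * tail

r, n = -0.4, 27
t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)  # about -2.18
p = two_sided_p(t, n - 2)
print(round(t, 2), round(p, 2))  # -> -2.18 0.04, matching the reported P = 0.04
```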

“Our study shows that small-fiber dysfunction is more prevalent than large-fiber dysfunction in diabetic neuropathy after long duration of type 1 diabetes. Although large-fiber abnormalities were less common than small-fiber abnormalities, almost 60% of the participants had their large nerves affected after 40 years with diabetes. Long-term blood glucose estimated by HbA1c measured prospectively through 27 years and AGEs predict large- and small-nerve fiber function.”

vi. Subarachnoid Hemorrhage in Type 1 Diabetes. A prospective cohort study of 4,083 patients with diabetes.

“Subarachnoid hemorrhage (SAH) is a life-threatening cerebrovascular event, which is usually caused by a rupture of a cerebrovascular aneurysm. These aneurysms are mostly found in relatively large-caliber (≥1 mm) vessels and can often be considered as macrovascular lesions. The overall incidence of SAH has been reported to be 10.3 per 100,000 person-years (1), even though the variation in incidence between countries is substantial (1). Notably, the population-based incidence of SAH is 35 per 100,000 person-years in the adult (≥25 years of age) Finnish population (2). The incidence of nonaneurysmal SAH is globally unknown, but it is commonly believed that 5–15% of all SAHs are of nonaneurysmal origin. Prospective, long-term, population-based SAH risk factor studies suggest that smoking (24), high blood pressure (24), age (2,3), and female sex (2,4) are the most important risk factors for SAH, whereas diabetes (both types 1 and 2) does not appear to be associated with an increased risk of SAH (2,3).

An increased risk of cardiovascular disease is well recognized in people with diabetes. There are, however, very few studies on the risk of cerebrovascular disease in type 1 diabetes since most studies have focused on type 2 diabetes alone or together with type 1 diabetes. Cerebrovascular mortality in the 20–39-year age-group of people with type 1 diabetes is increased five- to sevenfold in comparison with the general population but accounts only for 15% of all cardiovascular deaths (5). Of the cerebrovascular deaths in patients with type 1 diabetes, 23% are due to hemorrhagic strokes (5). However, the incidence of SAH in type 1 diabetes is unknown. […] In this prospective cohort study of 4,083 patients with type 1 diabetes, we aimed to determine the incidence and characteristics of SAH.”

“52% [of participants] were men, the mean age was 37.4 ± 11.8 years, and the duration of diabetes was 21.6 ± 12.1 years at enrollment. The FinnDiane Study is a nationwide multicenter cohort study of genetic, clinical, and environmental risk factors for microvascular and macrovascular complications in type 1 diabetes. […] all type 1 diabetic patients in the FinnDiane database with follow-up data and without a history of stroke at baseline were included. […] Fifteen patients were confirmed to have an SAH, and thus the crude incidence of SAH was 40.9 (95% CI 22.9–67.4) per 100,000 person-years. Ten out of these 15 SAHs were nonaneurysmal SAHs […] The crude incidence of nonaneurysmal SAH was 27.3 (13.1–50.1) per 100,000 person-years. None of the 10 nonaneurysmal SAHs were fatal. […] Only 3 out of 10 patients did not have verified diabetic microvascular or macrovascular complications prior to the nonaneurysmal SAH event. […] Four patients with type 1 diabetes had a fatal SAH, and all these patients died within 24 h after SAH.”
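The crude rate and its confidence interval can be reproduced from the event count alone. Fifteen events at 40.9 per 100,000 person-years implies the cohort accrued roughly 36,700 person-years (a back-calculation; the excerpt does not state the figure), and exact (Garwood) Poisson limits then recover the reported interval. A sketch:

```python
import math

# Back-calculate and check the crude SAH incidence in the FinnDiane cohort:
# 15 events at 40.9 per 100,000 person-years implies roughly
# 15 / 40.9e-5 ~ 36,700 person-years (an inference, not stated in the paper).

events = 15
person_years = events / (40.9 / 100_000)

def poisson_cdf(k, lam):
    return sum(math.exp(-lam) * lam**x / math.factorial(x) for x in range(k + 1))

def exact_poisson_ci(k, alpha=0.05):
    """Exact (Garwood) confidence limits for a Poisson count, by bisection."""
    def solve(f, lo, hi):
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # lower limit: P(X >= k | lam) = alpha/2  <=>  P(X <= k-1 | lam) = 1 - alpha/2
    lower = solve(lambda lam: poisson_cdf(k - 1, lam) - (1 - alpha / 2), 0, k)
    # upper limit: P(X <= k | lam) = alpha/2
    upper = solve(lambda lam: poisson_cdf(k, lam) - alpha / 2, k, 5 * k + 20)
    return lower, upper

lo, hi = exact_poisson_ci(events)
rate = events / person_years * 100_000
print(f"{rate:.1f} ({lo / person_years * 100_000:.1f}-{hi / person_years * 100_000:.1f})")
# close to the reported 40.9 (95% CI 22.9-67.4) per 100,000 person-years
```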

“The presented study results suggest that the incidence of nonaneurysmal SAH is high among patients with type 1 diabetes. […] It is of note that smoking type 1 diabetic patients had a significantly increased risk of nonaneurysmal and all-cause SAHs. Smoking also increases the risk of microvascular complications in insulin-treated diabetic patients, and these patients more often have retinal and renal microangiopathy than never-smokers (8). […] Given the high incidence of nonaneurysmal SAH in patients with type 1 diabetes and microvascular changes (i.e., diabetic retinopathy and nephropathy), the results support the hypothesis that nonaneurysmal SAH is a microvascular rather than macrovascular subtype of stroke.”

“Only one patient with type 1 diabetes had a confirmed aneurysmal SAH. Four other patients died suddenly due to an SAH. If these four patients with type 1 diabetes and a fatal SAH had an aneurysmal SAH, which, taking into account the autopsy reports and imaging findings, is very likely, aneurysmal SAH may be an exceptionally deadly event in type 1 diabetes. Population-based evidence suggests that up to 45% of people die during the first 30 days after SAH, and 18% die at emergency rooms or outside hospitals (9). […] Contrary to aneurysmal SAH, nonaneurysmal SAH is virtually always a nonfatal event (1014). This also supports the view that nonaneurysmal SAH is a disease of small intracranial vessels, i.e., a microvascular disease. Diabetic retinopathy, a chronic microvascular complication, has been associated with an increased risk of stroke in patients with diabetes (15,16). Embryonically, the retina is an outgrowth of the brain and is similar in its microvascular properties to the brain (17). Thus, it has been suggested that assessments of the retinal vasculature could be used to determine the risk of cerebrovascular diseases, such as stroke […] Most interestingly, the incidence of nonaneurysmal SAH was at least two times higher than the incidence of aneurysmal SAH in type 1 diabetic patients. In comparison, the incidence of nonaneurysmal SAH is >10 times lower than the incidence of aneurysmal SAH in the general adult population (21).”

vii. HbA1c and the Risks for All-Cause and Cardiovascular Mortality in the General Japanese Population.

Keep in mind when looking at these data that this is type 2 data. Type 1 diabetes is very rare in Japan and the rest of East Asia.

“The risk for cardiovascular death was evaluated in a large cohort of participants selected randomly from the overall Japanese population. A total of 7,120 participants (2,962 men and 4,158 women; mean age 52.3 years) free of previous CVD were followed for 15 years. Adjusted hazard ratios (HRs) and 95% CIs among categories of HbA1c (<5.0%, 5.0–5.4%, 5.5–5.9%, 6.0–6.4%, and ≥6.5%) for participants without treatment for diabetes and HRs for participants with diabetes were calculated using a Cox proportional hazards model.

RESULTS During the study, there were 1,104 deaths, including 304 from CVD, 61 from coronary heart disease, and 127 from stroke (78 from cerebral infarction, 25 from cerebral hemorrhage, and 24 from unclassified stroke). Relations to HbA1c with all-cause mortality and CVD death were graded and continuous, and multivariate-adjusted HRs for CVD death in participants with HbA1c 6.0–6.4% and ≥6.5% were 2.18 (95% CI 1.22–3.87) and 2.75 (1.43–5.28), respectively, compared with participants with HbA1c <5.0%. Similar associations were observed between HbA1c and death from coronary heart disease and death from cerebral infarction.

CONCLUSIONS High HbA1c levels were associated with increased risk for all-cause mortality and death from CVD, coronary heart disease, and cerebral infarction in general East Asian populations, as in Western populations.”
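A general property of Cox-model output, not specific to this paper: the 95% CI is symmetric around the hazard ratio on the log scale, so the reported numbers let one back out the standard error and z-value of the underlying coefficient. A sketch using the HbA1c ≥6.5% estimate for CVD death:

```python
import math

# Cox-model CIs are symmetric on the log scale, so a reported
# HR and 95% CI determine the standard error and a z-value.
# Using the HbA1c >= 6.5% CVD-death estimate: HR 2.75 (95% CI 1.43-5.28).

hr, lo, hi = 2.75, 1.43, 5.28
log_hr = math.log(hr)
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # log-CI width = 2 * 1.96 * SE
z = log_hr / se

# consistency check: the midpoint of the log-scale CI should equal log(HR)
midpoint = (math.log(lo) + math.log(hi)) / 2
print(round(midpoint, 2), round(log_hr, 2))  # both ~1.01
print(round(z, 2))  # ~3.04, comfortably significant at the 5% level
```

The same back-calculation works for any of the HRs quoted above, which is handy when a paper reports CIs but not p-values.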

November 15, 2017 Posted by | Cardiology, Diabetes, Epidemiology, Medicine, Neurology, Pharmacology, Studies

Materials (I)…

“Useful matter is a good definition of materials. […] Materials are materials because inventive people find ingenious things to do with them. Or just because people use them. […] Materials science […] explains how materials are made and how they behave as we use them.”

I recently read this book, which I liked. Below I have added some quotes from the first half of the book, along with some hopefully helpful links, as well as a collection of links at the bottom of the post on other topics covered.

“We understand all materials by knowing about composition and microstructure. Despite their extraordinary minuteness, the atoms are the fundamental units, and they are real, with precise attributes, not least size. Solid materials tend towards crystallinity (for the good thermodynamic reason that it is the arrangement of lowest energy), and they usually achieve it, though often in granular, polycrystalline forms. Processing conditions greatly influence microstructures which may be mobile and dynamic, particularly at high temperatures. […] The idea that we can understand materials by looking at their internal structure in finer and finer detail goes back to the beginnings of microscopy […]. This microstructural view is more than just an important idea, it is the explanatory framework at the core of materials science. Many other concepts and theories exist in materials science, but this is the framework. It says that materials are intricately constructed on many length-scales, and if we don’t understand the internal structure we shall struggle to explain or to predict material behaviour.”

“Oxygen is the most abundant element in the earth’s crust and silicon the second. In nature, silicon occurs always in chemical combination with oxygen, the two forming the strong Si–O chemical bond. The simplest combination, involving no other elements, is silica; and most grains of sand are crystals of silica in the form known as quartz. […] The quartz crystal comes in right- and left-handed forms. Nothing like this happens in metals but arises frequently when materials are built from molecules and chemical bonds. The crystal structure of quartz has to incorporate two different atoms, silicon and oxygen, each in a repeating pattern and in the precise ratio 1:2. There is also the severe constraint imposed by the Si–O chemical bonds which require that each Si atom has four O neighbours arranged around it at the corners of a tetrahedron, every O bonded to two Si atoms. The crystal structure which quartz adopts (which of all possibilities is the one of lowest energy) is made up of triangular and hexagonal units. But within this there are buried helixes of Si and O atoms, and a helix must be either right- or left-handed. Once a quartz crystal starts to grow as right- or left-handed, its structure templates all the other helices with the same handedness. Equal numbers of right- and left-handed crystals occur in nature, but each is unambiguously one or the other.”

“In the living tree, and in the harvested wood that we use as a material, there is a hierarchy of structural levels, climbing all the way from the molecular to the scale of branch and trunk. The stiff cellulose chains are bundled into fibrils, which are themselves bonded by other organic molecules to build the walls of cells; which in turn form channels for the transport of water and nutrients, the whole having the necessary mechanical properties to support its weight and to resist the loads of wind and rain. In the living tree, the structure allows also for growth and repair. There are many things to be learned from biological materials, but the most universal is that biology builds its materials at many structural levels, and rarely makes a distinction between the material and the organism. Being able to build materials with hierarchical architectures is still more or less out of reach in materials engineering. Understanding how materials spontaneously self-assemble is the biggest challenge in contemporary nanotechnology.”

“The example of diamond shows two things about crystalline materials. First, anything we know about an atom and its immediate environment (neighbours, distances, angles) holds for every similar atom throughout a piece of material, however large; and second, everything we know about the unit cell (its size, its shape, and its symmetry) also applies throughout an entire crystal […] and by extension throughout a material made of a myriad of randomly oriented crystallites. These two general propositions provide the basis and justification for lattice theories of material behaviour which were developed from the 1920s onwards. We know that every solid material must be held together by internal cohesive forces. If it were not, it would fly apart and turn into a gas. A simple lattice theory says that if we can work out what forces act on the atoms in one unit cell, then this should be enough to understand the cohesion of the entire crystal. […] In lattice models which describe the cohesion and dynamics of the atoms, the role of the electrons is mainly in determining the interatomic bonding and the stiffness of the bond-spring. But in many materials, and especially in metals and semiconductors, some of the electrons are free to move about within the lattice. A lattice model of electron behaviour combines a geometrical description of the lattice with a more or less mechanical view of the atomic cores, and a fully quantum theoretical description of the electrons themselves. We need only to take account of the outer electrons of the atoms, as the inner electrons are bound tightly into the cores and are not itinerant. The outer electrons are the ones that form chemical bonds, so they are also called the valence electrons.”

“It is harder to push atoms closer together than to pull them further apart. While atoms are soft on the outside, they have harder cores, and pushed together the cores start to collide. […] when we bring a trillion atoms together to form a crystal, it is the valence electrons that are disturbed as the atoms approach each other. As the atomic cores come close to the equilibrium spacing of the crystal, the electron states of the isolated atoms morph into a set of collective states […]. These collective electron states have a continuous distribution of energies up to a top level, and form a ‘band’. But the separation of the valence electrons into distinct electron-pair states is preserved in the band structure, so that we find that the collective states available to the entire population of valence electrons in the entire crystal form a set of bands […]. Thus in silicon, there are two main bands.”

“The perfect crystal has atoms occupying all the positions prescribed by the geometry of its crystal lattice. But real crystalline materials fall short of perfection […] For instance, an individual site may be unoccupied (a vacancy). Or an extra atom may be squeezed into the crystal at a position which is not a lattice position (an interstitial). An atom may fall off its lattice site, creating a vacancy and an interstitial at the same time. Sometimes a site is occupied by the wrong kind of atom. Point defects of this kind distort the crystal in their immediate neighbourhood. Vacancies free up diffusional movement, allowing atoms to hop from site to site. Larger scale defects invariably exist too. A complete layer of atoms or unit cells may terminate abruptly within the crystal to produce a line defect (a dislocation). […] There are materials which try their best to crystallize, but find it hard to do so. Many polymer materials are like this. […] The best they can do is to form small crystalline regions in which the molecules lie side by side over limited distances. […] Often the crystalline domains comprise about half the material: it is a semicrystal. […] Crystals can be formed from the melt, from solution, and from the vapour. All three routes are used in industry and in the laboratory. As a rule, crystals that grow slowly are good crystals. Geological time can give wonderful results. Often, crystals are grown on a seed, a small crystal of the same material deliberately introduced into the crystallization medium. If this is a melt, the seed can gradually be pulled out, drawing behind it a long column of new crystal material. This is the Czochralski process, an important method for making semiconductors. […] However it is done, crystals invariably grow by adding material to the surface of a small particle to make it bigger.”

“As we go down the Periodic Table of elements, the atoms get heavier much more quickly than they get bigger. The mass of a single atom of uranium at the bottom of the Table is about 25 times greater than that of an atom of the lightest engineering metal, beryllium, at the top, but its radius is only 40 per cent greater. […] The density of solid materials of every kind is fixed mainly by where the constituent atoms are in the Periodic Table. The packing arrangement in the solid has only a small influence, although the crystalline form of a substance is usually a little denser than the amorphous form […] The range of solid densities available is therefore quite limited. At the upper end we hit an absolute barrier, with nothing denser than osmium (22,590 kg/m3). At the lower end we have some slack, as we can make lighter materials by the trick of incorporating holes to make foams and sponges and porous materials of all kinds. […] in the entire catalogue of available materials there is a factor of about a thousand for ingenious people to play with, from say 20 to 20,000 kg/m3.”

“The expansion of materials as we increase their temperature is a universal tendency. It occurs because as we raise the temperature the thermal energy of the atoms and molecules increases correspondingly, and this fights against the cohesive forces of attraction. The mean distance of separation between atoms in the solid (or the liquid) becomes larger. […] As a general rule, the materials with small thermal expansivities are metals and ceramics with high melting temperatures. […] Although thermal expansion is a smooth process which continues from the lowest temperatures to the melting point, it is sometimes interrupted by sudden jumps […]. Changes in crystal structure at precise temperatures are commonplace in materials of all kinds. […] There is a cluster of properties which describe the thermal behaviour of materials. Besides the expansivity, there is the specific heat, and also the thermal conductivity. These properties show us, for example, that it takes about four times as much energy to increase the temperature of 1 kilogram of aluminium by 1°C as 1 kilogram of silver; and that good conductors of heat are usually also good conductors of electricity. At everyday temperatures there is not a huge difference in specific heat between materials. […] In all crystalline materials, thermal conduction arises from the diffusion of phonons from hot to cold regions. As they travel, the phonons are subject to scattering both by collisions with other phonons, and with defects in the material. This picture explains why the thermal conductivity falls as temperature rises”.
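The aluminium-versus-silver factor of four follows from the Dulong–Petit law (also in the link list below): per mole, most simple solids have a heat capacity near 3R, so per-kilogram specific heat scales inversely with molar mass, and Al (~27 g/mol) versus Ag (~108 g/mol) gives a ratio of about four. A sketch of that estimate:

```python
R = 8.314  # gas constant, J/(mol*K)

def dulong_petit_specific_heat(molar_mass_g):
    """Approximate specific heat in J/(kg*K) from the Dulong-Petit law (3R per mole)."""
    return 3 * R / (molar_mass_g / 1000.0)

c_al = dulong_petit_specific_heat(26.98)   # aluminium
c_ag = dulong_petit_specific_heat(107.87)  # silver

# The per-kilogram ratio is just the inverse ratio of molar masses.
print(round(c_al), round(c_ag), round(c_al / c_ag, 1))  # ratio -> 4.0
```

The estimates (~925 and ~231 J/(kg·K)) land close to the measured room-temperature values of about 897 and 235 J/(kg·K).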


Materials science.
Inorganic compound.
Organic compound.
Solid solution.
Copper. Bronze. Brass. Alloy.
Electrical conductivity.
Steel. Bessemer converter. Gamma iron. Alpha iron. Cementite. Martensite.
Phase diagram.
Equation of state.
Calcite. Limestone.
Portland cement.
Laue diffraction pattern.
Silver bromide. Latent image. Photographic film. Henry Fox Talbot.
Graphene. Graphite.
Thermal expansion.
Dulong–Petit law.
Wiedemann–Franz law.


November 14, 2017 Posted by | Biology, Books, Chemistry, Engineering, Physics

Common Errors in Statistics… (III)

This will be my last post about the book. I liked most of it, and I gave it four stars on goodreads, but that doesn’t mean there weren’t any observations included in the book with which I took issue/disagreed. Here’s one of the things I didn’t like:

“In the univariate [model selection] case, if the errors were not normally distributed, we could take advantage of permutation methods to obtain exact significance levels in tests of the coefficients. Exact permutation methods do not exist in the multivariable case.

When selecting variables to incorporate in a multivariable model, we are forced to perform repeated tests of hypotheses, so that the resultant p-values are no longer meaningful. One solution, if sufficient data are available, is to divide the dataset into two parts, using the first part to select variables, and the second part to test these same variables for significance.” (chapter 13)

The basic idea is to use the results of hypothesis tests to decide which variables to include in the model. This is both common practice and bad practice. I found it surprising that such a piece of advice would be included in this book, as I’d figured beforehand that this would be precisely the sort of thing a book like this one would tell people not to do. I’ve said this before multiple times on this blog, but I’ll keep saying it, especially if/when I find this sort of advice in statistics textbooks: Using hypothesis testing as a basis for model selection is an invalid approach to model selection, and it’s in general a terrible idea. “There is no statistical theory that supports the notion that hypothesis testing with a fixed α level is a basis for model selection.” (Burnham & Anderson). Use information criteria, not hypothesis tests, to make your model selection decisions. (And read Burnham & Anderson’s book on these topics.)
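For a least-squares fit, AIC can be computed as n·ln(RSS/n) + 2k, where k is the number of estimated parameters, and the model with the lower AIC is preferred. A minimal pure-Python sketch on synthetic data, comparing a mean-only model against a simple linear model:

```python
import math
import random

random.seed(0)

# Synthetic data: y really does depend linearly on x, plus noise.
n = 100
x = [random.uniform(0, 10) for _ in range(n)]
y = [2.0 + 3.0 * xi + random.gauss(0, 1) for xi in x]

def rss_intercept_only(y):
    """Residual sum of squares for the mean-only (null) model."""
    m = sum(y) / len(y)
    return sum((yi - m) ** 2 for yi in y)

def rss_simple_ols(x, y):
    """Residual sum of squares for y = a + b*x fitted by closed-form OLS."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

def aic(rss, n, k):
    """AIC for a least-squares fit with k estimated parameters."""
    return n * math.log(rss / n) + 2 * k

aic_null = aic(rss_intercept_only(y), n, k=1)
aic_linear = aic(rss_simple_ols(x, y), n, k=2)

# Selection by information criterion: prefer the lower AIC.
print(aic_linear < aic_null)
```

No p-value is consulted anywhere: the extra parameter is admitted only because the improvement in fit outweighs the 2k complexity penalty.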

Anyway, much of the stuff included in the book was good stuff and it’s a very decent book. I’ve added some quotes and observations from the last part of the book below.

“OLS is not the only modeling technique. To diminish the effect of outliers, and treat prediction errors as proportional to their absolute magnitude rather than their squares, one should use least absolute deviation (LAD) regression. This would be the case if the conditional distribution of the dependent variable were characterized by a distribution with heavy tails (compared to the normal distribution, increased probability of values far from the mean). One should also employ LAD regression when the conditional distribution of the dependent variable given the predictors is not symmetric and we wish to estimate its median rather than its mean value.
If it is not clear which variable should be viewed as the predictor and which the dependent variable, as is the case when evaluating two methods of measurement, then one should employ Deming or error in variable (EIV) regression.
If one’s primary interest is not in the expected value of the dependent variable but in its extremes (the number of bacteria that will survive treatment or the number of individuals who will fall below the poverty line), then one ought consider the use of quantile regression.
If distinct strata exist, one should consider developing separate regression models for each stratum, a technique known as ecological regression […] If one’s interest is in classification or if the majority of one’s predictors are dichotomous, then one should consider the use of classification and regression trees (CART) […] If the outcomes are limited to success or failure, one ought employ logistic regression. If the outcomes are counts rather than continuous measurements, one should employ a generalized linear model (GLM).”
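The robustness point about LAD can be seen in the simplest possible setting: for an intercept-only model, minimizing squared error gives the mean and minimizing absolute error gives the median, so a single heavy-tailed outlier drags the OLS fit but barely moves the LAD fit. A sketch with made-up numbers:

```python
from statistics import mean, median

# Ten well-behaved observations near 10, plus one gross outlier.
y = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9, 100.0]

# Intercept-only model: the OLS estimate (least squares) is the mean;
# the LAD estimate (least absolute deviation) is the median.
ols_fit = mean(y)    # dragged far from 10 by the outlier
lad_fit = median(y)  # essentially unaffected

print(round(ols_fit, 2), lad_fit)  # -> 18.18 10.0
```

The same contrast carries over to regression with predictors: the LAD line tracks the conditional median, which is why it suits heavy-tailed or asymmetric conditional distributions.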

“Linear regression is a much misunderstood and mistaught concept. If a linear model provides a good fit to data, this does not imply that a plot of the dependent variable with respect to the predictor would be a straight line, only that a plot of the dependent variable with respect to some not-necessarily monotonic function of the predictor would be a line. For example, y = A + B log[x] and y = A cos(x) + B sin(x) are both linear models whose coefficients A and B might be derived by OLS or LAD methods. Y = Ax⁵ is a linear model. Y = xᴬ is nonlinear. […] Perfect correlation (ρ² = 1) does not imply that two variables are identical but rather that one of them, Y, say, can be written as a linear function of the other, Y = a + bX, where b is the slope of the regression line and a is the intercept. […] Nonlinear regression methods are appropriate when the form of the nonlinear model is known in advance. For example, a typical pharmacological model will have the form A exp[bX] + C exp[dW]. The presence of numerous locally optimal but globally suboptimal solutions creates challenges, and validation is essential. […] To be avoided are a recent spate of proprietary algorithms available solely in software form that guarantee to find a best-fitting solution. In the words of John von Neumann, “With four parameters I can fit an elephant and with five I can make him wiggle his trunk.””
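"Linear" here means linear in the coefficients, not in the predictor: y = A + B log x becomes an ordinary straight-line fit once the predictor is transformed. A pure-Python sketch with hypothetical noise-free data, so the fit recovers A and B exactly:

```python
import math

# Hypothetical data generated from y = 2 + 3*log(x) with no noise,
# so OLS should recover A = 2 and B = 3 exactly (up to rounding).
xs = [1.0, 2.0, 4.0, 8.0, 16.0]
ys = [2.0 + 3.0 * math.log(x) for x in xs]

# Transform the predictor; the model is now linear in its coefficients.
t = [math.log(x) for x in xs]

# Closed-form simple OLS on (t, y).
n = len(t)
mt, my = sum(t) / n, sum(ys) / n
B = sum((ti - mt) * (yi - my) for ti, yi in zip(t, ys)) / \
    sum((ti - mt) ** 2 for ti in t)
A = my - B * mt

print(round(A, 6), round(B, 6))  # -> 2.0 3.0
```

By contrast, y = x^A cannot be rearranged into a form linear in A, which is what pushes it into nonlinear-regression territory.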

“[T]he most common errors associated with quantile regression include: 1. Failing to evaluate whether the model form is appropriate, for example, forcing linear fit through an obvious nonlinear response. (Of course, this is also a concern with mean regression, OLS, LAD, or EIV.) 2. Trying to overinterpret a single quantile estimate (say 0.85) with a statistically significant nonzero slope (p < 0.05) when the majority of adjacent quantiles (say 0.50–0.84 and 0.86–0.95) are clearly zero (p > 0.20). 3. Failing to use all the information a quantile regression provides. Even if you think you are only interested in relations near maximum (say 0.90–0.99), your understanding will be enhanced by having estimates (and sampling variation via confidence intervals) across a wide range of quantiles (say 0.01–0.99).”

“Survival analysis is used to assess time-to-event data including time to recovery and time to revision. Most contemporary survival analysis is built around the Cox model […] Possible sources of error in the application of this model include all of the following: *Neglecting the possible dependence of the baseline function λ0 on the predictors. *Overmatching, that is, using highly correlated predictors that may well mask each other’s effects. *Using the parametric Breslow or Kaplan–Meier estimators of the survival function rather than the nonparametric Nelson–Aalen estimator. *Excluding patients based on post-hoc criteria. Pathology workups on patients who died during the study may reveal that some of them were wrongly diagnosed. Regardless, patients cannot be eliminated from the study as we lack the information needed to exclude those who might have been similarly diagnosed but who are still alive at the conclusion of the study. *Failure to account for differential susceptibility (frailty) of the patients”.
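As a sketch of the Nelson–Aalen estimator named above: the cumulative hazard is estimated as H(t) = Σ over event times tᵢ ≤ t of dᵢ/nᵢ, where dᵢ is the number of events at tᵢ and nᵢ the number still at risk just before tᵢ. The data below are made up; censored observations contribute only by shrinking the risk set:

```python
# Made-up data: (time, event) pairs; event=1 is a death, 0 is censoring.
data = [(2, 1), (3, 0), (5, 1), (5, 1), (7, 0), (8, 1), (10, 0)]

def nelson_aalen(data):
    """Return [(t, H(t))] at each distinct event time (cumulative hazard)."""
    event_times = sorted({t for t, e in data if e == 1})
    out = []
    H = 0.0
    for t in event_times:
        at_risk = sum(1 for ti, _ in data if ti >= t)            # n_i
        deaths = sum(1 for ti, e in data if ti == t and e == 1)  # d_i
        H += deaths / at_risk
        out.append((t, H))
    return out

for t, H in nelson_aalen(data):
    print(t, round(H, 4))
```

Each increment dᵢ/nᵢ is the estimated hazard at that event time, which is why the estimator handles ties and censoring without any distributional assumptions.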

“In reporting the results of your modeling efforts, you need to be explicit about the methods used, the assumptions made, the limitations on your model’s range of application, potential sources of bias, and the method of validation […] Multivariable regression is plagued by the same problems univariate regression is heir to, plus many more of its own. […] If choosing the correct functional form of a model in a univariate case presents difficulties, consider that in the case of k variables, there are k linear terms (should we use logarithms? should we add polynomial terms?) and k(k − 1) first-order cross products of the form xᵢxₖ. Should we include any of the k(k − 1)(k − 2) second-order cross products? A common error is to attribute the strength of a relationship to the magnitude of the predictor’s regression coefficient […] Just scale the units in which the predictor is reported to see how erroneous such an assumption is. […] One of the main problems in multiple regression is multicollinearity, which is the correlation among predictors. Even relatively weak levels of multicollinearity are enough to generate instability in multiple regression models […]. A simple solution is to evaluate the correlation matrix M among predictors, and use this matrix to choose the predictors that are less correlated. […] Test M for each predictor, using the variance inflation factor (VIF) given by 1/(1 − R²), where R² is the multiple coefficient of determination of the predictor against all other predictors. If VIF is large for a given predictor (>8, say) delete this predictor and reestimate the model. […] Dropping collinear variables from the analysis can result in a substantial loss of power”.
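With only two predictors the VIF check collapses to VIF = 1/(1 − r²), where r is the correlation between them, since the R² of one predictor regressed on the other is just r². A pure-Python sketch with hypothetical, deliberately collinear data:

```python
# Two hypothetical, strongly collinear predictors.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [1.1, 2.0, 3.2, 3.9, 5.1, 6.0]  # nearly a copy of x1

def pearson_r(a, b):
    """Sample Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) *
           sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

r = pearson_r(x1, x2)
vif = 1.0 / (1.0 - r ** 2)  # VIF = 1/(1 - R^2)

# Against the >8 rule of thumb quoted above, this pair is clearly flagged.
print(round(r, 4), vif > 8)
```

With k > 2 predictors, R² for each predictor must come from a full regression against all the others, but the 1/(1 − R²) formula is unchanged.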

“It can be difficult to predict the equilibrium point for a supply-and-demand model, because producers change their price in response to demand and consumers change their demand in response to price. Failing to account for endogenous variables can lead to biased estimates of the regression coefficients.
Endogeneity can arise not only as a result of omitted variables, but of measurement error, autocorrelated errors, simultaneity, and sample selection errors. One solution is to make use of instrument variables that should satisfy two conditions: 1. They should be correlated with the endogenous explanatory variables, conditional on the other covariates. 2. They should not be correlated with the error term in the explanatory equation, that is, they should not suffer from the same problem as the original predictor.
Instrumental variables are commonly used to estimate causal effects in contexts in which controlled experiments are not possible, for example in estimating the effects of past and projected government policies.”

“[T]he following errors are frequently associated with factor analysis: *Applying it to datasets with too few cases in relation to the number of variables analyzed […], without noticing that correlation coefficients have very wide confidence intervals in small samples. *Using oblique rotation to get a number of factors bigger or smaller than the number of factors obtained in the initial extraction by principal components, as a way to show the validity of a questionnaire. For example, obtaining only one factor by principal components and using the oblique rotation to justify that there were two differentiated factors, even when the two factors were correlated and the variance explained by the second factor was very small. *Confusion among the total variance explained by a factor and the variance explained in the reduced factorial space. In this way a researcher interpreted that a given group of factors explaining 70% of the variance before rotation could explain 100% of the variance after rotation.”

“Poisson regression is appropriate when the dependent variable is a count, as is the case with the arrival of individuals in an emergency room. It is also applicable to the spatial distributions of tornadoes and of clusters of galaxies.2 To be applicable, the events underlying the outcomes must be independent […] A strong assumption of the Poisson regression model is that the mean and variance are equal (equidispersion). When the variance of a sample exceeds the mean, the data are said to be overdispersed. Fitting the Poisson model to overdispersed data can lead to misinterpretation of coefficients due to poor estimates of standard errors. Naturally occurring count data are often overdispersed due to correlated errors in time or space, or other forms of nonindependence of the observations. One solution is to fit a Poisson model as if the data satisfy the assumptions, but adjust the model-based standard errors usually employed. Another solution is to estimate a negative binomial model, which allows for scalar overdispersion.”
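A quick equidispersion check is simply to compare the sample mean and variance of the counts; a dispersion ratio well above 1 is the warning sign described above. A sketch with made-up, clustered (and hence overdispersed) count data:

```python
from statistics import mean, variance

# Made-up daily arrival counts; the bursts of 7-10 produce extra variance.
counts = [0, 1, 0, 2, 1, 0, 9, 8, 1, 0, 2, 10, 1, 0, 7]

m = mean(counts)
v = variance(counts)  # sample variance

# Poisson regression assumes mean == variance (equidispersion).
# A ratio well above 1 suggests adjusting the model-based standard
# errors or switching to a negative binomial model.
dispersion = v / m
print(round(m, 2), round(v, 2), dispersion > 1.5)
```

In practice the formal check is run on the fitted model's Pearson residuals rather than the raw counts, but the raw mean-variance comparison is the right first look.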

“When multiple observations are collected for each principal sampling unit, we refer to the collected information as panel data, correlated data, or repeated measures. […] The dependency of observations violates one of the tenets of regression analysis: that observations are supposed to be independent and identically distributed or IID. Several concerns arise when observations are not independent. First, the effective number of observations (that is, the effective amount of information) is less than the physical number of observations […]. Second, any model that fails to specifically address [the] correlation is incorrect […]. Third, although the correct specification of the correlation will yield the most efficient estimator, that specification is not the only one to yield a consistent estimator.”
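The first concern can be quantified with the standard Kish design-effect formula: for panels of m observations each sharing intraclass correlation ρ, the effective sample size is n·m / (1 + (m − 1)ρ). A sketch with hypothetical panel dimensions:

```python
# Design effect for clustered/panel data: with n_clusters panels of m
# correlated observations each, the effective amount of independent
# information is smaller than the physical count n_clusters * m.

def effective_n(n_clusters, m, rho):
    """Kish effective sample size under a common intraclass correlation rho."""
    physical = n_clusters * m
    return physical / (1.0 + (m - 1) * rho)

# Hypothetical panel: 50 subjects, 10 repeated measures each, rho = 0.4.
physical = 50 * 10  # 500 observations on paper
effective = effective_n(50, 10, 0.4)

print(physical, round(effective, 1))  # roughly a fifth survive as information
```

At ρ = 0 the formula returns the physical count, and at ρ = 1 each panel collapses to the equivalent of a single observation, which matches the intuition in the quote.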

“The basic issue in deciding whether to utilize a fixed- or random-effects model is whether the sampling units (for which multiple observations are collected) represent the collection of most or all of the entities for which inference will be drawn. If so, the fixed-effects estimator is to be preferred. On the other hand, if those same sampling units represent a random sample from a larger population for which we wish to make inferences, then the random-effects estimator is more appropriate. […] Fixed- and random-effects models address unobserved heterogeneity. The random-effects model assumes that the panel-level effects are randomly distributed. The fixed-effects model assumes a constant disturbance that is a special case of the random-effects model. If the random-effects assumption is correct, then the random-effects estimator is more efficient than the fixed-effects estimator. If the random-effects assumption does not hold […], then the random effects model is not consistent. To help decide whether the fixed- or random-effects model is more appropriate, use the Durbin–Wu–Hausman3 test comparing coefficients from each model. […] Although fixed-effects estimators and random-effects estimators are referred to as subject-specific estimators, the GEEs available through PROC GENMOD in SAS or xtgee in Stata, are called population-averaged estimators. This label refers to the interpretation of the fitted regression coefficients. Subject-specific estimators are interpreted in terms of an effect for a given panel, whereas population-averaged estimators are interpreted in terms of an effect averaged over panels.”

“A favorite example in comparing subject-specific and population-averaged estimators is to consider the difference in interpretation of regression coefficients for a binary outcome model on whether a child will exhibit symptoms of respiratory illness. The predictor of interest is whether or not the child’s mother smokes. Thus, we have repeated observations on children and their mothers. If we were to fit a subject-specific model, we would interpret the coefficient on smoking as the change in likelihood of respiratory illness as a result of the mother switching from not smoking to smoking. On the other hand, the interpretation of the coefficient in a population-averaged model is the likelihood of respiratory illness for the average child with a nonsmoking mother compared to the likelihood for the average child with a smoking mother. Both models offer equally valid interpretations. The interpretation of interest should drive model selection; some studies ultimately will lead to fitting both types of models. […] In addition to model-based variance estimators, fixed-effects models and GEEs [Generalized Estimating Equation models] also admit modified sandwich variance estimators. SAS calls this the empirical variance estimator. Stata refers to it as the Robust Cluster estimator. Whatever the name, the most desirable property of the variance estimator is that it yields inference for the regression coefficients that is robust to misspecification of the correlation structure. […] Specification of GEEs should include careful consideration of reasonable correlation structure so that the resulting estimator is as efficient as possible. To protect against misspecification of the correlation structure, one should base inference on the modified sandwich variance estimator. This is the default estimator in SAS, but the user must specify it in Stata.”

“There are three main approaches to [model] validation: 1. Independent verification (obtained by waiting until the future arrives or through the use of surrogate variables). 2. Splitting the sample (using one part for calibration, the other for verification). 3. Resampling (taking repeated samples from the original sample and refitting the model each time).
Goodness of fit is no guarantee of predictive success. […] Splitting the sample into two parts, one for estimating the model parameters, the other for verification, is particularly appropriate for validating time series models in which the emphasis is on prediction or reconstruction. If the observations form a time series, the more recent observations should be reserved for validation purposes. Otherwise, the data used for validation should be drawn at random from the entire sample. Unfortunately, when we split the sample and use only a portion of it, the resulting estimates will be less precise. […] The proportion to be set aside for validation purposes will depend upon the loss function. If both the goodness-of-fit error in the calibration sample and the prediction error in the validation sample are based on mean-squared error, Picard and Berk [1990] report that we can minimize their sum by using between a quarter and a third of the sample for validation purposes.”
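A minimal sketch of the split-sample procedure on synthetic (non-time-series) data, so the validation set is drawn at random and roughly a third is held out, in line with the Picard and Berk recommendation quoted above:

```python
import math
import random

random.seed(1)

# Synthetic data from y = 1 + 2x + noise.
data = [(x, 1.0 + 2.0 * x + random.gauss(0, 0.5))
        for x in (random.uniform(0, 10) for _ in range(90))]

# Random split: two thirds for calibration, one third for validation.
random.shuffle(data)
calib, valid = data[:60], data[60:]

# Fit y = a + b*x on the calibration sample only (closed-form OLS).
n = len(calib)
mx = sum(x for x, _ in calib) / n
my = sum(y for _, y in calib) / n
b = sum((x - mx) * (y - my) for x, y in calib) / \
    sum((x - mx) ** 2 for x, _ in calib)
a = my - b * mx

# Judge the model by its prediction error on the held-out sample.
mse = sum((y - (a + b * x)) ** 2 for x, y in valid) / len(valid)
print(round(a, 2), round(b, 2), round(mse, 3))
```

For a genuine time series the shuffle would be wrong: the most recent observations should form the validation set instead, exactly as the quote says.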

November 13, 2017 Posted by | Books, Statistics

Organic Chemistry (II)

I have included some observations from the second half of the book below, as well as some links to topics covered.

“[E]nzymes are used routinely to catalyse reactions in the research laboratory, and for a variety of industrial processes involving pharmaceuticals, agrochemicals, and biofuels. In the past, enzymes had to be extracted from natural sources — a process that was both expensive and slow. But nowadays, genetic engineering can incorporate the gene for a key enzyme into the DNA of fast growing microbial cells, allowing the enzyme to be obtained more quickly and in far greater yield. Genetic engineering has also made it possible to modify the amino acids making up an enzyme. Such modified enzymes can prove more effective as catalysts, accept a wider range of substrates, and survive harsher reaction conditions. […] New enzymes are constantly being discovered in the natural world as well as in the laboratory. Fungi and bacteria are particularly rich in enzymes that allow them to degrade organic compounds. It is estimated that a typical bacterial cell contains about 3,000 enzymes, whereas a fungal cell contains 6,000. Considering the variety of bacterial and fungal species in existence, this represents a huge reservoir of new enzymes, and it is estimated that only 3 per cent of them have been investigated so far.”

“One of the most important applications of organic chemistry involves the design and synthesis of pharmaceutical agents — a topic that is defined as medicinal chemistry. […] In the 19th century, chemists isolated chemical components from known herbs and extracts. Their aim was to identify a single chemical that was responsible for the extract’s pharmacological effects — the active principle. […] It was not long before chemists synthesized analogues of active principles. Analogues are structures which have been modified slightly from the original active principle. Such modifications can often improve activity or reduce side effects. This led to the concept of the lead compound — a compound with a useful pharmacological activity that could act as the starting point for further research. […] The first half of the 20th century culminated in the discovery of effective antimicrobial agents. […] The 1960s can be viewed as the birth of rational drug design. During that period there were important advances in the design of effective anti-ulcer agents, anti-asthmatics, and beta-blockers for the treatment of high blood pressure. Much of this was based on trying to understand how drugs work at the molecular level and proposing theories about why some compounds were active and some were not.”

“[R]ational drug design was boosted enormously towards the end of the century by advances in both biology and chemistry. The sequencing of the human genome led to the identification of previously unknown proteins that could serve as potential drug targets. […] Advances in automated, small-scale testing procedures (high-throughput screening) also allowed the rapid testing of potential drugs. In chemistry, advances were made in X-ray crystallography and NMR spectroscopy, allowing scientists to study the structure of drugs and their mechanisms of action. Powerful molecular modelling software packages were developed that allowed researchers to study how a drug binds to a protein binding site. […] the development of automated synthetic methods has vastly increased the number of compounds that can be synthesized in a given time period. Companies can now produce thousands of compounds that can be stored and tested for pharmacological activity. Such stores have been called chemical libraries and are routinely tested to identify compounds capable of binding with a specific protein target. These advances have boosted medicinal chemistry research over the last twenty years in virtually every area of medicine.”

“Drugs interact with molecular targets in the body such as proteins and nucleic acids. However, the vast majority of clinically useful drugs interact with proteins, especially receptors, enzymes, and transport proteins […] Enzymes are […] important drug targets. Drugs that bind to the active site and prevent the enzyme acting as a catalyst are known as enzyme inhibitors. […] Enzymes are located inside cells, and so enzyme inhibitors have to cross cell membranes in order to reach them—an important consideration in drug design. […] Transport proteins are targets for a number of therapeutically important drugs. For example, a group of antidepressants known as selective serotonin reuptake inhibitors prevent serotonin being transported into neurons by transport proteins.”

“The main pharmacokinetic factors are absorption, distribution, metabolism, and excretion. Absorption relates to how much of an orally administered drug survives the digestive enzymes and crosses the gut wall to reach the bloodstream. Once there, the drug is carried to the liver where a certain percentage of it is metabolized by metabolic enzymes. This is known as the first-pass effect. The ‘survivors’ are then distributed round the body by the blood supply, but this is an uneven process. The tissues and organs with the richest supply of blood vessels receive the greatest proportion of the drug. Some drugs may get ‘trapped’ or sidetracked. For example fatty drugs tend to get absorbed in fat tissue and fail to reach their target. The kidneys are chiefly responsible for the excretion of drugs and their metabolites.”

“Having identified a lead compound, it is important to establish which features of the compound are important for activity. This, in turn, can give a better understanding of how the compound binds to its molecular target. Most drugs are significantly smaller than molecular targets such as proteins. This means that the drug binds to quite a small region of the protein — a region known as the binding site […]. Within this binding site, there are binding regions that can form different types of intermolecular interactions such as van der Waals interactions, hydrogen bonds, and ionic interactions. If a drug has functional groups and substituents capable of interacting with those binding regions, then binding can take place. A lead compound may have several groups that are capable of forming intermolecular interactions, but not all of them are necessarily needed. One way of identifying the important binding groups is to crystallize the target protein with the drug bound to the binding site. X-ray crystallography then produces a picture of the complex which allows identification of binding interactions. However, it is not always possible to crystallize target proteins and so a different approach is needed. This involves synthesizing analogues of the lead compound where groups are modified or removed. Comparing the activity of each analogue with the lead compound can then determine whether a particular group is important or not. This is known as an SAR study, where SAR stands for structure–activity relationships. Once the important binding groups have been identified, the pharmacophore for the lead compound can be defined. This specifies the important binding groups and their relative position in the molecule.”

“One way of identifying the active conformation of a flexible lead compound is to synthesize rigid analogues where the binding groups are locked into defined positions. This is known as rigidification or conformational restriction. The pharmacophore will then be represented by the most active analogue. […] A large number of rotatable bonds is likely to have an adverse effect on drug activity. This is because a flexible molecule can adopt a large number of conformations, and only one of these shapes corresponds to the active conformation. […] In contrast, a totally rigid molecule containing the required pharmacophore will bind the first time it enters the binding site, resulting in greater activity. […] It is also important to optimize a drug’s pharmacokinetic properties such that it can reach its target in the body. Strategies include altering the drug’s hydrophilic/hydrophobic properties to improve absorption, and the addition of substituents that block metabolism at specific parts of the molecule. […] The drug candidate must [in general] have useful activity and selectivity, with minimal side effects. It must have good pharmacokinetic properties, lack toxicity, and preferably have no interactions with other drugs that might be taken by a patient. Finally, it is important that it can be synthesized as cheaply as possible”.

“Most drugs that have reached clinical trials for the treatment of Alzheimer’s disease have failed. Between 2002 and 2012, 244 novel compounds were tested in 414 clinical trials, but only one drug gained approval. This represents a failure rate of 99.6 per cent as against a failure rate of 81 per cent for anti-cancer drugs.”

“It takes about ten years and £160 million to develop a new pesticide […] The volume of global sales increased 47 per cent in the ten-year period between 2002 and 2012, while, in 2012, total sales amounted to £31 billion. […] In many respects, agrochemical research is similar to pharmaceutical research. The aim is to find pesticides that are toxic to ‘pests’, but relatively harmless to humans and beneficial life forms. The strategies used to achieve this goal are also similar. Selectivity can be achieved by designing agents that interact with molecular targets that are present in pests, but not other species. Another approach is to take advantage of any metabolic reactions that are unique to pests. An inactive prodrug could then be designed that is metabolized to a toxic compound in the pest, but remains harmless in other species. Finally, it might be possible to take advantage of pharmacokinetic differences between pests and other species, such that a pesticide reaches its target more easily in the pest. […] Insecticides are being developed that act on a range of different targets as a means of tackling resistance. If resistance should arise to an insecticide acting on one particular target, then one can switch to using an insecticide that acts on a different target. […] Several insecticides act as insect growth regulators (IGRs) and target the moulting process rather than the nervous system. In general, IGRs take longer to kill insects but are thought to cause less detrimental effects to beneficial insects. […] Herbicides control weeds that would otherwise compete with crops for water and soil nutrients. More is spent on herbicides than any other class of pesticide […] The synthetic agent 2,4-D […] was synthesized by ICI in 1940 as part of research carried out on biological weapons […] It was first used commercially in 1946 and proved highly successful in eradicating weeds in cereal grass crops such as wheat, maize, and rice. […] The compound […] is still the most widely used herbicide in the world.”

“The type of conjugated system present in a molecule determines the specific wavelength of light absorbed. In general, the more extended the conjugation, the higher the wavelength absorbed. For example, β-carotene […] is the molecule responsible for the orange colour of carrots. It has a conjugated system involving eleven double bonds, and absorbs light in the blue region of the spectrum. It appears red because the reflected light lacks the blue component. Zeaxanthin is very similar in structure to β-carotene, and is responsible for the yellow colour of corn. […] Lycopene absorbs blue-green light and is responsible for the red colour of tomatoes, rose hips, and berries. Chlorophyll absorbs red light and is coloured green. […] Scented molecules interact with olfactory receptors in the nose. […] there are around 400 different olfactory protein receptors in humans […] The natural aroma of a rose is due mainly to 2-phenylethanol, geraniol, and citronellol.”

“Over the last fifty years, synthetic materials have largely replaced natural materials such as wood, leather, wool, and cotton. Plastics and polymers are perhaps the most visible sign of how organic chemistry has changed society. […] It is estimated that production of global plastics was 288 million tons in 2012 […] Polymerization involves linking molecular strands called polymers […]. By varying the nature of the monomer, a huge range of different polymers can be synthesized with widely differing properties. The idea of linking small molecular building blocks into polymers is not a new one. Nature has been at it for millions of years using amino acid building blocks to make proteins, and nucleotide building blocks to make nucleic acids […] The raw materials for plastics come mainly from oil, which is a finite resource. Therefore, it makes sense to recycle or depolymerize plastics to recover that resource. Virtually all plastics can be recycled, but it is not necessarily economically feasible to do so. Traditional recycling of polyesters, polycarbonates, and polystyrene tends to produce inferior plastics that are suitable only for low-quality goods.”

Adipic acid.
Protease. Lipase. Amylase. Cellulase.
Conformational change.
Process chemistry (chemical development).
Clinical trial.
N-Methyl carbamate.
Colony collapse disorder.
Ecdysone receptor.
Quinone outside inhibitors (QoI).
11-cis retinal.
Synthetic dyes.
Methylene blue.
Artificial sweeteners.
Addition polymer.
Condensation polymer.
Polyvinyl chloride.
Bisphenol A.
Allotropes of carbon.
Carbon nanotube.
Molecular switch.

November 11, 2017 Posted by | Biology, Books, Botany, Chemistry, Medicine, Pharmacology, Zoology


i. “Much of the skill in doing science resides in knowing where in the hierarchy you are looking – and, as a consequence, what is relevant and what is not.” (Philip Ball – Molecules: A Very Short Introduction)

ii. “…statistical software will no more make one a statistician than a scalpel will turn one into a neurosurgeon. Allowing these tools to do our thinking is a sure recipe for disaster.” (Philip Good & James Hardin, Common Errors in Statistics (and how to avoid them))

iii. “Just as 95% of research efforts are devoted to data collection, 95% of the time remaining should be spent on ensuring that the data collected warrant analysis.” (-ll-)

iv. “One reason why many statistical models are incomplete is that they do not specify the sources of randomness generating variability among agents, i.e., they do not specify why otherwise observationally identical people make different choices and have different outcomes given the same choice.” (James J. Heckman, -ll-)

v. “If a thing is not worth doing, it is not worth doing well.” (J. W. Tukey, -ll-)

vi. “Hypocrisy is the lubricant of society.” (David Hull)

vii. “Every time I fire a linguist, the performance of our speech recognition system goes up.” (Fred Jelinek)

viii. “For most of my life, one of the persons most baffled by my own work was myself.” (Benoît Mandelbrot)

ix. “I’m afraid love is just a word.” (Harry Mulisch)

x. “The worst thing about death is that you once were, and now you are not.” (José Saramago)

xi. “Sometimes the most remarkable things seem commonplace. I mean, when you think about it, jet travel is pretty freaking remarkable. You get in a plane, it defies the gravity of an entire planet by exploiting a loophole with air pressure, and it flies across distances that would take months or years to cross by any means of travel that has been significant for more than a century or three. You hurtle above the earth at enough speed to kill you instantly should you bump into something, and you can only breathe because someone built you a really good tin can that has seams tight enough to hold in a decent amount of air. Hundreds of millions of man-hours of work and struggle and research, blood, sweat, tears, and lives have gone into the history of air travel, and it has totally revolutionized the face of our planet and societies.
But get on any flight in the country, and I absolutely promise you that you will find someone who, in the face of all that incredible achievement, will be willing to complain about the drinks. The drinks, people.” (Jim Butcher, Summer Knight)

xii. “The best way to keep yourself from doing something grossly self-destructive and stupid is to avoid the temptation to do it. For example, it is far easier to fend off inappropriate amorous desires if one runs screaming from the room every time a pretty girl comes in.” (Jim Butcher, Proven Guilty)

xiii. “One certain effect of war is to diminish freedom of expression. Patriotism becomes the order of the day, and those who question the war are seen as traitors, to be silenced and imprisoned.” (Howard Zinn)

xiv. “While inexact models may mislead, attempting to allow for every contingency a priori is impractical. Thus models must be built by an iterative feedback process in which an initial parsimonious model may be modified when diagnostic checks applied to residuals indicate the need.” (G. E. P. Box)

xv. “In our analysis of complex systems (like the brain and language) we must avoid the trap of trying to find master keys. Because of the mechanisms by which complex systems structure themselves, single principles provide inadequate descriptions. We should rather be sensitive to complex and self-organizing interactions and appreciate the play of patterns that perpetually transforms the system itself as well as the environment in which it operates.” (Paul Cilliers)

xvi. “The nature of the chemical bond is the problem at the heart of all chemistry.” (Bryce Crawford)

xvii. “When there’s a will to fail, obstacles can be found.” (John McCarthy)

xviii. “We understand human mental processes only slightly better than a fish understands swimming.” (-ll-)

xix. “He who refuses to do arithmetic is doomed to talk nonsense.” (-ll-)

xx. “The trouble with men is that they have limited minds. That’s the trouble with women, too.” (Joanna Russ)


November 10, 2017 Posted by | Books, Quotes/aphorisms

Organic Chemistry (I)

This book’s a bit longer than most ‘A Very Short Introduction to…’ publications, and it’s quite dense at times, but it includes a lot of interesting stuff. It took me a while to finish it, as I put it away for a while when I hit some of the more demanding content, but I picked it up again later and really enjoyed most of the coverage. In the end I decided that I wouldn’t be doing the book justice if I were to limit my coverage of it to just one post, so this will be only the first of two posts about the book, covering roughly the first half of it.

As usual I have included in my post both some observations from the book (adding a few links to these quotes where I figured they might be helpful) and some wiki links to topics discussed in the book.

“Organic chemistry is a branch of chemistry that studies carbon-based compounds in terms of their structure, properties, and synthesis. In contrast, inorganic chemistry covers the chemistry of all the other elements in the periodic table […] carbon-based compounds are crucial to the chemistry of life. [However] organic chemistry has come to be defined as the chemistry of carbon-based compounds, whether they originate from a living system or not. […] To date, 16 million compounds have been synthesized in organic chemistry laboratories across the world, with novel compounds being synthesized every day. […] The list of commodities that rely on organic chemistry include plastics, synthetic fabrics, perfumes, colourings, sweeteners, synthetic rubbers, and many other items that we use every day.”

“For a neutral carbon atom, there are six electrons occupying the space around the nucleus […] The electrons in the outer shell are defined as the valence electrons and these determine the chemical properties of the atom. The valence electrons are easily ‘accessible’ compared to the two electrons in the first shell. […] There is great significance in carbon being in the middle of the periodic table. Elements which are close to the left-hand side of the periodic table can lose their valence electrons to form positive ions. […] Elements on the right-hand side of the table can gain electrons to form negatively charged ions. […] The impetus for elements to form ions is the stability that is gained by having a full outer shell of electrons. […] Ion formation is feasible for elements situated to the left or the right of the periodic table, but it is less feasible for elements in the middle of the table. For carbon to gain a full outer shell of electrons, it would have to lose or gain four valence electrons, but this would require far too much energy. Therefore, carbon achieves a stable, full outer shell of electrons by another method. It shares electrons with other elements to form bonds. Carbon excels in this and can be considered chemistry’s ultimate elemental socialite. […] Carbon’s ability to form covalent bonds with other carbon atoms is one of the principal reasons why so many organic molecules are possible. Carbon atoms can be linked together in an almost limitless way to form a mind-blowing variety of carbon skeletons. […] carbon can form a bond to hydrogen, but it can also form bonds to atoms such as nitrogen, phosphorus, oxygen, sulphur, fluorine, chlorine, bromine, and iodine. As a result, organic molecules can contain a variety of different elements. Further variety can arise because it is possible for carbon to form double bonds or triple bonds to a variety of other atoms.
The most common double bonds are formed between carbon and oxygen, carbon and nitrogen, or between two carbon atoms. […] The most common triple bonds are found between carbon and nitrogen, or between two carbon atoms.”

“[C]hirality has huge importance. The two enantiomers of a chiral molecule behave differently when they interact with other chiral molecules, and this has important consequences in the chemistry of life. As an analogy, consider your left and right hands. These are asymmetric in shape and are non-superimposable mirror images. Similarly, a pair of gloves are non-superimposable mirror images. A left hand will fit snugly into a left-hand glove, but not into a right-hand glove. In the molecular world, a similar thing occurs. The proteins in our bodies are chiral molecules which can distinguish between the enantiomers of other molecules. For example, enzymes can distinguish between the two enantiomers of a chiral compound and catalyse a reaction with one of the enantiomers but not the other.”

“A key concept in organic chemistry is the functional group. A functional group is essentially a distinctive arrangement of atoms and bonds. […] Functional groups react in particular ways, and so it is possible to predict how a molecule might react based on the functional groups that are present. […] it is impossible to build a molecule atom by atom. Instead, target molecules are built by linking up smaller molecules. […] The organic chemist needs to have a good understanding of the reactions that are possible between different functional groups when choosing the molecular building blocks to be used for a synthesis. […] There are many […] reasons for carrying out FGTs [functional group transformations], especially when synthesizing complex molecules. For example, a starting material or a synthetic intermediate may lack a functional group at a key position of the molecular structure. Several reactions may then be required to introduce that functional group. On other occasions, a functional group may be added to a particular position then removed at a later stage. One reason for adding such a functional group would be to block an unwanted reaction at that position of the molecule. Another common situation is where a reactive functional group is converted to a less reactive functional group such that it does not interfere with a subsequent reaction. Later on, the original functional group is restored by another functional group transformation. This is known as a protection/deprotection strategy. The more complex the target molecule, the greater the synthetic challenge. Complexity is related to the number of rings, functional groups, substituents, and chiral centres that are present. […] The more reactions that are involved in a synthetic route, the lower the overall yield. […] retrosynthesis is a strategy by which organic chemists design a synthesis before carrying it out in practice. 
It is called retrosynthesis because the design process involves studying the target structure and working backwards to identify how that molecule could be synthesized from simpler starting materials. […] a key stage in retrosynthesis is identifying a bond that can be ‘disconnected’ to create those simpler molecules.”

“[V]ery few reactions produce the spectacular visual and audible effects observed in chemistry demonstrations. More typically, reactions involve mixing together two colourless solutions to produce another colourless solution. Temperature changes are a bit more informative. […] However, not all reactions generate heat, and monitoring the temperature is not a reliable way of telling whether the reaction has gone to completion or not. A better approach is to take small samples of the reaction solution at various times and to test these by chromatography or spectroscopy. […] If a reaction is taking place very slowly, different reaction conditions could be tried to speed it up. This could involve heating the reaction, carrying out the reaction under pressure, stirring the contents vigorously, ensuring that the reaction is carried out in a dry atmosphere, using a different solvent, using a catalyst, or using one of the reagents in excess. […] There are a large number of variables that can affect how efficiently reactions occur, and organic chemists in industry are often employed to develop the ideal conditions for a specific reaction. This is an area of organic chemistry known as chemical development. […] Once a reaction has been carried out, it is necessary to isolate and purify the reaction product. This often proves more time-consuming than carrying out the reaction itself. Ideally, one would remove the solvent used in the reaction and be left with the product. However, in most reactions this is not possible as other compounds are likely to be present in the reaction mixture. […] it is usually necessary to carry out procedures that will separate and isolate the desired product from these other compounds. This is known as ‘working up’ the reaction.”

“Proteins are large molecules (macromolecules) which serve a myriad of purposes, and are essentially polymers constructed from molecular building blocks called amino acids […]. In humans, there are twenty different amino acids having the same ‘head group’, consisting of a carboxylic acid and an amine attached to the same carbon atom […] The amino acids are linked up by the carboxylic acid of one amino acid reacting with the amine group of another to form an amide link. Since a protein is being produced, the amide bond is called a peptide bond, and the final protein consists of a polypeptide chain (or backbone) with different side chains ‘hanging off’ the chain […]. The sequence of amino acids present in the polypeptide sequence is known as the primary structure. Once formed, a protein folds into a specific 3D shape […] Nucleic acids […] are another form of biopolymer, and are formed from molecular building blocks called nucleotides. These link up to form a polymer chain where the backbone consists of alternating sugar and phosphate groups. There are two forms of nucleic acid — deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). In DNA, the sugar is deoxyribose, whereas the sugar in RNA is ribose. Each sugar ring has a nucleic acid base attached to it. For DNA, there are four different nucleic acid bases called adenine (A), thymine (T), cytosine (C), and guanine (G) […]. These bases play a crucial role in the overall structure and function of nucleic acids. […] DNA is actually made up of two DNA strands […] where the sugar-phosphate backbones are intertwined to form a double helix. The nucleic acid bases point into the centre of the helix, and each nucleic acid base ‘pairs up’ with a nucleic acid base on the opposite strand through hydrogen bonding. The base pairing is specifically between adenine and thymine, or between cytosine and guanine.
This means that one polymer strand is complementary to the other, a feature that is crucial to DNA’s function as the storage molecule for genetic information. […]  [E]ach strand […] act as the template for the creation of a new strand to produce two identical ‘daughter’ DNA double helices […] [A] genetic alphabet of four letters (A, T, G, C) […] code for twenty amino acids. […] [A]n amino acid is coded, not by one nucleotide, but by a set of three. The number of possible triplet combinations using four ‘letters’ is more than enough to encode all the amino acids.”
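
The triplet arithmetic in the passage above is easy to verify: with a four-letter alphabet read three bases at a time there are 4³ = 64 possible codons, comfortably more than the twenty amino acids to be encoded. A minimal Python sketch:

```python
from itertools import product

# The four DNA bases, as in the text's genetic alphabet (A, T, G, C).
bases = "ATGC"

# Reading the code in triplets gives 4^3 possible codons.
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]

print(len(codons))        # 64 possible codons
print(len(codons) >= 20)  # True: comfortably enough for 20 amino acids
```

The surplus of codons over amino acids is what makes the genetic code degenerate: most amino acids are encoded by more than one codon.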

“Proteins have a variety of functions. Some proteins, such as collagen, keratin, and elastin, have a structural role. Others catalyse life’s chemical reactions and are called enzymes. They have a complex 3D shape, which includes a cavity called the active site […]. This is where the enzyme binds the molecules (substrates) that undergo the enzyme-catalysed reaction. […] A substrate has to have the correct shape to fit an enzyme’s active site, but it also needs binding groups to interact with that site […]. These interactions hold the substrate in the active site long enough for a reaction to occur, and typically involve hydrogen bonds, as well as van der Waals and ionic interactions. When a substrate binds, the enzyme normally undergoes an induced fit. In other words, the shape of the active site changes slightly to accommodate the substrate, and to hold it as tightly as possible. […] Once a substrate is bound to the active site, amino acids in the active site catalyse the subsequent reaction.”

“Proteins called receptors are involved in chemical communication between cells and respond to chemical messengers called neurotransmitters if they are released from nerves, or hormones if they are released by glands. Most receptors are embedded in the cell membrane, with part of their structure exposed on the outer surface of the cell membrane, and another part exposed on the inner surface. On the outer surface they contain a binding site that binds the molecular messenger. An induced fit then takes place that activates the receptor. This is very similar to what happens when a substrate binds to an enzyme […] The induced fit is crucial to the mechanism by which a receptor conveys a message into the cell — a process known as signal transduction. By changing shape, the protein initiates a series of molecular events that influences the internal chemistry within the cell. For example, some receptors are part of multiprotein complexes called ion channels. When the receptor changes shape, it causes the overall ion channel to change shape. This opens up a central pore allowing ions to flow across the cell membrane. The ion concentration within the cell is altered, and that affects chemical reactions within the cell, which ultimately lead to observable results such as muscle contraction. Not all receptors are membrane-bound. For example, steroid receptors are located within the cell. This means that steroid hormones need to cross the cell membrane in order to reach their target receptors. Transport proteins are also embedded in cell membranes and are responsible for transporting polar molecules such as amino acids into the cell. They are also important in controlling nerve action since they allow nerves to capture released neurotransmitters, such that they have a limited period of action.”

“RNA […] is crucial to protein synthesis (translation). There are three forms of RNA — messenger RNA (mRNA), transfer RNA (tRNA), and ribosomal RNA (rRNA). mRNA carries the genetic code for a particular protein from DNA to the site of protein production. Essentially, mRNA is a single-strand copy of a specific section of DNA. The process of copying that information is known as transcription. tRNA decodes the triplet code on mRNA by acting as a molecular adaptor. At one end of tRNA, there is a set of three bases (the anticodon) that can base pair to a set of three bases on mRNA (the codon). An amino acid is linked to the other end of the tRNA and the type of amino acid present is related to the anticodon that is present. When tRNA with the correct anticodon base pairs to the codon on mRNA, it brings the amino acid encoded by that codon. rRNA is a major constituent of a structure called a ribosome, which acts as the factory for protein production. The ribosome binds mRNA then coordinates and catalyses the translation process.”
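
The decoding step described above (codon on mRNA mapped to an amino acid via tRNA) can be sketched as a simple lookup. The partial codon table below is only illustrative — the real genetic code has 64 entries — and the helper name `translate` is my own:

```python
# Illustrative sketch of translation: decoding mRNA codons into amino acids.
# Only a few real codon assignments are included, for illustration.
CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": None,   # stop codon
}

def translate(mrna):
    """Read mRNA three bases at a time, mapping each codon to an amino acid."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3])
        if amino_acid is None:  # stop codon (or unknown codon) ends translation
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```

In the cell this lookup is performed physically, by anticodon–codon base pairing on the ribosome, rather than by a table.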

Organic chemistry.
Hydrogen bond.
Van der Waals forces.
Ionic bonding.
Coupling reaction.
Chemical polarity.
Elemental analysis.
NMR spectroscopy.
Miller–Urey experiment.
Vester-Ulbricht hypothesis.
RNA world.

November 9, 2017 Posted by | Biology, Books, Chemistry, Genetics

A few diabetes papers of interest

i. Impact of Sex and Age at Onset of Diabetes on Mortality From Ischemic Heart Disease in Patients With Type 1 Diabetes.

“The study examined long-term IHD-specific mortality in a Finnish population-based cohort of patients with early-onset (0–14 years) and late-onset (15–29 years) T1D (n = 17,306). […] Follow-up started from the time of diagnosis of T1D and ended either at the time of death or at the end of 2011. […] ICD codes used to define patients as having T1D were 2500B–2508B, E10.0–E10.9, or O24.0. […] The median duration of diabetes was 24.4 (interquartile range 17.6–32.2) years. Over a 41-year study period totaling 433,782 person-years of follow-up, IHD accounted for 27.6% of the total 1,729 deaths. Specifically, IHD was identified as the cause of death in 478 patients, in whom IHD was the primary cause of death in 303 and a contributory cause in 175. […] Within the early-onset cohort, the average crude mortality rate in women was 33.3% lower than in men (86.3 [95% CI 65.2–112.1] vs. 128.2 [104.2–156.1] per 100,000 person-years, respectively, P = 0.02). When adjusted for duration of diabetes and the year of diabetes diagnosis, the mortality RR between women and men of 0.64 was only of borderline significance (P = 0.05) […]. In the late-onset cohort, crude mortality in women was, on average, only one-half that of men (117.2 [92.0–147.1] vs. 239.7 [210.9–271.4] per 100,000 person-years, respectively, P < 0.0001) […]. An RR of 0.43 remained highly significant after adjustment for duration of diabetes and year of diabetes diagnosis. Every year of duration of diabetes increased the risk 10–13%”

“The number of deaths from IHD in the patients with T1D were compared with the number of deaths from IHD in the background population, and the SMRs were calculated. For the total cohort (early and late onset pooled), the SMR was 7.2 (95% CI 6.4–8.0) […]. In contrast to the crude mortality rates, the SMRs were higher in women (21.6 [17.2–27.0]) than in men (5.8 [5.1–6.6]). When stratified by the age at onset of diabetes, the SMR was considerably higher in patients with early onset (16.9 [13.5–20.9]) than in those with late onset (5.9 [5.2–6.8]). In both the late- and the early-onset cohorts, there was a striking difference in the SMRs between women and men, and this was especially evident in the early-onset cohort where the SMR for women was 52.8 (36.3–74.5) compared with 12.1 (9.2–15.8) for men. This higher risk of death from IHD compared with the background population was evident in all women, regardless of age. However, the most pronounced effect was seen in women in the early-onset cohort <40 years of age, who were 83 times more likely to die of IHD than the age-matched women in the background population. This compares with a 37 times higher risk of death from IHD in women aged >40 years. The corresponding SMRs for men aged <40 and ≥40 years were 19.4 and 8.5, respectively.”

“Overall, the 40-year cumulative mortality for IHD was 8.8% (95% CI 7.9–9.7%) in all patients […] The 40-year cumulative IHD mortality in the early-onset cohort was 6.3% (4.8–7.8%) for men and 4.5% (3.1–5.9%) for women (P = 0.009 by log-rank test) […]. In the late-onset cohort, the corresponding cumulative mortality rates were 16.6% (14.3–18.7%) in men and 8.5% (6.5–10.4%) in women (P < 0.0001 by log-rank test)”

“The major findings of the current study are that women with early-onset T1D are exceptionally vulnerable to dying from IHD, which is especially evident in those receiving a T1D diagnosis during the prepubertal and pubertal years. Crude mortality rates were similar for women compared with men, highlighting the loss of cardioprotection in women. […] Although men of all ages have greater crude mortality rates than women regardless of the age at onset of T1D, the current study shows that mortality from IHD attributable to diabetes is much more pronounced in women than in men. […] it is conceivable that one of the underlying reasons for the loss of female sex as a protective factor against the development of CVD in the setting of diabetes may be the loss of ovarian hormones. Indeed, women with T1D have been shown to have reduced levels of plasma estradiol compared with age-matched nondiabetic women (23) possibly because of idiopathic ovarian failure or dysregulation of the hypothalamic-pituitary-ovarian axis.”

“One of the novelties of the present study is that the risk of death from IHD highly depends on the age at onset of T1D. The data show that the SMR was considerably higher in early-onset (0–14 years) than in late-onset (15–29 years) T1D in both sexes. […] the risk of dying from IHD is high in both women and men receiving a diagnosis of T1D at a young age.”

ii. Microalbuminuria as a Risk Predictor in Diabetes: The Continuing Saga.

“The term “microalbuminuria” (MA) originated in 1964 when Professor Harry Keen first used it to signify a small amount of albumin in the urine of patients with type 1 diabetes (1). […] Whereas early research focused on the relevance of MA as a risk factor for diabetic kidney disease, research over the past 2 decades has shifted to examine whether MA is a true risk factor. To appreciate fully the contribution of MA to overall cardiorenal risk, it is important to distinguish between a risk factor and risk marker. A risk marker is a variable that identifies a pathophysiological state, such as inflammation or infection, and is not necessarily involved, directly or causally, in the genesis of a specified outcome (e.g., association of a cardiovascular [CV] event with fever, high-sensitivity C-reactive protein [hs-CRP], or MA). Conversely, a risk factor is involved clearly and consistently with the cause of a specified event (e.g., a CV event associated with persistently elevated blood pressure or elevated levels of LDL). Both a risk marker and a risk factor can predict an adverse outcome, but only one lies within the causal pathway of a disease. Moreover, a reduction (or alteration in a beneficial direction) of a risk factor (i.e., achievement of blood pressure goal) generally translates into a reduction of adverse outcomes, such as CV events; this is not necessarily true for a risk marker.”

“The data sources included in this article were all PubMed-referenced articles in English-language peer-reviewed journals since 1964. Studies selected had to have a minimum follow-up of 1 year; include at least 100 participants; be either a randomized trial, a systematic review, a meta-analysis, or a large observational cohort study in patients with any type of diabetes; or be trials of high CV risk that included at least 50% of patients with diabetes. All studies had to assess changes in MA tied to CV or CKD outcomes and not purely reflect changes in MA related to blood pressure, unless they were mechanistic studies. On the basis of these inclusion criteria, 31 studies qualified and provide the data used for this review.”

“Early studies in patients with diabetes supported the concept that as MA increases to higher levels, the risk of CKD progression and CV risk also increases […]. Moreover, evidence from epidemiological studies in patients with diabetes suggested that the magnitude of urine albumin excretion should be viewed as a continuum of CV risk, with the lower the albumin excretion, the lower the CV risk (15,16). However, MA values can vary daily up to 100% (11). These large biological variations are a result of a variety of conditions, with a central core tied to inflammation associated with factors ranging from increased blood pressure variability, high blood glucose levels, high LDL cholesterol, and high uric acid levels to high sodium ingestion, smoking, and exercise (17) […]. Additionally, any febrile illness, regardless of etiology, will increase urine albumin excretion (18). Taken together, these data support the concept that MA is highly variable and that values over a short time period (i.e., 3–6 months) are meaningless in predicting any CV or kidney disease outcome.”

“Initial studies to understand the mechanisms of MA examined changes in glomerular membrane permeability as a key determinant in patients with diabetes […]. Many factors affect the genesis and level of MA, most of which are linked to inflammatory conditions […]. A good evidence base, however, supports the concept that MA directly reflects the amount of inflammation and vascular “leakiness” present in patients with diabetes (16,18,19).

More recent studies have found a number of other factors that affect glomerular permeability by modifying cytokines that affect permeability. Increased amounts of glycated albumin reduce glomerular nephrin and increase vascular endothelial growth factor (20). Additionally, increases in sodium intake (21) as well as intraglomerular pressure secondary to high protein intake or poorly controlled blood pressure (22,23) increase glomerular permeability in diabetes and, hence, MA levels.

In individuals with diabetes, albumin is glycated and associated with the generation of reactive oxygen species. In addition, many other factors such as advanced glycation end products, reactive oxygen species, and other cellular toxins contribute to vascular injury. Once such injury occurs, the effect of pressor hormones, such as angiotensin II, is magnified, resulting in a faster progression of vascular injury. The end result is direct injury to the vascular smooth muscle cells, endothelial cells, and visceral epithelial cells (podocytes) of the glomerular capillary wall membrane as well as to the proximal tubular cells and podocyte basement membrane of the nephron (20,24,25). All these contribute to the development of MA. […] better glycemic control is associated with far lower levels of inflammatory markers (31).”

“MA is accepted as a CV risk marker for myocardial infarction and stroke, regardless of diabetes status. […] there is good evidence in those with type 2 diabetes that the presence of MA >100 mg/day is associated with higher CV events and greater likelihood of kidney disease development (6). Evidence for this association comes from many studies and meta-analyses […] a meta-analysis by Perkovic et al. (37) demonstrated a dose-response relationship between the level of albuminuria and CV risk. In this meta-analysis, individuals with MA were at 50% greater risk of coronary heart disease (risk ratio 1.47 [95% CI 1.30–1.66]) than those without. Those with macroalbuminuria (i.e., >300 mg/day) had more than a twofold risk for coronary heart disease (risk ratio 2.17 [95% CI 1.87–2.52]) (37). Despite these data indicating a higher CV risk in patients with MA regardless of diabetes status and other CV risk factors, there is no consensus that the addition of MA to conventional CV risk stratification for the general population (e.g., Framingham or Reynolds scoring systems) is of any clinical value, and that includes patients with diabetes (38).”
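
A useful back-of-the-envelope check on reported ratios like Perkovic et al.’s is to recover the log-scale standard error from the 95% CI, assuming (as is standard for risk ratios) that the interval was constructed symmetrically on the log scale as RR·exp(±1.96·SE). A sketch using the figures quoted above:

```python
import math

def log_se_from_ci(lower, upper, z=1.96):
    """Recover the standard error of log(RR) from a reported 95% CI,
    assuming the interval is symmetric on the log scale."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

def ci_from_rr(rr, se, z=1.96):
    """Rebuild the 95% CI from a point estimate and a log-scale SE."""
    return rr * math.exp(-z * se), rr * math.exp(z * se)

# Risk ratio for coronary heart disease with MA, as quoted in the text:
# 1.47 (95% CI 1.30-1.66).
se = log_se_from_ci(1.30, 1.66)
lo, hi = ci_from_rr(1.47, se)
print(round(se, 3), round(lo, 2), round(hi, 2))
```

If the rebuilt interval does not come back close to the reported one, either the interval was not log-symmetric or the point estimate is not the geometric midpoint of the CI — a quick sanity check when reading meta-analyses.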

“Given that MA was evaluated in a post hoc manner in almost all interventional studies, it is likely that the reduction in MA simply reflects the effects of either renin-angiotensin system (RAS) blockade on endothelial function or significant blood pressure reduction rather than the MA itself being implicated as a CV disease risk factor (18). […] associations of lowering MA with angiotensin-converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs) does not prove a direct benefit on CV event lowering associated with MA reduction in diabetes. […] Four long-term, appropriately powered trials demonstrated an inverse relationship between reductions in MA and primary event rates for CV events […]. Taken together, these studies support the concept that MA is a risk marker in diabetes and is consistent with data of other inflammatory markers, such as hs-CRP [here’s a relevant link – US], such that the higher the level, the higher the risk (15,39,42). The importance of MA as a CV risk marker is exemplified further by another meta-analysis that showed that MA has a similar magnitude of CV risk as hs-CRP and is a better predictor of CV events (43). Thus, the data supporting MA as a risk marker for CV events are relatively consistent, clearly indicate that an association exists, and help to identify the presence of underlying inflammatory states, regardless of etiology.”

“In people with early stage nephropathy (i.e., stage 2 or 3a [GFR 45–89 mL/min/1.73 m2]) and MA, there is no clear benefit on slowing GFR decline by reducing MA with drugs that block the RAS independent of lowering blood pressure (16). This is exemplified by many trials […]. Thus, blood pressure lowering is the key goal for all patients with early stage nephropathy associated with normoalbuminuria or MA. […] When albuminuria levels are in the very high or macroalbuminuria range (i.e., >300 mg/day), it is accepted that the patient has CKD and is likely to progress ultimately to ESRD, unless they die of a CV event (39,52). However, only one prospective randomized trial evaluated the role of early intervention to reduce blood pressure with an ACE inhibitor versus a calcium channel blocker in CKD progression by assessing change in MA and creatinine clearance in people with type 2 diabetes (Appropriate Blood Pressure Control in Diabetes [ABCD] trial) (23). After >7 years of follow-up, there was no relationship between changes in MA and CKD progression. Moreover, there was regression to the mean of MA.”

“Many observational studies used development of MA as indicating the presence of early stage CKD. Early studies by the individual groups of Mogensen and Parving demonstrated a relationship between increases in MA and progression to nephropathy in type 1 diabetes. These groups also showed that use of ACE inhibitors, blood pressure reduction, and glucose control reduced MA (9,58,59). However, more recent studies in both type 1 and type 2 diabetes demonstrated that only a subgroup of patients progress from MA to >300 mg/day albuminuria, and this subgroup accounts for those destined to progress to ESRD (29,32,60–63). Thus, the presence of MA alone is not predictive of CKD progression. […] some patients with type 2 diabetes progress to ESRD without ever having developed albuminuria levels of ≥300 mg/day (67). […] Taken together, data from outcome trials, meta-analyses, and observations demonstrate that MA [Micro-Albuminuria] alone is not synonymous with the presence of clearly defined CKD [Chronic Kidney Disease] in diabetes, although it is used as part of the criteria for the diagnosis of CKD in the most recent CKD classification and staging (71). Note that only a subgroup of ∼25–30% of people with diabetes who also have MA will likely progress to more advanced stages of CKD. Predictors of progression to ESRD, apart from family history and many years of poor glycemic and blood pressure control, are still not well defined. Although there are some genetic markers, such as CUBN and APOL1, their use in practice is not well established.”

“In the context of the data presented in this article, MA should be viewed as a risk marker associated with an increase in CV risk and for kidney disease, but its presence alone does not indicate established kidney disease, especially if the eGFR is well above 60 mL/min/1.73 m2. Increases in MA, with blood pressure and other CV risk factors controlled, are likely but not proven to portend a poor prognosis for CKD progression over time. Achieving target blood pressure (<140/80 mmHg) and target HbA1c (<7%) should be priorities in treating patients with MA. Recent guidelines from both the American Diabetes Association and the National Kidney Foundation provide a strong recommendation for using agents that block the RAS, such as ACE inhibitors and ARBs, as part of the regimen for those with albuminuria levels >300 mg/day but not MA (73). […] maximal antialbuminuric effects will [however] not be achieved with these agents unless a low-sodium diet is strictly followed.”

iii. The SEARCH for Diabetes in Youth Study: Rationale, Findings, and Future Directions.

“The SEARCH for Diabetes in Youth (SEARCH) study was initiated in 2000, with funding from the Centers for Disease Control and Prevention and support from the National Institute of Diabetes and Digestive and Kidney Diseases, to address major knowledge gaps in the understanding of childhood diabetes. SEARCH is being conducted at five sites across the U.S. and represents the largest, most diverse study of diabetes among U.S. youth. An active registry of youth diagnosed with diabetes at age <20 years allows the assessment of prevalence (in 2001 and 2009), annual incidence (since 2002), and trends by age, race/ethnicity, sex, and diabetes type. Prevalence increased significantly from 2001 to 2009 for both type 1 and type 2 diabetes in most age, sex, and race/ethnic groups. SEARCH has also established a longitudinal cohort to assess the natural history and risk factors for acute and chronic diabetes-related complications as well as the quality of care and quality of life of persons with diabetes from diagnosis into young adulthood. […] This review summarizes the study methods, describes key registry and cohort findings and their clinical and public health implications, and discusses future directions.”

“SEARCH includes a registry and a cohort study […]. The registry study identifies incident cases each year since 2002 through the present with ∼5.5 million children <20 years of age (∼6% of the U.S. population <20 years) under surveillance annually. Approximately 3.5 million children <20 years of age were under surveillance in 2001 at the six SEARCH recruitment centers, with approximately the same number at the five centers under surveillance in 2009.”

“The prevalence of all types of diabetes was 1.8/1,000 youth in 2001 and was 2.2/1,000 youth in 2009, which translated to at least 154,000 children/youth in the U.S. with diabetes in 2001 (5) and at least 192,000 in 2009 (6). Overall, between 2001 and 2009, prevalence of type 1 diabetes in youth increased by 21.1% (95% CI 15.6–27.0), with similar increases for boys and girls and in most racial/ethnic and age groups (2) […]. The prevalence of type 2 diabetes also increased significantly over the same time period by 30.5% (95% CI 17.3–45.1), with increases observed in both sexes, 10–14- and 15–19-year-olds, and among Hispanic and non-Hispanic white and African American youth (2). These data on changes in type 2 are consistent with smaller U.S. studies (7–11).”
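As a quick plausibility check (mine, not the paper's), the quoted prevalence rates and national counts imply an under-20 U.S. population in the mid-80-millions. The short Python sketch below derives that implied denominator purely from the figures in the excerpt:

```python
# Implied U.S. under-20 population behind the quoted SEARCH figures.
# The counts (154,000; 192,000) and rates (1.8 and 2.2 per 1,000) are taken
# directly from the excerpt; the denominators are derived from them.
implied_2001 = 154_000 / (1.8 / 1000)  # children with diabetes / prevalence rate
implied_2009 = 192_000 / (2.2 / 1000)

print(round(implied_2001 / 1e6, 1))  # 85.6 (million)
print(round(implied_2009 / 1e6, 1))  # 87.3 (million)
```

Both figures sit close to census estimates of the U.S. under-20 population for those years, so the quoted rates and counts are internally consistent.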

“The incidence of diabetes […] in 2002 to 2003 was 24.6/100,000/year (12), representing ∼15,000 new patients every year with type 1 diabetes and 3,700 with type 2 diabetes, increasing to 18,436 newly diagnosed type 1 and 5,089 with type 2 diabetes in 2008 to 2009 (13). Among non-Hispanic white youth, the incidence of type 1 diabetes increased by 2.7% (95% CI 1.2–4.3) annually between 2002 and 2009. Significant increases were observed among all age groups except the youngest age group (0–4 years) (14). […] The underlying factors responsible for this increase have not yet been identified.”

“Over 50% of youth are hospitalized at diabetes onset, and ∼30% of children newly diagnosed with diabetes present with diabetic ketoacidosis (DKA) (19). Prevalence of DKA at diagnosis was three times higher among youth with type 1 diabetes (29.4%) compared with youth with type 2 diabetes (9.7%) and was lowest in Asian/Pacific Islanders (16.2%) and highest among Hispanics (27.0%).”

“A significant proportion of youth with diabetes, particularly those with type 2 diabetes, have very poor glycemic control […]: 17% of youth with type 1 diabetes and 27% of youth with type 2 diabetes had A1C levels ≥9.5% (≥80 mmol/mol). Minority youth were significantly more likely to have higher A1C levels compared with non-Hispanic white youth, regardless of diabetes type. […] Optimal care is an important component of successful long-term management for youth with diabetes. While there are high levels of adherence for some diabetes care indicators such as blood pressure checks (95%), urinary protein tests (83%), and lipid assessments (88%), approximately one-third of youth had no documentation of eye or A1C values at appropriate intervals and therefore were not meeting the American Diabetes Association (ADA)-recommended screening for diabetic control and complications (40). Participants ≥18 years old, particularly those with type 2 diabetes, and minority youth with type 1 diabetes had fewer tests of all kinds performed. […] Despite current treatment options, the prevalence of poor glycemic control is high, particularly among minority youth. Our initial findings suggest that a substantial number of youth with diabetes will develop serious, debilitating complications early in life, which is likely to have significant implications for their quality of life, as well as economic and health care implications.”

“Because recognition of the broader spectrum of diabetes in children and adolescents is recent, there are no gold-standard definitions for differentiating the types of diabetes in this population, either for research or clinical purposes or for public health surveillance. The ADA classification of diabetes as type 1 and type 2 does not include operational definitions for the specific etiologic markers of diabetes type, such as types and numbers of diabetes autoantibodies or measures of insulin resistance, hallmarks of type 1 and 2 diabetes, respectively (43). Moreover, obese adolescents with a clinical phenotype suggestive of type 2 diabetes can present with ketoacidosis (44) or have evidence of autoimmunity (45).”

“Using the ADA framework (43), we operationalized definitions of two main etiologic markers, autoimmunity and insulin sensitivity, to identify four etiologic subgroups based on the presence or absence of markers. Autoimmunity was based on presence of one or more diabetes autoantibodies (GAD65 and IA2). Insulin sensitivity was estimated using clinical variables (A1C, triglyceride level, and waist circumference) from a formula that was highly associated with estimated insulin sensitivity measured using a euglycemic-hyperinsulinemic clamp among youth with type 1 and 2 and normal control subjects (46). Participants were categorized as insulin resistant […] and insulin sensitive (47). Using this approach, 54.5% of SEARCH cases were classified as typical type 1 (autoimmune, insulin-sensitive) diabetes, while 15.9% were classified as typical type 2 (nonautoimmune, insulin-resistant) diabetes. Cases that were classified as autoimmune and insulin-resistant likely represent individuals with type 1 autoimmune diabetes and concomitant obesity, a phenotype becoming more prevalent as a result of the recent increase in the frequency of obesity, but is unlikely to be a distinct etiologic entity.”
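The two-marker scheme above amounts to a simple 2×2 classification. Here is a minimal sketch of that logic; the function name and label strings are mine, and the actual SEARCH insulin-sensitivity estimate comes from a published formula using A1C, triglycerides, and waist circumference, which is not reproduced here:

```python
def etiologic_subgroup(autoimmune: bool, insulin_resistant: bool) -> str:
    """Assign one of the four SEARCH etiologic subgroups.

    Inputs are assumed precomputed: 'autoimmune' from GAD65/IA2 antibody
    positivity, 'insulin_resistant' from the clinical estimate of insulin
    sensitivity described in the excerpt.
    """
    if autoimmune and not insulin_resistant:
        return "typical type 1 (autoimmune, insulin-sensitive)"
    if not autoimmune and insulin_resistant:
        return "typical type 2 (nonautoimmune, insulin-resistant)"
    if autoimmune and insulin_resistant:
        return "autoimmune and insulin-resistant (likely type 1 plus obesity)"
    return "neither marker (further antibody/monogenic testing needed)"

print(etiologic_subgroup(True, False))
```

Per the excerpt, 54.5% of SEARCH cases fell in the first group and 15.9% in the second; the fourth group is the ~10% discussed in the next paragraph.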

“Ten percent of SEARCH participants had no evidence of either autoimmunity or insulin resistance and thus require additional testing, including additional measurements of diabetes-related autoantibodies (only two antibodies were measured in SEARCH) as well as testing for monogenic forms of diabetes to clarify etiology. Among antibody-negative youth, 8% of those tested had a mutation in one or more of the hepatocyte nuclear factor-1α (HNF-1α), glucokinase, and HNF-4α genes, an estimated monogenic diabetes population prevalence of at least 1.2% (48).”

iv. Does the Prevailing Hypothesis That Small-Fiber Dysfunction Precedes Large-Fiber Dysfunction Apply to Type 1 Diabetic Patients?

The short answer is ‘yes, it does’. Some observations from the paper:

“Diabetic sensorimotor polyneuropathy (DSP) is a common complication of diabetes, affecting 28–55% of patients (1). A prospective Finnish study found evidence of probable or definite neuropathy in 8.3% of diabetic patients at the time of diagnosis, 16.7% after 5 years, and 41.9% after 10 years (2). Diabetes-related peripheral neuropathy results in serious morbidity, including chronic neuropathic pain, leg weakness and falls, sensory loss and foot ulceration, and amputation (3). Health care costs associated with diabetic neuropathy were estimated at $10.9 billion in the U.S. in 2003 (4). However, despite the high prevalence of diabetes and DSP, and the important public health implications, there is a lack of serum- or tissue-based biomarkers to diagnose and follow patients with DSP longitudinally. Moreover, numerous attempts at treatment have yielded negative results.”

“DSP is known to cause injury to both large-diameter, myelinated (Aα and Aβ) fibers and small-diameter, unmyelinated nerve (Aδ and C) fibers; however, the sequence of nerve fiber damage remains uncertain. While earlier reports seemed to indicate simultaneous loss of small- and large-diameter nerve fibers, with preserved small/large ratios (5), more recent studies have suggested the presence of early involvement of small-diameter Aδ and C fibers (6–11). Some suggest a temporal relationship of small-fiber impairment preceding that of large fibers. For example, impairment in the density of the small intraepidermal nerve fibers in symptomatic patients with impaired glucose tolerance (prediabetes) has been observed in the face of normal large-fiber function, as assessed by nerve conduction studies (NCSs) (9,10). In addition, surveys of patients with DSP have demonstrated an overwhelming predominance of sensory and autonomic symptoms, as compared with motor weakness. Again, this has been interpreted as indicative of preferential small-fiber dysfunction (12). Though longitudinal studies are limited, such studies have led to the current prevailing hypothesis for the natural history of DSP that measures of small-fiber morphology and function decline prior to those of large fibers. One implication of this hypothesis is that small-fiber testing could serve as an earlier, subclinical primary end point in clinical trials investigating interventions for DSP (13).

The hypothesis described above has been investigated exclusively in type 2 diabetic or prediabetic patients. Through the study of a cohort of healthy volunteers and type 1 diabetic subjects […], we had the opportunity to evaluate in cross-sectional analysis the relationship between measures of large-fiber function and small-fiber structure and function. Under the hypothesis that small-fiber abnormalities precede large-fiber dysfunction in the natural history of DSP, we sought to determine if: 1) the majority of subjects who meet criteria for large-fiber dysfunction have concurrent evidence of small-fiber dysfunction and 2) the subset of patients without DSP includes a spectrum with normal small-fiber tests (indicating lack of initiation of nerve injury) as well as abnormal small-fiber tests (indicating incipient DSP).”

“Overall, 57 of 131 (43.5%) type 1 diabetic patients met DSP criteria, and 74 of 131 (56.5%) did not meet DSP criteria. Abnormality of CCM [link] was present in 30 of 57 (52.6%) DSP patients and 6 of 74 (8.1%) type 1 diabetic patients without DSP. Abnormality of CDT [Cooling Detection Thresholds, relevant link] was present in 47 of 56 (83.9%) DSP patients and 17 of 73 (23.3%) without DSP. Abnormality of LDIflare [laser Doppler imaging of heat-evoked flare] was present in 30 of 57 (52.6%) DSP patients and 20 of 72 (27.8%) without DSP. Abnormality of HRV [Heart Rate Variability] was present in 18 of 45 (40.0%) DSP patients and 6 of 70 (8.6%) without DSP. […] sensitivity analysis […] revealed that abnormality of any one of the four small-fiber measures was present in 55 of 57 (96.5%) DSP patients […] and 39 of 74 (52.7%) type 1 diabetic patients without DSP. Similarly, abnormality of any two of the four small-fiber measures was present in 43 of 57 (75.4%) DSP patients […] and 9 of 74 (12.2%) without DSP. Finally, abnormality of either CDT or CCM (with these two tests selected based on their high reliability) was noted in 53 of 57 (93.0%) DSP patients and 21 of 74 (28.4%) patients without DSP […] When DSP was defined based on symptoms and signs plus abnormal sural SNAP [sensory nerve action potential] amplitude or conduction velocity, there were 68 of 131 patients who met DSP criteria and 63 of 131 who did not. Abnormality of any one of the four small-fiber measures was present in 63 of 68 (92.6%) DSP patients and 31 of 63 (49.2%) type 1 diabetic patients without DSP. […] Finally, if DSP was defined based on clinical symptoms and signs alone, with TCNS ≥5, there were 68 of 131 patients who met DSP criteria and 63 of 131 who did not. Abnormality of any one of the four small-fiber measures was present in 62 of 68 (91.2%) DSP patients and 32 of 63 (50.8%) type 1 diabetic patients without DSP.”
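The counts in this paragraph can be tabulated as simple 2×2 comparisons: for each test, the abnormality rate among DSP patients (in effect, sensitivity against the DSP case definition) versus the rate among patients without DSP. A sketch using only the numbers quoted above (denominators differ because not every patient completed every test):

```python
# Per-test abnormality rates from the quoted contingency data.
# Format: test -> (abnormal with DSP, n with DSP, abnormal without DSP, n without DSP)
counts = {
    "CCM":      (30, 57,  6, 74),
    "CDT":      (47, 56, 17, 73),
    "LDIflare": (30, 57, 20, 72),
    "HRV":      (18, 45,  6, 70),
}

for test, (a, n1, b, n2) in counts.items():
    rate_dsp = a / n1      # abnormality rate among DSP patients
    rate_no_dsp = b / n2   # abnormality rate among patients without DSP
    print(f"{test}: {rate_dsp:.1%} with DSP vs {rate_no_dsp:.1%} without")
```

The printed rates reproduce the percentages in the excerpt (e.g. CDT: 83.9% vs 23.3%), and make it easy to see why CDT dominates the any-one-of-four sensitivity figure.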

“Qualitative analysis of contingency tables shows that the majority of patients with DSP have concurrent evidence of small-fiber dysfunction, and patients without DSP include a spectrum with normal small-fiber tests (indicating lack of initiation of nerve injury) as well as abnormal small-fiber tests. Evidence of isolated large-fiber injury was much less frequent […]. These findings suggest that small-fiber damage may herald the onset of DSP in type 1 diabetes. In addition, the above findings remained true when alternative definitions of DSP were explored in a sensitivity analysis. […] The second important finding was the linear relationships noted between small-fiber structure and function tests (CDT, CNFL, LDIflare, and HRV) […] and the number of NCS abnormalities (a marker of large-fiber function). This might indicate that once the process of large-fiber nerve injury in DSP has begun, damage to large and small nerve fibers occurs simultaneously.”

v. Long-Term Complications and Mortality in Young-Onset Diabetes.

“Records from the Royal Prince Alfred Hospital Diabetes Clinical Database, established in 1986, were matched with the Australian National Death Index to establish mortality outcomes for all subjects until June 2011. Clinical and mortality outcomes in 354 patients with T2DM, age of onset between 15 and 30 years (T2DM15–30), were compared with T1DM in several ways but primarily with 470 patients with T1DM with a similar age of onset (T1DM15–30) to minimize the confounding effect of age on outcome.

RESULTS For a median observation period of 21.4 (interquartile range 14–30.7) and 23.4 (15.7–32.4) years for the T2DM and T1DM cohorts, respectively, 71 of 824 patients (8.6%) died. A significant mortality excess was noted in T2DM15–30 (11 vs. 6.8%, P = 0.03), with an increased hazard for death (hazard ratio 2.0 [95% CI 1.2–3.2], P = 0.003). Death for T2DM15–30 occurred after a significantly shorter disease duration (26.9 [18.1–36.0] vs. 36.5 [24.4–45.4] years, P = 0.01) and at a relatively young age. There were more cardiovascular deaths in T2DM15–30 (50 vs. 30%, P < 0.05). Despite equivalent glycemic control and shorter disease duration, the prevalence of albuminuria and less favorable cardiovascular risk factors were greater in the T2DM15–30 cohort, even soon after diabetes onset. Neuropathy scores and macrovascular complications were also increased in T2DM15–30 (P < 0.0001).

CONCLUSIONS Young-onset T2DM is the more lethal phenotype of diabetes and is associated with a greater mortality, more diabetes complications, and unfavorable cardiovascular disease risk factors when compared with T1DM.”
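The abstract's headline numbers are internally consistent, which a line or two of arithmetic confirms (cohort sizes and percentages are from the quote; the death counts are derived):

```python
# Cross-check of the mortality figures in the abstract.
n_t2, n_t1 = 354, 470            # cohort sizes from the quote
deaths_t2 = round(0.11 * n_t2)   # 11% of T2DM15-30 -> 39 deaths
deaths_t1 = round(0.068 * n_t1)  # 6.8% of T1DM15-30 -> 32 deaths

total = deaths_t2 + deaths_t1
print(total, f"{total / (n_t2 + n_t1):.1%}")  # 71 deaths, 8.6% of 824
```

This matches the quoted "71 of 824 patients (8.6%) died", so the 11% vs. 6.8% split accounts for all deaths.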

“Only a few previous studies have looked at comparative mortality in T1DM and T2DM onset in patients <30 years of age. In a Swedish study of patients with diabetes aged 15–34 years compared with a general population, the standardized mortality ratio was higher for the T2DM than for the T1DM cohort (2.9 vs. 1.8) (17). […] Recently, Dart et al. (19) examined survival in youth aged 1–18 years with T2DM versus T1DM. Kaplan-Meier analysis revealed a statistically significant lower survival probability for the youth with T2DM, although the number at risk was low after 10 years’ duration. Taken together, these findings are in keeping with the present observations and are supportive evidence for a higher mortality in young-onset T2DM than in T1DM. The majority of deaths appear to be from cardiovascular causes and significantly more so for young T2DM.”

“Although the age of onset of T1DM is usually in little doubt because of a more abrupt presentation, it is possible that the age of onset of T2DM was in fact earlier than recognized. Using a previously published method for estimating time delay until diagnosis of T2DM (26) by plotting the prevalence of retinopathy against duration and extrapolating to a point of zero retinopathy, we found that there is no difference in the slope and intercept of this relationship between the T2DM and the T1DM cohorts […] delay in diagnosis is unlikely to be an explanation for the differences in observed outcome.”

vi. Cardiovascular Risk Factors Are Associated With Increased Arterial Stiffness in Youth With Type 1 Diabetes.

“Increased arterial stiffness independently predicts all-cause and CVD mortality (3), and higher pulse pressure predicts CVD mortality, incidence, and end-stage renal disease development among adults with type 1 diabetes (1,4,5). Several reports have shown that youth and adults with type 1 diabetes have elevated arterial stiffness, though the mechanisms are largely unknown (6). The etiology of advanced atherosclerosis in type 1 diabetes is likely multifactorial, involving metabolic, behavioral, and diabetes-specific cardiovascular (CV) risk factors. Aging, high blood pressure (BP), obesity, the metabolic syndrome (MetS), and type 2 diabetes are the main contributors of sustained increased arterial stiffness in adults (7,8). However, the natural history, the age-related progression, and the possible determinants of increased arterial stiffness in youth with type 1 diabetes have not been studied systematically. […] There are currently no data examining the impact of CV risk factors and their clustering in youth with type 1 diabetes on subsequent CVD morbidity and mortality […]. Thus, the aims of this report were: 1) to describe the progression of arterial stiffness, as measured by pulse wave velocity (PWV), over time, among youth with type 1 diabetes, and 2) to explore the association of CV risk factors and their clustering as MetS with PWV in this cohort.”

“Youth were age 14.5 years (SD 2.8) and had an average disease duration of 4.8 (3.8) years at baseline, 46.3% were female, and 87.6% were of NHW race/ethnicity. At baseline, 10.0% had high BP, 10.9% had a large waist circumference, 11.6% had HDL-c ≤40 mg/dL, 10.9% had a TG level ≥110 mg/dL, and 7.0% had at least two of the above CV risk factors (MetS). In addition, 10.3% had LDL-c ≥130 mg/dL, 72.0% had an HbA1c ≥7.5% (58 mmol/mol), and 9.2% had ACR ≥30 μg/mL. Follow-up measures were obtained on average at age 19.2 years, when the average duration of diabetes was 10.1 (3.9) years.”

“Over an average follow-up period of ∼5 years, there was a statistically significant increase of 0.7 m/s in PWV (from 5.2 to 5.9 m/s), representing an annual increase of 2.8% or 0.145 m/s. […] Based on our data, if this rate of change is stable over time, the estimated average PWV by the time these youth enter their third decade of life will be 11.3 m/s, which was shown to be associated with a threefold increased hazard for major CV events (26). There are no similar studies in youth to compare these findings. In adults, the rate of change in PWV was 0.081 m/s/year in nondiabetic normotensive patients, although it was higher in hypertensive adults (0.147 m/s/year) (7). We also showed that the presence of central adiposity and elevated BP at baseline, as well as clustering of at least two CV risk factors, was associated with significantly worse PWV over time, although these baseline factors did not significantly influence the rate of change in PWV over this period of time. Changes in CV risk factors, specifically increases in central adiposity, LDL-c levels, and worsening glucose control, were independently associated with worse PWV over time. […] Our inability to detect a difference in the rate of change in PWV in our youth with MetS (vs. those without MetS) may be due to several factors, including a combination of a relatively small sample size, short period of follow-up, and young age of the cohort (thus with lower baseline PWV levels).”
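The rate-of-change arithmetic in this paragraph can be reproduced directly. The exact follow-up interval isn't stated in the excerpt, so ~4.8 years (consistent with the quoted mean ages of 14.5 and 19.2 years) is assumed here:

```python
# Reproducing the PWV rate-of-change arithmetic from the excerpt.
pwv_baseline, pwv_followup = 5.2, 5.9  # m/s, from the quote
years = 4.8                            # assumed follow-up interval (not stated exactly)

annual_abs = (pwv_followup - pwv_baseline) / years  # ~0.146 m/s/year
annual_rel = annual_abs / pwv_baseline              # ~2.8%/year
print(f"{annual_abs:.3f} m/s/year, {annual_rel:.1%} per year")
```

This lands within rounding of the quoted 0.145 m/s/year and 2.8% annual increase, i.e. close to the 0.147 m/s/year reported for hypertensive adults.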


November 8, 2017 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Medicine, Nephrology, Neurology, Studies


Most of the words below are ones I encountered while reading the Jim Butcher novels: Fool Moon, Grave Peril, Summer Knight, Death Masks, Blood Rites, Dead Beat, and Proven Guilty.

Gobbet. Corrugate. Whuff. Wino. Shinny. Ruff. Rubberneck. Pastel. Sidhe. Appellation. Tine. Clomp. Susurration. Bier. Pucker. Haft. Topiary. Tendril. Pommel. Swath.

Chitter. Wispy. Flinders. Ewer. Incongruous. Athame. Bole. Chitin. Prancy. Doily. Garland. Heft. Hod. Klaxon. Ravening. Juke. Schlep. Pew. Gaggle. Passel.

Scourge. Coven. Wetwork. Gofer. Hinky. Pratfall. Parti-color(ed). Clawhammer. Mesquite. Scion. Traction. Kirtle. Avaunt. Imbibe. Betimes. Dinky. Rebar. Maw. Strident. Mangel.

Geode. Panache. Luminance. Wick. Susurrus. Chuff. Whammy. Cuss. Ripsaw. Scrunch. Fain. Hygroscopicity. Anasarca. Bitumen. Lingula. Diaphoretic. Ketch. Callipygian. Defalcation. Serried.

November 7, 2017 Posted by | Books, Language

Things I learn from my patients…

Here’s the link. Some sample quotes below:

“When attempting a self-circumcision do not use dry ice to numb the area… and when the dry ice sticks to the… a…. area, do not attempt to remove the ice with boiling water.”

“When your 97-year old mother trips and falls on the floor and doesn’t say anything or really seem to move at all, you should [definitely] wait 5-6 days before calling EMS. If she starts to feel cold (even though she hasn’t said that she’s cold), just cover her with blankets and surround her with space heaters. She’s probably just sleeping and will get up when she’s good and ready. Nevermind the smell and the roaches.”

“….the vagina is not the best place to store those pieces of broken glass you were collecting.”

“…don’t allow someone with a known poorly controlled seizure disorder to perform oral sex on you…”

“Only Santa will actually fit down a chimney. All the way down, anyway.
And Santa can get back out without the rescue squad.”

“Broken glass is not the ideal surgical tool for self-castration”

“If you come into the ER with a chief complaint of “Falling down because I’m drunk” don’t cause a scene 20 minutes later when we tell you you’re too drunk to sign yourself out AMA.”

“When transported by EMS for syncopal episode after drinking a case of beer with your buddies, don’t keep bothering the doctor about when you are going to be discharged. Especially when the reason you want to leave so bad is because you drive an 18-wheeler and you have to be in Texas by midnight. Your doc will begin inventing reasons to admit you.”

“If your family/doctor/government whatever has taken away your drivers license because you have frequent seizures and refuse to take your pheno, please use a riding lawn-mower as your primary means of transportation. Chances are, you won’t seize, hit a telephone pole, burn your leg and scalp on the mower as you fall off of it, and cause a power outage in your surrounding area.”

“If you happen to be driving drunk and feeling that you can’t stay awake anymore, you shouldn’t turn off your lights when you park in the middle of the interstate to take your nap. ”

“While yelling at the top of your lungs that you are having chest pain and you need a doctor to see you immediately, it is best to quit masturbating once said doctor enters the room to evaluate you. Your doctor really doesn’t want to see you do that.”

“Don’t stick things in your rectum. A good general rule. Should you break this rule be sure that you are not a 14 year old boy who has swiped your mom’s vibrator. Once the vibrator disappears and doesn’t come out for 3 days you will have to come to the ER and go for an EUA/removal. Trying to explain this string of events to your dad is significantly more awkward than, say, explaining how you wrecked the car.”

“Sitting on the porch minding your own business is the #1 cause of knife wounds.”

“1) If you fall off a three-story high ladder, you should definitely drink a fifth of vodka in your buddy’s car on the way to the ED.
2) Walking in and announcing “I just fell three stories” will make the triage staff move almost as quickly as they move when someone says “I have free cookies.””

“If you’ve been stabbed in the head and blood is jetting out of your temporal artery taking a shower to wash off the blood before coming to the ER won’t help.”

“If you steal someone’s prescription pad, be aware that “Mofine” isn’t usually prescribed by the unit “pound” (as in “A pound of Mofine”)”

“If you have sex with a girl, and your frat brother tells you right after you come downstairs that she has herpes, pouring bleach all over your privates will not take care of ANY of your problems!”

“If you are 17 and very drunk and are brought to the ER with a face that looks like hamburger and an upper lip that needs to be put back together, please just say you got into a fight. We would prefer not to know that someone bet you $20 that you couldn’t punch yourself unconscious (and you won).”

“Me: Do you have any health problems.
Pt: No.
Pt’s wife: He’s never been sick a day in his life.
Me: That looks like a heart bypass scar.
Pt: Yes, it is.
Pt’s wife: He got that after his diabetes gave him a heart attack.”

“When you cut off your penis to show your ex-girlfriend you won’t take her dumping you lying down, please tell us where we can locate said appendage BEFORE you try and puke up the answer.”

“No, young lady, being on top will not protect you from getting pregnant.”

“When a patient insists that her oral contraceptive is a CIA mind control drug and that she’s the reincarnation of Joan of Arc… don’t ring me. RING PSYCH!!”

“If you are planning to huff carburetor cleaner try to get the stuff that’s just hydrocarbons. You’ll get a hydrocarbon pneumonitis but that’s about it. If you pick the stuff that has methanol in it you’ll get to experience the miracle of dialysis.”

“If you present with “M’ahmbroke” because you got drunk and fell, fracturing your distal humerus, beware the orthopedic consult. They’ll try to trick you into consenting to surgical repair. Luckily, you’re too clever to fall for that. While they’re busy reading the radiographs, remove your IV and head for the door. When they try to stop you, declare loudly and repeatedly “you ain’t cuttin m’ahmoff!” Negotiate for a cast instead.”

“If you’re coming in to the ED to see your “friend”, it would be wise if you knew what his last name was, and not to change your story about his injury from “gunshot wound” to “car accident” to “gash in leg” when you speak to myself and two other people.”

“If your mom really likes to party and never met a drug she didn’t like you probably shouldn’t leave her alone in your house. When you come back 4 hours later she will have gone through all your booze, pot, crack, meth, and I think PCP and then completely trashed your house while for some reason smearing feces all over your living room. Then she will be brought to my ER where she will fall only one drug short (opiates) of a perfect score on my tox screen.”

“A huge number of the patients I see don’t have a PMD because they “just” moved here (sometime within the last year). Here are some medical tips for your move [:]
-You can’t quit dialysis just because you moved. Your kidneys won’t work any better in Vegas than they did in LA.
-If you move here and run out of insulin (after 2 months) and call your doctor back in Jersey and tell him you feel bad and can’t quit puking (’cause you’re in DKA) he’ll tell you to go to the ER. I’ll tell you you’re an idiot.”

“self injecting boiling crisco into the urethra will not resolve an erectile dysfunction… no matter how many times your best friend assures you.”

“If you’re a 65+ male and you start feeling chest pain, do not try to cure these pains over a period of one week by placing hot coins in various places on your chest roughly corresponding to the location of the pain (over sternum and left chest in the direction of the left shoulder), not even if you hate doctors and hospitals.”

“The statement “I’m not hearing voices. They’re just talking to themselves.” will still result in a psych hold.”

“If you have a 20 year history of uncontrolled diabetes and a severe case of peripheral neuropathy, it is probably not a good idea to sleep with your dog, even if it is only a chihuahua. It’s gotta be a bit of a shock to wake up and see that the dog has gnawed off the tip of your big toe while you were napping, and you couldn’t feel a thing.”

November 6, 2017 Posted by | Medicine, Random stuff