Econstudentlog

Links and random stuff

i. Pulmonary Aspects of Exercise and Sports.

“Although the lungs are a critical component of exercise performance, their response to exercise and other environmental stresses is often overlooked when evaluating pulmonary performance during high workloads. Exercise can produce capillary leakage, particularly when left atrial pressure increases related to left ventricular (LV) systolic or diastolic failure. Diastolic LV dysfunction that results in elevated left atrial pressure during exercise is particularly likely to result in pulmonary edema and capillary hemorrhage. Data from race horses, endurance athletes, and triathletes support the concept that the lungs can react to exercise and immersion stress with pulmonary edema and pulmonary hemorrhage. Immersion in water by swimmers and divers can also increase stress on pulmonary capillaries and result in pulmonary edema.”

“Zavorsky et al. studied individuals under several different workloads and performed lung imaging to document the presence or absence of lung edema. Radiographic image readers were blinded to the exposures and reported visual evidence of lung fluid. In individuals undergoing a diagnostic graded exercise test, no evidence of lung edema was noted. However, 15% of individuals who ran on a treadmill at 70% of maximum capacity for 2 hours demonstrated evidence of pulmonary edema, as did 65% of those who ran at maximum capacity for 7 minutes. Similar findings were noted in female athletes. Pingitore et al. examined 48 athletes before and after completing an iron man triathlon. They used ultrasound to detect lung edema and reported the incidence of ultrasound lung comets. None of the athletes had evidence of lung edema before the event, while 75% showed evidence of pulmonary edema immediately post-race, and 42% had persistent findings of pulmonary edema 12 hours post-race. Their data and several case reports have demonstrated that extreme exercise can result in pulmonary edema.”

Conclusions

“Sports and recreational participation can result in lung injury caused by high pulmonary pressures and increased blood volume that raises intracapillary pressure and results in capillary rupture with subsequent pulmonary edema and hemorrhage. High-intensity exercise can result in accumulation of pulmonary fluid and evidence of pulmonary edema. Competitive swimming can result in both pulmonary edema related to fluid shifts into the thorax from immersion and elevated LV end diastolic pressure related to diastolic dysfunction, particularly in the presence of high-intensity exercise. […] The most important approach to many of these disorders is prevention. […] Prevention strategies include avoiding extreme exercise, avoiding over hydration, and assuring that inspiratory resistance is minimized.”

ii. Some interesting thoughts on journalism and journalists from a recent SSC Open Thread by user ‘Well’ (quotes from multiple comments). His/her thoughts seem to line up well with my own views on these topics, and one of the reasons why I don’t follow the news is that my own answer to the first question posed below is, quite briefly, ‘…well, I don’t’:

“I think a more fundamental problem is the irrational expectation that newsmedia are supposed to be a reliable source of information in the first place. Why do we grant them this make-believe power?

The English and Acting majors who got together to put on the shows in which they pose as disinterested arbiters of truth use lots of smoke and mirror techniques to appear authoritative: they open their programs with regal fanfare, they wear fancy suits, they make sure to talk or write in a way that mimics the disinterestedness of scholarly expertise, they appear with spinning globes or dozens of screens behind them as if they’re omniscient, they adorn their publications in fancy black-letter typefaces and give them names like “Sentinel” and “Observer” and “Inquirer” and “Plain Dealer”, they invented for themselves the title of “journalists” as if they take part in some kind of peer review process… But why do these silly tricks work? […] what makes the press “the press” is the little game of make-believe we play where an English or Acting major puts on a suit, talks with a funny cadence in his voice, sits in a movie set that looks like God’s Control Room, or writes in a certain format, using pseudo-academic language and symbols, and calls himself a “journalist” and we all pretend this person is somehow qualified to tell us what is going on in the world.

Even when the “journalist” is saying things we agree with, why do we participate in this ridiculous charade? […] I’m not against punditry or people putting together a platform to talk about things that happen. I’m against people with few skills other than “good storyteller” or “good writer” doing this while painting themselves as “can be trusted to tell you everything you need to know about anything”. […] Inasmuch as what I’m doing can be called “defending” them, I’d “defend” them not because they are providing us with valuable facts (ha!) but because they don’t owe us facts, or anything coherent, in the first place. It’s not like they’re some kind of official facts-providing service. They just put on clothes to look like one.”

iii. Chatham house rule.

iv. Sex Determination: Why So Many Ways of Doing It?

“Sexual reproduction is an ancient feature of life on earth, and the familiar X and Y chromosomes in humans and other model species have led to the impression that sex determination mechanisms are old and conserved. In fact, males and females are determined by diverse mechanisms that evolve rapidly in many taxa. Yet this diversity in primary sex-determining signals is coupled with conserved molecular pathways that trigger male or female development. Conflicting selection on different parts of the genome and on the two sexes may drive many of these transitions, but few systems with rapid turnover of sex determination mechanisms have been rigorously studied. Here we survey our current understanding of how and why sex determination evolves in animals and plants and identify important gaps in our knowledge that present exciting research opportunities to characterize the evolutionary forces and molecular pathways underlying the evolution of sex determination.”

v. So Good They Can’t Ignore You.

“Cal Newport’s 2012 book So Good They Can’t Ignore You is a career strategy book designed around four ideas.

The first idea is that ‘follow your passion’ is terrible career advice, and people who say this should be shot don’t know what they’re talking about. […] The second idea is that instead of believing in the passion hypothesis, you should adopt what Newport calls the ‘craftsman mindset’. The craftsman mindset is that you should focus on gaining rare and valuable skills, since this is what leads to good career outcomes.

The third idea is that autonomy is the most important component of a ‘dream’ job. Newport argues that when choosing between two jobs, there are compelling reasons to ‘always’ pick the one with higher autonomy over the one with lower autonomy.

The fourth idea is that having a ‘mission’ or a ‘higher purpose’ in your job is probably a good idea, and is really nice if you can find it. […] the book structure is basically: ‘following your passion is bad, instead go for Mastery[,] Autonomy and Purpose — the trio of things that have been proven to motivate knowledge workers’.” […]

“Newport argues that applying deliberate practice to your chosen skill market is your best shot at becoming ‘so good they can’t ignore you’. The key is to stretch — you want to practice skills that are just above your current skill level, so that you experience discomfort — but not too much discomfort that you’ll give up.” […]

“Newport thinks that if your job has one or more of the following qualities, you should leave your job in favour of another where you can build career capital:

  • Your job presents few opportunities to distinguish yourself by developing relevant skills that are rare and valuable.
  • Your job focuses on something you think is useless or perhaps even actively bad for the world.
  • Your job forces you to work with people you really dislike.

If you’re in a job with any of these traits, your ability to gain rare and valuable skills would be hampered. So it’s best to get out.”

vi. Structural brain imaging correlates of general intelligence in UK Biobank.

“The association between brain volume and intelligence has been one of the most regularly-studied—though still controversial—questions in cognitive neuroscience research. The conclusion of multiple previous meta-analyses is that the relation between these two quantities is positive and highly replicable, though modest (Gignac & Bates, 2017; McDaniel, 2005; Pietschnig, Penke, Wicherts, Zeiler, & Voracek, 2015), yet its magnitude remains the subject of debate. The most recent meta-analysis, which included a total sample size of 8036 participants with measures of both brain volume and intelligence, estimated the correlation at r = 0.24 (Pietschnig et al., 2015). A more recent re-analysis of the meta-analytic data, only including healthy adult samples (N = 1758), found a correlation of r = 0.31 (Gignac & Bates, 2017). Furthermore, the correlation increased as a function of intelligence measurement quality: studies with better-quality intelligence tests—for instance, those including multiple measures and a longer testing time—tended to produce even higher correlations with brain volume (up to 0.39). […] Here, we report an analysis of data from a large, single sample with high-quality MRI measurements and four diverse cognitive tests. […] We judge that the large N, study homogeneity, and diversity of cognitive tests relative to previous large scale analyses provides important new evidence on the size of the brain structure-intelligence correlation. By investigating the relations between general intelligence and characteristics of many specific regions and subregions of the brain in this large single sample, we substantially exceed the scope of previous meta-analytic work in this area. […]

“We used a large sample from UK Biobank (N = 29,004, age range = 44–81 years). […] This preregistered study provides a large single sample analysis of the global and regional brain correlates of a latent factor of general intelligence. Our study design avoids issues of publication bias and inconsistent cognitive measurement to which meta-analyses are susceptible, and also provides a latent measure of intelligence which compares favourably with previous single-indicator studies of this type. We estimate the correlation between total brain volume and intelligence to be r = 0.276, which applies to both males and females. Multiple global tissue measures account for around double the variance in g in older participants, relative to those in middle age. Finally, we find that associations with intelligence were strongest in frontal, insula, anterior and medial temporal, lateral occipital and paracingulate cortices, alongside subcortical volumes (especially the thalamus) and the microstructure of the thalamic radiations, association pathways and forceps minor.”
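A quick arithmetic aside of my own, not from the paper: a correlation of r = 0.276 sounds more substantial than it is in variance terms, since the proportion of variance in g accounted for is r², i.e. under 8%. A couple of lines of Python make the point:

```python
# Convert the brain volume-intelligence correlation reported in the quoted
# UK Biobank study (r = 0.276) into variance explained (r squared).
r = 0.276
variance_explained = r ** 2
print(f"r = {r} -> r^2 = {variance_explained:.4f}")  # about 7.6% of the variance in g
```

This also helps make sense of the remark that multiple global tissue measures account for "around double the variance" in older participants: doubling a variance share of ~7.6% still leaves the large majority of variance in g unexplained by these measures.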

vii. Another IQ study: Low IQ as a predictor of unsuccessful educational and occupational achievement: A register-based study of 1,098,742 men in Denmark 1968–2016.

“Intelligence test score is a well-established predictor of educational and occupational achievement worldwide […]. Longitudinal studies typically report correlation coefficients of 0.5–0.6 between intelligence and educational achievement as assessed by educational level or school grades […], correlation coefficients of 0.4–0.5 between intelligence and occupational level […] and correlation coefficients of 0.2–0.4 between intelligence and income […]. Although the above-mentioned associations are well-established, low intelligence still seems to be an overlooked problem among young people struggling to complete an education or gain a foothold in the labour market […] Due to contextual differences with regard to educational system and flexibility and security on the labour market as well as educational and labour market policies, the role of intelligence in predicting unsuccessful educational and occupational courses may vary among countries. As Denmark has free admittance to education at all levels, state financed student grants for all students, and a relatively high support of students with special educational needs, intelligence might be expected to play a larger role – as socioeconomic factors might be of less importance – with regard to educational and occupational achievement compared with countries outside Scandinavia. The aim of this study was therefore to investigate the role of IQ in predicting a wide range of indicators of unsuccessful educational and occupational achievement among young people born across five decades in Denmark.”

“Individuals who differed in IQ score were found to differ with regard to all indicators of unsuccessful educational and occupational achievement such that low IQ was associated with a higher proportion of unsuccessful educational and occupational achievement. For example, among the 12.1% of our study population who left lower secondary school without receiving a certificate, 39.7% had an IQ < 80 and 23.1% had an IQ of 80–89, although these individuals only accounted for 7.8% and 13.1% of the total study population. The main analyses showed that IQ was inversely associated with all indicators of unsuccessful educational and occupational achievement in young adulthood after adjustment for covariates […] With regard to unsuccessful educational achievement, […] the probabilities of no school leaving certificate, no youth education at age 25, and no vocational qualification at age 30 decreased with increasing IQ in a cubic relation, suggesting essentially no or only weak associations at superior IQ levels. IQ had the strongest influence on the probability of no school leaving certificate. Although the probabilities of the three outcome indicators were almost the same among individuals with extremely low IQ, the probability of no school leaving certificate approached zero among individuals with an IQ of 100 or above whereas the probabilities of no youth education at age 25 and no vocational qualification at age 30 remained notably higher. […] individuals with an IQ of 70 had a median gross income of 301,347 DKK, individuals with an IQ of 100 had a median gross income of 331,854 DKK, and individuals with an IQ of 130 had a median gross income of 363,089 DKK – in the beginning of June 2018 corresponding to about 47,856 USD, 52,701 USD, and 57,662 USD, respectively.
[…] The results showed that among individuals undergoing education, low IQ was associated with a higher hazard rate of passing to employment, unemployment, sickness benefits receipt and welfare benefits receipt […]. This indicates that individuals with low IQ tend to leave the educational system to find employment at a younger age than individuals with high IQ, but that this early leave from the educational system often is associated with a transition into unemployment, sickness benefits receipt and welfare benefits receipt.”
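A small sanity check of my own, not from the study: the three USD figures quoted above imply that a single exchange rate of roughly 6.30 DKK per USD was used throughout, which is plausible for early June 2018. A few lines of Python confirm the conversions are mutually consistent:

```python
# Median gross incomes by IQ from the quoted Danish register study, and the
# USD equivalents given in the text. The ~6.30 DKK/USD rate is inferred here,
# not stated in the source.
incomes_dkk = {70: 301_347, 100: 331_854, 130: 363_089}
incomes_usd = {70: 47_856, 100: 52_701, 130: 57_662}

# All three conversions should reflect (approximately) the same exchange rate.
implied_rates = {iq: incomes_dkk[iq] / incomes_usd[iq] for iq in incomes_dkk}
for iq, rate in implied_rates.items():
    print(f"IQ {iq}: implied rate {rate:.3f} DKK/USD")
```

Each implied rate comes out at about 6.297 DKK/USD, so the figures hang together; note also that the IQ 70→130 income difference is only about 20%, which is modest next to the educational-outcome differences reported above.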

Fig 1

Conclusions
“This study of 1,098,742 Danish men followed in national registers from 1968 to 2016 found that low IQ was a strong and consistent predictor of 10 indicators of unsuccessful educational and occupational achievement in young adulthood. Overall, it seemed that IQ had the strongest influence on the risk of unsuccessful educational achievement and on the risk of disability pension, and that the influence of IQ on educational achievement was strongest in the early educational career and decreased over time. At the community level our findings suggest that intelligence should be considered when planning interventions to reduce the rates of early school leaving and the unemployment rates and at the individual level our findings suggest that assessment of intelligence may provide crucial information for the counselling of poor-functioning schoolchildren and adolescents with regard to both the immediate educational goals and the more distant work-related future.”

September 15, 2019 Posted by | Biology, Medicine, Psychology

Dyslexia (I)

A few years back I started out on another publication edited by the same author, the Wiley-Blackwell publication The Science of Reading: A Handbook. That book is dense and in the end I decided it wasn’t worth finishing – but I also learned from reading it that Snowling, the author of this book, probably knows her stuff. This book covers only a limited range of the literature on reading, but an interesting one.

I have added some quotes and links from the first chapters of the book below.

“Literacy difficulties, when they are not caused by lack of education, are known as dyslexia. Dyslexia can be defined as a problem with learning which primarily affects the development of reading accuracy and fluency and spelling skills. Dyslexia frequently occurs together with other difficulties, such as problems in attention, organization, and motor skills (movement) but these are not in and of themselves indicators of dyslexia. […] at the core of the problem is a difficulty in decoding words for reading and encoding them for spelling. Fluency in these processes is never achieved. […] children with specific reading difficulties show a poor response to reading instruction […] ‘response to intervention’ has been proposed as a better way of identifying likely dyslexic difficulties than measured reading skills. […] To this day, there is tension between the medical model of ‘dyslexia’ and the understanding of ‘specific learning difficulties’ in educational circles. The nub of the problem for the concept of dyslexia is that, unlike measles or chicken pox, it is not a disorder with a clear diagnostic profile. Rather, reading skills are distributed normally in the population […] dyslexia is like high blood pressure, there is no precise cut-off between high blood pressure and ‘normal’ blood pressure, but if high blood pressure remains untreated, the risk of complications is high. Hence, a diagnosis of ‘hypertension’ is warranted […] this book will show that there is remarkable agreement among researchers regarding the risk factors for poor reading and a growing number of evidence-based interventions: dyslexia definitely exists and we can do a great deal to ameliorate its effects”.

“An obvious though not often acknowledged fact is that literacy builds on a foundation of spoken language—indeed, an assumption of all education systems is that, when a child starts school, their spoken language is sufficient to support reading development. […] many children start school with considerable knowledge about books: they know that print runs from left to right (at least if you are reading English) and that you read from the front to the back of the book; and they are familiar with at least some letter names or sounds. At a basic level, reading involves translating printed symbols into pronunciations—a task referred to as decoding, which requires mapping across modalities from vision (written forms) to audition (spoken sounds). Beyond knowing letters, the beginning reader has to discover how printed words relate to spoken words and a major aim of reading instruction is to help the learner to ‘crack’ this code. To decode in English (and other alphabetic languages) requires learning about ‘grapheme–phoneme’ correspondences—literally the way in which letters or letter combinations relate to the speech sounds of spoken words: this is not a trivial task. When children use language naturally, they have only implicit knowledge of the words they use and they do not pay attention to their sounds; but this is precisely what they need to do in order to learn to decode. Indeed, they have to become ‘aware’ that words can be broken down into constituent parts like the syllable […] and that, in turn, syllables can be segmented into phonemes […]. Phonemes are the smallest sounds which differentiate words; for example, ‘pit’ and ‘bit’ differ by a single phoneme [b]-[p] (in fact, both are referred to as ‘stop consonants’ and they differ only by a single phonemic feature, namely the timing of the voicing onset of the consonant). In the English writing system, phonemes are the units which are coded in the grapheme-correspondences that make up the orthographic code.”

“The term ‘phoneme awareness’ refers to the ability to reflect on and manipulate the speech sounds in words. It is a metalinguistic skill (a skill requiring conscious control of language) which develops after the ability to segment words into syllables and into rhyming parts […]. There has been controversy over whether phoneme awareness is a cause or a consequence of learning to read. […] In general, letters are easier to learn (being concrete) than phoneme awareness is to acquire (being an abstract skill). […] The acquisition of ‘phoneme awareness’ is a critical step in the development of decoding skills. A typical reader who possesses both letter knowledge and phoneme awareness can readily ‘sound out’ letters and blend the sounds together to read words or even meaningless but pronounceable letter strings (nonwords); conversely, they can split up words (segment them) into sounds for spelling. When these building blocks are in place, a child has developed ‘alphabetic competence’ and the task of becoming a reader can begin properly. […] Another factor which is important in promoting reading fluency is the size of a child’s vocabulary. […] children with poor oral language skills, specifically limited semantic knowledge of words, [have e.g. been shown to have] particular difficulty in reading irregular words. […] Essentially, reading is a ‘big data’ problem—the task of learning involves extracting the statistical relationships between spelling (orthography) and sound (phonology) and using these to develop an algorithm for reading which is continually refined as further words are encountered.”
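To illustrate the ‘big data’ framing in the quote above, here is a minimal sketch of my own of how spelling–sound statistics might be tallied. The toy lexicon and its grapheme–phoneme alignments are invented for illustration and are not from the book; real models of reading acquisition are of course far more sophisticated:

```python
from collections import Counter, defaultdict

# Toy lexicon: hand-made (grapheme, phoneme) alignments for a few English
# words -- illustrative only, not a real phonetic dataset.
aligned_words = [
    [("c", "k"), ("a", "ae"), ("t", "t")],             # cat
    [("c", "k"), ("o", "o"), ("t", "t")],              # cot
    [("c", "s"), ("i", "i"), ("t", "t"), ("y", "i")],  # city
    [("ch", "tsh"), ("i", "i"), ("n", "n")],           # chin
]

# Tally how often each grapheme maps to each phoneme.
stats = defaultdict(Counter)
for word in aligned_words:
    for grapheme, phoneme in word:
        stats[grapheme][phoneme] += 1

# A crude 'decoding algorithm': pick the most frequent phoneme per grapheme,
# refined as further words are encountered.
best_guess = {g: counts.most_common(1)[0][0] for g, counts in stats.items()}
print(best_guess["c"])  # 'k' (seen twice) outweighs 's' (seen once)
```

The point of the sketch is simply that grapheme–phoneme correspondences are statistical rather than absolute — ‘c’ usually sounds like /k/ but sometimes like /s/ — which is part of what makes decoding English a non-trivial learning problem.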

“It is commonly believed that spelling is simply the reverse of reading. It is not. As a consequence, learning to read does not always bring with it spelling proficiency. One reason is that the correspondences between letters and sounds used for reading (grapheme–phoneme correspondences) are not just the same as the sound-to-letter rules used for writing (phoneme–grapheme correspondences). Indeed, in English, the correspondences used in reading are generally more consistent than those used in spelling […] many of the early spelling errors children make replicate errors observed in speech development […] Children with dyslexia often struggle to spell words phonetically […] The relationship between phoneme awareness and letter knowledge at age 4 and phonological accuracy of spelling attempts at age 5 has been studied longitudinally with the aim of understanding individual differences in children’s spelling skills. As expected, these two components of alphabetic knowledge predicted the phonological accuracy of children’s early spelling. In turn, children’s phonological spelling accuracy along with their reading skill at this early stage predicted their spelling proficiency after three years in school. The findings suggest that the ability to transcode phonologically provides a foundation for the development of orthographic representations for spelling but this alone is not enough—information acquired from reading experience is required to ensure spellings are conventionally correct. […] for spelling as for reading, practice is important.”

“Irrespective of the language, reading involves mapping between the visual symbols of words and their phonological forms. What differs between languages is the nature of the symbols and the phonological units. Indeed, the mappings which need to be created are at different levels of ‘grain size’ in different languages (fine-grained in alphabets which connect letters and sounds like German or Italian, and more coarse-grained in logographic systems like Chinese that map between characters and syllabic units). Languages also differ in the complexity of their morphology and how this maps to the orthography. Among the alphabetic languages, English is the least regular, particularly for spelling; the most regular is Finnish with a completely transparent system of mappings between letters and phonemes […]. The term ‘orthographic depth’ is used to describe the level of regularity which is observed between languages — English is opaque (or deep), followed by Danish and French which also contain many irregularities, while Spanish and Italian rank among the more regular, transparent (or shallow) orthographies. Over the years, there has been much discussion as to whether children learning to read English have a particularly tough task and there is frequent speculation that dyslexia is more prevalent in English than in other languages. There is no evidence that this is the case. But what is clear is that it takes longer to become a fluent reader of English than of a more transparent language […] There are reasons other than orthographic consistency which make languages easier or harder to learn. One of these is the number of symbols in the writing system: the European languages have fewer than 35 while others have as many as 2,500. For readers of languages with extensive symbolic systems like Chinese, which has more than 2,000 characters, learning can be expected to continue through the middle and high school years. 
The visual-spatial complexity of the symbols may add further to the burden of learning. […] when there are more symbols in a writing system, the learning demands increase. […] Languages also differ importantly in the ways they represent phonology and meaning.”

“Given the many differences between languages and writing systems, there is remarkable similarity between the predictors of individual differences in reading across languages. The ELDEL study showed that for children reading alphabetic languages there are three significant predictors of growth in reading in the early years of schooling. These are letter knowledge, phoneme awareness, and rapid naming (a test in which the names of colours or objects have to be produced as quickly as possible in response to a random array of such items). Researchers have shown that a similar set of skills predict reading in Chinese […] However, there are also additional predictors that are language-specific. […] visual memory and visuo-spatial skills are stronger predictors of learning to read in a visually complex writing system, such as Chinese or Kannada, than they are for English. Moreover, there is emerging evidence of reciprocal relations – that learning to read in a complex orthography hones visuo-spatial abilities just as phoneme awareness improves as English children learn to read.”

“Children differ in the rate at which they learn to read and spell and children with dyslexia are typically the slowest to do so, assuming standard instruction for all. Indeed, it is clear from the outset that they have more difficulty in learning letters (by name or by sound) than their peers. As we have seen, letter knowledge is a crucial component of alphabetic competence and also offers a way into spelling. So for the dyslexic child with poor letter knowledge, learning to read and spell is compromised from the outset. In addition, there is a great deal of evidence that children with dyslexia have problems with phonological aspects of language from an early age and specifically, acquiring phonological awareness. […] The result is usually a significant problem in decoding—in fact, poor decoding is the hallmark of dyslexia, the signature of which is a nonword reading deficit. In the absence of remediation, this decoding difficulty persists and for many reading becomes something to be avoided. […] the most common pattern of reading deficit in dyslexia is an inability to read ‘new’ or unfamiliar words in the face of better developed word-reading skills — sometimes referred to as ‘phonological dyslexia’. […] Spelling poses a significant challenge to children with dyslexia. This seems inevitable, given their problems with phoneme awareness and decoding. The early spelling attempts of children with dyslexia are typically not phonetic in the way that their peers’ attempts are; rather, they are often difficult to decipher and best described as bizarre. […] errors continue to reflect profound difficulties in representing the sounds of words […] most people with dyslexia continue to show poor spelling through development and there is a very high correlation between (poor) spelling in the teenage years and (poor) spelling in middle age. 
[…] While poor decoding can be a barrier to reading comprehension, many children and adults with dyslexia can read with adequate understanding when this is required but it takes them considerable time to do so, and they tend to avoid writing when it is possible to do so.”

Links:

Phonics.
History of dyslexia research. Samuel Orton. Rudolf Berlin. Anna Gillingham. Orton-Gillingham(-Stillman) approach. Thomas Richard Miles.
Seidenberg & McClelland’s triangle model.
“The Simple View of Reading”.
The lexical quality hypothesis (Perfetti & Hart). Matthew effect.
ELDEL project.
Diacritical mark.
Hiragana.
Phonetic radicals.
Morphogram.

September 15, 2019 Posted by | Books, Language, Psychology

Words

Many of the words below I encountered while reading the books One of our Thursdays is missing, The secret of our success, Bowling alone, Thief of Time, and The Major Works of Samuel Johnson.

Damson. Greengage. Ingle. Marchioness. Tuberose. Flue. Titushky. Cowling. Soteriology. Piazza. Rake-off. Rusk. Babbittry. Aeolipile. Spallation. Leister. Weir. Puffin. Omnipercipient. Shiv.

Vociferation. Ebriety. Playbill. Surtout. Outvie. Copiousness. Animadvert. Vendible. Silvicolous. Leveret. Novitiate. Commodious. Appellative. Preterite. Apostasize. Commixture. Sepulture. Desiccative. Siccity. Philology.

Incivism. Prorogation. Metonym. Apologue. Altricial(ity). Palilalia. Macaroon. Compositionality. Alloparental. Pizzle. Cholo. Epizeuxis. Cursorial. Misprision. Terrestriality. Pranny. Epistrophe. Analepsis. Corvid. Zorbing.

Polyptoton. Antanaclasis. Kern. Scrumtrulescent. Cotillion. Confute. Pinner. Declension. Piscatory. Jointure. Vulnerary. Subtilize. Sublunary. Ebullition. Affright. Exorbitance. Impost. Judicature. Fulminate. Cogency.

September 7, 2019 Posted by | Books, Language

Quotes

i. “The advantage of living is not measured by length, but by use; some men have lived long, and lived little; attend to it while you are in it. It lies in your will, not in the number of years, for you to have lived enough.” (Michel de Montaigne)

ii. “All of the days go toward death and the last one arrives there.” (-ll-)

iii. “Nothing is so firmly believed as that which we least know.” (-ll-) (Variant: “Men are most apt to believe what they least understand.”)

iv. “The plague of man is boasting of his knowledge.” (-ll-)

v. “Saying is one thing and doing is another.” (-ll-)

vi. “Let no man be ashamed to speak what he is not ashamed to think.” (-ll-)

vii. “Few men have been admired by their own households.” (-ll-)

viii. “There is no wish more natural than the wish to know.” (-ll-)

ix. “It is not without good reason said, that he who has not a good memory should never take upon him the trade of lying.” (-ll-)

x. “Religion abhors the competition for truth. Science can’t live without it.” (Scott Atran, In Gods We Trust)

xi. “Imagination and intelligence enter into our existence in the part of servants of the primary instincts.” (Albert Einstein, Out of My Later Years (1950), as quoted in Scott Atran’s In Gods We Trust)

xii. “…yes, we are smart, but not because we stand on the shoulders of giants or are giants ourselves. We stand on the shoulders of a very large pyramid of hobbits. The hobbits do get a bit taller as the pyramid ascends, but it’s still the number of hobbits, not the height of particular hobbits, that’s allowing us to see farther.” (Joseph Henrich, The Secret of Our Success)

xiii. “Underlying these failures is the assumption that we, as humans, all perceive the world similarly, want the same things, pursue these things based on our beliefs (the “facts” about the world), and process new information and experience in the same way. We already know all these assumptions are wrong. […] Different societies possess quite different social norms, institutions, languages, and technologies, and consequently they possess different ways of reasoning, mental heuristics, motivations, and emotional reactions. […] Culture, social norms, and institutions all shape our brains, biology, and hormones, as well as our perceptions, motivations, and judgments. We can’t pick our underlying cultural perceptions and motivations any more than we can suddenly speak a new language.” (-ll-)

xiv. “One of the debates in this literature involves opposing “innate” and “learned” in explaining our abilities and behaviors. [However,] much behavior is both 100% innate and 100% learned. For example, humans have clearly evolved to walk on two legs, and it’s one of our species’ behavioral signatures. Yet we also clearly learn to walk. […] showing that something is learned only tells us about the developmental process but not about whether it was favored by natural selection acting on genes.” (-ll-)

xv. “People always talk about the body as a beautiful well-oiled machine. But sometimes the body communicates with itself by messages written with radioactive ink on asbestos-laced paper, in the hopes that it’s killing itself slightly more slowly than it’s killing anyone who tries to send it fake messages. Honestly it is a miracle anybody manages to stay alive at all.” (Scott Alexander)

xvi. “It is better to be hated for what you are than to be loved for what you are not.” (André Gide)

xvii. “No matter how full a reservoir of maxims one may possess, and no matter how good one’s sentiments may be, if one have not taken advantage of every concrete opportunity to act, one’s character may remain entirely unaffected for the better.” (William James, Principles of Psychology)

xviii. “It is the duty of every man to endeavour that something may be added by his industry to the hereditary aggregate of knowledge and happiness. To add much can indeed be the lot of few, but to add something, however little, every one may hope” (Samuel Johnson, The Major Works of Samuel Johnson)

xix. “…we should always wish to preserve the dignity of virtue by adorning her with graces which wickedness cannot assume.” (-ll-)

xx. “Let pain deserved without complaint be borne.” (“Leniter ex merito quicquid patiare ferendum est”) (Ovid, as quoted in -ll-)

 

August 20, 2019 Posted by | Anthropology, Books, culture, Quotes/aphorisms |

A few diabetes papers of interest

i. Identical and Nonidentical Twins: Risk and Factors Involved in Development of Islet Autoimmunity and Type 1 Diabetes.

Some observations from the paper:

“Type 1 diabetes is preceded by the presence of preclinical, persistent islet autoantibodies (1). Autoantibodies against insulin (IAA) (2), GAD (GADA), insulinoma-associated antigen 2 (IA-2A) (3), and/or zinc transporter 8 (ZnT8A) (4) are typically present prior to development of symptomatic hyperglycemia and progression to clinical disease. These autoantibodies may develop many years before onset of type 1 diabetes, and increasing autoantibody number and titers have been associated with increased risk of progression to disease (5–7).

Identical twins have an increased risk of progression of islet autoimmunity and type 1 diabetes after one twin is diagnosed, although reported rates have been highly variable (30–70%) (8–11). This risk is increased if the proband twin develops diabetes at a young age (12). Concordance rates for type 1 diabetes in monozygotic twins with long-term follow-up are >50% (13). Risk for development of islet autoimmunity and type 1 diabetes for nonidentical twins is thought to be similar to non-twin siblings (risk of 6–10% for diabetes) (14). Full siblings who inherit both high-risk HLA (HLA DQA1*05:01 DR3/4*0302) haplotypes identical to their proband sibling with type 1 diabetes have a much higher risk for development of diabetes than those who share only one or zero haplotypes (55% vs. 5% by 12 years of age, respectively; P = 0.03) (15). Despite sharing both HLA haplotypes with their proband, siblings without the HLA DQA1*05:01 DR3/4*0302 genotype had only a 25% risk for type 1 diabetes by 12 years of age (15).”

“The TrialNet Pathway to Prevention Study (previously the TrialNet Natural History Study; 16) has been screening relatives of patients with type 1 diabetes since 2004 and follows these subjects with serial autoantibody testing for the development of islet autoantibodies and type 1 diabetes. The study offers longitudinal monitoring for autoantibody-positive subjects through HbA1c testing and oral glucose tolerance tests (OGTTs).”

“The purpose of this study was to evaluate the prevalence of islet autoantibodies and analyze a logistic regression model to test the effects of genetic factors and common twin environment on the presence or absence of islet autoantibodies in identical twins, nonidentical twins, and full siblings screened in the TrialNet Pathway to Prevention Study. In addition, this study analyzed the presence of islet autoantibodies (GADA, IA-2A, and IAA) and risk of type 1 diabetes over time in identical twins, nonidentical twins, and full siblings followed in the TrialNet Pathway to Prevention Study. […] A total of 48,051 sibling subjects were initially screened (288 identical twins, 630 nonidentical twins, and 47,133 full siblings). Of these, 48,026 had an initial screening visit with GADA, IA2A, and IAA results (287 identical twins, 630 nonidentical twins, and 47,109 full siblings). A total of 17,226 participants (157 identical twins, 283 nonidentical twins and 16,786 full siblings) were followed for a median of 2.1 years (25th percentile 1.1 year and 75th percentile 4.0 years), with follow-up defined as at least ≥12 months follow-up after initial screening visit.”

“At the initial screening visit, GADA was present in 20.2% of identical twins (58 out of 287), 5.6% of nonidentical twins (35 out of 630), and 4.7% of full siblings (2,205 out of 47,109) (P < 0.0001). Additionally, IA-2A was present primarily in identical twins (9.4%; 27 out of 287) and less so in nonidentical twins (3.3%; 21 out of 630) and full siblings (2.2%; 1,042 out of 47,109) (P = 0.0001). Nearly 12% of identical twins (34 out of 287) were positive for IAA at initial screen, whereas 4.6% of nonidentical twins (29 out of 630) and 2.5% of full siblings (1,152 out of 47,109) were initially IAA positive (P < 0.0001).”

“At 3 years of follow-up, the risk for development of GADA was 16% for identical twins, 5% for nonidentical twins, and 4% for full siblings (P < 0.0001) (Fig. 1A). The risk for development of IA-2A by 3 years of follow-up was 7% for identical twins, 4% for nonidentical twins, and 2% for full siblings (P = 0.0005) (Fig. 1B). At 3 years of follow-up, the risk of development of IAA was 10% for identical twins, 5% for nonidentical twins, and 4% for full siblings (P = 0.006) […] In initially autoantibody-negative subjects, 1.5% of identical twins, 0% of nonidentical twins, and 0.5% of full siblings progressed to diabetes at 3 years of follow-up (P = 0.18) […] For initially single autoantibody–positive subjects, at 3 years of follow-up, 69% of identical twins, 13% of nonidentical twins, and 12% of full siblings developed type 1 diabetes (P < 0.0001) […] Subjects who were positive for multiple autoantibodies at screening had a higher risk of developing type 1 diabetes at 3 years of follow-up with 69% of identical twins, 72% of nonidentical twins, and 47% of full siblings developing type 1 diabetes (P = 0.079)”

“Because TrialNet is not a birth cohort and the median age at screening visit was 11 years overall, this study would not capture subjects who had initial seroconversion at a young age and then progressed through the intermediate stage of multiple antibody positivity before developing diabetes.”

“This study of >48,000 siblings of patients with type 1 diabetes shows that at initial screening, identical twins were more likely to have at least one positive autoantibody and be positive for GADA, IA-2A, and IAA than either nonidentical twins or full siblings. […] risk for development of type 1 diabetes at 3 years of follow-up was high for both single and multiple autoantibody–positive identical twins (62–69%) and multiple autoantibody–positive nonidentical twins (72%) compared with 47% for initially multiple autoantibody–positive full siblings and 12–13% for initially single autoantibody–positive nonidentical twins and full siblings. To our knowledge, this is the largest prediagnosis study to evaluate the effects of genetic factors and common twin environment on the presence or absence of islet autoantibodies.

In this study, younger age, male sex, and genetic factors were significantly associated with expression of IA-2A, IAA, more than one autoantibody, and more than two autoantibodies, whereas only genetic factors were significant for GADA. An influence of common twin environment (E) was not seen. […] Previous studies have shown that identical twin siblings of patients with type 1 diabetes have a higher concordance rate for development of type 1 diabetes compared with nonidentical twins, although reported rates for identical twins have been highly variable (30–70%) […]. Studies from various countries (Australia, Denmark, Finland, Great Britain, and U.S.) have reported concordance rates for nonidentical twins ∼5–15% […]. Concordance rates have been higher when the proband was diagnosed at a younger age (8), which may explain the variability in these reported rates. In this study, autoantibody-negative nonidentical and identical twins had a low risk of type 1 diabetes by 3 years of follow-up. In contrast, once twins developed autoantibodies, risk for type 1 diabetes was high for multiple autoantibody nonidentical twins and both single and multiple autoantibody identical twins.”
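Risk-at-follow-up figures like the ones above ("risk of type 1 diabetes by 3 years of follow-up") are typically Kaplan–Meier estimates computed from censored follow-up data. As a minimal illustration of how such an estimate works — this is not the TrialNet analysis, and the subjects below are made up — the estimator can be sketched as:

```python
# Minimal Kaplan-Meier survival estimator on made-up follow-up data.
# Each subject is (time, event): event=1 means progressed to the outcome,
# event=0 means censored (follow-up ended without the outcome).
subjects = [(0.5, 1), (1.0, 0), (1.5, 1), (2.0, 1), (2.5, 0), (3.0, 0)]

def kaplan_meier(data):
    """Return [(time, survival probability)] at each event time.

    Assumes distinct times; real implementations also handle ties.
    """
    data = sorted(data)
    at_risk, surv, curve = len(data), 1.0, []
    for t, event in data:
        if event:
            surv *= (at_risk - 1) / at_risk  # fraction surviving this event time
            curve.append((t, surv))
        at_risk -= 1  # subject leaves the risk set (event or censoring)
    return curve

for t, s in kaplan_meier(subjects):
    print(t, round(s, 3))
```

Cumulative risk at a given time is then 1 minus the survival value at that time; censored subjects contribute follow-up time without being counted as events.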

ii. A Type 1 Diabetes Genetic Risk Score Can Identify Patients With GAD65 Autoantibody–Positive Type 2 Diabetes Who Rapidly Progress to Insulin Therapy.

This is another paper from the February edition of Diabetes Care – multiple other papers on related topics were also included in that edition, so if you’re interested in the genetics of diabetes it may be worth checking out.

Some observations from the paper:

“Type 2 diabetes is a progressive disease due to a gradual reduction in the capacity of the pancreatic islet cells (β-cells) to produce insulin (1). The clinical course of this progression is highly variable, with some patients progressing very rapidly to requiring insulin treatment, whereas others can be successfully treated with lifestyle changes or oral agents for many years (1,2). Being able to identify patients likely to rapidly progress may have clinical utility in prioritizing monitoring and treatment escalation and in choice of therapy.

It has previously been shown that many patients with clinical features of type 2 diabetes have positive GAD65 autoantibodies (GADA) and that the presence of this autoantibody is associated with faster progression to insulin (3,4). This is often termed latent autoimmune diabetes in adults (LADA) (5,6). However, the predictive value of GADA testing is limited in a population with clinical type 2 diabetes, with many GADA-positive patients not requiring insulin treatment for many years (4,7). Previous research has suggested that genetic variants in the HLA region associated with type 1 diabetes are associated with more rapid progression to insulin in patients with clinically defined type 2 diabetes and positive GADA (8).

We have recently developed a type 1 diabetes genetic risk score (T1D GRS), which provides an inexpensive ($70 in our local clinical laboratory and <$20 where DNA has been previously extracted), integrated assessment of a person’s genetic susceptibility to type 1 diabetes (9). The score is composed of 30 type 1 diabetes risk variants weighted for effect size and aids discrimination of type 1 diabetes from type 2 diabetes. […] We aimed to determine if the T1D GRS could predict rapid progression to insulin (within 5 years of diagnosis) over and above GADA testing in patients with a clinical diagnosis of type 2 diabetes treated without insulin at diagnosis.”
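The paper describes the T1D GRS as 30 risk variants weighted for effect size. A score of that general form is just a weighted sum of risk-allele dosages; the variant names, weights, and genotypes below are invented for illustration and are not the published 30-variant score:

```python
# Illustrative sketch of a weighted genetic risk score (GRS).
# Variant IDs, weights, and genotypes are hypothetical -- the real T1D GRS
# uses 30 published variants weighted by their log odds ratios.

def genetic_risk_score(genotypes, weights):
    """Sum of risk-allele dosages (0, 1, or 2) times per-variant weights."""
    return sum(weights[v] * dose for v, dose in genotypes.items())

weights = {"rsA": 1.20, "rsB": 0.45, "rsC": 0.30}  # hypothetical ln(OR) weights
person = {"rsA": 2, "rsB": 1, "rsC": 0}            # risk-allele dosages

print(round(genetic_risk_score(person, weights), 2))  # 2*1.20 + 1*0.45 = 2.85
```

In practice such scores are then thresholded or treated as continuous predictors; here, comparing a patient's score to type 1 and type 2 diabetes reference distributions is what aids discrimination.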

“We examined the relationship between GADA, T1D GRS, and progression to insulin therapy using survival analysis in 8,608 participants with clinical type 2 diabetes initially treated without insulin therapy. […] In this large study of participants with a clinical diagnosis of type 2 diabetes, we have found that type 1 genetic susceptibility alters the clinical implications of a positive GADA when predicting rapid time to insulin. GADA-positive participants with high T1D GRS were more likely to require insulin within 5 years of diagnosis, with 48% progressing to insulin in this time in contrast to only 18% in participants with low T1D GRS. The T1D GRS was independent of and additive to participant’s age of diagnosis and BMI. However, T1D GRS was not associated with rapid insulin requirement in participants who were GADA negative.”

“Our findings have clear implications for clinical practice. The T1D GRS represents a novel clinical test that can be used to enhance the prognostic value of GADA testing. For predicting future insulin requirement in patients with apparent type 2 diabetes who are GADA positive, T1D GRS may be clinically useful and can be used as an additional test in the screening process. However, in patients with type 2 diabetes who are GADA negative, there is no benefit gained from genetic testing. This is unsurprising, as the prevalence of underlying autoimmunity in patients with a clinical phenotype of type 2 diabetes who are GADA negative is likely to be extremely low; therefore, most GADA-negative participants with high T1D GRS will have nonautoimmune diabetes. The use of this two-step testing approach may facilitate a precision medicine approach to patients with apparent type 2 diabetes; patients who are likely to progress rapidly are identified for targeted management, which may include increased monitoring, early therapy intensification, and/or interventions aimed at slowing progression (36,37).

The costs of analyzing the T1D GRS are relatively modest and may fall further, as genetic testing is rapidly becoming less expensive (38). […] In conclusion, a T1D GRS alters the clinical implications of a positive GADA test in patients with clinical type 2 diabetes and is independent of and additive to clinical features. This therefore represents a novel test for identifying patients with rapid progression in this population.”

iii. Retinopathy and RAAS Activation: Results From the Canadian Study of Longevity in Type 1 Diabetes.

“Diabetic retinopathy is the most common cause of preventable blindness in individuals ages 20–74 years and is the most common vascular complication in type 1 and type 2 diabetes (1–3). On the basis of increasing severity, diabetic retinopathy is classified into nonproliferative diabetic retinopathy (NPDR), defined in early stages by the presence of microaneurysms, retinal vascular closure, and alteration, or proliferative diabetic retinopathy (PDR), defined by the growth of new aberrant blood vessels (neovascularization) susceptible to hemorrhage, leakage, and fibrosis (4). Diabetic macular edema (DME) can be present at any stage of retinopathy and is characterized by increased vascular permeability leading to retinal thickening.

Important risk factors for the development of retinopathy continue to be chronic hyperglycemia, hyperlipidemia, hypertension, and diabetes duration (5,6). Given the systemic nature of these risk factors, cooccurrence of retinopathy with other vascular complications is common in patients with diabetes.”

“A key pathway implicated in diabetes-related small-vessel disease is overactivation of neurohormones. Activation of the neurohormonal renin-angiotensin-aldosterone system (RAAS) pathway predominates in diabetes in response to hyperglycemia and sodium retention. The RAAS plays a pivotal role in regulating systemic BP through vasoconstriction and fluid-electrolyte homeostasis. At the tissue level, angiotensin II (ANGII), the principal mediator of the RAAS, is implicated in fibrosis, oxidative stress, endothelial damage, thrombosis, inflammation, and vascular remodeling. Of note, systemic RAAS blockers reduce the risk of progression of eye disease but not DKD [Diabetic Kidney Disease, US] in adults with type 1 diabetes with normoalbuminuria (12).

Several longitudinal epidemiologic studies of diabetic retinopathy have been completed in type 1 diabetes; however, few have studied the relationships between eye, nerve, and renal complications and the influence of RAAS activation after prolonged duration (≥50 years) in adults with type 1 diabetes. As a result, less is known about mechanisms that persist in diabetes-related microvascular complications after long-standing diabetes. Accordingly, in this cross-sectional analysis from the Canadian Study of Longevity in Type 1 Diabetes involving adults with type 1 diabetes for ≥50 years, our aims were to phenotype retinopathy stage and determine associations between the presence of retinopathy and other vascular complications. In addition, we examined the relationship between retinopathy stage and renal and systemic hemodynamic function, including arterial stiffness, at baseline and dynamically after RAAS activation with an infusion of exogenous ANGII.”

“Of the 75 participants, 12 (16%) had NDR [no diabetic retinopathy], 24 (32%) had NPDR, and 39 (52%) had PDR […]. At baseline, those with NDR had lower mean HbA1c compared with those with NPDR and PDR (7.4 ± 0.7% and 7.5 ± 0.9%, respectively; P for trend = 0.019). Of note, those with more severe eye disease (PDR) had lower systolic and diastolic BP values but a significantly higher urine albumin-to-creatinine ratio (UACR) […] compared with those with less severe eye disease (NPDR) or with NDR despite higher use of RAAS inhibitors among those with PDR compared with NPDR or NDR. History of cardiovascular and peripheral vascular disease was significantly higher in participants with PDR (33.3%) than in those with NPDR (8.3%) or NDR (0%). Diabetic sensory polyneuropathy was prevalent across all groups irrespective of retinopathy status but was numerically higher in the PDR group (95%) than in the NPDR (86%) or NDR (75%) groups. No significant differences were observed in retinal thickness across the three groups.”

One quick note: This was mainly an eye study, but some of the other figures here are well worth taking note of. Three out of four people in the supposedly low-risk group without eye complications had sensory polyneuropathy after 50 years of diabetes.

Conclusions

Hyperglycemia contributes to the pathogenesis of diabetic retinopathy through multiple interactive pathways, including increased production of advanced glycation end products, IGF-I, vascular endothelial growth factor, endothelin, nitric oxide, oxidative damage, and proinflammatory cytokines (29–33). Overactivation of the RAAS in response to hyperglycemia also is implicated in the pathogenesis of diabetes-related complications in the retina, nerves, and kidney and is an important therapeutic target in type 1 diabetes. Despite what is known about these underlying pathogenic mechanisms in the early development of diabetes-related complications, whether the same mechanisms are active in the setting of long-standing type 1 diabetes is not known. […] In this study, we observed that participants with PDR were more likely to be taking RAAS inhibitors, to have a higher frequency of cardiovascular or peripheral vascular disease, and to have higher UACR levels, likely reflecting the higher overall risk profile of this group. Although it is not possible to determine why some patients in this cohort developed PDR while others did not after similar durations of type 1 diabetes, it seems unlikely that glycemic control alone is sufficient to fully explain the observed between-group differences and differing vascular risk profiles. Whereas the NDR group had significantly lower mean HbA1c levels than the NPDR and PDR groups, differences between participants with NPDR and those with PDR were modest. Accordingly, other factors, such as differences in vascular function, neurohormones, growth factors, genetics, and lifestyle, may play a role in determining retinopathy severity at the individual level.

The association between retinopathy and risk for DKD is well established in diabetes (34). In the setting of type 2 diabetes, patients with high levels of UACR have twice the risk of developing diabetic retinopathy than those with normal UACR levels. For example, Rodríguez-Poncelas et al. (35) demonstrated that impaired renal function is linked with increased diabetic retinopathy risk. Consistent with these studies and others, the PDR group in this Canadian Study of Longevity in Type 1 Diabetes demonstrated significantly higher UACR, which is associated with an increased risk of DKD progression, illustrating that the interaction between eye and kidney disease progression also may exist in patients with long-standing type 1 diabetes. […] In conclusion, retinopathy was prevalent after prolonged type 1 diabetes duration, and retinopathy severity associated with several measures of neuropathy and with higher UACR. Differential exaggerated responses to RAAS activation in the peripheral vasculature of the PDR group highlights that even in the absence of DKD, neurohormonal abnormalities are likely still operant, and perhaps accentuated, in patients with PDR even after long-standing type 1 diabetes duration.”

iv. Clinical and MRI Features of Cerebral Small-Vessel Disease in Type 1 Diabetes.

“Type 1 diabetes is associated with a fivefold increased risk of stroke (1), with cerebral small-vessel disease (SVD) as the most common etiology (2). Cerebral SVD in type 1 diabetes, however, remains scarcely investigated and is challenging to study in vivo per se owing to the size of affected vasculature (3); instead, MRI signs of SVD are studied. In this study, we aimed to assess the prevalence of cerebral SVD in subjects with type 1 diabetes compared with healthy control subjects and to characterize diabetes-related variables associated with SVD in stroke-free people with type 1 diabetes.”

RESEARCH DESIGN AND METHODS This substudy was cross-sectional in design and included 191 participants with type 1 diabetes and median age 40.0 years (interquartile range 33.0–45.1) and 30 healthy age- and sex-matched control subjects. All participants underwent clinical investigation and brain MRIs, assessed for cerebral SVD.

RESULTS Cerebral SVD was more common in participants with type 1 diabetes than in healthy control subjects: any marker 35% vs. 10% (P = 0.005), cerebral microbleeds (CMBs) 24% vs. 3.3% (P = 0.008), white matter hyperintensities (WMHs) 17% vs. 6.7% (P = 0.182), and lacunes 2.1% vs. 0% (P = 1.000). Presence of CMBs was independently associated with systolic blood pressure (odds ratio 1.03 [95% CI 1.00–1.05], P = 0.035).”
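Note that the odds ratio quoted for CMBs is per mmHg of systolic blood pressure, and per-unit odds ratios compound multiplicatively across units. As a back-of-envelope illustration (my own, not a calculation from the paper), the point estimate of 1.03/mmHg implies a much larger odds ratio across a clinically meaningful blood-pressure difference:

```python
# Per-unit odds ratios from logistic regression compound multiplicatively.
# OR 1.03 per mmHg of systolic BP is the abstract's point estimate;
# the 20 mmHg comparison is my own illustrative choice.
or_per_mmhg = 1.03
delta_mmhg = 20
print(round(or_per_mmhg ** delta_mmhg, 2))  # ~1.81, i.e. roughly 80% higher odds
```

The same compounding applies to the confidence limits, so the uncertainty also widens considerably over a 20 mmHg span.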

Conclusions

Cerebral SVD is more common in participants with type 1 diabetes than in healthy control subjects. CMBs especially are more prevalent and are independently associated with hypertension. Our results indicate that cerebral SVD starts early in type 1 diabetes but is not explained solely by diabetes-related vascular risk factors or the generalized microvascular disease that takes place in diabetes (7).

There are only small-scale studies on cerebral SVD, especially CMBs, in type 1 diabetes. Compared with the current study, one study with similar diabetes characteristics (i.e., diabetes duration, glycemic control, and blood pressure levels) as in the current study, but lacking a control population, showed a higher prevalence of WMHs, with more than half of the participants affected, but similar prevalence of lacunes and lower prevalence of CMBs (8). In another study, including 67 participants with type 1 diabetes and 33 control subjects, there was no difference in WMH prevalence but a higher prevalence of CMBs in participants with type 1 diabetes and retinopathy compared with control subjects (9). […] In type 1 diabetes, albuminuria and systolic blood pressure independently increase the risk for both ischemic and hemorrhagic stroke (12). […] We conclude that cerebral SVD is more common in subjects with type 1 diabetes than in healthy control subjects. Future studies will focus on longitudinal development of SVD in type 1 diabetes and the associations with brain health and cognition.”

v. The Legacy Effect in Type 2 Diabetes: Impact of Early Glycemic Control on Future Complications (The Diabetes & Aging Study).

“In the U.S., an estimated 1.4 million adults are newly diagnosed with diabetes every year and present an important intervention opportunity for health care systems. In patients newly diagnosed with type 2 diabetes, the benefits of maintaining an HbA1c <7.0% (<53 mmol/mol) are well established. The UK Prospective Diabetes Study (UKPDS) found that a mean HbA1c of 7.0% (53 mmol/mol) lowers the risk of diabetes-related end points by 12–32% compared with a mean HbA1c of 7.9% (63 mmol/mol) (1,2). Long-term observational follow-up of this trial revealed that this early glycemic control has durable effects: Reductions in microvascular events persisted, reductions in cardiovascular events and mortality were observed 10 years after the trial ended, and HbA1c values converged (1). Similar findings were observed in the Diabetes Control and Complications Trial (DCCT) in patients with type 1 diabetes (2–4). These posttrial observations have been called legacy effects (also metabolic memory) (5), and they suggest the importance of early glycemic control for the prevention of future complications of diabetes. Although these clinical trial long-term follow-up studies demonstrated legacy effects, whether legacy effects exist in real-world populations, how soon after diabetes diagnosis legacy effects may begin, or for what level of glycemic control legacy effects may exist are not known.

In a previous retrospective cohort study, we found that patients with newly diagnosed diabetes and an initial 10-year HbA1c trajectory that was unstable (i.e., changed substantially over time) had an increased risk for future microvascular events, even after adjusting for HbA1c exposure (6). In the same cohort population, this study evaluates associations between the duration and intensity of glycemic control immediately after diagnosis and the long-term incidence of future diabetic complications and mortality. We hypothesized that a glycemic legacy effect exists in real-world populations, begins as early as the 1st year after diabetes diagnosis, and depends on the level of glycemic exposure.”

RESEARCH DESIGN AND METHODS This cohort study of managed care patients with newly diagnosed type 2 diabetes and 10 years of survival (1997–2013, average follow-up 13.0 years, N = 34,737) examined associations between HbA1c <6.5% (<48 mmol/mol), 6.5% to <7.0% (48 to <53 mmol/mol), 7.0% to <8.0% (53 to <64 mmol/mol), 8.0% to <9.0% (64 to <75 mmol/mol), or ≥9.0% (≥75 mmol/mol) for various periods of early exposure (0–1, 0–2, 0–3, 0–4, 0–5, 0–6, and 0–7 years) and incident future microvascular (end-stage renal disease, advanced eye disease, amputation) and macrovascular (stroke, heart disease/failure, vascular disease) events and death, adjusting for demographics, risk factors, comorbidities, and later HbA1c.

RESULTS Compared with HbA1c <6.5% (<48 mmol/mol) for the 0-to-1-year early exposure period, HbA1c levels ≥6.5% (≥48 mmol/mol) were associated with increased microvascular and macrovascular events (e.g., HbA1c 6.5% to <7.0% [48 to <53 mmol/mol] microvascular: hazard ratio 1.204 [95% CI 1.063–1.365]), and HbA1c levels ≥7.0% (≥53 mmol/mol) were associated with increased mortality (e.g., HbA1c 7.0% to <8.0% [53 to <64 mmol/mol]: 1.290 [1.104–1.507]). Longer periods of exposure to HbA1c levels ≥8.0% (≥64 mmol/mol) were associated with increasing microvascular event and mortality risk.

CONCLUSIONS Among patients with newly diagnosed diabetes and 10 years of survival, HbA1c levels ≥6.5% (≥48 mmol/mol) for the 1st year after diagnosis were associated with worse outcomes. Immediate, intensive treatment for newly diagnosed patients may be necessary to avoid irremediable long-term risk for diabetic complications and mortality.”

Do note that the effect sizes here are very large and this stuff seems really quite important. Judging from the results of this study, if you’re newly diagnosed and you only obtain an HbA1c of, say, 7.3% in the first year, that may translate into a close-to-30% increased risk of death more than 10 years into the future, compared to a scenario with an HbA1c of 6.3%. People who did not get their HbA1c measured within the first 3 months after diagnosis had a more than 20% increased risk of mortality during the study period. This seems like critical stuff to get right.

vi. Event Rates and Risk Factors for the Development of Diabetic Ketoacidosis in Adult Patients With Type 1 Diabetes: Analysis From the DPV Registry Based on 46,966 Patients.

“Diabetic ketoacidosis (DKA) is a life-threatening complication of type 1 diabetes mellitus (T1DM) that results from absolute insulin deficiency and is marked by acidosis, ketosis, and hyperglycemia (1). Therefore, prevention of DKA is one goal in T1DM care, but recent data indicate increased incidence (2).

For adult patients, only limited data are available on rates and risk factors for development of DKA, and this complication remains epidemiologically poorly characterized. The Diabetes Prospective Follow-up Registry (DPV) has followed patients with diabetes from 1995. Data for this study were collected from 2000 to 2016. Inclusion criteria were diagnosis of T1DM, age at diabetes onset ≥6 months, patient age at follow-up ≥18 years, and diabetes duration ≥1 year to exclude DKA at manifestation. […] In total, 46,966 patients were included in this study (average age 38.5 years [median 21.2], 47.6% female). The median HbA1c was 7.7% (61 mmol/mol), median diabetes duration was 13.6 years, and 58.3% of the patients were treated in large diabetes centers.

On average, 2.5 DKA-related hospital admissions per 100 patient-years (PY) were observed (95% CI 2.1–3.0). The rate was highest in patients aged 18–30 years (4.03/100 PY) and gradually declined with increasing age […] No significant differences between males (2.46/100 PY) and females (2.59/100 PY) were found […] Patients with HbA1c levels <7% (53 mmol/mol) had significantly fewer DKA admissions than patients with HbA1c ≥9% (75 mmol/mol) (0.88/100 PY vs. 6.04/100 PY; P < 0.001)”

“Regarding therapy, use of an insulin pump (continuous subcutaneous insulin infusion [CSII]) was not associated with higher DKA rates […], while patients aged 31–50 years on CSII showed lower rates than patients using multiple daily injections (2.21 vs. 3.12/100 PY; adjusted P < 0.05) […]. Treatment in a large center was associated with lower DKA-related hospital admissions […] In both adults and children, poor metabolic control was the strongest predictor of hospital admission due to DKA. […] In conclusion, the results of this study identify patients with T1DM at risk for DKA (high HbA1c, diabetes duration 5–10 years, migrants, age 30 years and younger) in real-life diabetes care. These at-risk individuals may need specific attention since structured diabetes education has been demonstrated to specifically reduce and prevent this acute complication.”
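The admission rates quoted above are events per 100 patient-years, i.e. simple Poisson rates. A sketch of the computation — with made-up counts rather than the DPV registry data — including a normal-approximation 95% confidence interval:

```python
import math

# Hypothetical counts, for illustration only (not the DPV registry data).
events = 250           # DKA-related hospital admissions observed
patient_years = 10000  # total follow-up time across all patients

rate = events / patient_years * 100  # events per 100 patient-years
# Normal approximation: the SE of a Poisson count is sqrt(count),
# so the SE of the rate scales the same way as the rate itself.
se = math.sqrt(events) / patient_years * 100
lo, hi = rate - 1.96 * se, rate + 1.96 * se

print(f"{rate:.2f} per 100 PY (95% CI {lo:.2f}-{hi:.2f})")
```

With few events the normal approximation becomes poor and an exact (chi-squared/gamma-based) Poisson interval is preferred; registries like DPV report which method they used.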

August 13, 2019 Posted by | Cardiology, Diabetes, Genetics, Immunology, Medicine, Molecular biology, Nephrology, Neurology, Ophthalmology, Studies | Leave a comment

Neutron Stars – Victoria Kaspi

I’ve read Springer books – well, a book – about pulsars in the past and this is certainly not the first post here on this blog covering these topics, yet I definitely found this lecture hard to follow. It’s highly technical, but occasionally quite interesting.

Some links related to the lecture coverage:

Coherence time.
Coherence (physics).
Pulsar timing and its applications (Manchester 2018).
NE2001. I. A new model for the galactic distribution of free electrons and its fluctuations (Cordes and Lazio).
Pulsar Timing.
Supplementary parameters in the parameterized post-Keplerian formalism.
Shapiro time delay.
Fonseca et al. 2014.
Hulse–Taylor binary.
PSR J0737−3039.
Spin–orbit coupling.
Tests of general relativity – Binary pulsars.
Relativistic Spin Precession in the Double Pulsar (Breton et al. 2008).
PSR J1614−2230.
PSR J0348+0432.
Scalar–tensor theory.
A Massive Pulsar in a Compact Relativistic Binary (Antoniadis et al. 2013).
The strong equivalence principle.
Nordtvedt effect.
PSR J0337+1715.
A millisecond pulsar in a stellar triple system (Ransom, Archibald et al. 2014).
Millisecond pulsar (recycled pulsar).
A comprehensive study of binary compact objects as gravitational wave sources: Evolutionary channels, rates, and physical properties (Belczynski et al. 2002).
Relativistic binary pulsars with black-hole companions (Pfahl et al. 2005).
Pulsar timing array.
PALFA (Pulsar Arecibo L-band Feed Array) Survey.
Green Bank North Celestial Cap (GBNCC) Survey.

August 6, 2019 Posted by | Astronomy, Lectures, Physics, Studies | Leave a comment

The Shapes of Spaces and the Nuclear Force

This one was in my opinion a great lecture which I enjoyed watching. It covers some quite high-level mathematics and physics and some of the ways in which these two fields intersected in a specific historical research context; however it does so in a way that will enable many people outside the fields involved to follow the narrative reasonably easily.

Some links related to the lecture coverage:

Topological space.
Topological invariant.
Topological isomorphism.
Dimension of a mathematical space.
Metrically topologically complete space.
Genus (mathematics).
Quotient space (topology).
Will we ever classify simply-connected smooth 4-manifolds? (Stern, 2005).
Nuclear force.
Coulomb’s law.
Maxwell’s equations.
Commutative property.
Abelian group.
Non-abelian group.
Yang–Mills theory.
Soliton.
Instanton.
Michael Atiyah.
Donaldson theory.
Michael Freedman.
Topological (quantum) field theory.
Edward Witten.
Effective field theory.
Seiberg–Witten invariants.
“Theoretical mathematics”: toward a cultural synthesis of mathematics and theoretical physics (Jaffe & Quinn, 1993).
Responses to “Theoretical mathematics: toward a cultural synthesis of mathematics and theoretical physics” (Atiyah et al, 1994).

July 31, 2019 Posted by | Lectures, Mathematics, Physics | Leave a comment

Learning Phylogeny Through Simple Statistical Genetics

From a brief skim I concluded that a lot of the stuff Patterson talks about in this lecture, particularly in terms of the concepts and methods part (…which, as he also alludes to in his introduction, makes up a substantial proportion of the talk), is included/covered in this Ancient Admixture in Human History paper he coauthored, so if you’re either curious to know more, or perhaps just wondering what the talk might be about, it’s probably worth checking it out. In the latter case I would also recommend perhaps just watching the first few minutes of the talk; he provides a very informative outline of the talk in the first four and a half minutes of the video.

A few other links of relevance:

Martingale (probability theory).
GitHub – DReichLab/AdmixTools.
Human Genome Diversity Project.
Jackknife resampling.
Ancient North Eurasian.
Upper Palaeolithic Siberian genome reveals dual ancestry of Native Americans (Raghavan et al, 2014).
General theory for stochastic admixture graphs and F-statistics. This one is only very slightly related to the talk; I came across it while looking for stuff about admixture graphs, a topic he does briefly discuss in the lecture.
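Jackknife resampling, linked above, is used in this literature to get standard errors for statistics computed from genomic data (usually in a block form, leaving out blocks of the genome rather than single observations). The talk doesn't walk through the mechanics, but the basic leave-one-out version is easy to sketch — illustrative only; note that for the sample mean it reduces exactly to the familiar s/√n formula:

```python
def jackknife_se(values):
    """Leave-one-out jackknife estimate of the standard error of the mean.

    Recompute the mean n times, each time dropping one observation;
    the spread of these leave-one-out means gives the SE estimate."""
    n = len(values)
    total = sum(values)
    # Leave-one-out means: (total - x_i) / (n - 1)
    loo_means = [(total - x) / (n - 1) for x in values]
    grand = sum(loo_means) / n
    var = (n - 1) / n * sum((m - grand) ** 2 for m in loo_means)
    return var ** 0.5

# For the sample mean this matches the usual s/sqrt(n) formula:
print(jackknife_se([2.0, 4.0, 6.0, 8.0]))  # ≈ 1.291
```

The appeal of the method in this setting is that it works for complicated statistics (like the F-statistics Patterson discusses) for which no simple closed-form variance exists.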

July 29, 2019 Posted by | Archaeology, Biology, Genetics, Lectures, Molecular biology, Statistics | Leave a comment

Words

Many of the words below are words I encountered while reading the books Lost in a good book, The Eyre Affair, In Gods We Trust: The Evolutionary Landscape of Religion, and The Complete Saki: 144 Collected Novels and Short Stories.

Ergotropic. Trophotropic. Abreaction. Nomological. Triskaidekaphobia. Casuistry. Nonsequitous. Amontillado. Contrail. Nacelle. Potluck. Sizar. Herpetology. Phenology. Fustigate. Tintinnabula. Phoropter. Vexillology. Quondam. Onomastic.

Glossolalia. Scrupulosity. Proclaim. Pablum. Ochlocracy. Probate. Anacyclosis. Anastylosis. Diphyodonty. Pakicetus. Gymnure. Sojourner. Rescission. Illocution. Sylvatic. Diabolist. Lariat. Carcinization. Champerty. Barratry.

Pannus. Vitiate. Svengali. Brevet. Scud. Vermicelli. Couplet. Offertory. Rognon. Mangold. Dissentient. Heller. Desultory. Crinkle. Whitsuntide. Syce. Variegation. Novelette. Wassail. Kith.

Astrakhan. Satrap. Halva. Precipitancy. Hie. Lambkin. Toque. Wapiti. Spiraea. Pleasaunce. Berberis. Goodly. Estaminet. Lyddite. Acclamation. Burgh. Wharfage. Tamarin. Chaffer. Catafalque.

July 22, 2019 Posted by | Books, Language | Leave a comment

A recent perspective on invariant theory

Some time ago I covered here on the blog a lecture with a somewhat technical introduction to invariant theory. Even though I didn’t recommend that lecture, I do recommend that you don’t watch the lecture above without first knowing the sort of stuff that might have been covered in it (for all you know, that is), as well as some other lectures on related topics. To be more specific, to get anything out of this lecture you need some linear algebra, you need graph theory, you need some understanding of group theory, you need to know a little about computational complexity, it’ll probably help if you know a bit about invariant theory already, and surely you need some knowledge of a few other topics I forgot to mention. One of the statements I made about the introductory lecture to which I linked above also applies here: “I had to look up a lot of stuff to just sort-of-kind-of muddle along”.

Below some links to stuff I looked up while watching the lecture:

Algebraically closed field.
Reductive group.
Rational representation.
Group homomorphism.
Morphism of algebraic varieties.
Fixed-point subring.
Graph isomorphism.
Adjacency matrix.
Group action (mathematics).
General linear group.
Special linear group.
Alternating minimization, scaling algorithms, and the null-cone problem from invariant theory. (Bürgisser, Garg, Oliveira, Walter, and Wigderson (2017))
Noether normalization lemma.
Succinct data structure. (This link is actually not directly related to the lecture’s coverage; I came across it by accident while looking for topics he did talk about and I found it interesting, so I decided to include the link here anyway)
Characteristic polynomial.
Matrix similarity.
Monomial.
Associative algebra.
Polynomial degree bounds for matrix semi-invariants (Derksen & Makam, 2015).
Semi-invariant of a quiver.

July 6, 2019 Posted by | Computer science, Lectures, Mathematics | Leave a comment

On the possibility of an instance-based complexity theory

Below some links related to the lecture’s coverage:

Computational complexity theory.
Minimum cut.
2-satisfiability.
3-SAT.
Worst-case complexity.
Average-case complexity.
Max-Cut.
Karp’s 21 NP-complete problems.
Reduction (complexity).
Levin’s Universal search algorithm – Scholarpedia.
Computational indistinguishability.
Circuit complexity.
Adversarial Perturbations of Deep Neural Networks.
Sherrington–Kirkpatrick model.
Equivalence class.
Hopkins (2018).
Planted clique.
SDP (Semidefinite programming).
Jain, Koehler & Risteski (2018): Mean-field approximation, convex hierarchies, and the optimality of correlation rounding: a unified perspective.
SOS (Sum-of-squares hierarchy).

July 1, 2019 Posted by | Computer science, Lectures | Leave a comment

The pleasure of finding things out (II)

Here’s my first post about the book. In this post I have included a few more quotes from the last half of the book.

“Are physical theories going to keep getting more abstract and mathematical? Could there be today a theorist like Faraday in the early nineteenth century, not mathematically sophisticated but with a very powerful intuition about physics?
Feynman: I’d say the odds are strongly against it. For one thing, you need the math just to understand what’s been done so far. Beyond that, the behavior of subnuclear systems is so strange compared to the ones the brain evolved to deal with that the analysis has to be very abstract: To understand ice, you have to understand things that are themselves very unlike ice. Faraday’s models were mechanical – springs and wires and tense bands in space – and his images were from basic geometry. I think we’ve understood all we can from that point of view; what we’ve found in this century is different enough, obscure enough, that further progress will require a lot of math.”

“There’s a tendency to pomposity in all this, to make it all deep and profound. My son is taking a course in philosophy, and last night we were looking at something by Spinoza – and there was the most childish reasoning! There were all these Attributes, and Substances, all this meaningless chewing around, and we started to laugh. Now, how could we do that? Here’s this great Dutch philosopher, and we’re laughing at him. It’s because there was no excuse for it! In that same period there was Newton, there was Harvey studying the circulation of the blood, there were people with methods of analysis by which progress was being made! You can take every one of Spinoza’s propositions, and take the contrary propositions, and look at the world – and you can’t tell which is right. Sure, people were awed because he had the courage to take on these great questions, but it doesn’t do any good to have the courage if you can’t get anywhere with the question. […] It isn’t the philosophy that gets me, it’s the pomposity. If they’d just laugh at themselves! If they’d just say, “I think it’s like this, but von Leipzig thought it was like that, and he had a good shot at it, too.” If they’d explain that this is their best guess … But so few of them do”.

“The lesson you learn as you grow older in physics is that what we can do is a very small fraction of what there is. Our theories are really very limited.”

“The first principle is that you must not fool yourself – and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.”

“When I was an undergraduate I worked with Professor Wheeler* as a research assistant, and we had worked out together a new theory about how light worked, how the interaction between atoms in different places worked; and it was at that time an apparently interesting theory. So Professor Wigner†, who was in charge of the seminars there [at Princeton], suggested that we give a seminar on it, and Professor Wheeler said that since I was a young man and hadn’t given seminars before, it would be a good opportunity to learn how to do it. So this was the first technical talk that I ever gave. I started to prepare the thing. Then Wigner came to me and said that he thought the work was important enough that he’d made special invitations to the seminar to Professor Pauli, who was a great professor of physics visiting from Zurich; to Professor von Neumann, the world’s greatest mathematician; to Henry Norris Russell, the famous astronomer; and to Albert Einstein, who was living near there. I must have turned absolutely white or something because he said to me, “Now don’t get nervous about it, don’t be worried about it. First of all, if Professor Russell falls asleep, don’t feel bad, because he always falls asleep at lectures. When Professor Pauli nods as you go along, don’t feel good, because he always nods, he has palsy,” and so on. That kind of calmed me down a bit”.

“Well, for the problem of understanding the hadrons and the muons and so on, I can see at the present time no practical applications at all, or virtually none. In the past many people have said that they could see no applications and then later they found applications. Many people would promise under those circumstances that something’s bound to be useful. However, to be honest – I mean he looks foolish; saying there will never be anything useful is obviously a foolish thing to do. So I’m going to be foolish and say these damn things will never have any application, as far as I can tell. I’m too dumb to see it. All right? So why do you do it? Applications aren’t the only thing in the world. It’s interesting in understanding what the world is made of. It’s the same interest, the curiosity of man that makes him build telescopes. What is the use of discovering the age of the universe? Or what are these quasars that are exploding at long distances? I mean what’s the use of all that astronomy? There isn’t any. Nonetheless, it’s interesting. So it’s the same kind of exploration of our world that I’m following and it’s curiosity that I’m satisfying. If human curiosity represents a need, the attempt to satisfy curiosity, then this is practical in the sense that it is that. That’s the way I would look at it at the present time. I would not put out any promise that it would be practical in some economic sense.”

“To science we also bring, besides the experiment, a tremendous amount of human intellectual attempt at generalization. So it’s not merely a collection of all those things which just happen to be true in experiments. It’s not just a collection of facts […] all the principles must be as wide as possible, must be as general as possible, and still be in complete accord with experiment, that’s the challenge. […] Every one of the concepts of science is on a scale graduated somewhere between, but at neither end of, absolute falsity or absolute truth. It is necessary, I believe, to accept this idea, not only for science, but also for other things; it is of great value to acknowledge ignorance. It is a fact that when we make decisions in our life, we don’t necessarily know that we are making them correctly; we only think that we are doing the best we can – and that is what we should do.”

“In this age of specialization, men who thoroughly know one field are often incompetent to discuss another.”

“I believe that moral questions are outside of the scientific realm. […] The typical human problem, and one whose answer religion aims to supply, is always of the following form: Should I do this? Should we do this? […] To answer this question we can resolve it into two parts: First – If I do this, what will happen? – and second – Do I want that to happen? What would come of it of value – of good? Now a question of the form: If I do this, what will happen? is strictly scientific. […] The technique of it, fundamentally, is: Try it and see. Then you put together a large amount of information from such experiences. All scientists will agree that a question – any question, philosophical or other – which cannot be put into the form that can be tested by experiment (or, in simple terms, that cannot be put into the form: If I do this, what will happen?) is not a scientific question; it is outside the realm of science.”

June 26, 2019 Posted by | Astronomy, Mathematics, Philosophy, Physics, Quotes/aphorisms, Science | Leave a comment

Promoting the unknown…

i.

ii.

iii.

iv.

v.

June 22, 2019 Posted by | Music | Leave a comment

The pleasure of finding things out (I?)

As I put it in my goodreads review of the book, “I felt in good company while reading this book”. Some of the ideas in the book are by now well known; for example, some of the interview snippets also included in the book have been added to youtube and have been viewed by hundreds of thousands of people (I added a couple of them to my ‘about’ page some years ago, and they’re still there; these are enjoyable videos to watch and they have aged well!) (the overlap between the book’s text and the sound recordings available is not 100 % for this material, but it’s close enough that I assume these were the same interviews). Other ideas and pieces I would assume to be less well known, for example Feynman’s encounter with Uri Geller in Geller’s hotel room, where Feynman was investigating his supposed abilities related to mind reading and key bending.

I have added some sample quotes from the book below. It’s a good book, recommended.

“My interest in science is to simply find out about the world, and the more I find out the better it is, like, to find out. […] You see, one thing is, I can live with doubt and uncertainty and not knowing. I think it’s much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers and possible beliefs and different degrees of certainty about different things, but I’m not absolutely sure of anything and there are many things I don’t know anything about […] I don’t have to know an answer, I don’t feel frightened by not knowing things, by being lost in a mysterious universe without having any purpose, which is the way it really is so far as I can tell. It doesn’t frighten me.”

“Some people look at the activity of the brain in action and see that in many respects it surpasses the computer of today, and in many other respects the computer surpasses ourselves. This inspires people to design machines that can do more. What often happens is that an engineer has an idea of how the brain works (in his opinion) and then designs a machine that behaves that way. This new machine may in fact work very well. But, I must warn you that that does not tell us anything about how the brain actually works, nor is it necessary to ever really know that, in order to make a computer very capable. It is not necessary to understand the way birds flap their wings and how the feathers are designed in order to make a flying machine. It is not necessary to understand the lever system in the legs of a cheetah – an animal that runs fast – in order to make an automobile with wheels that goes very fast. It is therefore not necessary to imitate the behavior of Nature in detail in order to engineer a device which can in many respects surpass Nature’s abilities.”

“These ideas and techniques [of scientific investigation], of course, you all know. I’ll just review them […] The first is the matter of judging evidence – well, the first thing really is, before you begin you must not know the answer. So you begin by being uncertain as to what the answer is. This is very, very important […] The question of doubt and uncertainty is what is necessary to begin; for if you already know the answer there is no need to gather any evidence about it. […] We absolutely must leave room for doubt or there is no progress and there is no learning. There is no learning without having to pose a question. And a question requires doubt. […] Authority may be a hint as to what the truth is, but it is not the source of information. As long as it’s possible, we should disregard authority whenever the observations disagree with it. […] Science is the belief in the ignorance of experts.”

“If we look away from the science and look at the world around us, we find out something rather pitiful: that the environment that we live in is so actively, intensely unscientific. Galileo could say: “I noticed that Jupiter was a ball with moons and not a god in the sky. Tell me, what happened to the astrologers?” Well, they print their results in the newspapers, in the United States at least, in every daily paper every day. Why do we still have astrologers? […] There is always some crazy stuff. There is an infinite amount of crazy stuff, […] the environment is actively, intensely unscientific. There is talk about telepathy still, although it’s dying out. There is faith-healing galore, all over. There is a whole religion of faith-healing. There’s a miracle at Lourdes where healing goes on. Now, it might be true that astrology is right. It might be true that if you go to the dentist on the day that Mars is at right angles to Venus, that it is better than if you go on a different day. It might be true that you can be cured by the miracle of Lourdes. But if it is true it ought to be investigated. Why? To improve it. If it is true then maybe we can find out if the stars do influence life; that we could make the system more powerful by investigating statistically, scientifically judging the evidence objectively, more carefully. If the healing process works at Lourdes, the question is how far from the site of the miracle can the person, who is ill, stand? Have they in fact made a mistake and the back row is really not working? Or is it working so well that there is plenty of room for more people to be arranged near the place of the miracle? 
Or is it possible, as it is with the saints which have recently been created in the United States–there is a saint who cured leukemia apparently indirectly – that ribbons that are touched to the sheet of the sick person (the ribbon having previously touched some relic of the saint) increase the cure of leukemia–the question is, is it gradually being diluted? You may laugh, but if you believe in the truth of the healing, then you are responsible to investigate it, to improve its efficiency and to make it satisfactory instead of cheating. For example, it may turn out that after a hundred touches it doesn’t work anymore. Now it’s also possible that the results of this investigation have other consequences, namely, that nothing is there.”

“I believe that a scientist looking at nonscientific problems is just as dumb as the next guy – and when he talks about a nonscientific matter, he will sound as naive as anyone untrained in the matter.”

“If we want to solve a problem that we have never solved before, we must leave the door to the unknown ajar.”

“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”

“I would like to say a word or two […] about words and definitions, because it is necessary to learn the words. It is not science. That doesn’t mean just because it is not science that we don’t have to teach the words. We are not talking about what to teach; we are talking about what science is. It is not science to know how to change centigrade to Fahrenheit. It’s necessary, but it is not exactly science. […] I finally figured out a way to test whether you have taught an idea or you have only taught a definition. Test it this way: You say, “Without using the new word which you have just learned, try to rephrase what you have just learned in your own language.”

“My father dealt a little bit with energy and used the term after I got a little bit of the idea about it. […] He would say, “It [a toy dog] moves because the sun is shining,” […]. I would say “No. What has that to do with the sun shining? It moved because I wound up the springs.” “And why, my friend, are you able to move to wind up this spring?” “I eat.” “What, my friend, do you eat?” “I eat plants.” “And how do they grow?” “They grow because the sun is shining.” […] The only objection in this particular case was that this was the first lesson. It must certainly come later, telling you what energy is, but not to such a simple question as “What makes a [toy] dog move?” A child should be given a child’s answer. “Open it up; let’s look at it.””

“Now the point of this is that the result of observation, even if I were unable to come to the ultimate conclusion, was a wonderful piece of gold, with a marvelous result. It was something marvelous. Suppose I were told to observe, to make a list, to write down, to do this, to look, and when I wrote my list down, it was filed with 130 other lists in the back of a notebook. I would learn that the result of observation is relatively dull, that nothing much comes of it. I think it is very important – at least it was to me – that if you are going to teach people to make observations, you should show that something wonderful can come from them. […] [During my life] every once in a while there was the gold of a new understanding that I had learned to expect when I was a kid, the result of observation. For I did not learn that observation was not worthwhile. […] The world looks so different after learning science. For example, the trees are made of air, primarily. When they are burned, they go back to air, and in the flaming heat is released the flaming heat of the sun which was bound in to convert the air into trees, and in the ash is the small remnant of the part which did not come from air, that came from the solid earth, instead. These are beautiful things, and the content of science is wonderfully full of them. They are very inspiring, and they can be used to inspire others.”

“Physicists are trying to find out how nature behaves; they may talk carelessly about some “ultimate particle” because that’s the way nature looks at a given moment, but . . . Suppose people are exploring a new continent, OK? They see water coming along the ground, they’ve seen that before, and they call it “rivers.” So they say they’re exploring to find the headwaters, they go upriver, and sure enough, there they are, it’s all going very well. But lo and behold, when they get up far enough they find the whole system’s different: There’s a great big lake, or springs, or the rivers run in a circle. You might say, “Aha! They’ve failed!” but not at all! The real reason they were doing it was to explore the land. If it turned out not to be headwaters, they might be slightly embarrassed at their carelessness in explaining themselves, but no more than that. As long as it looks like the way things are built is wheels within wheels, then you’re looking for the innermost wheel – but it might not be that way, in which case you’re looking for whatever the hell it is that you find!”


June 20, 2019 Posted by | Books, Physics, Science | Leave a comment

Quotes

i. “Roughly, religion is a community’s costly and hard-to-fake commitment to a counterfactual and counterintuitive world of supernatural agents who master people’s existential anxieties, such as death and deception.” (Scott Atran)

ii. “The more one accepts what is materially false to be really true, and the more one spends material resources in displays of such acceptance, the more others consider one’s faith deep and one’s commitment sincere.” (-ll-)

iii. “Cultures and religions do not exist apart from the individual minds that constitute them and the environments that constrain them, any more than biological species and varieties exist independently of the individual organisms that compose them and the environments that conform them. They are not well-bounded systems or definite clusters of beliefs, practices, and artifacts, but more or less regular distributions of causally connected thoughts, behaviors, material products, and environmental objects. To naturalistically understand what “cultures” are is to describe and explain the material causes responsible for reliable differences in these distributions.” (-ll-)

iv. “Religions are not adaptations and they have no evolutionary functions as such.” (-ll-)

v. “Mature cognitions of folkpsychology and agency include metarepresentation. This involves the ability to track and build a notion of self over time, to model other minds and worlds, and to represent beliefs about the actual world as being true or false. It also makes lying and deception possible. This threatens any social order. But this same metarepresentational capacity provides the hope and promise of open-ended solutions to problems of moral relativity. It does so by enabling people to conjure up counterintuitive supernatural worlds that cannot be verified or falsified, either logically or empirically. Religious beliefs minimally violate ordinary intuitions about the world, with its inescapable problems, such as death. This frees people to imagine minimally impossible worlds that seem to solve existential dilemmas, including death and deception. […] Religion survives science and secular ideology not because it is prior to or more primitive than science or secular reasoning, but because of what it affectively and collectively secures for people.” (-ll-)

vi. “Don’t worry about people stealing an idea. If it’s original, you will have to ram it down their throats.” (Howard H. Aiken)

vii. “As long as scientists are free to pursue the truth wherever it may lead, there will be a flow of new scientific knowledge to those who can apply it to practical problems.” (Vannevar Bush)

viii. “Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.” (-ll-)

ix. “There are some people who imagine that older adults don’t know how to use the internet. My immediate reaction is, “I’ve got news for you, we invented it.”” (Vinton Cerf)

x. “When we are young we are often puzzled by the fact that each person we admire seems to have a different version of what life ought to be, what a good man is, how to live, and so on. If we are especially sensitive it seems more than puzzling, it is disheartening. What most people usually do is follow one person’s ideas and then another’s depending on who looms largest on one’s horizon at the time. The one with the deepest voice, the strongest appearance, the most authority and success, is usually the one who gets our momentary allegiance; and we try to pattern our ideals after him. […] Each person thinks that he has the formula for triumphing over life’s limitations and knows with authority what it means to be a man, and he usually tries to win a following for his particular patent. Today we know that people try so hard to win converts for their point of view because it is more than merely an outlook on life: it is an immortality formula.” (Ernest Becker)

xi. “A human being cannot survive alone and be entirely human.” (Peter Farb)

xii. “The members of a society do not make conscious choices in arriving at a particular way of life. Rather, they make unconscious adaptations. …they know only that a particular choice works, even though it may appear bizarre to an outsider.” (-ll-)

xiii. “To say that the invention “was in the air” or “the times were ripe for it” are just other ways of stating that the inventors did not do the inventing, but that the cultures did.” (-ll-)

xiv. “Culture is best seen not as complexes of concrete behavior patterns — customs, usages, traditions, habit clusters — as has, by and large, been the case up to now, but as a set of control mechanisms — plans, recipes, rules, instructions (what computer engineers call “programs”) — for the governing of behavior.” (Clifford Geertz)

xv. “In the status game […] the working-class child starts out with a handicap and, to the extent that he cares what the middle-class persons think of him or has internalised the dominant middle-class attitudes toward social class position, he may be expected to feel some ‘shame’.” (Albert Cohen)

xvi. “It is nationalism which engenders nations, and not the other way round.” (Ernest Gellner)

xvii. “Doubt is the offspring of knowledge” (William Winwood Reade)

xviii. “Civilization after civilization, it is the same. The world falls to tyranny with a whisper. The frightened are ever keen to bow to a perceived necessity, in the belief that necessity forces conformity, and conformity a certain stability. In a world shaped into conformity, dissidents stand out, are easily branded and dealt with. There is no multitude of perspectives, no dialogue. The victim assumes the face of the tyrant, self-righteous and intransigent, and wars breed like vermin. And people die.” (Steven Erikson)

xix. “Helping myself is even harder than helping others.” (Gerald Weinberg)

xx. “Science is the study of those things that can be reduced to the study of other things.” (-ll-)

June 15, 2019 Posted by | Quotes/aphorisms | Leave a comment

A few diabetes papers of interest

i. The dynamic origins of type 1 diabetes.

“Over a century ago, there was diabetes and only diabetes. Subsequently, diabetes came to be much more discretely defined (1) by age at onset (childhood or adult onset), clinical phenotype (lean or obese), treatment (insulin dependent or not insulin dependent), and, more recently, immune genotype (type 1 or type 2 diabetes). Although these categories broadly describe groups, they are often insufficient to categorize specific individuals, such as children having non–insulin-dependent diabetes and adults having type 1 diabetes (T1D) even when not requiring insulin. Indeed, ketoacidosis at presentation can be a feature of either T1D or type 2 diabetes. That heterogeneity extends to the origins and character of both major types of diabetes. In this issue of Diabetes Care, Redondo et al. (2) leverage the TrialNet study of subjects with a single diabetes-associated autoantibody at screening in order to explore factors determining progression to multiple autoantibodies and, subsequently, the pathogenesis of T1D.

T1D is initiated by presumed nongenetic event(s) operating in children with potent genetic susceptibility. But there is substantial heterogeneity even within the origins of this disease. Those nongenetic events evoke different autoantibodies such that T1D patients with insulin autoantibodies (IAA) have different features from those with GAD autoantibodies (GADA) (3,4). The former, in contrast with the latter, are younger both at seroconversion and at development of clinical diabetes, the two groups having different genetic risk and those with IAA having greater insulin secretory loss […]. These observations hint at distinct disease-associated networks leading to T1D, perhaps induced by distinct nongenetic events. Such disease-associated pathways could operate in unison, especially in children with T1D, who often have multiple autoantibodies. […]

Genetic analyses of autoimmune diseases suggest that only a small number of pathways contribute to disease risk. These pathways include NF-κB signaling, T-cell costimulation, interleukin-2, and interleukin-21 pathways and type 1 interferon antiviral responses (5,6). T1D shares most risk loci with celiac disease and rheumatoid arthritis (5), while paradoxically most risk loci shared with inflammatory bowel disease are protective or involve different haplotypes at the same locus. […] Events leading to islet autoimmunity may be encountered very early in life and invoke disease risk or disease protection (4,7) […]. Islet autoantibodies rarely appear before age 6 months, and among children with a family history of T1D there are two peaks for autoantibody seroconversion (3,4), the first for IAA at approximately age 1–2 years, while GADA-restricted autoimmunity develops after age 3 years up to adolescence, with a peak at about age 11 years”

“The precise nature of […] disease-associated nongenetic events remains unclear, but knowledge of the disease heterogeneity (1,9) has cast light on their character. Nongenetic events are implicated in increasing disease incidence, disease discordance even between identical twins, and geographical variation; e.g., Finland has 100-fold greater childhood T1D incidence than China (9,10). That effect likely increases with older age at onset […] disease incidence in Finland is sixfold greater than in an adjacent, relatively impoverished Russian province, despite similar racial origins and frequencies of high-risk HLA DQ genotypes […] Viruses, especially enteroviruses, and dietary factors have been invoked (12–15). The former have been implicated because of the genetic association with antiviral interferon networks, seasonal pattern of autoantibody conversion, seroconversion being associated with enterovirus infections, and protection from seroconversion by maternal gestational respiratory infection, while respiratory infections even in the first year of life predispose to seroconversion (14) […]. Dietary factors also predispose to seroconversion and include the time of introduction of solid foods and the use of vitamin C and vitamin D (13,15). The Diabetes Autoimmunity Study in the Young (DAISY) found that early exposure to solid food (1–3 months of age) and vitamin C and late exposure to vitamin D and gluten (after 6 and 9 months of age, respectively) are T1D risk factors, leading the researchers to suggest that genetically at-risk children should have solid foods introduced at about 4 months of age with a diet high in dairy and fruit (13).” [my bold, US]

“This TCF7L2 locus is of particular interest in the context of T1D (9) as it is usually seen as the major type 2 diabetes signal worldwide. The rs7903146 SNP optimally captures that TCF7L2 disease association and is likely the causal variant. Intriguingly, this locus is associated, in some populations, with those adult-onset autoimmune diabetes patients with GADA alone who masquerade as having type 2 diabetes, since they initially do not require insulin therapy, and also markedly increases the diabetes risk in cystic fibrosis patients. One obvious explanation for these associations is that adult-onset autoimmune diabetes is simply a heterogeneous disease, an admixture of both T1D and type 2 diabetes (9), in which shared genes alter the threshold for diabetes. […] A high proportion of T1D cases present in adulthood (17,18), likely more than 50%, and many do not require insulin initially. The natural history, phenotype, and metabolic changes in adult-onset diabetes with GADA resemble a separate cluster of cases with type 2 diabetes but without GADA, which together constitute up to 24% of adult-onset diabetes (19). […] Knowledge of heterogeneity enables understanding of disease processes. In particular, identification of distinct pathways to clinical diabetes offers the possibility of defining distinct nongenetic events leading to T1D and, by implication, modulating those events could limit or eliminate disease progression. There is a growing appreciation that the two major types of diabetes may share common etiopathological factors. Just as there are a limited number of genes and pathways contributing to autoimmunity risk, there may also be a restricted number of pathways contributing to β-cell fragility.”

ii. The Association of Severe Diabetic Retinopathy With Cardiovascular Outcomes in Long-standing Type 1 Diabetes: A Longitudinal Follow-up.

“OBJECTIVE It is well established that diabetic nephropathy increases the risk of cardiovascular disease (CVD), but how severe diabetic retinopathy (SDR) impacts this risk has yet to be determined.

RESEARCH DESIGN AND METHODS The cumulative incidence of various CVD events, including coronary heart disease (CHD), peripheral artery disease (PAD), and stroke, retrieved from registries, was evaluated in 1,683 individuals with at least a 30-year duration of type 1 diabetes drawn from the Finnish Diabetic Nephropathy Study (FinnDiane).”

“RESULTS During 12,872 person-years of follow-up, 416 incident CVD events occurred. Even in the absence of DKD [Diabetic Kidney Disease], SDR increased the risk of any CVD (hazard ratio 1.46 [95% CI 1.11–1.92]; P < 0.01), after adjustment for diabetes duration, age at diabetes onset, sex, smoking, blood pressure, waist-to-hip ratio, history of hypoglycemia, and serum lipids. In particular, SDR alone was associated with the risk of PAD (1.90 [1.13–3.17]; P < 0.05) and CHD (1.50 [1.09–2.07]; P < 0.05) but not with any stroke. Moreover, DKD increased the CVD risk further (2.85 [2.13–3.81]; P < 0.001). […]

CONCLUSIONS SDR alone, even without DKD, increases cardiovascular risk, particularly for PAD, independently of common cardiovascular risk factors in long-standing type 1 diabetes. More remains to be done to fully understand the link between SDR and CVD. This knowledge could help combat the enhanced cardiovascular risk beyond currently available regimens.”

“The 15-year cumulative incidence of any CVD in patients with and without SDR was 36.8% (95% CI 33.4–40.1) and 27.3% (23.3–31.0), respectively (P = 0.0004 for log-rank test) […] Patients without DKD and SDR at baseline had 4.0-fold (95% CI 3.3–4.7) increased risk of CVD compared with control subjects without diabetes up to 70 years of age […]. Intriguingly, after this age, the CVD incidence was similar to that in the matched control subjects (SIR 0.9 [95% CI 0.3–1.9]) in this subgroup of patients with diabetes. However, in patients without DKD but with SDR, the CVD risk was still increased after the patients had reached 70 years of age (SIR 3.4 [95% CI 1.8–6.2]) […]. Of note, in patients with both DKD and SDR, the CVD burden was high already at young ages.”
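The SIR figures quoted above are standardized incidence ratios: the number of events observed in the diabetes subgroup divided by the number expected from the rates in the matched control subjects without diabetes. A minimal sketch of the calculation, using hypothetical counts rather than the paper's data:

```python
def standardized_incidence_ratio(observed, expected):
    """SIR: observed event count divided by the count expected
    from reference-population rates (here, matched control subjects)."""
    return observed / expected

# Hypothetical counts, for illustration only (not from the study)
observed_events = 17
expected_events = 5.0
print(standardized_incidence_ratio(observed_events, expected_events))  # 3.4
```

An SIR of 3.4, as reported for the SDR-without-DKD subgroup above age 70, thus means 3.4 times as many CVD events as the matched control rates would predict.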

“This study highlights the role of SDR on a complete range of CVD outcomes in a large sample of patients with long-standing T1D and longitudinal follow-up. We showed that SDR alone, without concomitant DKD, increases the risk of macrovascular disease, independently of the traditional risk factors. The risk is further increased in case of accompanying DKD, especially if SDR is present together with DKD. Findings from this large and well-characterized cohort of patients have a direct impact on clinical practice, emphasizing the importance of regular screening for SDR in individuals with T1D and intensive multifactorial interventions for CVD prevention throughout their life span.

This study also confirms and complements previous data on the continuum of diabetic vascular disease, by which microvascular and macrovascular disease do not seem to be separate diseases, but rather interconnected (10,12,18). The link is most obvious for DKD, which clearly emerges as a major predictor of cardiovascular morbidity and mortality (2,24,25). The association of SDR with CVD is less clear. However, our recent cross-sectional study with the Joslin Medalist Study showed that the CVD risk was in fact increased in patients with SDR on top of DKD compared with DKD alone (19). In the present longitudinal study, we were able to extend those results also to show that SDR alone, without DKD and after the adjustment for other traditional risk factors, increases CVD risk substantially. SDR further increases CVD risk in case DKD is present as well. In addition, the role of SDR as an independent CVD risk predictor is also supported by our data using albuminuria as a marker of DKD. This is important because albuminuria is a known predictor of diabetic retinopathy progression (26) as well as a recognized biomarker for CVD.”

“A novel finding is that, independently of any signs of DKD, the risk of PAD is increased twofold in the presence of SDR. Although this association has recently been highlighted in individuals with type 2 diabetes (10,29), the data in T1D are scarce (16,30). Notably, the previous studies mostly lack adjustments for DKD, the major predictor of mortality in patients with shorter diabetes duration. Both complications, besides sharing some conventional cardiovascular risk factors, may be linked by additional pathological processes involving changes in the microvasculature in both the retina and the vasa vasorum of the conductance vessels (31). […] Patients with T1D duration of >30 years face a continuously increased CVD risk that is further increased by the occurrence of advanced PDR. Therefore, by examining the retina, additional insight into individual CVD risk is gained and can guide the clinician to a more tailored approach to CVD prevention. Moreover, our findings suggest that the link between SDR and CVD is at least partially independent of traditional risk factors, and the mechanism behind the phenomenon warrants further research, aiming to find new therapies to alleviate the CVD burden more efficiently.”

The model selection method employed in the paper is far from optimal [“Variables for the model were chosen based on significant univariable associations.” – This is not the way to do things!], but regardless these are interesting results.

iii. Fasting Glucose Variability in Young Adulthood and Cognitive Function in Middle Age: The Coronary Artery Risk Development in Young Adults (CARDIA) Study.

“Individuals with type 2 diabetes (T2D) have 50% greater risk for the development of neurocognitive dysfunction relative to those without T2D (13). The American Diabetes Association recommends screening for the early detection of cognitive impairment for adults ≥65 years of age with diabetes (4). Coupled with the increasing prevalence of prediabetes and diabetes, this calls for better understanding of the impact of diabetes on cerebral structure and function (5,6). Among older individuals with diabetes, higher intraindividual variability in glucose levels around the mean is associated with worse cognition and the development of Alzheimer disease (AD) (7,8). […] Our objectives were to characterize fasting glucose (FG) variability during young adulthood before the onset of diabetes and to assess whether such variability in FG is associated with cognitive function in middle adulthood. We hypothesized that a higher variability of FG during young adulthood would be associated with a lower level of cognitive function in midlife compared with lower FG variability.”

“We studied 3,307 CARDIA (Coronary Artery Risk Development in Young Adults) Study participants (age range 18–30 years and enrolled in 1985–1986) at baseline and calculated two measures of long-term glucose variability: the coefficient of variation about the mean FG (CV-FG) and the absolute difference between successive FG measurements (average real variability [ARV-FG]) before the onset of diabetes over 25 and 30 years of follow-up. Cognitive function was assessed at years 25 (2010–2011) and 30 (2015–2016) with the Digit Symbol Substitution Test (DSST), Rey-Auditory Verbal Learning Test (RAVLT), Stroop Test, Montreal Cognitive Assessment, and category and letter fluency tests. We estimated the association between glucose variability and cognitive function test score with adjustment for clinical and behavioral risk factors, mean FG level, change in FG level, and diabetes development, medication use, and duration.

RESULTS After multivariable adjustment, 1-SD increment of CV-FG was associated with worse cognitive scores at year 25: DSST, standardized regression coefficient −0.95 (95% CI −1.54, −0.36); RAVLT, −0.14 (95% CI −0.27, −0.02); and Stroop Test, 0.49 (95% CI 0.04, 0.94). […] We did not find evidence for effect modification by race or sex for any variability-cognitive function association”

“CONCLUSIONS Higher intraindividual FG variability during young adulthood below the threshold of diabetes was associated with worse processing speed, memory, and language fluency in midlife independent of FG levels. […] In this cohort of black and white adults followed from young adulthood into middle age, we observed that greater intraindividual variability in FG below a diabetes threshold was associated with poorer cognitive function independent of behavioral and clinical risk factors. This association was observed above and beyond adjustment for concurrent glucose level; change in FG level during young adulthood; and diabetes status, duration, and medication use. Intraindividual glucose variability as determined by CV was more strongly associated with cognitive function than was absolute average glucose variability.”
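The two variability measures defined in the methods excerpt above are straightforward to compute. A minimal sketch with made-up glucose values (the excerpt does not say whether the study used the population or sample standard deviation for the CV; population SD is assumed here):

```python
from statistics import mean, pstdev

def cv_fg(fg):
    """CV-FG: coefficient of variation about the mean, SD / mean."""
    return pstdev(fg) / mean(fg)

def arv_fg(fg):
    """ARV-FG: average real variability, the mean absolute
    difference between successive measurements."""
    return mean(abs(b - a) for a, b in zip(fg, fg[1:]))

# Hypothetical fasting glucose values (mg/dL) across follow-up visits
fg = [90, 95, 88, 102, 91]
print(round(cv_fg(fg), 3))  # 0.053
print(arv_fg(fg))           # 9.25
```

Note the difference the conclusions point to: CV measures dispersion around the long-run mean, whereas ARV captures visit-to-visit swings, so the same set of values can score very differently on the two measures depending on their ordering.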

iv. Maternal Antibiotic Use During Pregnancy and Type 1 Diabetes in Children — A National Prospective Cohort Study. It is important that papers like these get published and read, even if the results may not sound particularly exciting:

“Prenatal prescription of antibiotics is common but may perturb the composition of the intestinal microbiota in the offspring. In childhood the latter may alter the developing immune system to affect the pathogenesis of type 1 diabetes (1). Previous epidemiological studies reported conflicting results regarding the association between early exposure to antibiotics and childhood type 1 diabetes (2,3). Here we investigated the association in a Danish register setting.

The Danish National Birth Cohort (DNBC) provided data from 100,418 pregnant women recruited between 1996 and 2002 and their children born between 1997 and 2003 (n = 96,840). The women provided information on exposures during and after pregnancy. Antibiotic prescription during pregnancy was obtained from the Danish National Prescription Registry (anatomical therapeutic chemical code J01) [it is important to note that: “In Denmark, purchasing antibiotics requires a prescription, and all purchases are registered at the Danish National Prescription Registry”], and type 1 diabetes diagnoses (diagnostic codes DE10 and DE14) during childhood and adolescence were obtained from the Danish National Patient Register. The children were followed until 2014 (mean follow-up time 14.3 years [range 11.5–18.4 years, SD 1.4]).”

“A total of 336 children developed type 1 diabetes during follow-up. Neither overall exposure (hazard ratio [HR] 0.90; 95% CI 0.68–1.18), number of courses (HR 0.36–0.97[…]), nor trimester-specific exposure (HR 0.81–0.89 […]) of antibiotics in utero was associated with childhood diabetes. Moreover, exposure to specific types of antibiotics in utero did not change the risk of childhood type 1 diabetes […] This large prospective Danish cohort study demonstrated that maternal use of antibiotics during pregnancy was not associated with childhood type 1 diabetes. Thus, the results from this study do not support a revision of the clinical recommendations on treatment with antibiotics during pregnancy.”

v. Decreasing Cumulative Incidence of End-Stage Renal Disease in Young Patients With Type 1 Diabetes in Sweden: A 38-Year Prospective Nationwide Study.

“Diabetic nephropathy is a devastating complication to diabetes. It can lead to end-stage renal disease (ESRD), which demands renal replacement therapy (RRT) with dialysis or kidney transplantation. In addition, diabetic nephropathy is associated with increased risk of cardiovascular morbidity and mortality (1,2). As a nation, Sweden, next to Finland, has the highest incidence of type 1 diabetes in the world (3), and the incidence of childhood-onset diabetes is increasing globally (4,5). The incidence of ESRD caused by diabetic nephropathy in these Nordic countries is fairly low as shown in recent studies, 3–8% at maximum 30 years’ of diabetes duration (6,7). This is to be compared with studies from Denmark in the 1980s that showed a cumulative incidence of diabetic nephropathy of 41% at 40 years of diabetes duration. Older, hospital-based cohort studies found that the incidence of persistent proteinuria seemed to peak at 25 years of diabetes duration; after that, the incidence levels off (8,9). This implies the importance of genetic susceptibility as a risk factor for diabetic nephropathy, which has also been indicated in recent genome-wide scan studies (10,11). Still, modifiable factors such as metabolic control are clearly of major importance in the development of diabetic nephropathy (12–15). Already in 1994, a decreasing incidence of diabetic nephropathy was seen in a hospital-based study in Sweden, and the authors concluded that this was mainly driven by better metabolic control (16). Young age at onset of diabetes has previously been found to protect, or postpone, the development of ESRD caused by diabetic nephropathy, while diabetes onset at older ages is associated with increased risk (7,9,17). In a previous study, we found that age at onset of diabetes affects men and women differently (7). Earlier studies have indicated a male predominance (8,18), while our previous study showed that the incidence of ESRD was similar in men and women with diabetes onset before 20 years of age, but with diabetes onset after 20 years of age, men had increased risk of developing ESRD compared with women. The current study analyzes the incidence of ESRD due to type 1 diabetes, and changes over time, in a large Swedish population-based cohort with a maximum follow-up of 38 years.”

“Earlier studies have shown that it takes ∼15 years to develop persistent proteinuria and another 10 to proceed to ESRD (9,25). In the current study population, no patients developed ESRD because of type 1 diabetes at a duration <14 years; thus only patients with diabetes duration of ≥14 years were included in the study. […] A total of 18,760 unique patients were included in the study: 10,560 (56%) men and 8,200 (44%) women. The mean age at the end of the study was somewhat lower for women, 38.9 years, compared with 40.2 years for men. Women tend to develop type 1 diabetes about a year earlier than men: mean age 15.0 years for women compared with 16.5 years for men. There was no difference regarding mean diabetes duration between men and women in the study (23.8 years for women and 23.7 years for men). A total of 317 patients had developed ESRD due to diabetes. The maximum diabetes duration was 38.1 years for patients in the SCDR and 32.6 years for the NDR and the DISS. The median time from onset of diabetes to ESRD was 22.9 years (minimum 14.1 and maximum 36.6). […] At follow-up, 77 patients with ESRD and 379 without ESRD had died […]. The risk of dying during the course of the study was almost 12 times higher among the ESRD patients (HR 11.9 [95% CI 9.3–15.2]) when adjusted for sex and age. Males had almost twice as high a risk of dying as female patients (HR 1.7 [95% CI 1.4–2.1]), adjusted for ESRD and age.”

“The overall incidence rate of ESRD during 445,483 person-years of follow-up was 0.71 per 1,000 person-years. […] The incidence rate increases with diabetes duration. For patients with diabetes onset at 0–9 and 10–19 years of age, there was an increase in incidence up to 36 years of duration; at longer durations, the number of cases is too small and results must be interpreted with caution. With diabetes onset at 20–34 years of age the incidence rate increases until 25 years of diabetes duration, and then a decrease can be observed […] In comparison of different time periods, the risk of developing ESRD was lower in patients with diabetes onset in 1991–2001 compared with onset in 1977–1984 (HR 3.5 [95% CI 2.3–5.3]) and 1985–1990 (HR 2.6 [95% CI 1.7–3.8]), adjusted for age at follow-up and sex. […] The lowest risk of developing ESRD was found in the group with onset of diabetes before the age of 10 years — both for males and females […]. With this group as reference, males diagnosed with diabetes at 10–19 or 20–34 years of age had increased risk of ESRD (HR 2.4 [95% CI 1.6–3.5] and HR 2.2 [95% CI 1.4–3.3]), respectively. For females, the risk of developing ESRD was also increased with diabetes onset at 10–19 years of age (HR 2.4 [95% CI 1.5–3.6]); however, when diabetes was diagnosed after the age of 20 years, the risk of developing ESRD was not increased compared with an early onset of diabetes (HR 1.4 [95% CI 0.8–3.4]).”
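The overall rate quoted above can be reproduced directly from the figures given earlier in the excerpt (317 ESRD cases over 445,483 person-years of follow-up):

```python
cases = 317
person_years = 445_483

# Incidence rate expressed per 1,000 person-years
rate_per_1000 = cases / person_years * 1_000
print(round(rate_per_1000, 2))  # 0.71, matching the paper's figure
```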

“By combining data from the SCDR, DISS, and NDR registers and identifying ESRD cases via the SRR, we have included close to all patients with type 1 diabetes in Sweden with diabetes duration >14 years who developed ESRD since 1991. The cumulative incidence of ESRD in this study is low: 5.6% (5.9% and 5.3% for males and females, respectively) at maximum 38 years of diabetes duration. For the first time, we could see a clear decrease in ESRD incidence in Sweden by calendar year of diabetes onset. The results are in line with a recent study from Norway that reported a modest incidence of 5.3% after 40 years of diabetes duration (27). In the current study, we found a decrease in the incidence rate after 25 years of diabetes duration in the group with diabetes onset at 20–34 years. With age at onset of diabetes 0–9 or 10–19 years, the ESRD incidence rate increases until 35 years of diabetes duration, but owing to the limited number of patients with longer duration we cannot determine whether the peak incidence has been reached or not. We can, however, conclude that the onset of ESRD has been postponed at least 10 years compared with that in older prospective cohort studies (8,9). […] In conclusion, this large population-based study shows a low incidence of ESRD in Swedish patients with onset of type 1 diabetes after 1977 and an encouraging decrease in risk of ESRD, which is probably an effect of improved diabetes care. We confirm that young age at onset of diabetes protects against, or prolongs, the time until development of severe complications.”

vi. Hypoglycemia and Incident Cognitive Dysfunction: A Post Hoc Analysis From the ORIGIN Trial. Another potentially important negative result, this one related to the link between hypoglycemia and cognitive impairment:

“Epidemiological studies have reported a relationship between severe hypoglycemia, cognitive dysfunction, and dementia in middle-aged and older people with type 2 diabetes. However, whether severe or nonsevere hypoglycemia precedes cognitive dysfunction is unclear. Thus, the aim of this study was to analyze the relationship between hypoglycemia and incident cognitive dysfunction in a group of carefully followed patients using prospectively collected data in the Outcome Reduction with Initial Glargine Intervention (ORIGIN) trial.”

“This prospective cohort analysis of data from a randomized controlled trial included individuals with dysglycemia who had additional cardiovascular risk factors and a Mini-Mental State Examination (MMSE) score ≥24 (N = 11,495). Severe and nonsevere hypoglycemic events were collected prospectively during a median follow-up time of 6.2 years. Incident cognitive dysfunction was defined as either reported dementia or an MMSE score of <24. The hazard of at least one episode of severe or nonsevere hypoglycemia for incident cognitive dysfunction (i.e., the dependent variable) from the time of randomization was estimated using a Cox proportional hazards model after adjusting for baseline cardiovascular disease, diabetes status, treatment allocation, and a propensity score for either form of hypoglycemia.

RESULTS This analysis did not demonstrate an association between severe hypoglycemia and incident cognitive impairment either before (hazard ratio [HR] 1.16; 95% CI 0.89, 1.52) or after (HR 1.00; 95% CI 0.76, 1.31) adjusting for the severe hypoglycemia propensities. Conversely, nonsevere hypoglycemia was inversely related to incident cognitive impairment both before (HR 0.59; 95% CI 0.52, 0.68) and after (HR 0.58; 95% CI 0.51, 0.67) adjustment.

CONCLUSIONS Hypoglycemia did not increase the risk of incident cognitive dysfunction in 11,495 middle-aged individuals with dysglycemia. […] These findings provide no support for the hypothesis that hypoglycemia causes long-term cognitive decline and are therefore reassuring for patients and their health care providers.”

vii. Effects of Severe Hypoglycemia on Cardiovascular Outcomes and Death in the Veterans Affairs Diabetes Trial.

“The VADT was a large randomized controlled trial aimed at determining the effects of intensive treatment of T2DM in U.S. veterans (9). In the current study, we examine predictors and consequences of severe hypoglycemia within the VADT and report several key findings. First, we identified risk factors for severe hypoglycemia that included intensive therapy, insulin use, proteinuria, and autonomic neuropathy. Consistent with prior reports in glucose-lowering studies, severe hypoglycemia occurred at a threefold significantly greater rate in those assigned to intensive glucose lowering. Second, severe hypoglycemia was associated with an increased risk of cardiovascular events, cardiovascular mortality, and all-cause mortality in both the standard and the intensive treatment groups. Of importance, however, severe hypoglycemia was associated with an even greater risk of all-cause mortality in the standard compared with the intensive treatment group. Third, the association between severe hypoglycemia and serious cardiovascular events was greater in individuals with an elevated risk for CVD at baseline.”

“Mean participant characteristics were as follows: age, 60.4 years; duration of diabetes, 11.5 years; BMI, 31.3 kg/m2; and HbA1c, 9.4%. Seventy-two percent had hypertension, 40% had a previous cardiovascular event, 62% had a microvascular complication, and 52% had baseline insulin use. The standard and intensive treatment groups included 899 and 892 participants, respectively. […] During the study, the standard treatment group averaged 3.7 severe hypoglycemic events per 100 patient-years versus 10.3 events per 100 patient-years in the intensive treatment group (P < 0.001). Overall, the combined rate of severe hypoglycemia during follow-up in the VADT from both study arms was 7.0 per 100 patient-years. […] Severe hypoglycemia within the prior 3 months was associated with an increased risk for composite cardiovascular outcome (HR 1.9 [95% CI 1.1, 3.5]; P = 0.03), cardiovascular mortality (3.7 [1.3, 10.4]; P = 0.01), and all-cause mortality (2.4 [1.1, 5.1]; P = 0.02) […]. More distant hypoglycemia (4–6 months prior) had no independently associated increased risk with adverse events or death. The association of severe hypoglycemia with cardiovascular events or cardiovascular mortality were not significantly different between the intensive and standard treatment groups […]. In contrast, the association of severe hypoglycemia with all-cause mortality was significantly greater in the standard versus the intensive treatment group (6.7 [2.7, 16.6] vs. 0.92 [0.2, 3.8], respectively; P = 0.019 for interaction). Because of the relative paucity of repeated severe hypoglycemic events in either study group, there was insufficient power to determine whether more than one episode of severe hypoglycemia increased the risk of subsequent outcomes.”

“Although recent severe hypoglycemia increased the risk of major cardiovascular events for those with a 10-year cardiovascular risk score of 35% (HR 2.88 [95% CI 1.57, 5.29]; absolute risk increase per 10 episodes = 0.252; number needed to harm = 4), hypoglycemia was not significantly associated with increased major cardiovascular events for those with a risk score of ≤7.5%. The absolute associated risk of major adverse cardiovascular events, cardiovascular mortality, and all-cause mortality increased with higher CVD risk for all three outcomes […]. We were not able to identify, however, any group of patients in either treatment arm in which severe hypoglycemia did not increase the risk of CVD events and mortality at least to some degree.”
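The number-needed-to-harm figure in the passage above follows from the reported absolute risk increase, since NNH is conventionally the reciprocal of the absolute risk increase:

```python
# Absolute risk increase per 10 severe hypoglycemia episodes,
# as reported for the high-CVD-risk group in the paper
ari = 0.252

# Number needed to harm: exposures per one additional adverse event
nnh = 1 / ari
print(round(nnh))  # 4, matching the reported number needed to harm
```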

“Although the explanation for the relatively greater risk of serious adverse events after severe hypoglycemia in the standard treatment group is unknown, we agree with previous reports that milder episodes of hypoglycemia, which are more frequent in the intensive treatment group, may quantitatively blunt the release of neuroendocrine and autonomic nervous system responses and their resultant metabolic and cardiovascular responses to hypoglycemia, thereby lessening the impact of subsequent severe hypoglycemic episodes (18,19). Episodes of prior hypoglycemia have rapid and significant effects on reducing (i.e., blunting) subsequent counterregulatory responses to a falling plasma glucose level (20,21). Thus, if one of the homeostatic counterregulatory responses (e.g., epinephrine) also can initiate unwanted intravascular atherothrombotic consequences, it may follow that severe hypoglycemia in a more intensively treated and metabolically well-controlled individual would provoke a reduced counterregulatory response. Although hypoglycemia frequency may be increased in these individuals, this may also lower unwanted and deleterious effects on the vasculature from counterregulatory responses. On the other hand, an isolated severe hypoglycemic event in a less well-controlled individual could provoke a relatively greater counterregulatory response with a proportionally attendant elevated risk for adverse vascular effects (22). In support of this, we previously reported in a subset of VADT participants that despite more frequent serious hypoglycemia in the intensive therapy group, progression of coronary artery calcium scores after severe hypoglycemia only occurred in the standard treatment group (23).”

“In the current study, we demonstrate that the association of severe hypoglycemia with subsequent serious adverse cardiovascular events and death occurred within the preceding 3 months but not beyond. The temporal relationship and proximity of severe hypoglycemia to a subsequent serious cardiovascular event and/or death has been investigated in a number of recent clinical trials in T2DM (25,13,14). All these trials consistently reported an association between severe hypoglycemia and subsequent serious adverse events. However, the proximity of severe hypoglycemic events to subsequent adverse events and death varies. In ADVANCE, a severe hypoglycemic episode increased the risk of major cardiovascular events for both the next 3 months and the following 6 months. In A Trial Comparing Cardiovascular Safety of Insulin Degludec Versus Insulin Glargine in Subjects With Type 2 Diabetes at High Risk of Cardiovascular Events (DEVOTE) and the Liraglutide Effect and Action in Diabetes: Evaluation of Cardiovascular Outcome Results (LEADER) trial, there was an increased risk of either serious cardiovascular events or all-cause mortality starting 15 days and extending (albeit with decreasing risk) up to 1 year after severe hypoglycemia (13,14).”

June 15, 2019 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Nephrology, Neurology, Ophthalmology, Studies | Leave a comment

Viruses

This book is not great, but it’s also not bad – I ended up giving it three stars on goodreads, though it was much closer to 2 stars than to 4. It’s a decent introduction to the field of virology, but not more than that. Below are some quotes and links related to the book’s coverage.

“[I]t was not until the invention of the electron microscope in 1939 that viruses were first visualized and their structure elucidated, showing them to be a unique class of microbes. Viruses are not cells but particles. They consist of a protein coat which surrounds and protects their genetic material, or, as the famous immunologist Sir Peter Medawar (1915–87) termed it, ‘a piece of bad news wrapped up in protein’. The whole structure is called a virion and the outer coat is called the capsid. Capsids come in various shapes and sizes, each characteristic of the virus family to which it belongs. They are built up of protein subunits called capsomeres and it is the arrangement of these around the central genetic material that determines the shape of the virion. For example, pox viruses are brick-shaped, herpes viruses are icosahedral (twenty-sided spheres), the rabies virus is bullet-shaped, and the tobacco mosaic virus is long and thin like a rod […]. Some viruses have an outer layer surrounding the capsid called an envelope. […] Most viruses are too small to be seen under a light microscope. In general, they are around 100 to 500 times smaller than bacteria, varying in size from 20 to 300 nanometres in diameter […] Inside the virus capsid is its genetic material, or genome, which is either RNA or DNA depending on the type of virus […] Viruses usually have between 4 and 200 genes […] Cells of free-living organisms, including bacteria, contain a variety of organelles essential for life such as ribosomes that manufacture proteins, mitochondria, or other structures that generate energy, and complex membranes for transporting molecules within the cell, and also across the cell wall. Viruses, not being cells, have none of these and are therefore inert until they infect a living cell. Then they hijack a cell’s organelles and use what they need, often killing the cell in the process. 
Thus viruses are obliged to obtain essential components from other living things to complete their life cycle and are therefore called obligate parasites.”

“Plant viruses either enter cells through a break in the cell wall or are injected by a sap-sucking insect vector like aphids. They then spread very efficiently from cell to cell via plasmodesmata, pores that transport molecules between cells. In contrast, animal viruses infect cells by binding to specific cell surface receptor molecules. […] Once a virus has bound to its cellular receptor, the capsid penetrates the cell and its genome (DNA or RNA) is released into the cell cytoplasm. The main ‘aim’ of a virus is to reproduce successfully, and to do this its genetic material must download the information it carries. Mostly, this will take place in the cell’s nucleus where the virus can access the molecules it needs to begin manufacturing its own proteins. Some large viruses, like pox viruses, carry genes for the enzymes they need to make their proteins and so are more self-sufficient and can complete the whole life cycle in the cytoplasm. Once inside a cell, DNA viruses simply masquerade as pieces of cellular DNA, and their genes are transcribed and translated using as much of the cell’s machinery as they require. […] Because viruses have a high mutation rate, significant evolutionary change, estimated at around 1 per cent per year for HIV, can be measured over a short timescale. […] RNA viruses have no proof-reading system so they have a higher mutation rate than DNA viruses. […] By constantly evolving, […] viruses appear to have honed their skills for spreading from one host to another to reach an amazing degree of sophistication. For instance, the common cold virus (rhinovirus), while infecting cells lining the nasal cavities, tickles nerve endings to cause sneezing. During these ‘explosions’, huge clouds of virus-carrying mucus droplets are forcefully ejected, then float in the air until inhaled by other susceptible hosts. Similarly, by wiping out sheets of cells lining the intestine, rotavirus prevents the absorption of fluids from the gut cavity. 
This causes severe diarrhea and vomiting that effectively extrudes the virus’s offspring back into the environment to reach new hosts. Other highly successful viruses hitch a ride from one host to another with insects. […] As a virus’s generation time is so much shorter than ours, the evolution of genetic resistance to a new human virus is painfully slow, and constantly leaves viruses with the advantage.”

“The phytoplankton is a group of organisms that uses solar energy and carbon dioxide to generate energy by photosynthesis. As a by-product of this reaction, they produce almost half of the world’s oxygen and are therefore of vital importance to the chemical stability of the planet. Phytoplankton forms the base of the whole marine food-web, being grazed upon by zooplankton and young marine animals which in turn fall prey to fish and higher marine carnivores. By infecting and killing plankton microbes, marine viruses control the dynamics of all these essential populations and their interactions. For example, the common and rather beautiful phytoplankton Emiliania huxleyi regularly undergoes blooms that turn the ocean surface an opaque blue over areas so vast that they can be detected from space by satellites. These blooms disappear as quickly as they arise, and this boom-and-bust cycle is orchestrated by the viruses in the community that specifically infect E. huxleyi. Because they can produce thousands of offspring from every infected cell, virus numbers amplify in a matter of hours and so act as a rapid-response team, killing most of the bloom microbes in just a few days. […] Overall, marine viruses kill an estimated 20-40 per cent of marine bacteria every day, and as the major killer of marine microbes, they profoundly affect the carbon cycle by the so-called ‘viral shunt’.”

“By the end of 2015 WHO reported 36.7 million people living with HIV globally, 70 per cent of whom are in sub-Saharan Africa. Since the first identification of HIV-induced acquired immunodeficiency syndrome (AIDS) approximately 78 million people have been infected with HIV, causing around 35 million deaths […] Antiviral drugs are key in curtailing HIV spread and are being rolled out worldwide, with present coverage of around 46 per cent of those in need. […] The HIVs are most closely related to primate retroviruses called simian immunodeficiency viruses (SIVs) and it is now clear that these HIV-like viruses have jumped from primates to humans in central Africa on several occasions in the past giving rise to human infections with HIV-1 types M, N, O, and P as well as HIV-2. Yet only one of these viruses, HIV-1 type M, has succeeded in spreading globally. The ancestor of this virus has been traced to a subspecies of chimpanzees (Pan troglodytes troglodytes), among whom it can cause an AIDS-like disease. Since these animals are hunted for bush meat, it is most likely that human infection occurred by blood contamination during the killing and butchering process. This event probably took place in south-east Cameroon where chimpanzees carrying an SIV most closely related to HIV-1 type M live.”

“Flu viruses are orthomyxoviruses with an RNA genome with eight genes that are segmented, meaning that instead of being a continuous RNA chain, each gene forms a separate strand. The H (haemagglutinin) and N (neuraminidase) genes are the most important in stimulating protective host immunity. There are sixteen different H and nine different N genes, all of which can be found in all combinations in bird flu viruses. Because these genes are separate RNA strands, on occasions they become mixed up, or recombined. So if two flu A viruses with different H and/or N genes infect a single cell, the offspring will carry varying combinations of genes from the two parent viruses. Most of these viruses will be unable to infect humans, but occasionally a new virus strain is produced that can jump directly to humans and cause a pandemic. […] The emergence of almost all recent novel flu viruses has been traced to China where they circulate freely among animals kept in cramped conditions in farms and live bird markets. […] once established in humans their spread has been much enhanced by travel, particularly air travel that can take a virus inside a traveller across the globe before they even realize they are infected. […] With over a billion people worldwide boarding international flights every year, novel viruses have an efficient mechanism for rapid spread.”

“Once an acute emerging virus such as a new strain of flu is successfully established in a population, it generally settles into a mode of cyclical epidemics during which many susceptible people are infected and become immune to further attack. When most are immune, the virus moves on, only returning when a new susceptible population has emerged, which generally consists of those born since the last epidemic. Before vaccination programmes became widespread, young children suffered from a series of well-recognized infectious diseases called the ‘childhood infections’. These included measles, mumps, rubella, and chickenpox, all caused by viruses […] following the introduction of vaccine programmes these have become a rarity, particularly in the developed world. […] Of the three viruses, measles is the most infectious and produces the severest disease. It killed millions of children each year before vaccination was introduced in the mid-20th century. Even today, this virus kills over 70,000 children annually in countries with low vaccine coverage. […] In developing countries, measles kills 1-5 per cent of those it infects”.

“Smallpox virus is in a class of its own as the world’s worst killer virus. It first infected humans at least 5,000 years ago and killed around 300 million in the 20th century alone. The virus killed up to 30 per cent of those it infected, scarring and blinding many of the survivors. […] Worldwide, eradication of smallpox was declared in 1980.”

“Viruses spread between hosts in many different ways, but those that cause acute epidemics generally utilize fast and efficient methods, such as the airborne or faecal-oral routes. […] Broadly speaking, virus infections are distinguished by the organs they affect, with airborne viruses mainly causing respiratory illnesses, […] and those transmitted by faecal-oral contamination causing intestinal upsets, with nausea, vomiting, and diarrhoea. There are literally thousands of viruses capable of causing human epidemics […] worldwide, acute respiratory infections, mostly viral, cause an estimated four million deaths a year in children under 5. […] Most people get two or three colds a year, suggesting that the immune system, which is so good at protecting us against a second attack of measles, mumps, or rubella, is defeated by the common cold virus. But this is not the case. In fact, there are so many viruses that cause the typical symptoms of blocked nose, headache, malaise, sore throat, sneezing, coughing, and sometimes fever, that even if we live for a hundred years, we will not experience them all. The common cold virus, or rhinovirus, alone has over one hundred different types, and there are many other viruses that infect the cells lining the nose and throat and cause similar symptoms, often with subtle variations. […] Viruses that target the gut are just as diverse as respiratory viruses […] Rotaviruses are a major cause of gastroenteritis globally, particularly targeting children under 5. The disease varies in severity […] rotaviruses cause over 600,000 infant deaths a year worldwide […] Noroviruses are the second most common cause of viral gastroenteritis after rotaviruses, producing a milder disease of shorter duration. 
These viruses account for around 23 million cases of gastroenteritis every year […] Many virus families such as rotaviruses that rely on faecal-oral transmission and cause gastroenteritis in humans produce the same symptoms in animals, resulting in great economic loss to the farming industry. […] over the centuries, Rinderpest virus, the cause of cattle plague, has probably been responsible for more loss and hardship than any other. […] Rinderpest is classically described by the three Ds: discharge, diarrhoea, and death, the latter being caused by fluid loss with rapid dehydration. The disease kills around 90 per cent of animals infected. Rinderpest used to be a major problem in Europe and Asia, and when it was introduced into Africa in the late 19th century it killed over 90 per cent of cattle, with devastating economic loss. The Global Rinderpest Eradication Programme was set up in the 1980s aiming to use the effective vaccine to rid the world of the virus by 2010. This was successful, and in October 2010 the disease was officially declared eradicated, the first animal disease and second infectious disease ever to be eliminated.”

“At present, 1.8 million virus-associated cancers are diagnosed worldwide annually. This accounts for 18 per cent of all cancers, but since these human tumour viruses were only identified fairly recently, it is probable that there are several more out there waiting to be discovered. […] Primary liver cancer is a major global health problem, being one of the ten most common cancers worldwide, with over 250,000 cases diagnosed every year and only 5 per cent of sufferers surviving five years. The tumour is more common in men than women and is most prevalent in sub-Saharan Africa and South East Asia where the incidence reaches over 30 per 100,000 population per year, compared to fewer than 5 per 100,000 in the USA and Europe. Up to 80 per cent of these tumours are caused by a hepatitis virus, the remainder being related to liver damage from toxic agents such as alcohol. […] hepatitis B and C viruses cause liver cancer. […] a large study carried out on 22,000 men in Taiwan in the 1990s showed that those persistently infected with HBV were over 200 times more likely than non-carriers to develop liver cancer, and that over half the deaths in this group were due to liver cancer or cirrhosis. […] A vaccine against HBV is available, and its use has already caused a decline in HBV-related liver cancer in Taiwan, where a vaccination programme was implemented in the 1980s”.

“Most persistent viruses have evolved to cause mild or even asymptomatic infections, since a life-threatening disease would not only be detrimental to the host but also deprive the virus of its home. Indeed, some viruses apparently cause no ill effects at all, and have been discovered only by chance. One example is TTV, a tiny DNA virus found in 1997 during the search for the cause of hepatitis and named after the initials (TT) of the patient from whom it was first isolated. We now know that TTV, and its relative TTV-like mini virus, represent a whole spectrum of similar viruses that are carried by almost all humans, non-human primates, and a variety of other vertebrates, but so far they have not been associated with any disease. With modern, highly sensitive molecular techniques for identifying non-pathogenic viruses, we can expect to find more of these silent passengers in the future. […] Historically, diagnosis and treatment of virus infections have lagged far behind those of bacterial diseases and are only now catching up. […] Diagnostic laboratories are still unable to find a culprit virus in many so-called ‘viral’ meningitis, encephalitis, and respiratory infections. This strongly suggests that there are many pathogenic viruses waiting to be discovered”.

“There is no doubt that although vaccines are expensive to prepare and test, they are the safest, easiest, and most cost-effective way of controlling infectious diseases worldwide.”

Virology. Virus. RNA virus. DNA virus. Retrovirus. Reverse transcriptase. Integrase. Provirus.
Germ theory of disease.
Antonie van Leeuwenhoek. Louis Pasteur. Robert Koch. Adolf Mayer. Dmitri Ivanovsky. Martinus Beijerinck.
Tobacco mosaic virus.
Mimivirus.
Viral evolution – origins.
White spot syndrome.
Fibropapillomatosis.
Acyrthosiphon pisum.
Vibrio_cholerae#Genome (Vibrio cholerae are bacteria, but viruses play a very important role here regarding the toxin-producing genes – “Only cholera bacteria infected with the toxigenic phage are pathogenic to humans”).
Yellow fever.
Dengue fever.
CCR5.
Immune system. Cytokine. Interferon. Macrophage. Lymphocyte. Antigen. CD4+ T cells. CD8+ T cells. Antibody. Regulatory T cell. Autoimmunity.
Zoonoses.
Arbovirus. Coronavirus. SARS-CoV. MERS-CoV. Ebolavirus. Henipavirus. Influenza virus. H5N1. HPAI. H7N9. Foot-and-mouth disease. Monkeypox virus. Chikungunya virus. Schmallenberg virus. Zika virus. Rift valley fever. Bluetongue disease. Arthrogryposis. West Nile fever. Chickenpox. Polio. Bocavirus.
Sylvatic cycle.
Nosocomial infections.
Subacute sclerosing panencephalitis.
Herpesviridae. CMV. Herpes simplex virus. Epstein–Barr virus. Human herpesvirus 6. Human betaherpesvirus 7. Kaposi’s sarcoma-associated herpesvirus (KSHV). Varicella-zoster virus (VZV). Infectious mononucleosis. Hepatitis. Rous sarcoma virus. Human T-lymphotropic virus. Adult T-cell leukemia. HPV. Cervical cancer.
Oncovirus. Myc.
Variolation. Edward Jenner. Mary Wortley Montagu. Benjamin Jesty. James Phipps. Joseph Meister. Jonas Salk. Albert Sabin.
Marek’s disease. Rabies. Post-exposure prophylaxis.
Vaccine.
Aciclovir. Oseltamivir.
PCR.


June 10, 2019 Posted by | Biology, Books, Cancer/oncology, Immunology, Infectious disease, Medicine, Microbiology, Molecular biology | Leave a comment

Random stuff

i. Your Care Home in 120 Seconds. Some quotes:

“In order to get an overall estimate of mental power, psychologists have chosen a series of tasks to represent some of the basic elements of problem solving. The selection is based on looking at the sorts of problems people have to solve in everyday life, with particular attention to learning at school and then taking up occupations with varying intellectual demands. Those tasks vary somewhat, though they have a core in common.

Most tests include Vocabulary, examples: either asking for the definition of words of increasing rarity; or the names of pictured objects or activities; or the synonyms or antonyms of words.

Most tests include Reasoning, examples: either determining which pattern best completes the missing cell in a matrix (like Raven’s Matrices); or putting in the word which completes a sequence; or finding the odd word out in a series.

Most tests include visualization of shapes, examples: determining the correspondence between a 3-D figure and alternative 2-D figures; determining the pattern of holes that would result from a sequence of folds and a punch through folded paper; determining which combinations of shapes are needed to fill a larger shape.

Most tests include episodic memory, examples: number of idea units recalled across two or three stories; number of words recalled from across 1 to 4 trials of a repeated word list; number of words recalled when presented with a stimulus term in a paired-associate learning task.

Most tests include a rather simple set of basic tasks called Processing Skills. They are rather humdrum activities, like checking for errors, applying simple codes, and checking for similarities or differences in word strings or line patterns. They may seem low grade, but they are necessary when we try to organise ourselves to carry out planned activities. They tend to decline with age, leading to patchy, unreliable performance, and a tendency to muddled and even harmful errors. […]

A brain scan, for all its apparent precision, is not a direct measure of actual performance. Currently, scans are not as accurate in predicting behaviour as is a simple test of behaviour. This is a simple but crucial point: so long as you are willing to conduct actual tests, you can get a good understanding of a person’s capacities even on a very brief examination of their performance. […] There are several tests which have the benefit of being quick to administer and powerful in their predictions. […] All these tests are good at picking up illness related cognitive changes, as in diabetes. (Intelligence testing is rarely criticized when used in medical settings). Delayed memory and working memory are both affected during diabetic crises. Digit Symbol is reduced during hypoglycaemia, as are Digits Backwards. Digit Symbol is very good at showing general cognitive changes from age 70 to 76. Again, although this is a limited time period in the elderly, the decline in speed is a notable feature. […]

The most robust and consistent predictor of cognitive change within old age, even after control for all the other variables, was the presence of the APOE e4 allele. APOE e4 carriers showed over half a standard deviation more general cognitive decline compared to noncarriers, with particularly pronounced decline in their Speed and numerically smaller, but still significant, declines in their verbal memory.

It is rare to have a big effect from one gene. Few people carry it, and it is not good to have.

ii. What are common mistakes junior data scientists make?

Apparently the OP had second thoughts about this query so s/he deleted the question and marked the thread nsfw (??? …nothing remotely nsfw in that thread…). Fortunately the replies are all still there, and there are quite a few good responses in the thread. I added some examples below:

“I think underestimating the domain/business side of things and focusing too much on tools and methodology. As a fairly new data scientist myself, I found myself humbled during this one project where I had spent a lot of time tweaking parameters and making sure the numbers worked just right. After going into a meeting about it, it became clear pretty quickly that my little micro-optimizations were hardly important, and instead there were X Y Z big picture considerations I was missing in my analysis.”

[…]

  • Forgetting to check how actionable the model (or features) are. It doesn’t matter if you have an amazing model for cancer prediction, if it’s based on features from tests performed as part of the post-mortem. Similarly, predicting account fraud after the money has been transferred is not going to be very useful.

  • Emphasis on lack of understanding of the business/domain.

  • Lack of communication and presentation of the impact. If improving your model (which is a quarter of the overall pipeline) by 10% in reducing customer churn is worth just ~100K a year, then it may not be worth putting into production in a large company.

  • Underestimating how hard it is to productionize models. This includes acting on the models outputs, it’s not just “run model, get score out per sample”.

  • Forgetting about model and feature decay over time, concept drift.

  • Underestimating the amount of time for data cleaning.

  • Thinking that data cleaning errors will be complicated.

  • Thinking that data cleaning will be simple to automate.

  • Thinking that automation is always better than heuristics from domain experts.

  • Focusing on modelling at the expense of [everything] else”

“unhealthy attachments to tools. It really doesn’t matter if you use R, Python, SAS or Excel, did you solve the problem?”

“Starting with actual modelling way too soon: you’ll end up with a model that’s really good at answering the wrong question.
First, make sure that you’re trying to answer the right question, with the right considerations. This is typically not what the client initially told you. It’s (mainly) a data scientist’s job to help the client with formulating the right question.”

iii. Some random wikipedia links: Ottoman–Habsburg wars. Planetshine. Anticipation (genetics). Cloze test. Loop quantum gravity. Implicature. Starfish Prime. Stall (fluid dynamics). White Australia policy. Apostatic selection. Deimatic behaviour. Anti-predator adaptation. Lefschetz fixed-point theorem. Hairy ball theorem. Macedonia naming dispute. Holevo’s theorem. Holmström’s theorem. Sparse matrix. Binary search algorithm. Battle of the Bismarck Sea.

iv. 5-HTTLPR: A Pointed Review. This one is hard to quote, you should read all of it. I did however decide to add a few quotes from the post, as well as a few quotes from the comments:

“…what bothers me isn’t just that people said 5-HTTLPR mattered and it didn’t. It’s that we built whole imaginary edifices, whole castles in the air on top of this idea of 5-HTTLPR mattering. We “figured out” how 5-HTTLPR exerted its effects, what parts of the brain it was active in, what sorts of things it interacted with, how its effects were enhanced or suppressed by the effects of other imaginary depression genes. This isn’t just an explorer coming back from the Orient and claiming there are unicorns there. It’s the explorer describing the life cycle of unicorns, what unicorns eat, all the different subspecies of unicorn, which cuts of unicorn meat are tastiest, and a blow-by-blow account of a wrestling match between unicorns and Bigfoot.

This is why I start worrying when people talk about how maybe the replication crisis is overblown because sometimes experiments will go differently in different contexts. The problem isn’t just that sometimes an effect exists in a cold room but not in a hot room. The problem is more like “you can get an entire field with hundreds of studies analyzing the behavior of something that doesn’t exist”. There is no amount of context-sensitivity that can help this. […] The problem is that the studies came out positive when they shouldn’t have. This was a perfectly fine thing to study before we understood genetics well, but the whole point of studying is that, once you have done 450 studies on something, you should end up with more knowledge than you started with. In this case we ended up with less. […] I think we should take a second to remember that yes, this is really bad. That this is a rare case where methodological improvements allowed a conclusive test of a popular hypothesis, and it failed badly. How many other cases like this are there, where there’s no geneticist with a 600,000 person sample size to check if it’s true or not? How many of our scientific edifices are built on air? How many useless products are out there under the guise of good science? We still don’t know.”

A few more quotes from the comment section of the post:

“most things that are obviously advantageous or deleterious in a major way aren’t gonna hover at 10%/50%/70% allele frequency.

Population variance where they claim some gene found in > [non trivial]% of the population does something big… I’ll mostly tend to roll to disbelieve.

But if someone claims a family/village with a load of weirdly depressed people (or almost any other disorder affecting anything related to the human condition in any horrifying way you can imagine) are depressed because of a genetic quirk… believable but still make sure they’ve confirmed it segregates with the condition or they’ve got decent backing.

And a large fraction of people have some kind of rare disorder […]. Long tail. Lots of disorders so quite a lot of people with something odd.

It’s not that single variants can’t have a big effect. It’s that really big effects either win and spread to everyone or lose and end up carried by a tiny minority of families where it hasn’t had time to die out yet.

Very few variants with big effect sizes are going to be half way through that process at any given time.

Exceptions are

1: mutations that confer resistance to some disease as a tradeoff for something else […] 2: Genes that confer a big advantage against something that’s only a very recent issue.”

“I think the summary could be something like:
A single gene determining 50% of the variance in any complex trait is inherently atypical, because variance depends on the population plus environment and the selection for such a gene would be strong, rapidly reducing that variance.
However, if the environment has recently changed or is highly variable, or there is a trade-off against adverse effects it is more likely.
Furthermore – if the test population is specifically engineered to target an observed trait following an apparently Mendelian inheritance pattern – such as a family group or a small genetically isolated population plus controls – 50% of the variance could easily be due to a single gene.”
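The point about allele frequency and effect size can be made concrete with the standard additive model from quantitative genetics, in which a biallelic locus with allele frequency p and per-allele effect a contributes 2p(1−p)a² of trait variance. This is my own back-of-the-envelope sketch, not something from the thread:

```python
# Share of trait variance (in units of a trait with variance 1)
# explained by a single biallelic locus under an additive model:
# contribution = 2 * p * (1 - p) * a**2, where p is the allele
# frequency and a the per-allele effect in trait-SD units.
def variance_explained(p, a):
    return 2 * p * (1 - p) * a ** 2

# For a common variant to explain 50% of the variance, the per-allele
# effect has to be a full standard deviation -- exactly the kind of
# large effect that selection would be expected to erode quickly.
print(variance_explained(0.5, 1.0))   # 0.5
# The same effect size at a rare-variant frequency explains almost nothing.
print(variance_explained(0.01, 1.0))  # ~0.02
```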

v. Less research is needed.

“The most over-used and under-analyzed statement in the academic vocabulary is surely “more research is needed”. These four words, occasionally justified when they appear as the last sentence in a Masters dissertation, are as often to be found as the coda for a mega-trial that consumed the lion’s share of a national research budget, or that of a Cochrane review which began with dozens or even hundreds of primary studies and progressively excluded most of them on the grounds that they were “methodologically flawed”. Yet however large the trial or however comprehensive the review, the answer always seems to lie just around the next empirical corner.

With due respect to all those who have used “more research is needed” to sum up months or years of their own work on a topic, this ultimate academic cliché is usually an indicator that serious scholarly thinking on the topic has ceased. It is almost never the only logical conclusion that can be drawn from a set of negative, ambiguous, incomplete or contradictory data.” […]

“Here is a quote from a typical genome-wide association study:

“Genome-wide association (GWA) studies on coronary artery disease (CAD) have been very successful, identifying a total of 32 susceptibility loci so far. Although these loci have provided valuable insights into the etiology of CAD, their cumulative effect explains surprisingly little of the total CAD heritability.”  [1]

The authors conclude that not only is more research needed into the genomic loci putatively linked to coronary artery disease, but that – precisely because the model they developed was so weak – further sets of variables (“genetic, epigenetic, transcriptomic, proteomic, metabolic and intermediate outcome variables”) should be added to it. By adding in more and more sets of variables, the authors suggest, we will progressively and substantially reduce the uncertainty about the multiple and complex gene-environment interactions that lead to coronary artery disease. […] We predict tomorrow’s weather, more or less accurately, by measuring dynamic trends in today’s air temperature, wind speed, humidity, barometric pressure and a host of other meteorological variables. But when we try to predict what the weather will be next month, the accuracy of our prediction falls to little better than random. Perhaps we should spend huge sums of money on a more sophisticated weather-prediction model, incorporating the tides on the seas of Mars and the flutter of butterflies’ wings? Of course we shouldn’t. Not only would such a hyper-inclusive model fail to improve the accuracy of our predictive modeling, there are good statistical and operational reasons why it could well make it less accurate.”

vi. Why software projects take longer than you think – a statistical model.

“Anyone who has built software for a while knows that estimating how long something is going to take is hard. It’s hard to come up with an unbiased estimate of how long something will take, when fundamentally the work in itself is about solving something. One pet theory I’ve had for a really long time is that some of this is really just a statistical artifact.

Let’s say you estimate a project to take 1 week. Let’s say there are three equally likely outcomes: either it takes 1/2 week, or 1 week, or 2 weeks. The median outcome is actually the same as the estimate: 1 week, but the mean (aka average, aka expected value) is 7/6 = 1.17 weeks. The estimate is actually calibrated (unbiased) for the median (which is 1), but not for the mean.
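The arithmetic of the three-outcome example can be verified directly:

```python
import statistics

# Three equally likely actual durations, in weeks, for a 1-week estimate
outcomes = [0.5, 1.0, 2.0]

print(statistics.median(outcomes))  # 1.0 — matches the estimate
print(statistics.mean(outcomes))    # 1.1666... = 7/6 weeks
```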

A reasonable model for the “blowup factor” (actual time divided by estimated time) would be something like a log-normal distribution. If the estimate is one week, then let’s model the real outcome as a random variable distributed according to the log-normal distribution around one week. This has the property that the median of the distribution is exactly one week, but the mean is much larger […] Intuitively the reason the mean is so large is that tasks that complete faster than estimated have no way to compensate for the tasks that take much longer than estimated. We’re bounded by 0, but unbounded in the other direction.”

I like this way to conceptually frame the problem, and I definitely do not think it only applies to software development.
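The quoted log-normal model is easy to simulate. Below is a minimal sketch (my own, not code from the post), with the spread parameter `sigma` chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

estimate = 1.0   # estimated duration: 1 week
sigma = 1.0      # spread of log(blowup factor); this value is an assumption

# Model: actual = estimate * blowup, where log(blowup) ~ Normal(0, sigma).
# The median blowup is then exactly 1, i.e. estimates are median-calibrated,
# but the mean blowup is exp(sigma**2 / 2) > 1.
actual = estimate * rng.lognormal(mean=0.0, sigma=sigma, size=100_000)

print("median:", np.median(actual))  # ≈ 1.0
print("mean:  ", actual.mean())      # ≈ exp(0.5) ≈ 1.65
```

The asymmetry the post describes is visible directly: the distribution is bounded by 0 below but unbounded above, so fast tasks can never compensate for the long tail of slow ones.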

“I filed this in my brain under “curious toy models” for a long time, occasionally thinking that it’s a neat illustration of a real world phenomenon I’ve observed. But surfing around on the interwebs one day, I encountered an interesting dataset of project estimation and actual times. Fantastic! […] The median blowup factor turns out to be exactly 1x for this dataset, whereas the mean blowup factor is 1.81x. Again, this confirms the hunch that developers estimate the median well, but the mean ends up being much higher. […]

If my model is right (a big if) then here’s what we can learn:

  • People estimate the median completion time well, but not the mean.
  • The mean turns out to be substantially worse than the median, due to the distribution being skewed (log-normally).
  • When you add up the estimates for n tasks, things get even worse.
  • Tasks with the most uncertainty (rather than the biggest size) can often dominate the mean time it takes to complete all tasks.”
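The last two bullets can also be checked by simulation. The sketch below (mine, with illustrative sigma values) sums three tasks whose median estimates are identical; the highest-uncertainty task ends up dominating the expected total:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Three tasks, each estimated at 1 week (the median), but with different
# uncertainty in the log-normal blowup factor. Sigma values are assumptions.
sigmas = np.array([0.5, 1.0, 2.0])
tasks = np.array([rng.lognormal(0.0, s, n) for s in sigmas])
total = tasks.sum(axis=0)

print("sum of estimates:", float(sigmas.size))  # 3 weeks
print("mean actual total:", total.mean())       # ≈ 10 weeks, far above 3
print("mean per task:", tasks.mean(axis=1))     # sigma=2 task dominates
```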

vii. Attraction inequality and the dating economy.

“…the relentless focus on inequality among politicians is usually quite narrow: they tend to consider inequality only in monetary terms, and to treat “inequality” as basically synonymous with “income inequality.” There are so many other types of inequality that get air time less often or not at all: inequality of talent, height, number of friends, longevity, inner peace, health, charm, gumption, intelligence, and fortitude. And finally, there is a type of inequality that everyone thinks about occasionally and that young single people obsess over almost constantly: inequality of sexual attractiveness. […] One of the useful tools that economists use to study inequality is the Gini coefficient. This is simply a number between zero and one that is meant to represent the degree of income inequality in any given nation or group. An egalitarian group in which each individual has the same income would have a Gini coefficient of zero, while an unequal group in which one individual had all the income and the rest had none would have a Gini coefficient close to one. […] Some enterprising data nerds have taken on the challenge of estimating Gini coefficients for the dating “economy.” […] The Gini coefficient for [heterosexual] men collectively is determined by [heterosexual] women’s collective preferences, and vice versa. If women all find every man equally attractive, the male dating economy will have a Gini coefficient of zero. If men all find the same one woman attractive and consider all other women unattractive, the female dating economy will have a Gini coefficient close to one.”
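For the record, the Gini coefficient described above can be computed as the mean absolute difference between all pairs, scaled by twice the mean. A minimal sketch; the “likes” vectors here are made-up toy data, not Hinge’s:

```python
import numpy as np

def gini(x):
    """Gini coefficient: mean absolute pairwise difference / (2 * mean)."""
    x = np.asarray(x, dtype=float)
    mad = np.abs(x[:, None] - x[None, :]).mean()  # mean absolute difference
    return mad / (2.0 * x.mean())

print(gini([10, 10, 10, 10]))  # 0.0 — everyone equally 'liked'
print(gini([0, 0, 0, 100]))    # 0.75 — equals (n-1)/n for n=4; → 1 as n grows
```

Note the “one person has everything” case gives (n − 1)/n rather than exactly 1, which is why the text says “close to one” for finite groups.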

“A data scientist representing the popular dating app “Hinge” reported on the Gini coefficients he had found in his company’s abundant data, treating “likes” as the equivalent of income. He reported that heterosexual females faced a Gini coefficient of 0.324, while heterosexual males faced a much higher Gini coefficient of 0.542. So neither sex has complete equality: in both cases, there are some “wealthy” people with access to more romantic experiences and some “poor” who have access to few or none. But while the situation for women is something like an economy with some poor, some middle class, and some millionaires, the situation for men is closer to a world with a small number of super-billionaires surrounded by huge masses who possess almost nothing. According to the Hinge analyst:

On a list of 149 countries’ Gini indices provided by the CIA World Factbook, this would place the female dating economy as 75th most unequal (average—think Western Europe) and the male dating economy as the 8th most unequal (kleptocracy, apartheid, perpetual civil war—think South Africa).”

Btw., I’m reasonably certain “Western Europe” as most people think of it is not average in terms of Gini, and that half-way down the list should rather be represented by some other region or country type, like, say, Mongolia or Bulgaria. A brief look at Gini lists seemed to support this impression.

Quartz reported on this finding, and also cited another article about an experiment with Tinder that claimed that “the bottom 80% of men (in terms of attractiveness) are competing for the bottom 22% of women and the top 78% of women are competing for the top 20% of men.” These studies examined “likes” and “swipes” on Hinge and Tinder, respectively, which are required if there is to be any contact (via messages) between prospective matches. […] Yet another study, run by OkCupid on their huge datasets, found that women rate 80 percent of men as “worse-looking than medium,” and that this 80 percent “below-average” block received replies to messages only about 30 percent of the time or less. By contrast, men rate women as worse-looking than medium only about 50 percent of the time, and this 50 percent below-average block received message replies closer to 40 percent of the time or higher.

If these findings are to be believed, the great majority of women are only willing to communicate romantically with a small minority of men while most men are willing to communicate romantically with most women. […] It seems hard to avoid a basic conclusion: that the majority of women find the majority of men unattractive and not worth engaging with romantically, while the reverse is not true. Stated in another way, it seems that men collectively create a “dating economy” for women with relatively low inequality, while women collectively create a “dating economy” for men with very high inequality.”

I think the author goes a bit off the rails later in the post, but the data is interesting. It is, however, important to keep in mind in contexts like these that sexual selection pressures apply at multiple levels, not just one, and that partner preferences can be non-trivial to model satisfactorily; for example, as many women have learned the hard way, males may have very different standards for whom to a) ‘engage with romantically’ and b) ‘consider a long-term partner’.

viii. Flipping the Metabolic Switch: Understanding and Applying Health Benefits of Fasting.

“Intermittent fasting (IF) is a term used to describe a variety of eating patterns in which no or few calories are consumed for time periods that can range from 12 hours to several days, on a recurring basis. Here we focus on the physiological responses of major organ systems, including the musculoskeletal system, to the onset of the metabolic switch – the point of negative energy balance at which liver glycogen stores are depleted and fatty acids are mobilized (typically beyond 12 hours after cessation of food intake). Emerging findings suggest the metabolic switch from glucose to fatty acid-derived ketones represents an evolutionarily conserved trigger point that shifts metabolism from lipid/cholesterol synthesis and fat storage to mobilization of fat through fatty acid oxidation and fatty acid-derived ketones, which serve to preserve muscle mass and function. Thus, IF regimens that induce the metabolic switch have the potential to improve body composition in overweight individuals. […] many experts have suggested IF regimens may have potential in the treatment of obesity and related metabolic conditions, including metabolic syndrome and type 2 diabetes.”

“In most studies, IF regimens have been shown to reduce overall fat mass and visceral fat, both of which have been linked to increased diabetes risk. IF regimens ranging in duration from 8 to 24 weeks have consistently been found to decrease insulin resistance. In line with this, many, but not all, large-scale observational studies have also shown a reduced risk of diabetes in participants following an IF eating pattern.”

“…we suggest that future randomized controlled IF trials should use biomarkers of the metabolic switch (e.g., plasma ketone levels) as a measure of compliance and the magnitude of negative energy balance during the fasting period. It is critical for this switch to occur in order to shift metabolism from lipidogenesis (fat storage) to fat mobilization for energy through fatty acid β-oxidation. […] As the health benefits and therapeutic efficacies of IF in different disease conditions emerge from RCTs, it is important to understand the current barriers to widespread use of IF by the medical and nutrition community and to develop strategies for broad implementation. One argument against IF is that, despite the plethora of animal data, some human studies have failed to show such significant benefits of IF over CR [Calorie Restriction]. Adherence to fasting interventions has been variable: some short-term studies have reported over 90% adherence, whereas in a one-year ADMF study the dropout rate was 38% vs 29% in the standard caloric restriction group.”

ix. Self-repairing cells: How single cells heal membrane ruptures and restore lost structures.

June 2, 2019 Posted by | Astronomy, Biology, Data, Diabetes, Economics, Evolutionary biology, Genetics, Geography, History, Mathematics, Medicine, Physics, Psychology, Statistics, Wikipedia | Leave a comment

Quotes

i. “The surest way to be deceived is to think oneself more clever than others.” (Rochefoucauld)

ii. “It is more trouble to make a maxim than it is to do right.” (Mark Twain)

iii. “Between us, we cover all knowledge; he knows all that can be known, and I know the rest.” (-ll-)

iv. “Customs do not concern themselves with right or wrong or reason. But they have to be obeyed; one reasons all around them until he is tired, but he must not transgress them, it is sternly forbidden.” (-ll-)

v. “Every one is a moon, and has a dark side which he never shows to anybody.” (-ll-)

vi. “Often, the surest way to convey misinformation is to tell the strict truth.” (-ll-)

vii. “A man is never more truthful than when he acknowledges himself a liar.” (-ll-)

viii. “It is not worth while to try to keep history from repeating itself, for man’s character will always make the preventing of the repetitions impossible.” (-ll-)

ix. “Man will do many things to get himself loved; he will do all things to get himself envied.” (-ll-)

x. “Grief can take care of itself, but to get the full value of a joy you must have somebody to divide it with.” (-ll-)

xi. “Science, at bottom, is really anti-intellectual. It always distrusts pure reason, and demands the production of objective fact.” (H. L. Mencken)

xii. “It is the natural tendency of the ignorant to believe what is not true. In order to overcome that tendency it is not sufficient to exhibit the true; it is also necessary to expose and denounce the false.” (-ll-)

xiii. “It is the dull man who is always sure, and the sure man who is always dull.” (-ll-)

xiv. “…enlightenment, among mankind, is very narrowly dispersed. It is common to assume that human progress affects everyone – that even the dullest man, in these bright days, knows more than any man of, say, the Eighteenth Century, and is far more civilized. This assumption is quite erroneous.” (-ll-)

xv. “…there are more viruses in the world than all other forms of life added together.” (Dorothy H. Crawford, Viruses: A Very Short Introduction).

xvi. “People don’t think about you nearly as much as you think about people thinking about you.” (‘Abstrusegoose’)

xvii. “Most people are not intellectuals — a fact that intellectuals have terrible trouble coming to terms with.” (John Derbyshire)

xviii. “Few of the great tragedies of history were created by the village idiot, and many by the village genius.” (Thomas Sowell)

xix. “If I have not seen as far as others, it is because giants were standing on my shoulders.” (Hal Abelson)

xx. “If I have seen further than others, it is because I am surrounded by dwarfs.” (Murray Gell-Mann. RIP.)

 

May 25, 2019 Posted by | Quotes/aphorisms | Leave a comment

Cardiology: Diabetes Mellitus

Despite the title this is mainly a pharmacology lecture. It’s a bit dated, but on the other hand the mechanism of action of a major drug class usually doesn’t change dramatically over half a decade, so I don’t think the fact that the lecture is a few years old is much of a problem. This is not in my opinion a great lecture, but it was worth watching.

A few random links related to topics covered in the talk:

Thiazolidinedione.
PPAR agonist.
Pioglitazone.
Dipeptidyl peptidase-4 inhibitor.
Glucagon-like peptide-1 receptor agonist.
Pregnancy categories.
Alpha-glucosidase inhibitor.
Sulfonylurea.
SGLT2 inhibitor.
Pramlintide.

May 25, 2019 Posted by | Cardiology, Diabetes, Lectures, Pharmacology | Leave a comment