Econstudentlog

Quotes

i. “Power breeds responsibilities […] To dodge or disclaim these responsibilities is one form of the abuse of power.” (Irving Kristol)

ii. “An intellectual may be defined as a man who speaks with general authority about a subject on which he has no particular competence.” (-ll-)

iii. “One must be very naïve or dishonest to imagine that men choose their beliefs independently of their situation.” (Claude Lévi-Strauss)

iv. “I may be subjected to the criticism of being called ‘scientistic’ or a kind of blind believer in science who holds that science is able to solve absolutely all problems. Well, I certainly don’t believe that, because I cannot conceive that a day will come when science will be complete and achieved.” (-ll-)

v. “This century has been so rich in discovery and so packed with technical innovation that it is tempting to believe that there can never be another like it. That conceit betrays the poverty of our collective imagination.” (John Maddox, 1998)

vi. “[Y]ou have to study and learn so that you can make up your own mind about history and everything else but you can’t make up an empty mind. Stock your mind, stock your mind. You might be poor, your shoes might be broken, but your mind is a palace.” (Frank McCourt)

vii. “I suppose that writers should, in a way, feel flattered by the censorship laws. They show a primitive fear and dread at the fearful magic of print.” (John Mortimer)

viii. “Real winners do not compete.” (Samuli Paronen)

ix. “In a calm sea every man is a pilot.” (John Ray)

x. “Yes, I know Marcus Aurelius or Vauvenargues or Chesterton has already said this, and far better; but let’s face it — you weren’t listening then either.” (Don Paterson)

xi. “I never fail to be mystified by those who regard the revision of a former opinion as a sign of weakness.” (-ll-)

xii. “The audience will always feel far more generous if, at some point in the evening, a little time has been found for them to applaud themselves.” (-ll-)

xiii. “Smashing things is the violent way stupid mortal monkeys solve their problems.” (Kage Baker)

xiv. “True believers aren’t real receptive to the idea that what they’re telling you is just mythology.” (-ll-)

xv. “Every failure is a step to success. Every detection of what is false directs us towards what is true: every trial exhausts some tempting form of error.” (William Whewell)

xvi. “Man is the interpreter of nature, science the right interpretation.” (-ll-)

xvii. “If an argument is a good one, dissonant deeds do nothing to contradict it. In fact, the hypocrite may have something to be said for him; it would be worrying if his ideals were not better than the way he lives.” (David Fleming)

xviii. “‘The harder I work, the luckier I get’. It was Thomas Jefferson who started the stream of variations on that theme. He should have added, ‘The harder I work on one thing, the unluckier I get on all the other commitments I haven’t had time for’.” (-ll-)

xix. “Men of power have not time to read; yet men who do not read are unfit for power” (Michael Foot)

xx. “So-called electronic communities encourage participation in fragmented, mostly silent, micro-groups who are primarily engaged in dialogues of self-congratulation. In other words, most people lurk; and the ones that post are pleased with themselves.” (Carmen Hermosillo)


October 21, 2017 | Quotes/aphorisms

Infectious Disease Surveillance (IV)

I have added some more observations from the second half of the book below.

“The surveillance systems for all stages of HIV infection, including stage 3 (AIDS), are the most highly developed, complex, labor-intensive, and expensive of all routine infectious disease surveillance systems. […] Although some behaviorally based prevention interventions (e.g., individual counseling and testing) are relatively inexpensive and simple to implement, others are expensive and difficult to maintain. Consequently, HIV control programs have added more treatment-based methods in recent years. These consist primarily of routine and, in some populations, repeated and frequent testing for HIV with an emphasis on diagnosing every infected person as quickly as possible, linking them to clinical care, prescribing ART, monitoring for retention in care, and maintaining an undetectable viral load. This approach is referred to as “treatment as prevention.” […] Prior to the advent of HAART in the mid-1990s, surveillance consisted primarily of collecting initial HIV diagnosis, followed by monitoring of progression to AIDS and death. The current need to monitor adherence to treatment and care has led to surveillance to collect results of all CD4 count and viral load tests conducted on HIV-infected persons. Treatment guidelines recommend such testing quarterly [11], leading to dozens of laboratory tests being reported for each HIV-infected person in care; hence, the need to receive laboratory results electronically and efficiently has increased. […] The standard set by CDC for completeness is that at least 85% of diagnosed cases are reported to public health within the year of diagnosis. […] As HIV-infected persons live longer as a consequence of ART, the scope of HIV surveillance has expanded […] A critical part of collecting HIV data is maintaining the database.”

“The World Health Organization (WHO) estimates that 8.7 million new cases of TB and 1.4 million deaths from TB occurred in 2011 worldwide [2]. […] WHO estimates that one of every three individuals worldwide is infected with TB [6]. An estimated 5–10% of persons with LTBI [latent TB infection] in the general population will eventually develop active TB disease. Persons with latent infection who are immune suppressed for any reason are more likely to develop active disease. It is estimated that people infected with human immunodeficiency virus (HIV) are 21–34 times more likely to progress from latent to active TB disease […] By 2010, the percentage of all TB cases tested for HIV was 65% and the prevalence of coinfection was 6% [in the United States] [4]. […] From a global perspective, the United States is considered a low morbidity and mortality country for TB. In 2010, the national annual incidence rate for TB was 3.6 per 100,000 persons with 11,182 reported cases of TB  […] In 1953, 113,531 tuberculosis cases were reported in the United States […] Tuberculosis surveillance in the United States has changed a great deal in depth and quality since its inception more than a century ago. […] To assure uniformity and standardization of surveillance data, all TB programs in the United States report verified TB cases via the Report of Verified Case of Tuberculosis (RVCT) [43]. The RVCT collects demographic, diagnostic, clinical, and risk-factor information on incident TB cases […] A companion form, the Follow-up 1 (FU-1), records the date of specimen collection and results of the initial drug susceptibility test at the time of diagnosis for all culture-confirmed TB cases. […]  The Follow-up 2 (FU-2) form collects outcome data on patient treatment and additional clinical and laboratory information. […] Since 1993, the RVCT, FU-1, and FU-2 have been used to collect demographic and clinical information, as well as laboratory results for all reported TB cases in the United States […] The RVCT collects information about known risk factors for TB disease; and in an effort to more effectively monitor TB caused by drug-resistant strains, CDC also gathers information regarding drug susceptibility testing for culture-confirmed cases on the FU-2.”

“Surveillance data may come from widely different systems with different specific purposes. It is essential that the purpose and context of any specific system be understood before attempting to analyze and interpret the surveillance data produced by that system. It is also essential to understand the methodology by which the surveillance system collects data. […] The most fundamental challenge for analysis and interpretation of surveillance data is the identification of a baseline. […] For infections characterized by seasonal outbreaks, the baseline range will vary by season in a generally predictable manner […] The comparison of observations to the baseline range allows characterization of the impact of intentional interventions or natural phenomenon and determination of the direction of change. […] Resource investment in surveillance often occurs in response to a newly recognized disease […] a suspected change in the frequency, virulence, geography, or risk population of a familiar disease […] or following a natural disaster […] In these situations, no baseline data are available against which to judge the significance of data collected under newly implemented surveillance.”

“Differences in data collection methods may result in apparent differences in disease occurrence between geographic regions or over time that are merely artifacts resulting from variations in surveillance methodology. Data should be analyzed using standard periods of observation […] It may be helpful to examine the same data by varied time frames. An outbreak of short duration may be recognizable through hourly, daily, or weekly grouping of data but obscured if data are examined only on an annual basis. Conversely, meaningful longer-term trends may be recognized more efficiently by examining data on an annual basis or at multiyear intervals. […] An early approach to analysis of infectious disease surveillance data was to convert observation of numbers into observations of rates. Describing surveillance observations as rates […] standardizes the data in a way that allows comparisons of the impact of disease across time and geography and among different populations”.
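
The rate conversion described above is simple enough to show directly. Here is a toy illustration of my own (all numbers invented, not from the book):

# Convert raw case counts into incidence rates per 100,000 population,
# so that regions of very different sizes can be compared directly.
cases = {"Region A": 120, "Region B": 120}
population = {"Region A": 250_000, "Region B": 1_500_000}

for region in cases:
    rate = cases[region] / population[region] * 100_000
    print(f"{region}: {cases[region]} cases = {rate:.1f} per 100,000")

# Identical counts, very different rates: 48.0 vs. 8.0 per 100,000.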

“Understanding the sensitivity and specificity of surveillance systems is important. […] Statistical methods based on tests of randomness have been applied to infectious disease surveillance data for the purpose of analysis of aberrations. Methods include adaptations of quality control charts from industry; Bayesian, cluster, regression, time series, and bootstrap analyses; and application of smoothing algorithms, simulation, and spatial statistics [1,14]. […] Time series forecasting and regression methods have been fitted to mortality data series to forecast future epidemics of seasonal diseases, most commonly influenza, and to estimate the excess associated mortality. […] While statistical analysis can be applied to surveillance data, the use of statistics for this purpose is often limited by the nature of surveillance data. Populations under surveillance are often not random samples of a general population, and may not be broadly representative, complicating efforts to use statistics to estimate morbidity and mortality impacts on populations. […] The more information an epidemiologist has about the purpose of the surveillance system, the people who perform the reporting, and the circumstances under which the data are collected and conveyed through the system, the more likely it is that the epidemiologist will interpret the data correctly. […] In the context of public health practice, a key value of surveillance data is not just in the observations from the surveillance system but also in the fact that these data often stimulate action to collect better data, usually through field investigations. Field investigations may improve understanding of risk factors that were suggested by the surveillance data itself. Often, field investigations triggered by surveillance observations lead to research studies such as case control comparisons that identify and better define the strength of risk factors.”
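
To make the simplest of the approaches mentioned above concrete, here is a control-chart-style sketch of my own (invented data): flag a weekly count more than two standard deviations above a historical baseline.

import statistics

baseline_weeks = [12, 15, 11, 14, 13, 16, 12, 14]  # historical weekly case counts
current_week = 25

mean = statistics.mean(baseline_weeks)
sd = statistics.stdev(baseline_weeks)
threshold = mean + 2 * sd  # simple control-chart-style upper limit

if current_week > threshold:
    print(f"Aberration: {current_week} cases exceeds threshold {threshold:.1f}")

Real systems use the more elaborate methods the book lists (CUSUM/EWMA charts, regression, spatial statistics), but the logic of comparing an observation to a baseline range is the same.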

“The increasing frequency of disease outbreaks that have spread across national borders has led to the development of multicountry surveillance networks. […] Countries that participate in surveillance networks typically agree to share disease outbreak information and to collaborate in efforts to control disease spread. […] Multicountry disease surveillance networks now exist in many parts of the world, such as the Middle East, Southeast Asia, Southern Africa, Southeastern Europe, and East Africa. […] Development of accurate and reliable diagnoses of illnesses is a fundamental challenge in global surveillance. Clinical specimen collection, analysis, and laboratory confirmation of the etiology of disease outbreaks are important components of any disease surveillance system [37]. In many areas of the world, however, insufficient diagnostic capacity leads to no or faulty diagnoses, inappropriate treatments, and disease misreporting. For example, surveillance for malaria is challenged by a common reliance on clinical symptoms for diagnosis, which has been shown to be a poor predictor of actual infection [38,39]. […] A WHO report indicates that more than 60% of laboratory equipment in countries with limited resources is outdated or not functioning [46]. Even when there is sufficient laboratory capacity, laboratory-based diagnosis of disease can also be slow, delaying detection of outbreaks. For example, it can take more than a month to determine whether a patient is infected with drug-resistant strains of tuberculosis. […] The International Health Regulations (IHR) codify the measures that countries must take to limit the international spread of disease while ensuring minimum interference with trade and travel. […] From the perspective of an individual nation, there are few incentives to report an outbreak of a disease to the international community. Rather, the decision to report diseases may result in adverse consequences — significant drops in tourism and trade, closings of borders, and other measures that the IHR are supposed to prevent.”

“Concerns about biological terrorism have raised the profile of infectious disease surveillance in the United States and around the globe [14]. […] Improving global surveillance for biological terrorism and emerging infectious diseases is now a major focus of the U.S. Department of Defense’s (DoD) threat reduction programs [17]. DoD spends more on global health surveillance than any other U.S. governmental agency [18].”

“Zoonoses, or diseases that can transmit between humans and animals, have been responsible for nearly two-thirds of infectious disease outbreaks that have occurred since 1950 and more than $200 billion in worldwide economic losses in the last 10 years [52]. Despite the significant economic and health threats caused by these diseases, worldwide capacity for surveillance of zoonotic diseases is insufficient [52]. […] Over the last few decades, there have been significant changes in the way in which infectious disease surveillance is practiced. New regulations and goals for infectious disease surveillance have given rise to the development of new surveillance approaches and methods and have resulted in participation by nontraditional sectors, including the security community. Though most of these developments have positively shaped global surveillance, there remain key challenges that stand in the way of continued improvements. These include insufficient diagnostic capabilities and lack of trained staff, lack of integration between human and animal-health surveillance efforts, disincentives for countries to report disease outbreaks, and lack of information exchange between public health agencies and other sectors that are critical for surveillance.”

“The biggest limitations to the development and sustainment of electronic disease surveillance systems, particularly in resource-limited countries, are the ease with which data are collected, accessed, and used by public health officials. Systems that require large amounts of resources, whether that is in the form of the workforce or information technology (IT) infrastructure, will not be successful in the long term. Successful systems run on existing hardware that can be maintained by modestly trained IT professionals and are easy to use by end users in public health [20].”

October 20, 2017 | Books, Medicine, Statistics, Epidemiology, Infectious disease

Beyond Significance Testing (V)

I never really finished my intended coverage of this book. Below I have added some observations from the last couple of chapters.

“Estimation of the magnitudes and precisions of interaction effects should be the focus of the analysis in factorial designs. Methods to calculate standardized mean differences for contrasts in such designs are not as well developed as those for one-way designs. Standardizers for single-factor contrasts should reflect variability as a result of intrinsic off-factors that vary naturally in the population, but variability due to extrinsic off-factors that do not vary naturally should be excluded. Measures of association may be preferred in designs with three or more factors or where some factors are random. […] There are multivariate versions of d statistics and measures of association for designs with two or more continuous outcomes. For example, a Mahalanobis distance is a multivariate d statistic, and it estimates the difference between two group centroids (the sets of all univariate means) in standard deviation units controlling for intercorrelation.”
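
The Mahalanobis distance mentioned above is straightforward to compute. A minimal sketch of my own (invented data; pooled within-group covariance used to control for intercorrelation):

import numpy as np

group1 = np.array([[5.1, 3.4], [4.9, 3.0], [5.4, 3.7], [5.0, 3.3]])
group2 = np.array([[6.3, 2.9], [6.1, 2.6], [6.6, 3.1], [6.2, 2.8]])

# Difference between the two group centroids (vectors of univariate means).
diff = group1.mean(axis=0) - group2.mean(axis=0)
n1, n2 = len(group1), len(group2)
# Pooled within-group covariance matrix.
pooled_cov = ((n1 - 1) * np.cov(group1, rowvar=False)
              + (n2 - 1) * np.cov(group2, rowvar=False)) / (n1 + n2 - 2)
D = np.sqrt(diff @ np.linalg.inv(pooled_cov) @ diff)
print(f"Mahalanobis D = {D:.2f}")  # centroid distance in standard deviation units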

“Replication is a foundational scientific activity but one neglected in the behavioral sciences. […] There is no single nomenclature to classify replication studies (e.g., Easley et al., 2000), but there is enough consensus to outline at least the broad types […] Internal replication includes statistical resampling and cross-validation by the original researcher(s). Resampling includes bootstrapping and related computer-based methods, such as the jackknife technique, that randomly combine the cases in an original data set in different ways to estimate the effect of idiosyncrasies in the sample on the results […] Such procedures are not replication in the usual scientific sense. The total sample in cross-validation is randomly divided into a derivation sample and a cross-validation sample, and the same analyses are conducted in each one. External replication is conducted by people other than the original researchers, and it involves new samples collected at different times or places.
There are two broad contexts for external replication. The first concerns different kinds of replications of experimental studies. One is exact replication, also known as direct replication, literal replication, or precise replication, where all major aspects of an original study — its sampling methods, design, and outcome measures — are closely copied. True exact replications exist more in theory than in practice because it is difficult to perfectly duplicate a study […] Another type is operational replication — also referred to as partial replication or improvisational replication — where just the sampling and methods of an original study are duplicated. […] The outcome of operational replication is potentially more informative than that of literal replication, because robust effects should stand out against variations in procedures, settings, or samples.
In balanced replication, operational replications are used as control conditions. Other conditions may represent the manipulation of additional substantive variables to test new hypotheses. […] The logic of balanced replication is similar to that of strong inference, which features designing studies to rule out competing explanations, and to that of dismantling research. The aim of the latter is to study elements of treatments with multiple components in smaller combinations to find the ones responsible for treatment efficacy.
A researcher who conducts a construct replication or conceptual replication avoids close imitation of the specific methods of an original study. An ideal construct replication would be carried out by telling a skilled researcher little more than the original empirical result. This researcher would then specify the design, measures, and data analysis methods deemed appropriate to test whether a finding has generality beyond the particular situation studied in an original work.”

“There is evidence that only small proportions — in some cases < 1% — of all published studies in the behavioral sciences are specifically described as replications (e.g., Easley et al., 2000; Kmetz, 2002). […] K. Hunt (1975), S. Schmidt (2009), and others have argued that most replication in the behavioral sciences occurs covertly in the form of follow-up studies, which combine direct replication (or at least construct replication) with new procedures, measures, or hypotheses in the same investigation. Such studies may be described by their authors as “extensions” of previous works with new elements but not as “replications,” […] the problem with this informal approach to replication is that it is not explicit and therefore is unsystematic. […] Perhaps replication would be more highly valued if confidence intervals were reported more often. Then readers of empirical articles would be able to see the low precision with which many studies are conducted. […] Wide confidence intervals indicate that a study contains only limited information, a fact that is concealed when only results of statistical tests are reported”.

“Because sets of related investigations in the behavioral sciences are generally made up of follow-up studies, the explanation of observed variability in their results is a common goal in meta-analysis. That is, the meta-analyst tries to identify and measure characteristics of follow-up studies that give rise to variability among the results. These characteristics include attributes of samples (e.g., mean age, gender), settings in which cases are tested (e.g., inpatient vs. outpatient), and the type of treatment administered (e.g., duration, dosage). Other factors concern properties of the outcome measures (e.g., self-report vs. observational), quality of the research design, source of funding (e.g., private vs. public), professional backgrounds of the authors, or date of publication. The last reflects the potential impact of temporal factors such as changing societal attitudes. […] Study factors are conceptualized as meta-analytic predictors, and study outcome measured with the same standardized effect size is typically the criterion. Each predictor is actually a moderator variable, which implies interaction. This is because the criterion, study effect size, usually represents the association between the independent and dependent variables. If observed variation in effect sizes across a set of studies is explained by a meta-analytic predictor, the relation between the independent and dependent variables changes across the levels of that predictor. For the same reason, the terms moderator variable analysis and meta-regression describe the process of estimating whether study characteristics explain variability in results. […] study factors can covary, such as when different variations of a treatment tend to be administered to patients with acute versus chronic forms of a disorder. If meta-analytic predictors covary, it is necessary to control for overlapping explained proportions of variability in effect sizes.
It is also possible for meta-analytic predictors to interact, which means that they have a joint influence on observed effect sizes. Interaction also implies that to understand variability in results, one must consider the predictors together. This is a subtle point, one that requires some elaboration: Each individual predictor in meta-analysis is a moderator variable. But the relation of one meta-analytic predictor to study outcome may depend on another predictor. For example, the effect of treatment type on observed effect sizes may depend on whether cases with mild versus severe forms of an illness were studied. A different kind of phenomenon is mediation, or indirect effects among study factors. Suppose that one factor is degree of early exposure to a toxic agent and another is illness chronicity. The exposure factor may affect study outcome both directly and indirectly through its influence on chronicity. Indirect effects can be estimated in meta-analysis by applying techniques from structural equation modeling to covariance matrices of study factors and effect sizes pooled over related studies. The use of both techniques together is called mediational meta-analysis or model-driven meta-analysis. […] It is just as important in meta-analysis as when conducting a primary study to clearly specify the hypotheses and operational definitions of constructs.”
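
Computationally, the moderator analysis (meta-regression) described above boils down to regressing study effect sizes on a study-level predictor, weighting each study by its inverse sampling variance. A bare-bones sketch of my own (not the book's code; all values invented):

import numpy as np

effect_sizes = np.array([0.20, 0.35, 0.50, 0.60, 0.80])  # d from five studies
variances = np.array([0.04, 0.03, 0.05, 0.02, 0.06])     # sampling variances
dose = np.array([10.0, 20.0, 30.0, 40.0, 50.0])          # hypothetical moderator

# Weighted least squares with inverse-variance weights: solve (X'WX)b = X'Wy.
X = np.column_stack([np.ones_like(dose), dose])
W = np.diag(1 / variances)
b = np.linalg.solve(X.T @ W @ X, X.T @ W @ effect_sizes)
print(f"intercept = {b[0]:.3f}, slope per dose unit = {b[1]:.4f}")
# A nonzero slope suggests the moderator explains variability in effect sizes.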

“There are ways to estimate in meta-analysis what is known as the fail-safe N, which is the number of additional studies where the average effect size is zero that would be needed to increase the p value in a meta-analysis for the test of the mean observed effect size to > .05 (i.e., the nil hypothesis is not rejected). These additional studies are assumed to be file drawer studies or to be otherwise not found in the literature search of a meta-analysis. If the estimated number of such studies is so large that it is unlikely that so many studies (e.g., 2,000) with a mean nil effect size could exist, more confidence in the results may be warranted. […] Studies from each source are subject to different types of biases. For example, bias for statistical significance implies that published studies have more H0 rejections and larger effect sizes than do unpublished studies […] There are techniques in meta-analysis for estimating the extent of publication bias […]. If such bias is indicated, a meta-analysis based mainly on published sources may be inappropriate.”
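
The book describes the fail-safe N verbally; the classic version (Rosenthal's, based on a Stouffer combination of the k study z scores — my gloss, not quoted from the book) is:

N_{fs} = \frac{\left( \sum_{i=1}^{k} Z_i \right)^2}{z_{\alpha}^2} - k, \qquad z_{\alpha} = 1.645 \text{ for one-tailed } \alpha = .05

That is, N_fs is the number of added studies with mean z of zero that would dilute the combined z exactly down to the significance cutoff.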

“For two reasons, it is crucial to assess the […] research quality for each found primary study. The first is to eliminate from further consideration studies so flawed that their results are untrustworthy. […] The other reason concerns the remaining (nonexcluded) studies, which may be divided into those that are well designed versus those with significant limitations. Results synthesized from the former group may be given greater weight in the analysis than those from the latter group. […] Relatively high proportions of found studies in meta-analyses are often discarded due to poor rated quality, a sad comment on the status of a research literature. […] It is probably best to see meta-analysis as a way to better understand the status of a research area than as an end in itself or some magical substitute for critical thought. Its emphasis on effect sizes and the explicit description of study retrieval methods and assumptions is an improvement over narrative literature reviews. It also has the potential to address hypotheses not directly tested in primary studies. […] But meta-analysis does not solve the replication crisis in the behavioral sciences.”

“Conventional meta-analysis and Bayesian analysis are both methods for research synthesis, and it is worthwhile to briefly summarize their relative strengths. Both methods accumulate evidence about a parameter of interest and generate confidence intervals for that parameter. Both methods also allow sensitivity analysis of the consequences of making different kinds of decisions that may affect the results. Because meta-analysis is based on traditional statistical methods, it tests basically the same kinds of hypotheses that are evaluated in primary studies with traditional statistical tests. This limits the kinds of questions that can be addressed in meta-analysis. For example, a standard meta-analysis cannot answer the question, What is the probability that treatment has an effect? It could be determined whether zero is included in the confidence interval based on the average effect size across a set of studies, but this would not address the question just posed. In contrast, there is no special problem in dealing with this kind of question in Bayesian statistics. A Bayesian approach takes into account both previous knowledge and the inherent plausibility of the hypothesis, but meta-analysis is concerned only with the former. It is possible to combine meta-analytical and Bayesian methods in the same analysis (see Howard et al., 2000).”
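
To make the contrast concrete, here is a toy normal-normal Bayesian update of my own (invented numbers, not from the book) answering exactly the kind of question the passage says a standard meta-analysis cannot:

from math import sqrt
from statistics import NormalDist

prior_mean, prior_var = 0.0, 0.25   # skeptical prior on the true effect size
obs_effect, obs_var = 0.40, 0.04    # pooled estimate and its sampling variance

# Conjugate update: posterior precision is the sum of the two precisions,
# posterior mean is the precision-weighted average of prior and data.
post_var = 1 / (1 / prior_var + 1 / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs_effect / obs_var)

p_positive = 1 - NormalDist(post_mean, sqrt(post_var)).cdf(0.0)
print(f"P(effect > 0 | data) = {p_positive:.3f}")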

“Bayesian methods are no more magical than any other set of statistical techniques. One drawback is that there is no direct way in Bayesian estimation to control type I or type II errors regarding the dichotomous decision to reject or retain some hypothesis. Researchers can do so in traditional significance testing, but too often they ignore power (the complement of the probability of a type II error) or specify an arbitrary level of type I error (e.g., α = .05), so this capability is usually wasted. Specification of prior probabilities or prior distributions in Bayesian statistics affects estimates of their posterior counterparts. If these specifications are grossly wrong, the results could be meaningless […] assumptions in Bayesian analyses should be explicitly stated and thus open to scrutiny. Bowers and Davis (2012) criticized the application of Bayesian methods in neuroscience. They noted in particular that Bayesian methods offer little improvement over more standard statistical techniques, but they also noted problems with use of the former, such as the specification of prior probabilities or utility functions in ways that are basically arbitrary. As with more standard statistical methods, Bayesian techniques are not immune to misuse. […] The main point of this chapter — and that of the whole book — is [however] that there are alternatives to the unthinking overreliance on significance testing that has handicapped the behavioral sciences for so long.”


October 19, 2017 | Books, Statistics

Quotes

i. “Man seeks objectives that enable him to convert the attainment of every goal into a means for the attainment of a new and more desirable goal. The ultimate objective in such a sequence cannot be obtainable; otherwise its attainment would put an end to the process. An end that satisfies these conditions is an ideal… Thus the formulation and pursuit of ideals is a means by which to put meaning and significance into his life and into the history of which he is part.” (Russell Ackoff)

ii. “Successful problem solving requires finding the right solution to the right problem. We fail more often because we solve the wrong problem than because we get the wrong solution to the right problem.” (-ll-)

iii. “A good deal of the corporate planning I have observed is like a ritual rain dance; it has no effect on the weather that follows, but those who engage in it think it does. Moreover, it seems to me that much of the advice and instruction related to corporate planning is directed at improving the dancing, not the weather.” (-ll-)

iv. “Over time, every way of thinking generates important problems that it cannot solve.” (-ll-)

v. “The only problems that have simple solutions are simple problems. The only managers that have simple problems have simple minds. […] Complex problems do not have simple solutions.” (-ll-)

vi. “The constant questioning of our values and achievements is a challenge without which neither science nor society can remain healthy.” (Aage Niels Bohr)

vii. “You can’t build a peaceful world on empty stomachs and human misery.” (Norman Borlaug)

viii. “In the past century, and even nowadays, one could encounter the opinion that in physics nearly everything had been done. There allegedly are only dim ‘cloudlets’ in the sky or theory, which will soon be eliminated to give rise to the ‘theory of everything’. I consider these views as some kind of blindness. The entire history of physics, as well as the state of present-day physics and, in particular, astrophysics, testifies to the opposite. In my view we are facing a boundless sea of unresolved problems.” (Vitaly Ginzburg)

ix. “Suspect each moment, for it is a thief, tiptoeing away with more than it brings.” (John Updike)

x. “That a marriage ends is less than ideal; but all things end under heaven, and if temporality is held to be invalidating, then nothing real succeeds.” (-ll-)

xi. “Life is a hill that gets steeper the more you climb.” (-ll-)

xii. “Learning should not only take us somewhere; it should allow us later to go further more easily.” (Ted Sizer)

xiii. “I have actually programmed a fair bit in Perl, like I have C++ code published with my name on it. Other things I have tried and have no intention to do again if I can at all avoid it include smoking, getting drunk enough to puke and waste the whole next day with hang-over, breaking a leg in a violent car crash, getting mugged in New York City, or travel with Aeroflot.” (Erik Naggum)

xiv. “Languages shape the way we think, or don’t.” (-ll-)

xv. “The secret to feeling great about yourself is not to be found in searching for people who are less than you and then show yourself superior to them, but in searching for people who are more than you and then show yourself worthy of their company.” (-ll-) [The secret to feeling terrible about yourself is to try to do the above, and fail miserably – US]

xvi. “Duty largely consists of pretending that the trivial is critical.” (John Fowles)

xvii. “A model is a qualitative or quantitative representation of a process or endeavor that shows the effects of those factors which are significant for the purposes being considered.” (Harold Chestnut)

xviii. “If two objects or human beings show similar behaviour in all their relevant aspects open to observation, the assumption of some unobservable hidden difference between them must be regarded as a completely gratuitous hypothesis and one contrary to sound scientific method.” (John Harsanyi)

xix. “We cannot stem linguistic change, but we can drag our feet.” (Willard van Orman Quine)

xx. “Treat a child as though he already is the person he’s capable of becoming.” (Haim Ginott)

October 14, 2017 | Quotes/aphorisms

A few diabetes papers of interest

i. Burden of Diabetic Foot Ulcers for Medicare and Private Insurers.

Some observations from the paper (my bold):

“According to the American Diabetes Association, the annual cost of diabetes, which affects 22.3 million people in the U.S., was $245 billion in 2012: $176 billion in excess health care expenditures and $69 billion in reduced workforce productivity (1). While much of the excess health care cost is attributable to treatment of diabetes itself, a substantial amount of the cost differential arises via treatment of chronic complications such as those related to the heart, kidneys, and nervous system (1).

One common complication of diabetes is the development of foot ulcers. Historically, foot ulcers have been estimated to affect 1–4% of patients with diabetes annually (2,3) and as many as 25% of the patients with diabetes over their lifetimes (2). More recently, Margolis et al. (3) have estimated that the annual incidence of foot ulcers among patients with diabetes may be as high as 6%. Treatment of diabetic foot ulcers (DFUs) includes conventional wound management (e.g., debridement, moist dressings, and offloading areas of high pressure or friction) as well as more sophisticated treatments such as bioengineered cellular technologies and hyperbaric oxygen therapy (HBO) (4).

DFUs often require extensive healing time and are associated with increased risk for infections and other sequelae that can result in severe and costly outcomes (4). […] DFU patients have a low survival prognosis, with a 3-year cumulative mortality rate of 28% (6) and rates among amputated patients approaching 50% (7).”

“While DFU patients can require substantial amounts of resource use, little is known about the burden of DFUs imposed on the U.S. health care system and payers. In fact, we are aware of only two studies to date that have estimated the incremental medical resource use and costs of DFU beyond that of diabetes alone (6,8). Neither of these analyses, however, accounted for the many underlying differences between DFU and non-DFU patient populations, such as disproportionate presence of costly underlying comorbid conditions among DFU patients […] Other existing literature on the burden of DFUs in the U.S. calculated the overall health care costs (as opposed to incremental) without reference to a non-DFU control population (9–11). As a result of the variety of data and methodologies used, it is not surprising that the burden of DFUs reported in the literature is wide-ranging, with the average per-patient costs, for example, ranging from $4,595 per episode (9) to over $35,000 annually for all services (6).

The objective of this study was to expand and improve on previous research to provide a more robust, current estimate of incremental clinical and economic burden of DFUs. To do so, this analysis examined the differences in medical resource use and costs between patients with DFUs during a recent time period (January 2007–September 2011) and a matched control population with diabetes but without DFUs, using administrative claims records from nationally representative databases for Medicare and privately insured populations. […] [Our] criteria resulted in a final analytic sample of 231,438 Medicare patients, with 29,681 (12.8%) identified as DFU patients and the remaining 201,757 comprising the potential control population of non-DFU diabetic patients. For private insurance, 119,018 patients met the sample selection criteria, with 5,681 (4.8%) DFU patients and 113,337 potential controls (Fig. 1).”

“Prior to matching, DFU patients were statistically different from the non-DFU control population on nearly every dimension examined during the 12-month preindex period. […] The matching process resulted in the identification of 27,878 pairs of DFU and control patients for Medicare and 4,536 pairs for private insurance that were very similar with regards to preindex patient characteristics […] [I]mportantly, the matched DFU and control groups had comparable health care costs during the 12 months prior to the index date (Medicare, $17,744 DFU and controls; private insurance, $14,761 DFU vs. $14,766 controls). […] Despite having matched the groups to ensure similar patient characteristics, DFU patients used significantly (P < 0.0001) more medical resources during the 12-month follow-up period than did the matched controls […]. Among matched Medicare patients, DFU patients had 138.2% more days hospitalized, 85.4% more days of home health care, 40.6% more ED visits, and 35.1% more outpatient/physician office visits. The results were similar for the privately insured DFU patients, who had 173.5% more days hospitalized, 230.0% more days of home health care, 109.0% more ED visits, and 42.5% more outpatient/physician office visits than matched controls. […] The rate of lower limb amputations was 3.8% among matched Medicare DFU patients and 5.0% among matched privately insured DFU patients. In contrast, observed lower limb amputation rates among diabetic patients without foot ulcer were only 0.04% in Medicare and 0.02% in private insurance.”

“Increased medical resource utilization resulted in DFU patients having approximately twice the costs as the matched non-DFU controls […], with annual incremental per-patient medical costs ranging from $11,710 for Medicare ($28,031 vs. $16,320; P < 0.0001) to $15,890 for private insurance ($26,881 vs. $10,991; P < 0.0001). All places of service (i.e., inpatient, ED, outpatient/physician office, home health care, and other) contributed approximately equally to the cost differential among Medicare patients. For the privately insured, however, increased inpatient costs ($17,061 vs. $6,501; P < 0.0001) were responsible for nearly two-thirds of the overall cost differential, […] resulting in total incremental direct health care (i.e., medical + prescription drug) costs of $16,883 ($31,419 vs. $14,536; P < 0.0001). Substantial proportions of the incremental medical costs were attributable to claims with DFU-related diagnoses or procedures for both Medicare (45.1%) and privately insured samples (60.3%).”

“Of the 4,536 matched pairs of privately insured patients, work-loss information was available for 575 DFU patients and 857 controls. DFU patients had $3,259 in excess work-loss costs ($6,311 vs. $3,052; P < 0.0001) compared with matched controls, with disability and absenteeism comprising $1,670 and $1,589 of the overall differential, respectively […] The results indicate that compared with diabetic patients without foot ulcers, DFU patients miss more days of work due to medical-related absenteeism and to disability, imposing additional burden on employers.”

“These estimates indicate that DFU imposes substantial burden on payers beyond that required to treat diabetes itself. For example, prior research has estimated annual per-patient incremental health care expenditures for patients with diabetes (versus those without diabetes) of approximately $7,900 (1). The estimates of this analysis suggest that the presence of DFU further compounds these incremental treatment costs by adding $11,710 to $16,883 per patient. Stated differently, the results indicate that the excess health care costs of DFU are approximately twice that attributable to treatment of diabetes itself, and that the presence of DFU approximately triples the excess cost differential versus a population of patients without diabetes.”

“Using estimates of the total U.S. diabetes population (22.3 million) (1) and the midpoint (3.5%) of annual DFU incidence estimates (1–6%) (2,3), the results of this analysis suggest an annual incremental payer burden of DFU ranging from $9.1 billion (22.3 million patients with diabetes × 3.5% DFU incidence × $11,710 Medicare cost differential) to $13.2 billion (22.3 million patients with diabetes × 3.5% DFU incidence × $16,883 private insurance cost differential). These estimates, moreover, likely understate the actual burden of DFU because the incremental costs referenced in this calculation do not include excess work-loss costs described above, prescription drug costs for Medicare patients, out-of-pocket costs paid by the patient, costs borne by supplemental insurers, and other (non-work loss) indirect costs such as those associated with premature mortality, reduced quality of life, and informal caregiving.”
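
The quoted back-of-the-envelope calculation is easy to verify:

# Reproducing the paper's payer-burden arithmetic quoted above.
diabetes_population = 22.3e6
dfu_incidence = 0.035                  # midpoint of the 1-6% incidence range
low, high = 11_710, 16_883             # Medicare / private cost differentials

print(f"low:  ${diabetes_population * dfu_incidence * low / 1e9:.1f} billion")   # 9.1
print(f"high: ${diabetes_population * dfu_incidence * high / 1e9:.1f} billion")  # 13.2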

ii. Contributors to Mortality in High-Risk Diabetic Patients in the Diabetes Heart Study.

“Rates of cardiovascular disease (CVD) are two- to fourfold greater in individuals with type 2 diabetes compared with nondiabetic individuals, and up to 65% of all-cause mortality among individuals with type 2 diabetes is attributed to CVD (1,2). However, the risk profile is not uniform for all individuals affected by diabetes (3–5). Coronary artery calcified plaque (CAC), determined using computed tomography, is a measure of CVD burden (6,7). CAC scores have been shown to be an independent predictor of CVD outcomes and mortality in population-based studies (8–10) and a powerful predictor of all-cause and CVD mortality in individuals affected by type 2 diabetes (4,11–15).

In the Diabetes Heart Study (DHS), individuals with CAC >1,000 were found to have greater than 6-fold (16) and 11-fold (17) increased risk for all-cause mortality and CVD mortality, respectively, after 7 years of follow-up. With this high risk for adverse outcomes, it is noteworthy that >50% of the DHS sample with CAC >1,000 have lived with this CVD burden for (now) an average of over 12 years. This suggests that outcomes vary in the type 2 diabetic patient population, even among individuals with the highest risk. This study examined the subset of DHS participants with CAC >1,000 and evaluated whether differences in a range of clinical factors and measurements, including modifiable CVD risk factors, provided further insights into risk for mortality.”

“This investigation focused on 371 high-risk participants (from 260 families) […] The goal of this analysis was to identify clinical and other characteristics that influence risk for all-cause mortality in high-risk (baseline CAC >1,000) DHS participants. […] a predominance of traditional CVD risk factors, including older age, male sex, elevated BMI, and high rates of dyslipidemia and hypertension, was evident in this high-risk subgroup (Table 1). These participants were followed for 8.2 ± 3.0 years (mean ± SD), over which time 41% died. […] a number of indices continued to significantly predict outcome following adjustment for other CVD risk factors (including age, sex, and medication use) […]. Higher cholesterol and LDL concentrations were associated with an increased risk (∼1.3-fold) for mortality […] Slightly larger increases in risk for mortality were observed with changes in kidney function (1.3- to 1.4-fold) and elevated CRP (∼1.4-fold) […] use of cholesterol-lowering medication was less common among the deceased participants; those reporting no use of cholesterol-lowering medication at baseline were at a 1.4-fold increased risk of mortality […] these results confirm that, even among this high-risk group, heterogeneity in known CVD risk factors and associations with adverse outcomes are still observed and support their ongoing consideration as useful tools for individual risk assessment. Finally, the data presented here suggest that use of cholesterol-lowering medication was strongly associated with protection, supporting the known beneficial effects of cholesterol management on CVD risk (28,29). […] data suggest that cholesterol-lowering medications may be used less than recommended and need to be more aggressively targeted as a critical modifiable risk factor.”

iii. Neurological Consequences of Diabetic Ketoacidosis at Initial Presentation of Type 1 Diabetes in a Prospective Cohort Study of Children.

“Patients aged 6–18 years with and without DKA at diagnosis were studied at four time points: <48 h, 5 days, 28 days, and 6 months postdiagnosis. Patients underwent magnetic resonance imaging (MRI) and spectroscopy with cognitive assessment at each time point. Relationships between clinical characteristics at presentation and MRI and neurologic outcomes were examined using multiple linear regression, repeated-measures, and ANCOVA analyses.”

“With DKA, cerebral white matter showed the greatest alterations with increased total white matter volume and higher mean diffusivity in the frontal, temporal, and parietal white matter. Total white matter volume decreased over the first 6 months. For gray matter in DKA patients, total volume was lower at baseline and increased over 6 months. […] Of note, although changes in total and regional brain volumes over the first 5 days resolved, they were associated with poorer delayed memory recall and poorer sustained and divided attention at 6 months. Age at time of presentation and pH level were predictors of neuroimaging and functional outcomes.

CONCLUSIONS DKA at type 1 diabetes diagnosis results in morphologic and functional brain changes. These changes are associated with adverse neurocognitive outcomes in the medium term.”

“This study highlights the common nature of transient focal cerebral edema and associated impaired mental state at presentation with new-onset type 1 diabetes in children. We demonstrate that alterations occur most markedly in cerebral white matter, particularly in the frontal lobes, and are most prominent in the youngest children with the most dramatic acidemia. […] early brain changes were associated with persisting alterations in attention and memory 6 months later. Children with DKA did not differ in age, sex, SES, premorbid need for school assistance/remediation, or postdiagnosis clinical trajectory. Earlier diagnosis of type 1 diabetes in children may avoid the complication of DKA and the neurological consequences documented in this study and is worthy of a major public health initiative.”

“In relation to clinical risk factors, the degree of acidosis and younger age appeared to be the greatest risk factors for alterations in cerebral structure. […] cerebral volume changes in the frontal, temporal, and parietal regions in the first week after diagnosis were associated with lower attention and memory scores 6 months later, suggesting that functional information processing difficulties persist after resolution of tissue water increases in cerebral white matter. These findings have not been reported to date but are consistent with the growing concern over academic performance in children with diabetes (2). […] Brain injury should no longer be considered a rare complication of DKA. This study has shown that it is both frequent and persistent.” (my bold)

iv. Antihypertensive Treatment and Resistant Hypertension in Patients With Type 1 Diabetes by Stages of Diabetic Nephropathy.

“High blood pressure (BP) is a risk factor for coronary artery disease, heart failure, and stroke, as well as for chronic kidney disease. Furthermore, hypertension has been estimated to affect ∼30% of patients with type 1 diabetes (1,2) and both parallels and precedes the worsening of kidney disease in these patients (3–5). […] Despite strong evidence that intensive treatment of elevated BP reduces the risk of cardiovascular disease and microvascular complications, as well as improves the prognosis of patients with diabetic nephropathy (especially with the use of ACE inhibitors [ACEIs] and angiotensin II antagonists [angiotensin receptor blockers, ARBs]) (1,9–11), treatment targets and recommendations seem difficult to meet in clinical practice (12–15). This suggests that the patients might either show poor adherence to the treatment and lifestyle changes or have a suboptimal drug regimen. It is evident that most patients with hypertension might require multiple-drug therapy to reach treatment goals (16). However, certain subgroups of the patients have been considered to have resistant hypertension (RH). RH is defined as office BP that remains above target even after using a minimum of three antihypertensive drugs at maximal tolerated doses, from different classes, one of which is a diuretic. Also, patients with controlled BP using four or more antihypertensive drugs are considered resistant to treatment (17).”
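
The RH definition has two distinct arms, which is easy to miss on a first read. Encoded literally (a toy function of my own; field names invented):

def is_resistant_hypertension(bp_on_target: bool, n_drugs: int,
                              includes_diuretic: bool) -> bool:
    # Arm 1: BP above target despite >= 3 drugs from different classes,
    # at maximal tolerated doses, one of which is a diuretic.
    if not bp_on_target and n_drugs >= 3 and includes_diuretic:
        return True
    # Arm 2: BP controlled, but only with >= 4 antihypertensive drugs.
    if bp_on_target and n_drugs >= 4:
        return True
    return False

print(is_resistant_hypertension(False, 3, True))   # True  (arm 1)
print(is_resistant_hypertension(True, 4, False))   # True  (arm 2)
print(is_resistant_hypertension(False, 2, True))   # False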

“The true prevalence of RH is unknown, but clinical trials suggest a share between 10 and 30% of the hypertensive patients in the general population (18). […] Only a few studies have considered BP control and treatment in patients with type 1 diabetes (2,15,22). Typically these studies have been limited to a small number of participants, which has not allowed stratifying of the patients according to the nephropathy status. The rate of RH is therefore unknown in patients with type 1 diabetes in general and with respect to different stages of diabetic nephropathy. Therefore, we estimated to what extent patients with type 1 diabetes meet the BP targets proposed by the ADA guidelines. We also evaluated the use of antihypertensive medication and the prevalence of RH in the patients stratified by stage of diabetic nephropathy.”

“[A]ll adult patients with type 1 diabetes from >80 hospitals and primary healthcare centers across Finland were asked to participate. Type 1 diabetes was defined by age at onset of diabetes <40 years, C-peptide ≤0.3 nmol/L, and insulin treatment initiated within 1 year of diagnosis, if C-peptide was not measured. […] we used two different ADA BP targets: <130/85 mmHg, which was the target until 2000 (6), and <130/80 mmHg, which was the target between 2001 and 2012 (7). Patients were divided into groups based on whether their BP had reached the target or not and whether the antihypertensive drug was in use or not. […] uncontrolled hypertension was defined as failure to achieve target BP, based on these two different ADA guidelines, despite use of antihypertensive medication. RH was defined as failure to achieve the goal BP (<130/85 mmHg) even after using a minimum of three antihypertensive drugs, from different classes, one of which was a diuretic. […] On the basis of eGFR (mL/min/1.73 m2) level, patients were classified into five groups according to the Kidney Disease Outcomes Quality Initiative (KDOQI) guidelines: stage 1 eGFR ≥90, stage 2 eGFR 60–89, stage 3 eGFR 30–59, stage 4 eGFR 15–29, and stage 5 eGFR <15. Patients who were on dialysis were classified into stage 5. […] A total of 3,678 patients with complete data on systolic and diastolic BP and nephropathy status were identified from the FinnDiane database. […] The mean age was 38.0 ± 12.0 and mean duration of diabetes 22.1 ± 12.3 years.  […] The patients with advanced diabetic nephropathy had higher BP, worse dyslipidemia, poorer glycemic control, and more insulin resistance and macrovascular complications. BMI values were lower in the dialysis patients, probably due to renal cachexia.”
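
The KDOQI staging rule used in the study, encoded directly from the quoted cut-points (eGFR in mL/min/1.73 m²; my own sketch):

def kdoqi_stage(egfr: float, on_dialysis: bool = False) -> int:
    # Patients on dialysis are assigned stage 5 regardless of eGFR.
    if on_dialysis:
        return 5
    if egfr >= 90:
        return 1
    if egfr >= 60:
        return 2
    if egfr >= 30:
        return 3
    if egfr >= 15:
        return 4
    return 5

print(kdoqi_stage(75))   # stage 2
print(kdoqi_stage(20))   # stage 4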

“Of all patients, 60.9% did not reach the BP target <130/85 mmHg, and the proportion was 70.3% with the target of <130/80 mmHg. […] The patients who were not on target had higher age and longer duration of diabetes and were more likely to be men. They also had poorer glycemic and lipid control as well as more micro- and macrovascular complications. […] Based on the BP target <130/85 mmHg, more than half of the patients in the normoalbuminuria group did not reach the BP target, and the share increased along with the worsening of nephropathy; two-thirds of the patients in the microalbuminuria group and four-fifths in the macroalbuminuria group were not on target, while even 90% of the dialysis and kidney transplant patients did not reach the target (Fig. 1A). Based on the stricter BP target of <130/80 mmHg, the numbers were obviously worse, but the trend was the same (Fig. 1B).”

“About 37% of the FinnDiane patients had antihypertensive treatment […] Whereas 14.1% of the patients with normal AER [Albumin Excretion Rate] had antihypertensive treatment, the proportions were 60.5% in the microalbuminuric, 90.3% in the macroalbuminuric, 88.6% in the dialysis, and 91.2% in the kidney transplant patients. However, in all groups, only a minority of the patients had BP values on target with the antihypertensive drug treatment they were prescribed […] The mean numbers of antihypertensive drugs varied within the nephropathy groups between those who had BP on target and those who did not […]. However, only in the micro- (P = 0.02) and macroalbuminuria (P = 0.003) groups were the mean numbers of the drugs higher if the BP was not on target, compared with those who had reached the targets. Notably, among the patients with normoalbuminuria who had not reached the BP target, 58% and, of the patients with microalbuminuria, 61% were taking only one antihypertensive drug. In contrast, more than half of the dialysis and 40% of the macroalbuminuric and transplanted patients, who had not reached the targets, had at least three drugs in their regimen. Moreover, one-fifth of the dialysis, 15% of the macroalbuminuric, and 10% of the transplanted patients had at least four antihypertensive drugs in use without reaching the target (Table 2). Almost all patients treated with antihypertensive drugs in the normo-, micro-, and macroalbuminuria groups (76% of normo-, 93% of micro-, and 89% of macroalbuminuric patients) had ACEIs or ARBs in the regimen. The proportions were lower in the ESRD groups: 42% of the dialysis and 29% of the transplanted patients were taking these drugs.”

“In general, the prevalence of RH was 7.9% for all patients with type 1 diabetes (n = 3,678) and 21.2% for the antihypertensive drug–treated patients (n = 1,370). The proportion was higher in men than in women (10.0 vs. 5.7%, P < 0.0001) […] When the patients were stratified by nephropathy status, the figures changed; in the normoalbuminuria group, the prevalence of RH was 1.2% of all and 8.7% of the drug-treated patients. The corresponding numbers were 4.7 and 7.8% for the microalbuminuric patients, 28.1 and 31.2% for the macroalbuminuric patients, 36.6 and 41.3% for the patients on dialysis, and 26.3 and 28.8% for the kidney-transplanted patients, respectively […] The prevalence of RH also increased along with the worsening of renal function. The share was 1.4% for all and 7.4% for drug-treated patients at KDOQI stage 1. The corresponding numbers were 3.8 and 10.0% for the patients at stage 2, 26.6 and 30.0% for the patients at stage 3, 54.8 and 56.0% for the patients at stage 4, and 48.0 and 52.1% for those at stage 5, when kidney transplantation patients were excluded. […] In a multivariate logistic regression analysis, higher age, lower eGFR, higher waist-to-hip ratio, higher triglycerides, as well as microalbuminuria and macroalbuminuria, when normoalbuminuria was the reference category, were independently associated with RH […] A separate analysis also showed that dietary sodium intake, based on urinary sodium excretion rate, was independently associated with RH.”

“The current study shows that the prevalence of RH in patients with type 1 diabetes increases alongside the worsening of diabetic nephropathy. Whereas less than one-tenth of the antihypertensive drug–treated patients with normo- or microalbuminuria met the criteria for RH, the proportions were substantially higher among the patients with overt nephropathy: one-third of the patients with macroalbuminuria or a transplanted kidney and even 40% of the patients on dialysis. […] the prevalence of RH for the drug-treated patients was even higher (56%) in patients at the predialysis stage (eGFR 15–29). The findings are consistent with other studies that have demonstrated that chronic kidney disease is a strong predictor of failure to achieve BP targets despite the use of three or more different types of antihypertensive drugs in the general hypertensive population (26).”

“The prevalence of RH was 21.2% of the patients treated with antihypertensive drugs. Previous studies have indicated a prevalence of RH of 13% among patients being treated for hypertension (19–21,27). […] the prevalence [of RH] seems to be […] higher among the drug-treated type 1 diabetic patients. These figures can only partly be explained by the use of a lower treatment target for BP, as recommended for patients with diabetes (6), since even when we used the BP target recommended for hypertensive patients (<140/90 mmHg), our data still showed a higher prevalence of RH (17%).”

“The study also confirmed previous findings that a large number of patients with type 1 diabetes do not achieve the recommended BP targets. Although the prevalence of RH increased with the severity of diabetic nephropathy, our data also suggest that patients with normo- and microalbuminuria might have a suboptimal drug regimen, since the majority of those who had not reached the BP target were taking only one antihypertensive drug. […] There is therefore an urgent need to improve antihypertensive treatment, not only in patients with overt nephropathy but also in those who have elevated BP without complications or early signs of renal disease. Moreover, further emphasis should be placed on the transplanted patients, since it is well known that hypertension affects both graft and patient survival negatively (30).” (my bold)

v. Association of Autoimmunity to Autonomic Nervous Structures With Nerve Function in Patients With Type 1 Diabetes: A 16-Year Prospective Study.

“Neuropathy is a chronic complication that includes a number of distinct syndromes and autonomic dysfunctions and contributes to increase morbidity and mortality in the diabetic population. In particular, cardiovascular autonomic neuropathy (CAN) is an independent risk factor for mortality in type 1 diabetes and is associated with poor prognosis and poor quality of life (1–3). Cardiovascular (CV) autonomic regulation rests upon a balance between sympathetic and parasympathetic innervation of the heart and blood vessels controlling heart rate and vascular dynamics. CAN encompasses several clinical manifestations, from resting tachycardia to fatal arrhythmia and silent myocardial infarction (4).

The mechanisms responsible for altered neural function in diabetes are not fully understood, and it is assumed that multiple mutually perpetuating pathogenic mechanisms may concur. These include dysmetabolic injury, neurovascular insufficiency, deficiency of neurotrophic growth factors and essential fatty acids, advanced glycosylation products (5,6), and autoimmune damage. Independent cross-sectional and prospective (7–13) studies identified circulating autoantibodies to autonomic nervous structures and hypothesized that immune determinants may be involved in autonomic nerve damage in type 1 diabetes. […] However, demonstration of a cause–effect relationship between antibodies (Ab) and diabetic autonomic neuropathy awaits confirmation.”

“We report on a 16-year follow-up study specifically designed to prospectively examine a cohort of patients with type 1 diabetes and aimed at assessing whether the presence of circulating Ab to autonomic nervous structures is associated with increased risk and predictive value of developing CAN. This, in turn, would be highly suggestive of the involvement of autoimmune mechanisms in the pathogenesis of this complication.”

“The present prospective study, conducted in young patients without established autonomic neuropathy at recruitment and followed for over 16 years until adulthood, strongly indicates that a cause–effect relationship may exist between auto-Ab to autonomic nervous tissues and development of diabetic autonomic neuropathy. Incipient or established CAN (22) reached a prevalence of 68% among the Ab-positive patients, significantly higher compared with the Ab-negative patients. […] Logistic regression analysis indicates that auto-Ab carry an almost 15-fold increased RR of developing an abnormal DB [deep breathing] test over 16 years and an almost sixfold increase of developing at least one abnormal CV [cardiovascular] test, independent of other variables. […] Circulating Ab to autonomic structures are associated with the development of autonomic dysfunction in young diabetic patients independent of glycemic control. […] autoimmune mechanisms targeting sympathetic and parasympathetic structures may play a primary etiologic role in the development and progression of autonomic dysfunction in type 1 diabetes in the long term. […] positivity for auto-Ab had a high positive predictive value for the later development of autonomic neuropathy.”

“Diabetic autonomic neuropathy, possibly the least recognized and most overlooked of diabetes complications, has increasingly gained attention as an independent predictor of silent myocardial ischemia and mortality, as consistently indicated by several cross-sectional studies (2,3,33). The pooled prevalence rate risk for silent ischemia is estimated at 1.96 by meta-analysis studies (5). In this report, established CAN (22) was detected in nearly 20% of young adult patients with acceptable metabolic control, after over approximately 23 years of diabetes duration, against 12% of patients of the same cohort with subtle asymptomatic autonomic dysfunction (one abnormal CV test) a decade earlier, in line with other studies in type 1 diabetes (2,24). Approximately 30% of the patients developed signs of peripheral somatic neuropathy not associated with autonomic dysfunction. This discrepancy suggests the participation of pathogenic mechanisms different from metabolic control and a distinct clinical course, as indicated by the DCCT study, where hyperglycemia had a less robust relationship with autonomic than somatic neuropathy (6).”

“Furthermore, this study shows that autonomic neuropathy, together with female sex and the occurrence of severe hypoglycemia, is a major determinant for poor quality of life in patients with type 1 diabetes. This is in agreement with previous reports (35) and linked to such invalidating symptoms as orthostatic hypotension and chronic diarrhea. […] In conclusion, the current study provides persuasive evidence for a primary pathogenic role of autoimmunity in the development of autonomic diabetic neuropathy. However, the mechanisms through which auto-Ab impair their target organ function, whether through classical complement action, proapoptotic effects of complement, enhanced antigen presentation, or channelopathy (26,39,40), remain to be elucidated.” (my bold)

vi. Body Composition Is the Main Determinant for the Difference in Type 2 Diabetes Pathophysiology Between Japanese and Caucasians.

“According to current understanding, the pathophysiology of type 2 diabetes is different in Japanese compared with Caucasians in the sense that Japanese are unable to compensate insulin resistance with increased insulin secretion to the same extent as Caucasians. Prediabetes and early stage diabetes in Japanese are characterized by reduced β-cell function combined with lower degree of insulin resistance compared with Caucasians (8–10). In a prospective, cross-sectional study of individuals with normal glucose tolerance (NGT) and impaired glucose tolerance (IGT), it was demonstrated that Japanese in Japan were more insulin sensitive than Mexican Americans in the U.S. and Arabs in Israel (11). The three populations also differed with regards to β-cell response, whereas the disposition index — a measure of insulin secretion relative to insulin resistance — was similar across ethnicities for NGT and IGT participants. These studies suggest that profound differences in type 2 diabetes pathophysiology exist between different populations. However, few attempts have been made to establish the underlying demographic or lifestyle-related factors such as body composition, physical fitness, and physical activity leading to these differences.”

“The current study aimed at comparing Japanese and Caucasians at various glucose tolerance states, with respect to 1) insulin sensitivity and β-cell response and 2) the role of demographic, genetic, and lifestyle-related factors as underlying predictors for possible ethnic differences in insulin sensitivity and β-cell response. […] In our study, glucose profiles from OGTTs [oral glucose tolerance tests] were similar in Japanese and Caucasians, whereas insulin and C-peptide responses were lower in Japanese participants compared with Caucasians. In line with these observations, measures of β-cell response were generally lower in Japanese, who simultaneously had higher insulin sensitivity. Moreover, β-cell response relative to the degree of insulin resistance as measured by disposition indices was virtually identical in the two populations. […] We […] confirmed the existence of differences in insulin sensitivity and β-cell response between Japanese and Caucasians and showed for the first time that a major part of these differences can be explained by differences in body composition […]. On the basis of these results, we propose a similar pathophysiology of type 2 diabetes in Caucasians and Japanese with respect to insulin sensitivity and β-cell function.”

October 12, 2017 Posted by | Cardiology, Diabetes, Epidemiology, Health Economics, Medicine, Nephrology, Neurology, Pharmacology, Studies | Leave a comment

Words

Most of the words below are words which I encountered while reading Flashman and the Angel of the Lord and Flashman on the March.

Guerdon. Frowst. Dunnage. Veldt. Whelk. Tup. Gannet. Hawser. Doss-house. Brogue. Tucker. Voluptuary. Morion. Flawn. Ague. Fusee/Fuzee. Jimp. Anent. Skein. Fob.

Arbitrament. Whiffler. Abide. Beldam. Schiltron. Pickaninny/piccaninny. Gird/girt. Despond. Whittling. Glim. Peignoir. Gamp. Connubial. Ensconce. Confab. Trestle. Squawl. Paterfamilias. Dabble. Peal.

Buff. Duenna. Yawl. Palaver. Lateen. Felucca. Coracle. Gimlet. Tippet. Toggery. Dry-gulch. Nuncheon. Lovelock. Josser. Casque. Withy. Weir. Sonsy. Guzzle. Hearty.

Rattle. Pippin. Trencherman. Potation. Bilbo. Burly. Haulier. Roundelay. Lych-gate. Skilligalee/skilly. Labial. Dudgeon. Caravanserai. Mithridatism. Avast. Lagniappe. Thigmotaxis. Afforesting. Immiseration. Chamberlain.

October 11, 2017 Posted by | Books, Language | Leave a comment

Diabetes and the Brain (V)

I have blogged this book in some detail in the past, but I never really finished my intended coverage of it. This post is an attempt to rectify that.

Below I have added some quotes and observations from some of the chapters I have not covered in my previous posts about the book. I bolded some key observations along the way.

“A substantial number of studies have assessed the effect of type 2 diabetes on cognitive functioning with psychometric tests. The majority of these studies reported subtle decrements in individuals with type 2 diabetes relative to non-diabetic controls (2, 4). […] the majority of studies in patients with type 2 diabetes reported moderate reductions in neuropsychological test performance, mainly in memory, information-processing speed, and mental flexibility, a pattern that is also observed in aging-related cognitive decline. […] the observed cognitive decrements are relatively subtle and rather non-specific. […] All in all, disturbances in glucose and insulin metabolism and associated vascular risk factors are associated with modest reductions in cognitive performance in “pre-diabetic stages.” Consequently, it may well be that the cognitive decrements that can be observed in patients with type 2 diabetes also start to develop before the actual onset of the diabetes. […] Because the different vascular and metabolic risk factors that are clustered in the metabolic syndrome are strongly interrelated, the contribution of each of the individual factors will be difficult to assess.”

“Aging-related changes on brain imaging include vascular lesions and focal and global atrophy. Vascular lesions include (silent) brain infarcts and white-matter hyperintensities (WMHs). WMHs are common in the general population and their prevalence increases with age, approaching 100% by the age of 85 (69). The prevalence of lacunar infarcts also increases with age, up to 5% for symptomatic infarcts and 30% for silent infarcts by the age of 80 (70). In normal aging, the brain gradually reduces in size, which becomes particularly evident after the age of 70 (71). This loss of brain volume is global […] age-related changes of the brain […] are often relatively more pronounced in older patients with type 2 […] A recent systematic review showed that patients with diabetes have a 2-fold increased risk of (silent) infarcts compared to non-diabetic persons (75). The relationship between type 2 diabetes and WMHs is subject to debate. […] there are now clear indications that diabetes is a risk factor for WMH progression (82). […] The presence of the APOE ε4 allele is a risk factor for the development of Alzheimer’s disease (99). Patients with type 2 diabetes who carry the APOE ε4 allele appeared to have a 2-fold increased risk of dementia compared to persons with either of these risk factors in isolation (100, 101).”

“In adults with type 1 diabetes the occurrence of microvascular complications is associated with reduced cognitive performance (137) and accelerated cognitive decline (138). Moreover, type 1 diabetes is associated with decreased white-matter volume of the brain and diminished cognitive performance in particular in patients with retinopathy (139). Microvascular complications are also thought to play a role in the development of cognitive decline in patients with type 2 diabetes, but studies that have specifically examined this association are scarce. […] Currently there are no established specific treatment measures to prevent or ameliorate cognitive impairments in patients with diabetes.”

“Clinicians should be aware of the fact that cognitive decrements are relatively more common among patients with diabetes. […] it is important to note that cognitive complaints as spontaneously expressed by the patient are often a poor indicator of the severity of cognitive decrements. People with moderate disturbances may express marked complaints, while people with marked disturbances of cognition often do not complain at all. […] Diabetes is generally associated with relatively mild impairments, mainly in attention, memory, information-processing speed, and executive function. Rapid cognitive decline or severe cognitive impairment, especially in persons under the age of 60 is indicative of other underlying pathology. Potentially treatable causes of cognitive decline such as depression should be excluded. People who are depressed often present with complaints of concentration or memory.”

“Insulin resistance increases with age, and the organism maintains normal glucose levels as long as it can produce enough insulin (hyperinsulinemia). Some individuals are less capable than others to mount sustained hyperinsulinemia and will develop glucose intolerance and T2D (23). Other individuals with insulin resistance will maintain normal glucose levels at the expense of hyperinsulinemia but their pancreas will eventually “burn out,” will not be able to sustain hyperinsulinemia, and will develop glucose intolerance and diabetes (23). Others will continue having insulin resistance, may have or not have glucose intolerance, will not develop diabetes, but will have hyperinsulinemia and suffer its consequences. […] Elevations of adiposity result in insulin resistance, causing the pancreas to increase insulin to abnormal levels to sustain normal glucose, and if and when the pancreas can no longer sustain hyperinsulinemia, glucose intolerance and diabetes will ensue. However, the overlap between these processes is not complete (26). Not all persons with higher adiposity will develop insulin resistance and hyperinsulinemia, but most will. Not all persons with insulin resistance and hyperinsulinemia will develop glucose intolerance and diabetes, and this depends on genetic and other susceptibility factors that are not completely understood (25, 26). Some adults develop diabetes without going through insulin resistance and hyperinsulinemia, but it is thought that most will. The susceptibility to adiposity, that is, the risk of developing the above-described sequence in response to adiposity, varies by gender (4) and particularly by ethnicity. […] Chinese and Southeast Asians are more susceptible than Europeans to developing insulin resistance with comparable increases of adiposity (2).”

“There is very strong evidence that adiposity, hyperinsulinemia, and T2D are related to cognitive impairment syndromes, whether AD [Alzheimer’s Disease], VD [Vascular Dementia], or MCI [Mild Cognitive Impairment], and whether the main mechanism is cerebrovascular disease or non-vascular mechanisms. However, more evidence is needed to establish causation. If the relation between these conditions and dementia were to be causal, the public health implications are enormous. […] Diabetes mellitus affects about 20% of adults older than 65 years of age […] two-thirds of the adult population in the United States are overweight or obese, and the short-term trend is for this to worsen. These trends are also being observed worldwide. […] We estimated that in New York City the presence of diabetes or hyperinsulinemia in elderly people could account for 39% of cases of AD (78).”

“Psychiatric illnesses in general may be more common among persons with diabetes than in community-based samples, specifically affective and anxiety-related disorders (4). Persons with diabetes are twice as likely to have depression as non-diabetic persons (5). A review of 20 studies on the comorbidity of depression and diabetes found that the average prevalence was about 15%, and ranged from 8.5 to 40%, three times the rate of depressive disorders found in the general adult population of the United States (4–7). The rates of clinically significant depressive symptoms among persons with diabetes are even higher – ranging from 21.8 to 60.0% (8). Recent studies have indicated that persons with type II diabetes, accompanied by either major or minor depression, have significantly higher mortality rates than non-depressed persons with diabetes (9–10) […] A recent meta-analysis reported that patients with type 2 diabetes have a 2-fold increased risk of depression compared to non-diabetic persons (142). The prevalence of major depressive disorder in patients with type 2 diabetes was estimated at 11% and depressive symptoms were observed in 31% of the patients.” (As should be obvious from the above quotes, the estimates vary a lot here, but they tend to be high – US.)

Depression is an important risk factor for cardiovascular disease (Glassman, Maj & Sartorius is a decent book on these topics), and diabetes is also an established risk factor. Might this not lead to a hypothesis that diabetics who are depressed may do particularly poorly, with higher mortality rates and so on? Yes. …and it seems that this is also what people tend to find when they look at this stuff:

“Persons with diabetes and depressive symptoms have mortality rates nearly twice as high as persons with diabetes and no depressive symptomatology (9). Persons with co-occurring medical illness and depression also have higher health care utilization leading to higher direct and indirect health care costs (12–13) […]. A meta-analysis of the relationship between depression and diabetes (types I and II) indicated that an increase in the number of depressive symptoms is associated with an increase in the severity and number of diabetic complications, including retinopathy, neuropathy, and nephropathy (15–17). Compared to persons with either diabetes or depression alone, individuals with co-occurring diabetes and depression have shown poorer adherence to dietary and physical activity recommendations, decreased adherence to hypoglycemic medication regimens, higher health care costs, increases in HgbA1c levels, poorer glycemic control, higher rates of retinopathy, and macrovascular complications such as stroke and myocardial infarction, higher ambulatory care use, and use of prescriptions (14, 18–22). Diabetes and depressive symptoms have been shown to have strong independent effects on physical functioning, and individuals experiencing either of these conditions will have worse functional outcomes than those with neither or only one condition (19–20). Nearly all of diabetes management is conducted by the patient and those with co-occurring depression may have poorer outcomes and increased risk of complications due to less adherence to glucose, diet, and medication regimens […] There is some evidence that treatment of depression with antidepressant and/or cognitive-behavioral therapies can improve glycemic control and glucose regulation without any change in the treatment for diabetes (27, 28) […] One important finding is [also] that treatment of depression seems to be able to halt atrophy of the hippocampus and may even lead to stimulation of neurogenesis of hippocampal cells (86).”

“Diabetic neuropathy is a severe, disabling chronic condition that affects a significant number of individuals with diabetes. Long considered a disease of the peripheral nervous system, there is mounting evidence of central nervous system involvement. Recent advances in neuroimaging methods have led to a better understanding and refinement of how diabetic neuropathy affects the central nervous system. […] spinal cord atrophy is an early process being present not only in established-DPN [diabetic peripheral neuropathy] but also even in subjects with relatively modest impairments of nerve function (subclinical-DPN) […] findings […] show that the neuropathic process in diabetes is not confined to the peripheral nerve and does involve the spinal cord. Worryingly, this occurs early in the neuropathic process. Even at the early DPN stage, extensive and perhaps even irreversible damage may have occurred. […] it is likely that the insult of diabetes is generalised, concomitantly affecting the PNS and CNS. […] It is noteworthy that a variety of therapeutic interventions specifically targeted at peripheral nerve damage in DPN have thus far been ineffective, and it is possible that this may in part be due to inadequate appreciation of the full extent of CNS involvement in DPN.”

Interestingly, if the CNS is also involved in the pathogenesis of (‘human’) diabetic neuropathy it may have some relevance to the complaint that some methods of diabetes-induction in animal models cause (secondary) damage to central structures – a complaint which I’ve previously made a note of e.g. in the context of my coverage of Horowitz & Samson’s book. The relevance of this depends quite a bit on whether it’s the same central structures that are affected in the animal models and in humans. It probably isn’t. These guys also discuss this stuff in some detail, though I won’t cover all of it. Some observations on related topics are however worth including here:

“Several studies examining behavioral learning have shown progressive deficits in diabetic rodents, whereas simple avoidance tasks are preserved. Impaired spatial learning and memory as assessed by the Morris water maze paradigm occur progressively in both the spontaneously diabetic BB/Wor rat and STZ-induced diabetic rodents (1, 11, 12, 22, 41, 42). The cognitive components reflected by impaired Morris water maze performances involve problem-solving, enhanced attention and storage, and retrieval of information (43). […] Observations regarding cognition and plasticity in models characterized by hyperglycemia and insulin deficiency (i.e., alloxan or STZ-diabetes, BB/Wor rats, NOD-mice), often referred to as models of type 1 diabetes, are quite consistent. With respect to clinical relevance, it should be noted that the level of glycemia in these models markedly exceeds that observed in patients. Moreover, changes in cognition as observed in these models are much more rapid and severe than in adult patients with type 1 diabetes […], even if the relatively shorter lifespan of rodents is taken into account. […] In my view these models of “type 1 diabetes” may help to understand the pathophysiology of the effects of severe chronic hyperglycemia–hypoinsulinemia on the brain, but mimic the impact of type 1 diabetes on the brain in humans only to a limited extent.”

“Abnormalities in cognition and plasticity have also been noted in the majority of models characterized by insulin resistance, hyperinsulinemia, and (modest) hyperglycemia (e.g., Zucker fa/fa rat, Diabetic Zucker rat, db/db mouse, GK rat, OLETF rat), often referred to as models of type 2 diabetes. With regard to clinical relevance, it is important to note that although the endocrinological features of these models do mimic certain aspects of type 2 diabetes, the genetic defect that underlies each of them is not the primary defect encountered in humans with type 2 diabetes. Some of the genetic abnormalities that lead to a “diabetic phenotype” may also have a direct impact on the brain. […] some studies using these models report abnormalities in cognition and plasticity, even in the absence of hyperglycemia […] In addition, in the majority of available models insulin resistance and associated metabolic abnormalities develop at a relatively early age. Although this is practical for research purposes it needs to be acknowledged that type 2 diabetes is typically a disease of older age in humans. […] It is therefore still too early to determine the clinical significance of the available models in understanding the impact of type 2 diabetes on the brain. Further efforts into the development of a valid model are warranted.”

“[A] key problem in clinical studies is the complexity and multifactorial nature of cerebral complications in relation to diabetes. Metabolic factors in patients (e.g., glucose levels, insulin levels, insulin sensitivity) are strongly interrelated and related to other factors that may affect the brain (e.g., blood pressure, lipids, inflammation, oxidative stress). Derangements in these factors in the periphery and the brain may be dissociated, for example, through the role of the blood–brain barrier, or adaptations of transport across this barrier, or through differences in receptor functions and post-receptor signaling cascades in the periphery and the brain. The different forms of treatments that patients receive add to the complexity. A key contribution of animal studies may be to single out individual components and study them in isolation or in combination with a limited number of other factors in a controlled fashion.”

October 9, 2017 Posted by | Books, Cardiology, Diabetes, Epidemiology, Medicine, Neurology, Pharmacology | Leave a comment

Quotes

i. “We live on an island surrounded by a sea of ignorance. As our island of knowledge grows, so does the shore of our ignorance.” (John Archibald Wheeler)

ii. “There are many modes of thinking about the world around us and our place in it. I like to consider all the angles from which we might gain perspective on our amazing universe and the nature of existence.” (-ll-)

iii. “A couple of months in the laboratory can frequently save a couple of hours in the library.” (Frank Westheimer)

iv. “Those who do monumental work don’t need monuments.” (Baba Amte)

v. “Science is the most exciting and sustained enterprise of discovery in the history of our species. It is the great adventure of our time. We live today in an era of discovery that far outshadows the discoveries of the New World five hundred years ago.” (Michael Crichton)

vi. “Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results.” (-ll-)

vii. “The romantic view of the natural world as a blissful Eden is only held by people who have no actual experience of nature. People who live in nature are not romantic about it at all. They may hold spiritual beliefs about the world around them, they may have a sense of the unity of nature or the aliveness of all things, but they still kill the animals and uproot the plants in order to eat, to live. If they don’t, they will die.” (-ll-)

viii. “Age does not bring you wisdom, age brings you wrinkles.” (Estelle Getty)

ix. “Political correctness is tyranny with manners.” (Charlton Heston)

x. “At least half of my life’s many mistakes can be safely put down to impetuosity: the other half derive from inertia.” (Donald James)

xi. “Everybody is forever saying that the essay is dead. This is always said in essays.” (John Leonard)

xii. “I’ve been accused of being aloof. I’m not. I’m just wary.” (Paul Newman)

xiii. “Simple calculations based on a range of variables are better than elaborate ones based on limited input.” (Ralph Brazelton Peck)

xiv. “The most fruitful research grows out of practical problems.” (-ll-)

xv. “We cannot change the cards we are dealt, just how we play the hand.” (Randy Pausch)

xvi. “Good judgment comes from experience. Experience comes from bad judgment.” (-ll-)

xvii. “You don’t find time for important things, you make it.” (-ll-)

xviii. “When you’re screwing up and nobody says anything to you anymore, that means they’ve given up on you.” (-ll-)

xix. “The less we understand a phenomenon, the more variables we require to explain it.” (Russell Ackoff)

xx. “Things that people learn purely out of curiosity can have a revolutionary effect on human affairs.” (Frederick Seitz)

October 7, 2017 Posted by | Quotes/aphorisms | Leave a comment

The mystery of over-parametrization in neural networks

October 6, 2017 Posted by | Computer science, Mathematics | Leave a comment

Physical chemistry

This is a good book; I really liked it, just as I really liked the other book in the series which I read by the same author, the one about the laws of thermodynamics (blog coverage here). I know much, much more about physics than I do about chemistry, and even though some of it was review I learned a lot from this one. Recommended, certainly if you find the quotes below interesting. As usual, I’ve added some observations from the book and some links to topics/people/etc. covered/mentioned in the book below.

Some quotes:

“Physical chemists pay a great deal of attention to the electrons that surround the nucleus of an atom: it is here that the chemical action takes place and the element expresses its chemical personality. […] Quantum mechanics plays a central role in accounting for the arrangement of electrons around the nucleus. The early ‘Bohr model’ of the atom, […] with electrons in orbits encircling the nucleus like miniature planets and widely used in popular depictions of atoms, is wrong in just about every respect—but it is hard to dislodge from the popular imagination. The quantum mechanical description of atoms acknowledges that an electron cannot be ascribed to a particular path around the nucleus, that the planetary ‘orbits’ of Bohr’s theory simply don’t exist, and that some electrons do not circulate around the nucleus at all. […] Physical chemists base their understanding of the electronic structures of atoms on Schrödinger’s model of the hydrogen atom, which was formulated in 1926. […] An atom is often said to be mostly empty space. That is a remnant of Bohr’s model in which a point-like electron circulates around the nucleus; in the Schrödinger model, there is no empty space, just a varying probability of finding the electron at a particular location.”

“No more than two electrons may occupy any one orbital, and if two do occupy that orbital, they must spin in opposite directions. […] this form of the principle [the Pauli exclusion principleUS] […] is adequate for many applications in physical chemistry. At its very simplest, the principle rules out all the electrons of an atom (other than atoms of one-electron hydrogen and two-electron helium) having all their electrons in the 1s-orbital. Lithium, for instance, has three electrons: two occupy the 1s orbital, but the third cannot join them, and must occupy the next higher-energy orbital, the 2s-orbital. With that point in mind, something rather wonderful becomes apparent: the structure of the Periodic Table of the elements unfolds, the principal icon of chemistry. […] The first electron can enter the 1s-orbital, and helium’s (He) second electron can join it. At that point, the orbital is full, and lithium’s (Li) third electron must enter the next higher orbital, the 2s-orbital. The next electron, for beryllium (Be), can join it, but then it too is full. From that point on the next six electrons can enter in succession the three 2p-orbitals. After those six are present (at neon, Ne), all the 2p-orbitals are full and the eleventh electron, for sodium (Na), has to enter the 3s-orbital. […] Similar reasoning accounts for the entire structure of the Table, with elements in the same group all having analogous electron arrangements and each successive row (‘period’) corresponding to the next outermost shell of orbitals.”
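
The filling sequence described in this passage is mechanical enough to express as a short program. Here is a minimal Python sketch of my own (not from the book) that fills subshells in the usual Madelung (n + l) order; it reproduces the configurations mentioned above for lithium and sodium, though it ignores the handful of real exceptions such as chromium and copper.

```python
# Minimal aufbau sketch: fill subshells in Madelung (n + l, then n) order.
# Ignores the real-world exceptions to the rule (e.g., Cr, Cu).

SUBSHELL_LETTERS = "spdf"

def subshells(max_n=5):
    shells = [(n, l) for n in range(1, max_n + 1) for l in range(min(n, 4))]
    # Madelung rule: lower n + l fills first; ties broken by lower n.
    return sorted(shells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(z):
    parts = []
    remaining = z
    for n, l in subshells():
        if remaining == 0:
            break
        capacity = 2 * (2 * l + 1)   # Pauli principle: 2 electrons per orbital
        filled = min(capacity, remaining)
        parts.append(f"{n}{SUBSHELL_LETTERS[l]}{filled}")
        remaining -= filled
    return " ".join(parts)

for z, symbol in [(2, "He"), (3, "Li"), (11, "Na"), (18, "Ar")]:
    print(f"{symbol} (Z={z}): {configuration(z)}")
# He (Z=2): 1s2
# Li (Z=3): 1s2 2s1
# Na (Z=11): 1s2 2s2 2p6 3s1
# Ar (Z=18): 1s2 2s2 2p6 3s2 3p6
```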

“[O]n crossing the [Periodic] Table from left to right, atoms become smaller: even though they have progressively more electrons, the nuclear charge increases too, and draws the clouds in to itself. On descending a group, atoms become larger because in successive periods new outermost shells are started (as in going from lithium to sodium) and each new coating of cloud makes the atom bigger […] the ionization energy [is] the energy needed to remove one or more electrons from the atom. […] The ionization energy more or less follows the trend in atomic radii but in an opposite sense because the closer an electron lies to the positively charged nucleus, the harder it is to remove. Thus, ionization energy increases from left to right across the Table as the atoms become smaller. It decreases down a group because the outermost electron (the one that is most easily removed) is progressively further from the nucleus. […] the electron affinity [is] the energy released when an electron attaches to an atom. […] Electron affinities are highest on the right of the Table […] An ion is an electrically charged atom. That charge comes about either because the neutral atom has lost one or more of its electrons, in which case it is a positively charged cation […] or because it has captured one or more electrons and has become a negatively charged anion. […] Elements on the left of the Periodic Table, with their low ionization energies, are likely to lose electrons and form cations; those on the right, with their high electron affinities, are likely to acquire electrons and form anions. […] ionic bonds […] form primarily between atoms on the left and right of the Periodic Table.”

“Although the Schrödinger equation is too difficult to solve for molecules, powerful computational procedures have been developed by theoretical chemists to arrive at numerical solutions of great accuracy. All the procedures start out by building molecular orbitals from the available atomic orbitals and then setting about finding the best formulations. […] Depictions of electron distributions in molecules are now commonplace and very helpful for understanding the properties of molecules. It is particularly relevant to the development of new pharmacologically active drugs, where electron distributions play a central role […] Drug discovery, the identification of pharmacologically active species by computation rather than in vivo experiment, is an important target of modern computational chemistry.”

“Work […] involves moving against an opposing force; heat […] is the transfer of energy that makes use of a temperature difference. […] the internal energy of a system that is isolated from external influences does not change. That is the First Law of thermodynamics. […] A system possesses energy, it does not possess work or heat (even if it is hot). Work and heat are two different modes for the transfer of energy into or out of a system. […] if you know the internal energy of a system, then you can calculate its enthalpy simply by adding to U the product of pressure and volume of the system (H = U + pV). The significance of the enthalpy […] is that a change in its value is equal to the output of energy as heat that can be obtained from the system provided it is kept at constant pressure. For instance, if the enthalpy of a system falls by 100 joules when it undergoes a certain change (such as a chemical reaction), then we know that 100 joules of energy can be extracted as heat from the system, provided the pressure is constant.”
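
To make the H = U + pV bookkeeping concrete, here is a toy calculation (my numbers, purely illustrative) that reproduces the 100-joule example from the quote:

```python
# Toy First-Law bookkeeping (illustrative numbers, not from the book).
# At constant pressure, the heat released by the system equals -dH.

p = 100_000.0    # constant pressure, Pa (roughly 1 atm)
dU = -150.0      # change in internal energy, J
dV = 5.0e-4      # volume change of the system, m^3

dH = dU + p * dV # enthalpy change, since H = U + pV and p is constant

print(f"Enthalpy change: {dH:.0f} J")                         # -100 J
print(f"Heat released at constant p: {-dH:.0f} J")            # 100 J, as in the quote
print(f"Expansion work done by the system: {p * dV:.0f} J")   # 50 J
```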

“In the old days of physical chemistry (well into the 20th century), the enthalpy changes were commonly estimated by noting which bonds are broken in the reactants and which are formed to make the products, so A → B might be the bond-breaking step and B → C the new bond-formation step, each with enthalpy changes calculated from knowledge of the strengths of the old and new bonds. That procedure, while often a useful rule of thumb, often gave wildly inaccurate results because bonds are sensitive entities with strengths that depend on the identities and locations of the other atoms present in molecules. Computation now plays a central role: it is now routine to be able to calculate the difference in energy between the products and reactants, especially if the molecules are isolated as a gas, and that difference easily converted to a change of enthalpy. […] Enthalpy changes are very important for a rational discussion of changes in physical state (vaporization and freezing, for instance) […] If we know the enthalpy change taking place during a reaction, then provided the process takes place at constant pressure we know how much energy is released as heat into the surroundings. If we divide that heat transfer by the temperature, then we get the associated entropy change in the surroundings. […] provided the pressure and temperature are constant, a spontaneous change corresponds to a decrease in Gibbs energy. […] the chemical potential can be thought of as the Gibbs energy possessed by a standard-size block of sample. (More precisely, for a pure substance the chemical potential is the molar Gibbs energy, the Gibbs energy per mole of atoms or molecules.)”
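
The bond-counting rule of thumb described above is easy to illustrate. A small sketch of mine, using standard tabulated mean bond enthalpies; the agreement happens to be good for this particular reaction, but as the quote notes the method can be wildly off in general:

```python
# Rule-of-thumb reaction enthalpy from mean bond enthalpies (kJ/mol):
# dH ~ sum(bonds broken) - sum(bonds formed).

bond_enthalpy = {"H-H": 436, "Cl-Cl": 242, "H-Cl": 431}  # standard mean values

# H2 + Cl2 -> 2 HCl: break one H-H and one Cl-Cl, form two H-Cl bonds.
broken = bond_enthalpy["H-H"] + bond_enthalpy["Cl-Cl"]
formed = 2 * bond_enthalpy["H-Cl"]
dH = broken - formed

print(f"Estimated dH for H2 + Cl2 -> 2 HCl: {dH} kJ/mol")  # -184 kJ/mol
# Close to the measured value (about -184.6 kJ/mol); the negative sign
# means the reaction is exothermic, releasing heat.
```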

“There are two kinds of work. One kind is the work of expansion that occurs when a reaction generates a gas and pushes back the atmosphere (perhaps by pressing out a piston). That type of work is called ‘expansion work’. However, a chemical reaction might do work other than by pushing out a piston or pushing back the atmosphere. For instance, it might do work by driving electrons through an electric circuit connected to a motor. This type of work is called ‘non-expansion work’. […] a change in the Gibbs energy of a system at constant temperature and pressure is equal to the maximum non-expansion work that can be done by the reaction. […] the link of thermodynamics with biology is that one chemical reaction might do the non-expansion work of building a protein from amino acids. Thus, a knowledge of the Gibbs energy changes accompanying metabolic processes is very important in bioenergetics, and much more important than knowing the enthalpy changes alone (which merely indicate a reaction’s ability to keep us warm).”

“[T]he probability that a molecule will be found in a state of particular energy falls off rapidly with increasing energy, so most molecules will be found in states of low energy and very few will be found in states of high energy. […] If the temperature is low, then the distribution declines so rapidly that only the very lowest levels are significantly populated. If the temperature is high, then the distribution falls off very slowly with increasing energy, and many high-energy states are populated. If the temperature is zero, the distribution has all the molecules in the ground state. If the temperature is infinite, all available states are equally populated. […] temperature […] is the single, universal parameter that determines the most probable distribution of molecules over the available states.”
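
The limiting cases in that paragraph (everything in the ground state at T = 0, equal populations as T grows without bound) are easy to check numerically. A small sketch of mine, with units chosen so that Boltzmann’s constant is 1 and the energy levels are evenly spaced one unit apart:

```python
import math

# Boltzmann populations for evenly spaced energy levels (a sketch; units
# chosen so that k_B = 1 and the level spacing is 1).

def populations(n_levels, temperature):
    if temperature == 0:
        return [1.0] + [0.0] * (n_levels - 1)  # everything in the ground state
    weights = [math.exp(-e / temperature) for e in range(n_levels)]
    z = sum(weights)                           # the partition function
    return [w / z for w in weights]

for t in (0, 0.5, 5, 1e6):
    pops = populations(4, t)
    print(f"T = {t:>9}: " + "  ".join(f"{p:.3f}" for p in pops))
# Low T: the population piles into the lowest level; very high T: the four
# levels approach equal population (0.25 each), as described in the quote.
```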

“Mixing adds disorder and increases the entropy of the system and therefore lowers the Gibbs energy […] In the absence of mixing, a reaction goes to completion; when mixing of reactants and products is taken into account, equilibrium is reached when both are present […] Statistical thermodynamics, through the Boltzmann distribution and its dependence on temperature, allows physical chemists to understand why in some cases the equilibrium shifts towards reactants (which is usually unwanted) or towards products (which is normally wanted) as the temperature is raised. A rule of thumb […] is provided by a principle formulated by Henri Le Chatelier […] that a system at equilibrium responds to a disturbance by tending to oppose its effect. Thus, if a reaction releases energy as heat (is ‘exothermic’), then raising the temperature will oppose the formation of more products; if the reaction absorbs energy as heat (is ‘endothermic’), then raising the temperature will encourage the formation of more product.”

“Model building pervades physical chemistry […] some hold that the whole of science is based on building models of physical reality; much of physical chemistry certainly is.”

“For reasonably light molecules (such as the major constituents of air, N2 and O2) at room temperature, the molecules are whizzing around at an average speed of about 500 m/s (about 1000 mph). That speed is consistent with what we know about the propagation of sound, the speed of which is about 340 m/s through air: for sound to propagate, molecules must adjust their position to give a wave of undulating pressure, so the rate at which they do so must be comparable to their average speeds. […] a typical N2 or O2 molecule in air makes a collision every nanosecond and travels about 1000 molecular diameters between collisions. To put this scale into perspective: if a molecule is thought of as being the size of a tennis ball, then it travels about the length of a tennis court between collisions. Each molecule makes about a billion collisions a second.”
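
The quoted speeds follow from the Maxwell–Boltzmann mean speed, v = √(8RT/πM); a two-line check (my calculation, standard constants):

```python
import math

R = 8.314   # gas constant, J/(mol*K)
T = 298.0   # room temperature, K

# Mean speed from the Maxwell-Boltzmann distribution: v = sqrt(8*R*T/(pi*M)).
for gas, molar_mass_kg in [("N2", 0.028), ("O2", 0.032)]:
    v_mean = math.sqrt(8 * R * T / (math.pi * molar_mass_kg))
    print(f"{gas}: mean speed ~ {v_mean:.0f} m/s")
# N2 ~ 475 m/s and O2 ~ 444 m/s: "about 500 m/s", comfortably above the
# ~340 m/s speed of sound, consistent with the quote.
```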

“X-ray diffraction makes use of the fact that electromagnetic radiation (which includes X-rays) consists of waves that can interfere with one another and give rise to regions of enhanced and diminished intensity. This so-called ‘diffraction pattern’ is characteristic of the object in the path of the rays, and mathematical procedures can be used to interpret the pattern in terms of the object’s structure. Diffraction occurs when the wavelength of the radiation is comparable to the dimensions of the object. X-rays have wavelengths comparable to the separation of atoms in solids, so are ideal for investigating their arrangement.”
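
The matching condition described here is usually quantified by Bragg’s law, nλ = 2d·sin(θ), which the quote does not spell out; the numbers below are my own illustrative choices:

```python
import math

# Bragg condition n*lambda = 2*d*sin(theta): the standard quantitative form
# of the "comparable wavelength and spacing" requirement (illustrative numbers).

wavelength = 154e-12    # Cu K-alpha X-rays, ~154 pm
d_spacing = 200e-12     # assumed separation of atomic planes, ~200 pm

for n in (1, 2):
    s = n * wavelength / (2 * d_spacing)
    if s <= 1:
        theta = math.degrees(math.asin(s))
        print(f"order n={n}: diffraction peak at theta ~ {theta:.1f} deg")
# Comparable wavelength and spacing give peaks at accessible angles
# (~22.7 and ~50.4 degrees here); visible light (lambda ~ 500 nm) would
# require sin(theta) >> 1, i.e. no diffraction peak at all.
```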

“For most liquids the sample contracts when it freezes, so […] the temperature does not need to be lowered so much for freezing to occur. That is, the application of pressure raises the freezing point. Water, as in most things, is anomalous, and ice is less dense than liquid water, so water expands when it freezes […] when two gases are allowed to occupy the same container they invariably mix and each spreads uniformly through it. […] the quantity of gas that dissolves in any liquid is proportional to the pressure of the gas. […] When the temperature of [a] liquid is raised, it is easier for a dissolved molecule to gather sufficient energy to escape back up into the gas; the rate of impacts from the gas is largely unchanged. The outcome is a lowering of the concentration of dissolved gas at equilibrium. Thus, gases appear to be less soluble in hot water than in cold. […] the presence of dissolved substances affects the properties of solutions. For instance, the everyday experience of spreading salt on roads to hinder the formation of ice makes use of the lowering of freezing point of water when a salt is present. […] the boiling point is raised by the presence of a dissolved substance [whereas] the freezing point […] is lowered by the presence of a solute.”

“When a liquid and its vapour are present in a closed container the vapour exerts a characteristic pressure (when the escape of molecules from the liquid matches the rate at which they splash back down into it […][)] This characteristic pressure depends on the temperature and is called the ‘vapour pressure’ of the liquid. When a solute is present, the vapour pressure at a given temperature is lower than that of the pure liquid […] The extent of lowering is summarized by yet another limiting law of physical chemistry, ‘Raoult’s law’ [which] states that the vapour pressure of a solvent or of a component of a liquid mixture is proportional to the proportion of solvent or liquid molecules present. […] Osmosis [is] the tendency of solvent molecules to flow from the pure solvent to a solution separated from it by a [semi-]permeable membrane […] The entropy when a solute is present in a solvent is higher than when the solute is absent, so an increase in entropy, and therefore a spontaneous process, is achieved when solvent flows through the membrane from the pure liquid into the solution. The tendency for this flow to occur can be overcome by applying pressure to the solution, and the minimum pressure needed to overcome the tendency to flow is called the ‘osmotic pressure’. If one solution is put into contact with another through a semipermeable membrane, then there will be no net flow if they exert the same osmotic pressures and are ‘isotonic’.”
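
Both laws in this passage have one-line quantitative forms: Raoult’s law, p = x(solvent)·p(pure), and, for dilute solutions, the van ’t Hoff osmotic-pressure relation π = cRT. A quick sketch with illustrative numbers of my own:

```python
# Raoult's law and the van 't Hoff osmotic-pressure relation (standard
# dilute-solution formulas; the numbers below are illustrative).

R = 8.314            # gas constant, J/(mol*K)
T = 298.0            # temperature, K

# Raoult's law: vapour pressure scales with the solvent mole fraction.
p_pure = 3.17e3      # vapour pressure of pure water at 25 C, Pa (approx.)
x_solvent = 0.98     # mole fraction of water in the solution
print(f"Vapour pressure over solution: {x_solvent * p_pure:.0f} Pa "
      f"(pure solvent: {p_pure:.0f} Pa)")

# Osmotic pressure of a dilute solution: pi = c*R*T.
c = 100.0            # solute concentration, mol/m^3 (= 0.1 mol/L)
pi = c * R * T
print(f"Osmotic pressure: {pi/1000:.0f} kPa (~{pi/101325:.1f} atm)")
# Even a 0.1 mol/L solution exerts ~2.4 atm of osmotic pressure, which is
# why flow through a semipermeable membrane is so hard to oppose.
```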

“Broadly speaking, the reaction quotient [‘Q’] is the ratio of concentrations, with product concentrations divided by reactant concentrations. It takes into account how the mingling of the reactants and products affects the total Gibbs energy of the mixture. The value of Q that corresponds to the minimum in the Gibbs energy […] is called the equilibrium constant and denoted K. The equilibrium constant, which is characteristic of a given reaction and depends on the temperature, is central to many discussions in chemistry. When K is large (1000, say), we can be reasonably confident that the equilibrium mixture will be rich in products; if K is small (0.001, say), then there will be hardly any products present at equilibrium and we should perhaps look for another way of making them. If K is close to 1, then both reactants and products will be abundant at equilibrium and will need to be separated. […] Equilibrium constants vary with temperature but not […] with pressure. […] van’t Hoff’s equation implies that if the reaction is strongly exothermic (releases a lot of energy as heat when it takes place), then the equilibrium constant decreases sharply as the temperature is raised. The opposite is true if the reaction is strongly endothermic (absorbs a lot of energy as heat). […] Typically it is found that the rate of a reaction [how fast it progresses] decreases as it approaches equilibrium. […] Most reactions go faster when the temperature is raised. […] reactions with high activation energies proceed slowly at low temperatures but respond sharply to changes of temperature. […] The surface area exposed by a catalyst is important for its function, for it is normally the case that the greater that area, the more effective is the catalyst.”
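
The rules of thumb in this paragraph all have compact quantitative forms: ΔG° = −RT·ln K for the position of equilibrium, the van ’t Hoff equation for its temperature dependence, and the Arrhenius equation for rates (the latter named in the links below rather than in the quote). A sketch of mine with made-up but representative numbers:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

# Position of equilibrium: delta_G = -R*T*ln(K), i.e. K = exp(-dG/(R*T)).
def K_from_dG(dG, T):
    return math.exp(-dG / (R * T))

# van 't Hoff: ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1).
def K_at_new_T(K1, dH, T1, T2):
    return K1 * math.exp(-(dH / R) * (1 / T2 - 1 / T1))

# Arrhenius: k = A * exp(-Ea/(R*T)); higher Ea means sharper T-dependence.
def arrhenius(A, Ea, T):
    return A * math.exp(-Ea / (R * T))

K298 = K_from_dG(-20e3, 298.0)        # dG = -20 kJ/mol at 298 K
print(f"K at 298 K: {K298:.0f}")      # ~3200, a product-rich equilibrium

# An exothermic reaction (dH = -50 kJ/mol) has K fall on heating,
# in line with Le Chatelier's principle as described in the quote:
print(f"K at 350 K: {K_at_new_T(K298, -50e3, 298.0, 350.0):.0f}")  # ~160

# With Ea = 50 kJ/mol the rate roughly doubles for a 10 K rise:
ratio = arrhenius(1.0, 50e3, 308.0) / arrhenius(1.0, 50e3, 298.0)
print(f"rate ratio, 308 K vs 298 K: {ratio:.2f}")  # ~1.93
```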

Links:

John Dalton.
Atomic orbital.
Electron configuration.
S,p,d,f orbitals.
Computational chemistry.
Atomic radius.
Covalent bond.
Gilbert Lewis.
Valence bond theory.
Molecular orbital theory.
Orbital hybridisation.
Bonding and antibonding orbitals.
Schrödinger equation.
Density functional theory.
Chemical thermodynamics.
Laws of thermodynamics/Zeroth law/First law/Second law/Third Law.
Conservation of energy.
Thermochemistry.
Bioenergetics.
Spontaneous processes.
Entropy.
Rudolf Clausius.
Chemical equilibrium.
Heat capacity.
Compressibility.
Statistical thermodynamics/statistical mechanics.
Boltzmann distribution.
State of matter/gas/liquid/solid.
Perfect gas/Ideal gas law.
Robert Boyle/Joseph Louis Gay-Lussac/Jacques Charles/Amedeo Avogadro.
Equation of state.
Kinetic theory of gases.
Van der Waals equation of state.
Maxwell–Boltzmann distribution.
Thermal conductivity.
Viscosity.
Nuclear magnetic resonance.
Debye–Hückel equation.
Ionic solids.
Catalysis.
Supercritical fluid.
Liquid crystal.
Graphene.
Benoît Paul Émile Clapeyron.
Phase (matter)/phase diagram/Gibbs’ phase rule.
Ideal solution/regular solution.
Henry’s law.
Chemical kinetics.
Electrochemistry.
Rate equation/First order reactions/Second order reactions.
Rate-determining step.
Arrhenius equation.
Collision theory.
Diffusion-controlled and activation-controlled reactions.
Transition state theory.
Photochemistry/fluorescence/phosphorescence/photoexcitation.
Photosynthesis.
Redox reactions.
Electrochemical cell.
Fuel cell.
Reaction dynamics.
Spectroscopy/emission spectroscopy/absorption spectroscopy/Raman spectroscopy.
Raman effect.
Magnetic resonance imaging.
Fourier-transform spectroscopy.
Electron paramagnetic resonance.
Mass spectrum.
Electron spectroscopy for chemical analysis.
Scanning tunneling microscope.
Chemisorption/physisorption.

October 5, 2017 Posted by | Biology, Books, Chemistry, Pharmacology, Physics | Leave a comment

A few diabetes papers of interest

i. Neurocognitive Functioning in Children and Adolescents at the Time of Type 1 Diabetes Diagnosis: Associations With Glycemic Control 1 Year After Diagnosis.

“Children and youth with type 1 diabetes are at risk for developing neurocognitive dysfunction, especially in the areas of psychomotor speed, attention/executive functioning, and visuomotor integration (1,2). Most research suggests that deficits emerge over time, perhaps in response to the cumulative effect of glycemic extremes (3–6). However, the idea that cognitive changes emerge gradually has been challenged (7–9). Ryan (9) argued that if diabetes has a cumulative effect on cognition, cognitive test performance should be positively correlated with illness duration. Yet he found comparable deficits in psychomotor speed (the most commonly noted area of deficit) in adolescents and young adults with illness duration ranging from 6 to 25 years. He therefore proposed a diathesis model in which cognitive declines in diabetes are especially likely to occur in more vulnerable patients, at crucial periods, in response to illness-related events (e.g., severe hyperglycemia) known to have an impact on the central nervous system (CNS) (8). This model accounts for the finding that cognitive deficits are more likely in children with early-onset diabetes, and for the accelerated cognitive aging seen in diabetic individuals later in life (7). A third hypothesized crucial period is the time leading up to diabetes diagnosis, during which severe fluctuations in blood glucose and persistent hyperglycemia often occur. Concurrent changes in blood-brain barrier permeability could result in a flood of glucose into the brain, with neurotoxic effects (9).”

“In the current study, we report neuropsychological test findings for children and adolescents tested within 3 days of diabetes diagnosis. The purpose of the study was to determine whether neurocognitive impairments are detectable at diagnosis, as predicted by the diathesis hypothesis. We hypothesized that performance on tests of psychomotor speed, visuomotor integration, and attention/executive functioning would be significantly below normative expectations, and that differences would be greater in children with earlier disease onset. We also predicted that diabetic ketoacidosis (DKA), a primary cause of diabetes-related neurological morbidity (12) and a likely proxy for severe peri-onset hyperglycemia, would be associated with poorer performance.”

“Charts were reviewed for 147 children/adolescents aged 5–18 years (mean = 10.4 ± 3.2 years) who completed a short neuropsychological screening during their inpatient hospitalization for new-onset type 1 diabetes, as part of a pilot clinical program intended to identify patients in need of further neuropsychological evaluation. Participants were patients at a large urban children’s hospital in the southwestern U.S. […] Compared with normative expectations, children/youth with type 1 diabetes performed significantly worse on GPD, GPN, VMI, and FAS (P < 0.0001 in all cases), with large decrements evident on all four measures (Fig. 1). A small but significant effect was also evident in DSB (P = 0.022). High incidence of impairment was evident on all neuropsychological tasks completed by older participants (aged 9–18 years) except DSF/DSB (Fig. 2).”

“Deficits in neurocognitive functioning were evident in children and adolescents within days of type 1 diabetes diagnosis. Participants performed >1 SD below normative expectations in bilateral psychomotor speed (GP) and 0.7–0.8 SDs below expected performance in visuomotor integration (VMI) and phonemic fluency (FAS). Incidence of impairment was much higher than normative expectations on all tasks except DSF/DSB. For example, >20% of youth were impaired in dominant hand fine-motor control, and >30% were impaired with their nondominant hand. These findings provide provisional support for Ryan’s hypothesis (7–9) that the peri-onset period may be a time of significant cognitive vulnerability.

Importantly, deficits were not evident on all measures. Performance on measures of attention/executive functioning (TMT-A, TMT-B, DSF, and DSB) was largely consistent with normative expectations, as was reading ability (WRAT-4), suggesting that the below-average performance in other areas was not likely due to malaise or fatigue. Depressive symptoms at diagnosis were associated with performance on TMT-B and FAS, but not on other measures. Thus, it seems unlikely that depressive symptoms accounted for the observed motor slowing.

Instead, the findings suggest that the visual-motor system may be especially vulnerable to early effects of type 1 diabetes. This interpretation is especially compelling given that psychomotor impairment is the most consistently reported long-term cognitive effect of type 1 diabetes. The sensitivity of the visual-motor system at diabetes diagnosis is consistent with a growing body of neuroimaging research implicating posterior white matter tracts and associated gray matter regions (particularly cuneus/precuneus) as areas of vulnerability in type 1 diabetes (30–32). These regions form part of the neural system responsible for integrating visual inputs with motor outputs, and in adults with type 1 diabetes, structural pathology in these regions is directly correlated to performance on GP [grooved pegboard test] (30,31). Arbelaez et al. (33) noted that these brain areas form part of the “default network” (34), a system engaged during internally focused cognition that has high resting glucose metabolism and may be especially vulnerable to glucose variability.”

“It should be noted that previous studies (e.g., Northam et al. [3]) have not found evidence of neurocognitive dysfunction around the time of diabetes diagnosis. This may be due to study differences in measures, outcomes, and/or time frame. We know of no other studies that completed neuropsychological testing within days of diagnosis. Given our time frame, it is possible that our findings reflect transient effects rather than more permanent changes in the CNS. Contrary to predictions, we found no association between DKA at diagnosis and neurocognitive performance […] However, even transient effects could be considered potential indicators of CNS vulnerability. Neurophysiological changes at the time of diagnosis have been shown to persist under certain circumstances or for some patients. […] [Some] findings suggest that some individuals may be particularly susceptible to the effects of glycemic extremes on neurocognitive function, consistent with a large body of research in developmental neuroscience indicating individual differences in neurobiological vulnerability to adverse events. Thus, although it is possible that the neurocognitive impairments observed in our study might resolve with euglycemia, deficits at diagnosis could still be considered a potential marker of CNS vulnerability to metabolic perturbations (both acute and chronic).”

“In summary, this study provides the first demonstration that type 1 diabetes–associated neurocognitive impairment can be detected at the time of diagnosis, supporting the possibility that deficits arise secondary to peri-onset effects. Whether these effects are transient markers of vulnerability or represent more persistent changes in CNS awaits further study.”

ii. Association Between Impaired Cardiovascular Autonomic Function and Hypoglycemia in Patients With Type 1 Diabetes.

“Cardiovascular autonomic neuropathy (CAN) is a chronic complication of diabetes and an independent predictor of cardiovascular disease (CVD) morbidity and mortality (1–3). The mechanisms of CAN are complex and not fully understood. It can be assessed by simple cardiovascular reflex tests (CARTs) and heart rate variability (HRV) studies that were shown to be sensitive, noninvasive, and reproducible (3,4).”

“HbA1c fails to capture information on the daily fluctuations in blood glucose levels, termed glycemic variability (GV). Recent observations have fostered the notion that GV, independent of HbA1c, may confer an additional risk for the development of micro- and macrovascular diabetes complications (8,9). […] the relationship between GV and chronic complications, specifically CAN, in patients with type 1 diabetes has not been systematically studied. In addition, limited data exist on the relationship between hypoglycemic components of the GV and measures of CAN among subjects with type 1 diabetes (11,12). Therefore, we have designed a prospective study to evaluate the impact and the possible sustained effects of GV on measures of cardiac autonomic function and other cardiovascular complications among subjects with type 1 diabetes […] In the present communication, we report cross-sectional analyses at baseline between indices of hypoglycemic stress on measures of cardiac autonomic function.”

“The following measures of CAN were predefined as outcomes of interests and analyzed: expiration-to-inspiration ratio (E:I), Valsalva ratio, 30:15 ratios, low-frequency (LF) power (0.04 to 0.15 Hz), high-frequency (HF) power (0.15 to 0.4 Hz), and LF/HF at rest and during CARTs. […] We found that LBGI [low blood glucose index] and AUC [area under the curve] hypoglycemia were associated with reduced LF and HF power of HRV [heart rate variability], suggesting an impaired autonomic function, which was independent of glucose control as assessed by the HbA1c.”
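
As an aside, the frequency-domain HRV measures used here (LF power, HF power, and LF/HF) are computed from the beat-to-beat (RR-interval) series. Below is a minimal Python sketch of the standard approach – my own illustration, not code from the study – using the same frequency bands the authors define; the 4 Hz resampling rate and the Welch settings are common conventions rather than values taken from the paper:

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def lf_hf(rr_ms):
    """Crude LF/HF estimate from a series of RR intervals (milliseconds)."""
    rr_ms = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr_ms) / 1000.0              # beat times in seconds
    fs = 4.0                                   # resampling rate (Hz); a common choice
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    tachogram = np.interp(grid, t, rr_ms)      # evenly sampled RR series
    f, psd = welch(tachogram - tachogram.mean(), fs=fs,
                   nperseg=min(256, len(grid)))
    lf_mask = (f >= 0.04) & (f < 0.15)         # LF band, as defined in the paper
    hf_mask = (f >= 0.15) & (f < 0.40)         # HF band, as defined in the paper
    lf = trapezoid(psd[lf_mask], f[lf_mask])   # integrate PSD over each band
    hf = trapezoid(psd[hf_mask], f[hf_mask])
    return lf, hf, lf / hf
```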

“Our findings are in concordance with a recent report demonstrating attenuation of the baroreflex sensitivity and of the sympathetic response to various cardiovascular stressors after antecedent hypoglycemia among healthy subjects who were exposed to acute hypoglycemic stress (18). Similar associations […] were also reported in a small study of subjects with type 2 diabetes (19). […] higher GV and hypoglycemic stress may have an acute effect on modulating autonomic control with inducing a sympathetic/vagal imbalance and a blunting of the cardiac vagal control (18). The impairment in the normal counter-regulatory autonomic responses induced by hypoglycemia on the cardiovascular system could be important in healthy individuals but may be particularly detrimental in individuals with diabetes who have hitherto compromised cardiovascular function and/or subclinical CAN. In these individuals, hypoglycemia may also induce QT interval prolongation, increase plasma catecholamine levels, and lower serum potassium (19,20). In concert, these changes may lower the threshold for serious arrhythmia (19,20) and could result in an increased risk of cardiovascular events and sudden cardiac death. Conversely, the presence of CAN may increase the risk of hypoglycemia through hypoglycemia unawareness and subsequent impaired ability to restore euglycemia (21) through impaired sympathoadrenal response to hypoglycemia or delayed gastric emptying. […] A possible pathogenic role of GV/hypoglycemic stress on CAN development and progressions should be also considered. Prior studies in healthy and diabetic subjects have found that higher exposure to hypoglycemia reduces the counter-regulatory hormone (e.g., epinephrine, glucagon, and adrenocorticotropic hormone) and blunts autonomic nervous system responses to subsequent hypoglycemia (21). […] Our data […] suggest that wide glycemic fluctuations, particularly hypoglycemic stress, may increase the risk of CAN in patients with type 1 diabetes.”

“In summary, in this cohort of relatively young and uncomplicated patients with type 1 diabetes, GV and higher hypoglycemic stress were associated with impaired HRV reflective of sympathetic/parasympathetic dysfunction with potential important clinical consequences.”

iii. Elevated Levels of hs-CRP Are Associated With High Prevalence of Depression in Japanese Patients With Type 2 Diabetes: The Diabetes Distress and Care Registry at Tenri (DDCRT 6).

“In the last decade, several studies have been published that suggest a close association between diabetes and depression. Patients with diabetes have a high prevalence of depression (1) […] and a high prevalence of complications (3). In addition, depression is associated with mortality in these patients (4). […] Because of this strong association, several recent studies have suggested the possibility of a common biological pathway such as inflammation as an underlying mechanism of the association between depression and diabetes (5). […] Multiple mechanisms are involved in the association between diabetes and inflammation, including modulation of lipolysis, alteration of glucose uptake by adipose tissue, and an indirect mechanism involving an increase in free fatty acid levels blocking the insulin signaling pathway (10). Psychological stress can also cause inflammation via innervation of cytokine-producing cells and activation of the sympathetic nervous systems and adrenergic receptors on macrophages (11). Depression enhances the production of inflammatory cytokines (12–14). Overproduction of inflammatory cytokines may stimulate corticotropin-releasing hormone production, a mechanism that leads to hypothalamic-pituitary axis activity. Conversely, cytokines induce depressive-like behaviors; in studies where healthy participants were given endotoxin infusions to trigger cytokine release, the participants developed classic depressive symptoms (15). Based on this evidence, it could be hypothesized that inflammation is the common biological pathway underlying the association between diabetes and depression.”

“[F]ew studies have examined the clinical role of inflammation and depression as biological correlates in patients with diabetes. […] In this study, we hypothesized that high CRP [C-reactive protein] levels were associated with the high prevalence of depression in patients with diabetes and that this association may be modified by obesity or glycemic control. […] Patient data were derived from the second-year survey of a diabetes registry at Tenri Hospital, a regional tertiary care teaching hospital in Japan. […] 3,573 patients […] were included in the study. […] Overall, mean age, HbA1c level, and BMI were 66.0 years, 7.4% (57.8 mmol/mol), and 24.6 kg/m2, respectively. Patients with major depression tended to be relatively young […] and female […] with a high BMI […], high HbA1c levels […], and high hs-CRP levels […]; had more diabetic nephropathy […], required more insulin therapy […], and exercised less […]”.

“In conclusion, we observed that hs-CRP levels were associated with a high prevalence of major depression in patients with type 2 diabetes with a BMI of ≥25 kg/m2. […] In patients with a BMI of <25 kg/m2, no significant association was found between hs-CRP quintiles and major depression […] We did not observe a significant association between hs-CRP and major depression in either of HbA1c subgroups. […] Our results show that the association between hs-CRP and diabetes is valid even in an Asian population, but it might not be extended to nonobese subjects. […] several factors such as obesity and glycemic control may modify the association between inflammation and depression. […] Obesity is strongly associated with chronic inflammation.”

iv. A Novel Association Between Nondipping and Painful Diabetic Polyneuropathy.

“Sleep problems are common in painful diabetic polyneuropathy (PDPN) (1) and contribute to the effect of pain on quality of life. Nondipping (the absence of the nocturnal fall in blood pressure [BP]) is a recognized feature of diabetic cardiac autonomic neuropathy (CAN) and is attributed to the abnormal prevalence of nocturnal sympathetic activity (2). […] This study aimed to evaluate the relationship of the circadian pattern of BP with both neuropathic pain and pain-related sleep problems in PDPN […] Investigating the relationship between PDPN and BP circadian pattern, we found patients with PDPN exhibited impaired nocturnal decrease in BP compared with those without neuropathy, as well as higher nocturnal systolic BP than both those without DPN and with painless DPN. […] in multivariate analysis including comorbidities and most potential confounders, neuropathic pain was an independent determinant of ∆ in BP and nocturnal systolic BP.”

“PDPN could behave as a marker for the presence and severity of CAN. […] PDPN should increasingly be regarded as a condition of high cardiovascular risk.”

v. Reduced Testing Frequency for Glycated Hemoglobin, HbA1c, Is Associated With Deteriorating Diabetes Control.

I think a potentially important take-away from this paper, which the authors don’t really talk about, is this: when you’re analyzing time series data in which HbA1c is available at the individual level at some base frequency, and you encounter individuals for whom HbA1c is unobserved for some time periods (or is not observed at the frequency you’d expect), such (implicit) missing values may not be missing at random (for more on these topics see e.g. this post). More specifically, in light of the findings of this paper I think it would make a lot of sense, when doing time-to-event analyses, to default to the assumption that missing values indicate worse-than-average metabolic control during the unobserved part of the time series, especially in contexts where the values are missing for an extended period of time.
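
To make this concrete, here is a minimal sketch of how one might encode such informative missingness in a time-to-event analysis; all file and column names are hypothetical, and the lifelines package is just one convenient option:

```python
# Sketch: treat long gaps in HbA1c testing as informative ("missing not at
# random") by encoding them as an explicit covariate in a Cox model,
# rather than silently interpolating over the gap.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("t1dm_cohort.csv")  # hypothetical file: one row per patient

# Hypothetical indicator: any gap of more than a year between HbA1c tests.
df["hba1c_gap_12m"] = (df["max_months_between_tests"] > 12).astype(int)

cph = CoxPHFitter()
cph.fit(df[["followup_years", "event", "mean_hba1c", "hba1c_gap_12m"]],
        duration_col="followup_years", event_col="event")
cph.print_summary()  # per the paper's findings, one would expect a positive
                     # coefficient on the gap indicator
```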

The authors of the paper consider metabolic control an outcome to be explained by the testing frequency. That’s one way to approach these things, but it’s not the only one; it’s also worth keeping in mind that some patients make a conscious decision not to show up for their appointments/tests, i.e. the testing frequency is not necessarily fully determined by the medical staff, although the staff of course have an important impact on this variable.

Some observations from the paper:

“We examined repeat HbA1c tests (400,497 tests in 79,409 patients, 2008–2011) processed by three U.K. clinical laboratories. We examined the relationship between retest interval and 1) percentage change in HbA1c and 2) proportion of cases showing a significant HbA1c rise. The effect of demographic factors on these findings was also explored. […] Figure 1 shows the relationship between repeat requesting interval (categorized in 1-month intervals) and percentage change in HbA1c concentration in the total data set. From 2 months onward, there was a direct relationship between retesting interval and control. A testing frequency of >6 months was associated with deterioration in control. The optimum testing frequency in order to maximize the downward trajectory in HbA1c between two tests was approximately four times per year. Our data also indicate that testing more frequently than 2 months has no benefit over testing every 2–4 months. Relative to the 2–3 month category, all other categories demonstrated statistically higher mean change in HbA1c (all P < 0.001). […] similar patterns were observed for each of the three centers, with the optimum interval to improvement in overall control at ∼3 months across all centers.”
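
As an illustration of this kind of analysis, here is a minimal pandas sketch of the binning approach described above; the file and column names are hypothetical, and the ≥9.9% “significant rise” threshold is the one defined further below in the paper:

```python
# Sketch: bin repeat-test pairs on retest interval and summarize the
# percentage change in HbA1c within each interval bin.
import pandas as pd

tests = pd.read_csv("hba1c_repeat_tests.csv")  # hypothetical: one row per test pair
tests["pct_change"] = 100 * (tests["hba1c_2"] - tests["hba1c_1"]) / tests["hba1c_1"]
tests["interval_months"] = pd.cut(tests["interval_days"] / 30.44,
                                  bins=range(0, 25))  # 1-month bins, 0-24 months

summary = tests.groupby("interval_months", observed=True).agg(
    mean_pct_change=("pct_change", "mean"),
    prop_significant_rise=("pct_change", lambda s: (s >= 9.9).mean()),
)
print(summary)
```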

“[I]n patients with poor control, the pattern was similar to that seen in the total group, except that 1) there was generally a more marked decrease or more modest increase in change of HbA1c concentration throughout and, consequently, 2) a downward trajectory in HbA1c was observed when the interval between tests was up to 8 months, rather than the 6 months as seen in the total group. In patients with a starting HbA1c of <6% (<42 mmol/mol), there was a generally linear relationship between interval and increase in HbA1c, with all intervals demonstrating an upward change in mean HbA1c. The intermediate group showed a similar pattern as those with a starting HbA1c of <6% (<42 mmol/mol), but with a steeper slope.”

“In order to examine the potential link between monitoring frequency and the risk of major deterioration in control, we then assessed the relationship between testing interval and proportion of patients demonstrating an increase in HbA1c beyond the normal biological and analytical variation in HbA1c […] Using this definition of significant increase as a ≥9.9% rise in subsequent HbA1c, our data show that the proportion of patients showing this magnitude of rise increased month to month, with increasing intervals between tests for each of the three centers. […] testing at 2–3-monthly intervals would, at a population level, result in a marked reduction in the proportion of cases demonstrating a significant increase compared with annual testing […] irrespective of the baseline HbA1c, there was a generally linear relationship between interval and the proportion demonstrating a significant increase in HbA1c, though the slope of this relationship increased with rising initial HbA1c.”

“Previous data from our and other groups on requesting patterns indicated that relatively few patients in general practice were tested annually (5,6). […] Our data indicate that for a HbA1c retest interval of more than 2 months, there was a direct relationship between retesting interval and control […], with a retest frequency of greater than 6 months being associated with deterioration in control. The data showed that for diabetic patients as a whole, the optimum repeat testing interval should be four times per year, particularly in those with poorer diabetes control (starting HbA1c >7% [≥53 mmol/mol]). […] The optimum retest interval across the three centers was similar, suggesting that our findings may be unrelated to clinical laboratory factors, local policies/protocols on testing, or patient demographics.”
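
A side note on that “≥9.9% rise” criterion: thresholds for a change exceeding “normal biological and analytical variation” are conventionally derived as a reference change value (RCV). The formula below is the standard one; the example CVs are illustrative values of my own choosing, not numbers taken from the paper:

$$\mathrm{RCV} = \sqrt{2}\; Z \sqrt{CV_A^{2} + CV_I^{2}}$$

With Z = 1.96 (95% confidence), an analytical CV_A of 2.2%, and a within-person biological CV_I of 2.8%, this gives √2 × 1.96 × √(2.2² + 2.8²) ≈ 9.9%, i.e. CVs of this order would reproduce the paper’s threshold.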

It might be worth mentioning that there are substantial cross-country differences in how often people with diabetes have their HbA1c measured. I’m unsure whether standards have changed since the study period, but at least in Denmark a specific treatment goal of the Danish Regions a few years ago was whether or not 95% of diabetics had had their HbA1c measured within the last year (here’s a relevant link to some stuff I wrote about related topics a while back).

October 2, 2017 Posted by | Cardiology, Diabetes, Immunology, Medicine, Neurology, Psychology, Statistics, Studies | Leave a comment

National EM Board Review Course: Toxicology

Some links:

Flumazenil.
Naloxone.
Alcoholic Ketoacidosis.
Gastrointestinal decontamination in the acutely poisoned patient.
Chelation in Metal Intoxication.
Mudpiles – causes of high anion-gap metabolic acidosis.
Toxidromes.
Whole-bowel irrigation: Background, indications, contraindications…
Organophosphate toxicity.
Withdrawal syndromes.
Acetaminophen toxicity.
Alcohol withdrawal.
Wernicke syndrome.
Methanol toxicity.
Ethylene glycol toxicity.
Sympathomimetic toxicity.
Disulfiram toxicity.
Arsenic toxicity.
Barbiturate toxicity.
Beta-blocker toxicity.
Calcium channel blocker toxicity.
Carbon monoxide toxicity.
Caustic ingestions.
Clonidine toxicity.
Cyanide toxicity.
Digitalis toxicity.
Gamma-hydroxybutyrate toxicity.
Hydrocarbon toxicity.
CDC Facts About Hydrogen Fluoride (Hydrofluoric Acid).
Hydrogen Sulfide Toxicity.
Isoniazid toxicity.
Iron toxicity.
Lead toxicity.
Lithium toxicity.
Mercury toxicity.
Methemoglobinemia.
Mushroom toxicity.
Argyria.
Gyromitra mushroom toxicity.
Neuroleptic agent toxicity.
Neuroleptic malignant syndrome.
Oral hypoglycemic agent toxicity.
PCP toxicity.
Phenytoin toxicity.
Rodenticide toxicity.
Salicylate toxicity.
Serotonin syndrome.
TCA toxicity.

September 29, 2017 Posted by | Lectures, Medicine, Pharmacology, Psychiatry | Leave a comment

Earth System Science

I decided not to rate this book. Some parts are great, some parts I didn’t think were very good.

I’ve added some quotes and links below. First a few links (I’ve tried not to add links here which I’ve also included in the quotes below):

Carbon cycle.
Origin of water on Earth.
Gaia hypothesis.
Albedo (climate and weather).
Snowball Earth.
Carbonate–silicate cycle.
Carbonate compensation depth.
Isotope fractionation.
CLAW hypothesis.
Mass-independent fractionation.
δ13C.
Great Oxygenation Event.
Acritarch.
Grypania.
Neoproterozoic.
Rodinia.
Sturtian glaciation.
Marinoan glaciation.
Ediacaran biota.
Cambrian explosion.
Quaternary.
Medieval Warm Period.
Little Ice Age.
Eutrophication.
Methane emissions.
Keeling curve.
CO2 fertilization effect.
Acid rain.
Ocean acidification.
Earth systems models.
Clausius–Clapeyron relation.
Thermohaline circulation.
Cryosphere.
The limits to growth.
Exoplanet Biosignature Gases.
Transiting Exoplanet Survey Satellite (TESS).
James Webb Space Telescope.
Habitable zone.
Kepler-186f.

A few quotes from the book:

“The scope of Earth system science is broad. It spans 4.5 billion years of Earth history, how the system functions now, projections of its future state, and ultimate fate. […] Earth system science is […] a deeply interdisciplinary field, which synthesizes elements of geology, biology, chemistry, physics, and mathematics. It is a young, integrative science that is part of a wider 21st-century intellectual trend towards trying to understand complex systems, and predict their behaviour. […] A key part of Earth system science is identifying the feedback loops in the Earth system and understanding the behaviour they can create. […] In systems thinking, the first step is usually to identify your system and its boundaries. […] what is part of the Earth system depends on the timescale being considered. […] The longer the timescale we look over, the more we need to include in the Earth system. […] for many Earth system scientists, the planet Earth is really comprised of two systems — the surface Earth system that supports life, and the great bulk of the inner Earth underneath. It is the thin layer of a system at the surface of the Earth […] that is the subject of this book.”

“Energy is in plentiful supply from the Sun, which drives the water cycle and also fuels the biosphere, via photosynthesis. However, the surface Earth system is nearly closed to materials, with only small inputs to the surface from the inner Earth. Thus, to support a flourishing biosphere, all the elements needed by life must be efficiently recycled within the Earth system. This in turn requires energy, to transform materials chemically and to move them physically around the planet. The resulting cycles of matter between the biosphere, atmosphere, ocean, land, and crust are called global biogeochemical cycles — because they involve biological, geological, and chemical processes. […] The global biogeochemical cycling of materials, fuelled by solar energy, has transformed the Earth system. […] It has made the Earth fundamentally different from its state before life and from its planetary neighbours, Mars and Venus. Through cycling the materials it needs, the Earth’s biosphere has bootstrapped itself into a much more productive state.”

“Each major element important for life has its own global biogeochemical cycle. However, every biogeochemical cycle can be conceptualized as a series of reservoirs (or ‘boxes’) of material connected by fluxes (or flows) of material between them. […] When a biogeochemical cycle is in steady state, the fluxes in and out of each reservoir must be in balance. This allows us to define additional useful quantities. Notably, the amount of material in a reservoir divided by the exchange flux with another reservoir gives the average ‘residence time’ of material in that reservoir with respect to the chosen process of exchange. For example, there are around 7 × 10^16 moles of carbon dioxide (CO2) in today’s atmosphere, and photosynthesis removes around 9 × 10^15 moles of CO2 per year, giving each molecule of CO2 a residence time of roughly eight years in the atmosphere before it is taken up, somewhere in the world, by photosynthesis. […] There are 3.8 × 10^19 moles of molecular oxygen (O2) in today’s atmosphere, and oxidative weathering removes around 1 × 10^13 moles of O2 per year, giving oxygen a residence time of around four million years with respect to removal by oxidative weathering. This makes the oxygen cycle […] a geological timescale cycle.”
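
The residence-time arithmetic in this passage follows directly from the definition, and the two quoted numbers are easy to check:

$$\tau = \frac{\text{reservoir size}}{\text{removal flux}}, \qquad \tau_{\mathrm{CO_2}} = \frac{7\times10^{16}\ \text{mol}}{9\times10^{15}\ \text{mol/yr}} \approx 7.8\ \text{yr}, \qquad \tau_{\mathrm{O_2}} = \frac{3.8\times10^{19}\ \text{mol}}{1\times10^{13}\ \text{mol/yr}} = 3.8\times10^{6}\ \text{yr}$$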

“The water cycle is the physical circulation of water around the planet, between the ocean (where 97 per cent is stored), atmosphere, ice sheets, glaciers, sea-ice, freshwaters, and groundwater. […] To change the phase of water from solid to liquid or liquid to gas requires energy, which in the climate system comes from the Sun. Equally, when water condenses from gas to liquid or freezes from liquid to solid, energy is released. Solar heating drives evaporation from the ocean. This is responsible for supplying about 90 per cent of the water vapour to the atmosphere, with the other 10 per cent coming from evaporation on the land and freshwater surfaces (and sublimation of ice and snow directly to vapour). […] The water cycle is intimately connected to other biogeochemical cycles […]. Many compounds are soluble in water, and some react with water. This makes the ocean a key reservoir for several essential elements. It also means that rainwater can scavenge soluble gases and aerosols out of the atmosphere. When rainwater hits the land, the resulting solution can chemically weather rocks. Silicate weathering in turn helps keep the climate in a state where water is liquid.”

“In modern terms, plants acquire their carbon from carbon dioxide in the atmosphere, add electrons derived from water molecules to the carbon, and emit oxygen to the atmosphere as a waste product. […] In energy terms, global photosynthesis today captures about 130 terawatts (1 TW = 10^12 W) of solar energy in chemical form — about half of it in the ocean and about half on land. […] All the breakdown pathways for organic carbon together produce a flux of carbon dioxide back to the atmosphere that nearly balances photosynthetic uptake […] The surface recycling system is almost perfect, but a tiny fraction (about 0.1 per cent) of the organic carbon manufactured in photosynthesis escapes recycling and is buried in new sedimentary rocks. This organic carbon burial flux leaves an equivalent amount of oxygen gas behind in the atmosphere. Hence the burial of organic carbon represents the long-term source of oxygen to the atmosphere. […] the Earth’s crust has much more oxygen trapped in rocks in the form of oxidized iron and sulphur, than it has organic carbon. This tells us that there has been a net source of oxygen to the crust over Earth history, which must have come from the loss of hydrogen to space.”

“The oxygen cycle is relatively simple, because the reservoir of oxygen in the atmosphere is so massive that it dwarfs the reservoirs of organic carbon in vegetation, soils, and the ocean. Hence oxygen cannot get used up by the respiration or combustion of organic matter. Even the combustion of all known fossil fuel reserves can only put a small dent in the much larger reservoir of atmospheric oxygen (there are roughly 4 × 10^17 moles of fossil fuel carbon, which is only about 1 per cent of the O2 reservoir). […] Unlike oxygen, the atmosphere is not the major surface reservoir of carbon. The amount of carbon in global vegetation is comparable to that in the atmosphere and the amount of carbon in soils (including permafrost) is roughly four times that in the atmosphere. Even these reservoirs are dwarfed by the ocean, which stores forty-five times as much carbon as the atmosphere, thanks to the fact that CO2 reacts with seawater. […] The exchange of carbon between the atmosphere and the land is largely biological, involving photosynthetic uptake and release by aerobic respiration (and, to a lesser extent, fires). […] Remarkably, when we look over Earth history there are fluctuations in the isotopic composition of carbonates, but no net drift up or down. This suggests that there has always been roughly one-fifth of carbon being buried in organic form and the other four-fifths as carbonate rocks. Thus, even on the early Earth, the biosphere was productive enough to support a healthy organic carbon burial flux.”
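
The “one-fifth organic” inference in that last passage rests on a standard carbon-isotope mass balance which the quoted text doesn’t spell out. As a sketch, using commonly cited δ13C values (roughly −5‰ for mantle input, 0‰ for carbonate, −25‰ for organic carbon; these particular numbers are my addition, not the book’s): at steady state the input must equal the burial-weighted average of the outputs, so

$$\delta_{\mathrm{in}} = f_{\mathrm{org}}\,\delta_{\mathrm{org}} + (1 - f_{\mathrm{org}})\,\delta_{\mathrm{carb}} \quad\Rightarrow\quad f_{\mathrm{org}} = \frac{\delta_{\mathrm{carb}} - \delta_{\mathrm{in}}}{\delta_{\mathrm{carb}} - \delta_{\mathrm{org}}} \approx \frac{0-(-5)}{0-(-25)} = 0.2$$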

“The two most important nutrients for life are phosphorus and nitrogen, and they have very different biogeochemical cycles […] The largest reservoir of nitrogen is in the atmosphere, whereas the heavier phosphorus has no significant gaseous form. Phosphorus thus presents a greater recycling challenge for the biosphere. All phosphorus enters the surface Earth system from the chemical weathering of rocks on land […]. Phosphorus is concentrated in rocks in grains or veins of the mineral apatite. Natural selection has made plants on land and their fungal partners […] very effective at acquiring phosphorus from rocks, by manufacturing and secreting a range of organic acids that dissolve apatite. […] The average terrestrial ecosystem recycles phosphorus roughly fifty times before it is lost into freshwaters. […] The loss of phosphorus from the land is the ocean’s gain, providing the key input of this essential nutrient. Phosphorus is stored in the ocean as phosphate dissolved in the water. […] removal of phosphorus into the rock cycle balances the weathering of phosphorus from rocks on land. […] Although there is a large reservoir of nitrogen in the atmosphere, the molecules of nitrogen gas (N2) are extremely strongly bonded together, making nitrogen unavailable to most organisms. To split N2 and make nitrogen biologically available requires a remarkable biochemical feat — nitrogen fixation — which uses a lot of energy. In the ocean the dominant nitrogen fixers are cyanobacteria with a direct source of energy from sunlight. On land, various plants form a symbiotic partnership with nitrogen fixing bacteria, making a home for them in root nodules and supplying them with food in return for nitrogen. […] Nitrogen fixation and denitrification form the major input and output fluxes of nitrogen to both the land and the ocean, but there is also recycling of nitrogen within ecosystems. […] There is an intimate link between nutrient regulation and atmospheric oxygen regulation, because nutrient levels and marine productivity determine the source of oxygen via organic carbon burial. However, ocean nutrients are regulated on a much shorter timescale than atmospheric oxygen because their residence times are much shorter—about 2,000 years for nitrogen and 20,000 years for phosphorus.”

“[F]orests […] are vulnerable to increases in oxygen that increase the frequency and ferocity of fires. […] Combustion experiments show that fires only become self-sustaining in natural fuels when oxygen reaches around 17 per cent of the atmosphere. Yet for the last 370 million years there is a nearly continuous record of fossil charcoal, indicating that oxygen has never dropped below this level. At the same time, oxygen has never risen too high for fires to have prevented the slow regeneration of forests. The ease of combustion increases non-linearly with oxygen concentration, such that above 25–30 per cent oxygen (depending on the wetness of fuel) it is hard to see how forests could have survived. Thus oxygen has remained within 17–30 per cent of the atmosphere for at least the last 370 million years.”

“[T]he rate of silicate weathering increases with increasing CO2 and temperature. Thus, if something tends to increase CO2 or temperature it is counteracted by increased CO2 removal by silicate weathering. […] Plants are sensitive to variations in CO2 and temperature, and together with their fungal partners they greatly amplify weathering rates […] the most pronounced change in atmospheric CO2 over Phanerozoic time was due to plants colonizing the land. This started around 470 million years ago and escalated with the first forests 370 million years ago. The resulting acceleration of silicate weathering is estimated to have lowered the concentration of atmospheric CO2 by an order of magnitude […], and cooled the planet into a series of ice ages in the Carboniferous and Permian Periods.”

“The first photosynthesis was not the kind we are familiar with, which splits water and spits out oxygen as a waste product. Instead, early photosynthesis was ‘anoxygenic’ — meaning it didn’t produce oxygen. […] It could have used a range of compounds, in place of water, as a source of electrons with which to fix carbon from carbon dioxide and reduce it to sugars. Potential electron donors include hydrogen (H2) and hydrogen sulphide (H2S) in the atmosphere, or ferrous iron (Fe2+) dissolved in the ancient oceans. All of these are easier to extract electrons from than water. Hence they require fewer photons of sunlight and simpler photosynthetic machinery. The phylogenetic tree of life confirms that several forms of anoxygenic photosynthesis evolved very early on, long before oxygenic photosynthesis. […] If the early biosphere was fuelled by anoxygenic photosynthesis, plausibly based on hydrogen gas, then a key recycling process would have been the biological regeneration of this gas. Calculations suggest that once such recycling had evolved, the early biosphere might have achieved a global productivity up to 1 per cent of the modern marine biosphere. If early anoxygenic photosynthesis used the supply of reduced iron upwelling in the ocean, then its productivity would have been controlled by ocean circulation and might have reached 10 per cent of the modern marine biosphere. […] The innovation that supercharged the early biosphere was the origin of oxygenic photosynthesis using abundant water as an electron donor. This was not an easy process to evolve. To split water requires more energy — i.e. more high-energy photons of sunlight — than any of the earlier anoxygenic forms of photosynthesis. Evolution’s solution was to wire together two existing ‘photosystems’ in one cell and bolt on the front of them a remarkable piece of biochemical machinery that can rip apart water molecules. The result was the first cyanobacterial cell — the ancestor of all organisms performing oxygenic photosynthesis on the planet today. […] Once oxygenic photosynthesis had evolved, the productivity of the biosphere would no longer have been restricted by the supply of substrates for photosynthesis, as water and carbon dioxide were abundant. Instead, the availability of nutrients, notably nitrogen and phosphorus, would have become the major limiting factors on the productivity of the biosphere — as they still are today.” [If you’re curious to know more about how that fascinating ‘biochemical machinery’ works, this is a great book on these and related topics – US].

“On Earth, anoxygenic photosynthesis requires one photon per electron, whereas oxygenic photosynthesis requires two photons per electron. On Earth it took up to a billion years to evolve oxygenic photosynthesis, based on two photosystems that had already evolved independently in different types of anoxygenic photosynthesis. Around a fainter K- or M-type star […] oxygenic photosynthesis is estimated to require three or more photons per electron — and a corresponding number of photosystems — making it harder to evolve. […] However, fainter stars spend longer on the main sequence, giving more time for evolution to occur.”

“There was a lot more energy to go around in the post-oxidation world, because respiration of organic matter with oxygen yields an order of magnitude more energy than breaking food down anaerobically. […] The revolution in biological complexity culminated in the ‘Cambrian Explosion’ of animal diversity 540 to 515 million years ago, in which modern food webs were established in the ocean. […] Since then the most fundamental change in the Earth system has been the rise of plants on land […], beginning around 470 million years ago and culminating in the first global forests by 370 million years ago. This doubled global photosynthesis, increasing flows of materials. Accelerated chemical weathering of the land surface lowered atmospheric carbon dioxide levels and increased atmospheric oxygen levels, fully oxygenating the deep ocean. […] Although grasslands now cover about a third of the Earth’s productive land surface they are a geologically recent arrival. Grasses evolved amidst a trend of declining atmospheric carbon dioxide, and climate cooling and drying, over the past forty million years, and they only became widespread in two phases during the Miocene Epoch around seventeen and six million years ago. […] Since the rise of complex life, there have been several mass extinction events. […] whilst these rolls of the extinction dice marked profound changes in evolutionary winners and losers, they did not fundamentally alter the operation of the Earth system.” [If you’re interested in this kind of stuff, the evolution of food webs and so on, Herrera et al.’s wonderful book is a great place to start – US]

“The Industrial Revolution marks the transition from societies fuelled largely by recent solar energy (via biomass, water, and wind) to ones fuelled by concentrated ‘ancient sunlight’. Although coal had been used in small amounts for millennia, for example for iron making in ancient China, fossil fuel use only took off with the invention and refinement of the steam engine. […] With the Industrial Revolution, food and biomass have ceased to be the main source of energy for human societies. Instead the energy contained in annual food production, which supports today’s population, is at fifty exajoules (1 EJ = 10^18 joules), only about a tenth of the total energy input to human societies of 500 EJ/yr. This in turn is equivalent to about a tenth of the energy captured globally by photosynthesis. […] solar energy is not very efficiently converted by photosynthesis, which is 1–2 per cent efficient at best. […] The amount of sunlight reaching the Earth’s land surface (2.5 × 10^16 W) dwarfs current total human power consumption (1.5 × 10^13 W) by more than a factor of a thousand.”
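
The energy figures in that passage are mutually consistent; a quick unit check:

$$500\ \mathrm{EJ/yr} = \frac{5\times10^{20}\ \mathrm{J}}{3.15\times10^{7}\ \mathrm{s}} \approx 1.6\times10^{13}\ \mathrm{W}, \qquad \frac{2.5\times10^{16}\ \mathrm{W}}{1.5\times10^{13}\ \mathrm{W}} \approx 1{,}700$$

i.e. total human power consumption is close to the quoted 1.5 × 10^13 W, and insolation on land exceeds it by well over a factor of a thousand.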

“The Earth system’s primary energy source is sunlight, which the biosphere converts and stores as chemical energy. The energy-capture devices — photosynthesizing organisms — construct themselves out of carbon dioxide, nutrients, and a host of trace elements taken up from their surroundings. Inputs of these elements and compounds from the solid Earth system to the surface Earth system are modest. Some photosynthesizers have evolved to increase the inputs of the materials they need — for example, by fixing nitrogen from the atmosphere and selectively weathering phosphorus out of rocks. Even more importantly, other heterotrophic organisms have evolved that recycle the materials that the photosynthesizers need (often as a by-product of consuming some of the chemical energy originally captured in photosynthesis). This extraordinary recycling system is the primary mechanism by which the biosphere maintains a high level of energy capture (productivity).”

“[L]ike all stars on the ‘main sequence’ (which generate energy through the nuclear fusion of hydrogen into helium), the Sun is burning inexorably brighter with time — roughly 1 per cent brighter every 100 million years — and eventually this will overheat the planet. […] Over Earth history, the silicate weathering negative feedback mechanism has counteracted the steady brightening of the Sun by removing carbon dioxide from the atmosphere. However, this cooling mechanism is near the limits of its operation, because CO2 has fallen to limiting levels for the majority of plants, which are key amplifiers of silicate weathering. Although a subset of plants have evolved which can photosynthesize down to lower CO2 levels [the author does not go further into this topic, but here’s a relevant link – US], they cannot draw CO2 down lower than about 10 ppm. This means there is a second possible fate for life — running out of CO2. Early models projected either CO2 starvation or overheating […] occurring about a billion years in the future. […] Whilst this sounds comfortingly distant, it represents a much shorter future lifespan for the Earth’s biosphere than its past history. Earth’s biosphere is entering its old age.”

September 28, 2017 Posted by | Astronomy, Biology, Books, Botany, Chemistry, Geology, Paleontology, Physics | Leave a comment

Words

The words below are words which I encountered while reading the Rex Stout novels The Broken Vase, Double for Death, The Sound of Murder, Mountain Cat, and the Flashman/Fraser novels Flashman and the Dragon & Flashman at the Charge.

Asperity. Tantalus. Whizbang. Hammy. Regnant. Mordacity. Blotter. Quietus. Debouch. Acidulous. Aniline. Prolegomenon. Suasion. Spoor. Mangy. Clematis. Whittle. Palmistry. Carnality. Clangor.

Cerise. Coruscation. Fluster. Conviviality. Interstice. Chirography. Dub. Grubstake. Pilaster. Sagebrush. Pronghorn. Prognathous. Greensward. Palomino. Spelter. Puggle. Lorcha. Kampilan. Caulk. Cherub.

Thew. Effulgence. Poppet. Colander. Brolly. Bund. Pennon. Cove. Lamasery. Lamé. Patter. Gibber. Snickersnee. Blub. Beckon. Tog. Inveigle. Fuddle. Spoony. Roué.

Equerry. Gazette. Rig-out. Lashing. Clamber. Wainscot. Saunter. Tootle. Latterly. Serge. Redoubt. Charabanc. Indaba. Cess. Gotch. Bailiwick. Reveler. Exult. Hawse. Recreant.

September 27, 2017 Posted by | Books, Language | Leave a comment

Type 1 Diabetes Mellitus and Cardiovascular Disease

“Despite the known higher risk of cardiovascular disease (CVD) in individuals with type 1 diabetes mellitus (T1DM), the pathophysiology underlying the relationship between cardiovascular events, CVD risk factors, and T1DM is not well understood. […] The present review will focus on the importance of CVD in patients with T1DM. We will summarize recent observations of potential differences in the pathophysiology of T1DM compared with T2DM, particularly with regard to atherosclerosis. We will explore the implications of these concepts for treatment of CVD risk factors in patients with T1DM. […] The statement will identify gaps in knowledge about T1DM and CVD and will conclude with a summary of areas in which research is needed.”

The above quote is from this paper: Type 1 Diabetes Mellitus and Cardiovascular Disease: A Scientific Statement From the American Heart Association and American Diabetes Association.

I originally intended to cover this one in one of my regular diabetes posts, but I decided in the end that there was simply too much stuff to cover here for it to make sense not to devote an entire post to it. I have quoted extensively from the paper/statement below and I also decided to bold a few of the observations I found particularly important/noteworthy(/worth pointing out to people reading along?).

“T1DM has strong human leukocyte antigen associations to the DQA, DQB, and DRB alleles (2). One or more autoantibodies, including islet cell, insulin, glutamic acid decarboxylase 65 (GAD65), zinc transporter 8 (3), and tyrosine phosphatase IA-2 and IA-2β antibodies, can be detected in 85–90% of individuals on presentation. The rate of β-cell destruction varies, generally occurring more rapidly at younger ages. However, T1DM can also present in adults, some of whom can have enough residual β-cell function to avoid dependence on insulin until many years later. When autoantibodies are present, this is referred to as latent autoimmune diabetes of adulthood. Infrequently, T1DM can present without evidence of autoimmunity but with intermittent episodes of ketoacidosis; between episodes, the need for insulin treatment can come and go. This type of DM, called idiopathic diabetes (1) or T1DM type B, occurs more often in those of African and Asian ancestry (4). Because of the increasing prevalence of obesity in the United States, there are also obese individuals with T1DM, particularly children. Evidence of insulin resistance (such as acanthosis nigricans); fasting insulin, glucose, and C-peptide levels; and the presence of islet cell, insulin, glutamic acid decarboxylase, and phosphatase autoantibodies can help differentiate between T1DM and T2DM, although both insulin resistance and insulin insufficiency can be present in the same patient (5), and rarely, T2DM can present at an advanced stage with low C-peptide levels and minimal islet cell function.”

Overall, CVD events are more common and occur earlier in patients with T1DM than in nondiabetic populations; women with T1DM are more likely to have a CVD event than are healthy women. CVD prevalence rates in T1DM vary substantially based on duration of DM, age of cohort, and sex, as well as possibly by race/ethnicity (8,11,12). The Pittsburgh Epidemiology of Diabetes Complications (EDC) study demonstrated that the incidence of major coronary artery disease (CAD) events in young adults (aged 28–38 years) with T1DM was 0.98% per year and surpassed 3% per year after age 55 years, which makes it the leading cause of death in that population (13). By contrast, incident first CVD in the nondiabetic population ranges from 0.1% in 35- to 44-year-olds to 7.4% in adults aged 85–94 years (14). An increased risk of CVD has been reported in other studies, with the age-adjusted relative risk (RR) for CVD in T1DM being ≈10 times that of the general population (15–17). One of the most robust analyses of CVD risk in this disease derives from the large UK General Practice Research Database (GPRD), comprising data from >7,400 patients with T1DM with a mean ± SD age of 33 ± 14.5 years and a mean DM duration of 15 ± 12 years (8). CVD events in the UK GPRD study occurred on average 10 to 15 years earlier than in matched nondiabetic control subjects.”

“When types of CVD are reported separately, CHD [coronary heart disease] predominates […] The published cumulative incidence of CHD ranges between 2.1% (18) and 19% (19), with most studies reporting cumulative incidences of ≈15% over ≈15 years of follow-up (20–22). […] Although stroke is less common than CHD in T1DM, it is another important CVD end point. Reported incidence rates vary but are relatively low. […] the Wisconsin Epidemiologic Study of Diabetic Retinopathy (WESDR) reported an incidence rate of 5.9% over 20 years (≈0.3%) (21); and the European Diabetes (EURODIAB) Study reported a 0.74% incidence of cerebrovascular disease per year (18). These incidence rates are for the most part higher than those reported in the general population […] PAD [peripheral artery disease] is another important vascular complication of T1DM […] The rate of nontraumatic amputation in T1DM is high, occurring at 0.4–7.2% per year (28). By 65 years of age, the cumulative probability of lower-extremity amputation in a Swedish administrative database was 11% for women with T1DM and 20.7% for men (10). In this Swedish population, the rate of lower-extremity amputation among those with T1DM was nearly 86-fold that of the general population.

“Abnormal vascular findings associated with atherosclerosis are also seen in patients with T1DM. Coronary artery calcification (CAC) burden, an accepted noninvasive assessment of atherosclerosis and a predictor of CVD events in the general population, is greater in people with T1DM than in nondiabetic healthy control subjects […] With regard to subclinical carotid disease, both carotid intima-media thickness (cIMT) and plaque are increased in children, adolescents, and adults with T1DM […] compared with age- and sex-matched healthy control subjects […] Endothelial function is altered even at a very early stage of T1DM […] Taken together, these data suggest that preclinical CVD can be seen more frequently and to a greater extent in patients with T1DM, even at an early age. Some data suggest that its presence may portend CVD events; however, how these subclinical markers function as end points is not clear.”

“Neuropathy in T1DM can lead to abnormalities in the response of the coronary vasculature to sympathetic stimulation, which may manifest clinically as resting tachycardia or bradycardia, exercise intolerance, orthostatic hypotension, loss of the nocturnal decline in BP, or silent myocardial ischemia on cardiac testing. These abnormalities can lead to delayed presentation of CVD. An early indicator of cardiac autonomic neuropathy is reduced heart rate variability […] Estimates of the prevalence of cardiac autonomic neuropathy in T1DM vary widely […] Cardiac neuropathy may affect as many as ≈40% of individuals with T1DM (45).”

CVD events occur much earlier in patients with T1DM than in the general population, often after 2 decades of T1DM, which in some patients may be by age 30 years. Thus, in the EDC study, CVD was the leading cause of death in T1DM patients after 20 years of disease duration, at rates of >3% per year (13). Rates of CVD this high fall into the National Cholesterol Education Program’s high-risk category and merit intensive CVD prevention efforts (48). […] CVD events are not generally expected to occur during childhood, even in the setting of T1DM; however, the atherosclerotic process begins during childhood. Children and adolescents with T1DM have subclinical CVD abnormalities even within the first decade of DM diagnosis according to a number of different methodologies”.

Rates of CVD are lower in premenopausal women than in men […much lower: “Cardiovascular disease develops 7 to 10 years later in women than in men” – US]. In T1DM, these differences are erased. In the United Kingdom, CVD affects men and women with T1DM equally at <40 years of age (23), although after age 40 years, men are affected more than women (51). Similar findings on CVD mortality rates were reported in a large Norwegian T1DM cohort study (52) and in the Allegheny County (PA) T1DM Registry (13), which reported the relative impact of CVD compared with the general population was much higher for women than for men (standardized mortality ratio [SMR] 13.2 versus 5.0 for total mortality and 24.7 versus 8.8 for CVD mortality, women versus men). […] Overall, T1DM appears to eliminate most of the female sex protection seen in the nondiabetic population.”
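
A quick note on the standardized mortality ratios quoted above, since SMRs recur throughout this statement: the SMR is simply observed deaths divided by the deaths expected if the cohort had experienced the reference population’s age- and sex-specific rates,

$$\mathrm{SMR} = \frac{O}{E}, \qquad E = \sum_i n_i\, r_i$$

where n_i is the cohort’s person-time in stratum i and r_i the reference death rate in that stratum. An SMR of 24.7 for CVD mortality thus means roughly 25 times the CVD deaths expected among comparable women in the general population.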

“The data on atherosclerosis in T1DM are limited. A small angiographic study compared 32 individuals with T1DM to 31 nondiabetic patients matched for age and symptoms (71). That study found atherosclerosis in the setting of T1DM was characterized by more severe (tighter) stenoses, more extensive involvement (multiple vessels), and more distal coronary findings than in patients without DM. A quantitative coronary angiographic study in T1DM suggested more severe, distal disease and an overall increased burden compared with nondiabetic patients (up to fourfold higher) (72).”

“In the general population, inflammation is a central pathological process of atherosclerosis (79). Limited pathology data suggest that inflammation is more prominent in patients with DM than in nondiabetic control subjects (70), and those with T1DM in particular are affected. […] Knowledge of the clinical role of inflammatory markers in T1DM and CVD prediction and management is in its infancy, but early data suggest a relationship with preclinical atherosclerosis. […] Studies showed C-reactive protein is elevated within the first year of diagnosis of T1DM (80), and interleukin-6 and fibrinogen levels are high in individuals with an average disease duration of 2 years (81), independent of adiposity and glycemia (82). Other inflammatory markers such as soluble interleukin-2 receptor (83) and CD40 ligand (84,85) are higher in patients with T1DM than in nondiabetic subjects. Inflammation is evident in youth, even soon after the diagnosis of T1DM. […] The mechanisms by which inflammation operates in T1DM are likely multiple but may include hyperglycemia and hypoglycemia, excess adiposity or altered body fat distribution, thrombosis, and adipokines. Several recent studies have demonstrated a relationship between acute hypoglycemia and indexes of systemic inflammation […] These studies suggest that acute hypoglycemia in T1DM produces complex vascular effects involved in the activation of proinflammatory, prothrombotic, and proatherogenic mechanisms. […] Fibrinogen, a prothrombotic acute phase reactant, is increased in T1DM and is associated with premature CVD (109), and it may be important in vessel thrombosis at later stages of CVD.”

“Genetic polymorphisms appear to influence the progression and prognosis of CVD in T1DM […] Like fibrinogen, haptoglobin is an acute phase protein that inhibits hemoglobin-induced oxidative tissue damage by binding to free hemoglobin (110). […] In humans, there are 2 classes of alleles at the haptoglobin locus, giving rise to 3 possible genotypes: haptoglobin 1-1, haptoglobin 2-1, and haptoglobin 2-2. […] In T1DM, there is an independent twofold increased incidence of CAD in haptoglobin 2-2 carriers compared with those with the haptoglobin 1-1 genotype (117); the 2-1 genotype is associated with an intermediate effect of increased CVD risk. More recently, an independent association was reported in T1DM between the haptoglobin 2-2 genotype and early progression to end-stage renal disease (ESRD) (118). In the CACTI study group, the presence of the haptoglobin 2-2 genotype also doubled the risk of CAC [coronary artery calcification] in patients free from CAC at baseline, after adjustment for traditional CVD risk factors (119). […] At present, genetic testing for polymorphisms in T1DM [however] has no clear clinical utility in CVD prediction or management.”

“Dysglycemia is often conceived of as a vasculopathic process. Preclinical atherosclerosis and epidemiological studies generally support this relationship. Clinical trial data from the DCCT supplied definitive findings strongly in favor of beneficial effects of better glycemic control on CVD outcomes. Glycemia is associated with preclinical atherosclerosis in studies that include tests of endothelial function, arterial stiffness, cIMT, autonomic neuropathy, and left ventricular (LV) function in T1DM […] LV mass and function improve with better glycemic control (126,135,136). Epidemiological evidence generally supports the relationship between hyperglycemia and clinical CHD events in T1DM. […] A large Swedish database review recently reported a reasonably strong association between HbA1c and CAD in T1DM (HR, 1.3 per 1% HbA1c increase) (141). […] findings support the recommendation that early optimal glycemic control in T1DM will have long-term benefits for CVD reduction.”

“Obesity is a known independent risk factor for CVD in nondiabetic populations, but the impact of obesity in T1DM has not been fully established. Traditionally, T1DM was a condition of lean individuals, yet the prevalence of overweight and obesity in T1DM has increased significantly […] This is related to epidemiological shifts in the population overall, tighter glucose control leading to less glucosuria, more frequent/greater caloric intake to fend off real and perceived hypoglycemia, and the specific effects of intensive DM therapy, which has been shown to increase the prevalence of obesity (152). Indeed, several clinical trials, including the DCCT, demonstrate that intensive insulin therapy can lead to excessive weight gain in a subset of patients with T1DM (152). […] No systematic evaluation has been conducted to assess whether improving insulin sensitization lowers rates of CVD. Ironically, the better glycemic control associated with insulin therapy may lead to weight gain, with a superimposed insulin resistance, which may be approached by giving higher doses of insulin. However, some evidence from the EDC study suggests that weight gain in the presence of improved glycemic control is associated with an improved CVD risk profile (162). […] Although T1DM is characteristically a disease of absolute insulin deficiency (154), insulin resistance appears to contribute to CHD risk in patients with T1DM. For example, having a family history of T2DM, which suggests a genetic predisposition for insulin resistance, has been associated with an increased CVD risk in patients with T1DM (155).”

“In general, the lipid levels of adults with well-controlled T1DM are similar to those of individuals without DM […] Worse glycemic control, higher weight (164), and more insulin resistance as measured by euglycemic clamp (165) are associated with a more atherogenic cholesterol distribution in men and women with T1DM […] Studies in pediatric and young adult populations suggest higher lipid values than in youth without T1DM, with glycemic control being a significant contributor (148). […] Most studies show that as is true for the general population, dyslipidemia is a risk factor for CVD in T1DM. Qualitative differences in lipid and lipoprotein fractions are being investigated to determine whether abnormal lipid function may contribute to this. The HDL-C fraction has been of particular interest because the metabolism of HDL-C in T1DM may be altered because of abnormal lipoprotein lipase and hepatic lipase activities related to exogenously administered insulin […] Additionally, as noted earlier, the less efficient handling of heme by the haptoglobin 2-2 genotype in patients with T1DM leaves these complexes less capable of being removed by macrophages, which allows them to associate with HDL, which renders it less functional (116). […] Conventionally, pharmacotherapy is used more aggressively for patients with T1DM and lipid disorders than for nondiabetic patients; however, recommendations for treatment are mostly extrapolated from interventional trials in adults with T2DM, in which rates of CVD events are equivalent to those in secondary prevention populations. Whether this is appropriate for T1DM is not clear […] Awareness of CVD risk and screening for hypercholesterolemia in T1DM have increased over time, yet recent data indicate that control is suboptimal, particularly in younger patients who have not yet developed long-term complications and might therefore benefit from prevention efforts (173). Adults with T1DM who have abnormal lipids and additional risk factors for CVD (e.g., hypertension, obesity, or smoking) who have not developed CVD should be treated with statins. Adults with CVD and T1DM should also be treated with statins, regardless of whether they have additional risk factors.”

“Diabetic kidney disease (DKD) is a complication of T1DM that is strongly linked to CVD. DKD can present as microalbuminuria or macroalbuminuria, impaired GFR, or both. These represent separate but complementary manifestations of DKD and are often, but not necessarily, sequential in their presentation. […] the risk of all-cause mortality increased with the severity of DKD, from microalbuminuria to macroalbuminuria to ESRD. […] Microalbuminuria is likely an indicator of diffuse vascular injury. […] Microalbuminuria is highly correlated with CVD (49,180–182). In the Steno Diabetes Center (Gentofte, Denmark) cohort, T1DM patients with isolated microalbuminuria had a 4.2-fold increased risk of CVD (49,180). In the EDC study, microalbuminuria was associated with mortality risk, with an SMR of 6.4. In the FinnDiane study, mortality risk was also increased with microalbuminuria (SMR, 2.8). […] A recent review summarized these data. In patients with T1DM and microalbuminuria, there was an RR of all-cause mortality of 1.8 (95% CI, 1.5–2.1) that was unaffected by adjustment for confounders (183). Similar RRs were found for mortality from CVD (1.9; 95% CI, 1.3–2.9), CHD (2.1; 95% CI, 1.2–3.5), and aggregate CVD mortality (2.0; 95% CI, 1.5–2.6).”

“Macroalbuminuria represents more substantial kidney damage and is also associated with CVD. Mechanisms may be more closely related to functional consequences of kidney disease, such as higher LDL-C and lower HDL-C. Prospective data from Finland indicate the RR for CVD is ≈10 times greater in patients with macroalbuminuria than in those without macroalbuminuria (184). Historically, in the [Danish] Steno cohort, patients with T1DM and macroalbuminuria had a 37-fold increased risk of CVD mortality compared with the general population (49,180); however, a more recent report from EURODIAB suggests a much lower RR (8.7; 95% CI, 4.03–19.0) (185). […] In general, impaired GFR is a risk factor for CVD, independent of albuminuria […] ESRD [end-stage renal disease, US], the extreme form of impaired GFR, is associated with the greatest risk of CVD of all varieties of DKD. In the EDC study, ESRD was associated with an SMR for total mortality of 29.8, whereas in the FinnDiane study, it was 18.3. It is now clear that GFR loss and the development of eGFR <60 mL · min−1 · 1.73 m−2 can occur without previous manifestation of microalbuminuria or macroalbuminuria (177,178). In T1DM, the precise incidence, pathological basis, and prognosis of this phenotype remain incompletely described.”

“Prevention of DKD remains challenging. Although microalbuminuria and macroalbuminuria are attractive therapeutic targets for CVD prevention, there are no specific interventions directed at the kidney that prevent DKD. Inhibition of the renin-angiotensin-aldosterone system is an attractive option but has not been demonstrated to prevent DKD before it is clinically apparent. […] In contrast to prevention efforts, treatment of DKD with agents that inhibit the renin-angiotensin-aldosterone system is effective. […] angiotensin-converting enzyme (ACE) inhibitors reduce the progression of DKD and death in T1DM (200). Thus, once DKD develops, treatment is recommended to prevent progression and to reduce or minimize other CVD risk factors, which has a positive effect on CVD risk. All patients with T1DM and hypertension or albuminuria should be treated with an ACE inhibitor. If an ACE inhibitor is not tolerated, an angiotensin II receptor blocker (ARB) is likely to have similar efficacy, although this has not been studied specifically in patients with T1DM. Optimal dosing for ACE inhibitors or ARBs in the setting of DKD is not well defined; titration may be guided by BP, albuminuria, serum potassium, and creatinine. Combination therapy of ACE and ARB blockade cannot be specifically recommended at this time.”

“Hypertension is more common in patients with T1DM and is a powerful risk factor for CVD, regardless of whether an individual has DKD. In the CACTI [Coronary Artery Calcification in Type 1 Diabetes] study, hypertension was much more common in patients with T1DM than in age- and sex-matched control subjects (43% versus 15%, P < 0.001); in fact, only 42% of all T1DM patients met the Joint National Committee 7 goal (BP <130/80 mmHg) (201). Hypertension also affects youth with T1DM. The SEARCH trial of youth aged 3–17 years with T1DM (n = 3,691) found the prevalence of elevated BP was 5.9% […] Abnormalities in BP can stem from DKD or obesity. Hyperglycemia may also contribute to hypertension over the long term. In the DCCT/EDIC cohort, higher HbA1c was strongly associated with increased risk of hypertension, and intensive DM therapy reduced the long-term risk of hypertension by 24% (203). […] There are few published trials about the ideal pharmacotherapeutic agent(s) for hypertension in T1DM.”

“Smoking is a major risk factor for CVD, particularly PAD (213); however, there is little information on the prevalence or effects of smoking in T1DM. […] The added CVD risk of smoking may be particularly important in patients with DM, who are already vulnerable. In patients with T1DM, cigarette smoking [has been shown to increase] the risk of DM nephropathy, retinopathy, and neuropathy (214,215) […] Smoking increases CVD risk factors in T1DM via deterioration in glucose metabolism, lipids, and endothelial function (216). Unfortunately, smoking cessation can result in weight gain, which may deter smokers with DM from quitting (217). […] Smoking cessation should be strongly recommended to all patients with T1DM as part of an overall strategy to lower CVD, in particular PAD.”

“CVD risk factors are more common in children with T1DM than in the general pediatric population (218). Population-based studies estimate that 14–45% of children with T1DM have ≥2 CVD risk factors (219–221). As with nondiabetic children, the prevalence of CVD risk factors increases with age (221). […] The American Academy of Pediatrics, the American Heart Association, and the ADA recognize patients with DM, and particularly T1DM, as being in a higher-risk group who should receive more aggressive risk factor screening and treatment than nondiabetic children […] The available data suggest many children and adolescents with T1DM do not receive the recommended treatment for their dyslipidemia and hypertension (220,222).”

“There are no CVD risk-prediction algorithms for patients with T1DM in widespread use. […] Use of the Framingham Heart Study and UK Prospective Diabetes Study (UKPDS) algorithms in the EDC study population did not provide good predictive results, which suggests that neither general nor T2DM risk algorithms are sufficient for risk prediction in T1DM (235). On the basis of these findings, a model has been developed with the use of EDC cohort data (236) that incorporates measures outside the Framingham construct (white blood cell count, albuminuria, DM duration). Although this algorithm was validated in the EURODIAB Study cohort (237), it has not been widely adopted, and diagnostic and therapeutic decisions are often based on global CVD risk-estimation methods (i.e., Framingham risk score or T2DM-specific UKPDS risk engine [http://www.dtu.ox.ac.uk/riskengine/index.php]). Other options for CVD risk prediction in patients with T1DM include the ADA risk-assessment tool (http://main.diabetes.org/dorg/mha/main_en_US.html?loc=dorg-mha) and the Atherosclerosis Risk in Communities (ARIC) risk predictor (http://www.aricnews.net/riskcalc/html/RC1.html), but again, accuracy for T1DM is not clear.”
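For readers unfamiliar with how engines like the Framingham risk score or the UKPDS risk engine work mechanically: they typically boil down to a survival-model calculation in which a linear predictor built from the risk factors is plugged into a baseline survival function. The sketch below illustrates the general shape of such a calculation only; every coefficient and constant is a made-up placeholder, not taken from any published engine:

```python
import math

# Schematic Cox-type 10-year risk calculation of the general form used
# by Framingham-style engines: risk = 1 - S0(t) ** exp(lp - lp_mean).
# All values below are hypothetical placeholders.
BASELINE_SURVIVAL_10Y = 0.95   # hypothetical S0(10)
MEAN_LINEAR_PREDICTOR = 5.5    # hypothetical population-mean lp
COEFFS = {                     # hypothetical log-hazard coefficients
    "age": 0.04, "ln_sbp": 0.5, "ln_tc_hdl_ratio": 0.6,
    "smoker": 0.35, "dm_duration": 0.02, "albuminuria": 0.45,
}

def ten_year_risk(covariates: dict) -> float:
    lp = sum(COEFFS[k] * v for k, v in covariates.items())
    return 1.0 - BASELINE_SURVIVAL_10Y ** math.exp(lp - MEAN_LINEAR_PREDICTOR)

patient = {"age": 45, "ln_sbp": math.log(135), "ln_tc_hdl_ratio": math.log(4.2),
           "smoker": 0, "dm_duration": 20, "albuminuria": 1}
print(f"10-year CVD risk: {ten_year_risk(patient):.1%}")
```

The EDC-derived model mentioned above presumably differs in its covariates (white blood cell count, albuminuria, DM duration) and fitted values, but published risk engines of this family usually share this overall structure.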

September 25, 2017 Posted by | Cardiology, Diabetes, Epidemiology, Genetics, Medicine, Nephrology, Neurology, Pharmacology, Studies | Leave a comment

National EM Board Review Course: Environmental Emergencies

Some links to resources on stuff covered in the lecture:

Drowning.
Diving disorders.
Henry’s law/Boyle’s law/Dalton’s law.
Nitrogen narcosis.
Decompression Sickness.
Hyperbaric Oxygen Therapy.
Blast Injuries.
Altitude sickness.
High Altitude Flatus Expulsion (HAFE).
High-Altitude Pulmonary Edema.
Hypothermia.
Cold-induced vasodilation.
Osborn Waves.
Frostbite (‘think of this as a thermal burn equivalent caused by cold’).
Trench foot.
Heat stroke.
Heat cramps.
Thermal Burns.
Parkland formula.
Escharotomy and Burns.
Electrical Injuries in Emergency Medicine.
Lightning Injuries.
Radiation exposure.
Inhalation Anthrax.
Botulism As a Bioterrorism Agent.
Chemical weapon/vesicants/nerve agent.
Bite injuries.
Cat scratch disease.
Rabies.
Rattlesnake Bite.
Snakebites: First aid.
Snake bite: coral snakes.
Black widow spider bite.
Brown recluse spider bite.
Marine envenomation.

September 22, 2017 Posted by | Lectures, Medicine | Leave a comment

The Biology of Moral Systems (III)

This will be my last post about the book. It’s an important work which deserves to be read by far more people than have already read it. I have added some quotes and observations from the last chapters of the book below.

“If egoism, as self-interest in the biologists’ sense, is the reason for the promotion of ethical behavior, then, paradoxically, it is expected that everyone will constantly promote the notion that egoism is not a suitable theory of action, and, a fortiori, that he himself is not an egoist. Most of all he must present this appearance to his closest associates because it is in his best interests to do so – except, perhaps, to his closest relatives, to whom his egoism may often be displayed in cooperative ventures from which some distant- or non-relative suffers. Indeed, it may be arguable that it will be in the egoist’s best interest not to know (consciously) or to admit to himself that he is an egoist because of the value to himself of being able to convince others he is not.”

“The function of [societal] punishments and rewards, I have suggested, is to manipulate the behavior of participating individuals, restricting individual efforts to serve their own interests at others’ expense so as to promote harmony and unity within the group. The function of harmony and unity […] is to allow the group to compete against hostile forces, especially other human groups. It is apparent that success of the group may serve the interests of all individuals in the group; but it is also apparent that group success can be achieved with different patterns of individual success differentials within the group. So […] it is in the interests of those who are differentially successful to promote both unity and the rules so that group success will occur without necessitating changes deleterious to them. Similarly, it may be in the interests of those individuals who are relatively unsuccessful to promote dissatisfaction with existing rules and the notion that group success would be more likely if the rules were altered to favor them. […] the rules of morality and law alike seem not to be designed explicitly to allow people to live in harmony within societies but to enable societies to be sufficiently united to deter their enemies. Within-society harmony is the means not the end. […] extreme within-group altruism seems to correlate with and be historically related to between-group strife.”

“There are often few or no legitimate or rational expectations of reciprocity or “fairness” between social groups (especially warring or competing groups such as tribes or nations). Perhaps partly as a consequence, lying, deceit, or otherwise nasty or even heinous acts committed against enemies may sometimes not be regarded as immoral by others within the group of those who commit them. They may even be regarded as highly moral if they seem dramatically to serve the interests of the group whose members commit them.”

“Two major assumptions, made universally or most of the time by philosophers, […] are responsible for the confusion that prevents philosophers from making sense out of morality […]. These assumptions are the following: 1. That proximate and ultimate mechanisms or causes have the same kind of significance and can be considered together as if they were members of the same class of causes; this is a failure to understand that proximate causes are evolved because of ultimate causes, and therefore may be expected to serve them, while the reverse is not true. Thus, pleasure is a proximate mechanism that in the usual environments of history is expected to impel us toward behavior that will contribute to our reproductive success. Contrarily, acts leading to reproductive success are not proximate mechanisms that evolved because they served the ultimate function of bringing us pleasure. 2. That morality inevitably involves some self-sacrifice. This assumption involves at least three elements: a. Failure to consider altruism as benefits to the actor. […] b. Failure to comprehend all avenues of indirect reciprocity within groups. c. Failure to take into account both within-group and between-group benefits.”

“If morality means true sacrifice of one’s own interests, and those of his family, then it seems to me that we could not have evolved to be moral. If morality requires ethical consistency, whereby one does not do socially what he would not advocate and assist all others also to do, then, again, it seems to me that we could not have evolved to be moral. […] humans are not really moral at all, in the sense of “true sacrifice” given above, but […] the concept of morality is useful to them. […] If it is so, then we might imagine that, in the sense and to the extent that they are anthropomorphized, the concepts of saints and angels, as well as that of God, were also created because of their usefulness to us. […] I think there have been far fewer […] truly self-sacrificing individuals than might be supposed, and most cases that might be brought forward are likely instead to be illustrations of the complexity and indirectness of reciprocity, especially the social value of appearing more altruistic than one is. […] I think that […] the concept of God must be viewed as originally generated and maintained for the purpose – now seen by many as immoral – of furthering the interests of one group of humans at the expense of one or more other groups. […] Gods are inventions originally developed to extend the notion that some have greater rights than others to design and enforce rules, and that some are more destined to be leaders, others to be followers. This notion, in turn, arose out of prior asymmetries in both power and judgment […] It works when (because) leaders are (have been) valuable, especially in the context of intergroup competition.”

“We try to move moral issues in the direction of involving no conflict of interest, always, I suggest, by seeking universal agreement with our own point of view.”

“Moral and legal systems are commonly distinguished by those, like moral philosophers, who study them formally. I believe, however, that the distinction between them is usually poorly drawn, and based on a failure to realize that moral as well as legal behavior occurs as a result of probable and possible punishments and rewards. […] we often internalize the rules of law as well as the rules of morality – and perhaps by the same process […] It would seem that the rules of law are simply a specialized, derived aspect of what in earlier societies would have been a part of moral rules. On the other hand, law covers only a fraction of the situations in which morality is involved […] Law […] seems to be little more than ethics written down.”

“Anyone who reads the literature on dispute settlement within different societies […] will quickly understand that genetic relatedness counts: it allows for one-way flows of benefits and alliances. Long-term association also counts; it allows for reliability and also correlates with genetic relatedness. […] The larger the social group, the more fluid its membership; and the more attenuated the social interactions of its membership, the more they are forced to rely on formal law”.

“[I]ndividuals have separate interests. They join forces (live in groups; become social) when they share certain interests that can be better realized for all by close proximity or some forms of cooperation. Typically, however, the overlaps of interests rarely are completely congruent with those of either other individuals or the rest of the group. This means that, even during those times when individual interests within a group are most broadly overlapping, we may expect individuals to temper their cooperation with efforts to realize their own interests, and we may also expect them to have evolved to be adept at using others, or at thwarting the interests of others, to serve themselves (and their relatives). […] When the interests of all are most nearly congruent, it is essentially always due to a threat shared equally. Such threats almost always have to be external (or else they are less likely to affect everyone equally […] External threats to societies are typically other societies. Maintenance of such threats can yield situations in which everyone benefits from rigid, hierarchical, quasi-military, despotic government. Liberties afforded leaders – even elaborate perquisites of dictators – may be tolerated because such threats are ever-present […] Extrinsic threats, and the governments they produce, can yield inflexibilities of political structures that can persist across even lengthy intervals during which the threats are absent. Some societies have been able to structure their defenses against external threats as separate units (armies) within society, and to keep them separate. These rigidly hierarchical, totalitarian, and dictatorial subunits rise and fall in size and influence according to the importance of the external threat. […] Discussion of liberty and equality in democracies closely parallels discussions of morality and moral systems. In either case, adding a perspective from evolutionary biology seems to me to have potential for clarification.”

“It is indeed common, if not universal, to regard moral behavior as a kind of altruism that necessarily yields the altruist less than he gives, and to see egoism as either the opposite of morality or the source of immorality; but […] this view is usually based on an incomplete understanding of nepotism, reciprocity, and the significance of within-group unity for between-group competition. […] My view of moral systems in the real world, however, is that they are systems in which costs and benefits of specific actions are manipulated so as to produce reasonably harmonious associations in which everyone nevertheless pursues his own (in evolutionary terms) self-interest. I do not expect that moral and ethical arguments can ever be finally resolved. Compromises and contracts, then, are (at least currently) the only real solutions to actual conflicts of interest. This is why moral and ethical decisions must arise out of decisions of the collective of affected individuals; there is no single source of right and wrong.

I would also argue against the notion that rationality can be easily employed to produce a world of humans that self-sacrifice in favor of other humans, not to say nonhuman animals, plants, and inanimate objects. Declarations of such intentions may themselves often be the acts of self-interested persons developing, consciously or not, a socially self-benefiting view of themselves as extreme altruists. In this connection it is not irrelevant that the more dissimilar a species or object is to one’s self the less likely it is to provide a competitive threat by seeking the same resources. Accordingly, we should not be surprised to find humans who are highly benevolent toward other species or inanimate objects (some of which may serve them uncomplainingly), yet relatively hostile and noncooperative with fellow humans. As Darwin (1871) noted with respect to dogs, we have selected our domestic animals to return our altruism with interest.”

“It is not easy to discover precisely what historical differences have shaped current male-female differences. If, however, humans are in a general way similar to other highly parental organisms that live in social groups […] then we can hypothesize as follows: for men much of sexual activity has had as a main (ultimate) significance the initiating of pregnancies. It would follow that when a man avoids copulation it is likely to be because (1) there is no likelihood of pregnancy or (2) the costs entailed (venereal disease, danger from competition with other males, lowered status if the event becomes public, or an undesirable commitment) are too great in comparison with the probability that pregnancy will be induced. The man himself may be judging costs against the benefits of immediate sensory pleasures, such as orgasms (i.e., rather than thinking about pregnancy he may say that he was simply uninterested), but I am assuming that selection has tuned such expectations in terms of their probability of leading to actual reproduction […]. For women, I hypothesize, sexual activity per se has been more concerned with the securing of resources (again, I am speaking of ultimate and not necessarily conscious concerns) […]. Ordinarily, when women avoid or resist copulation, I speculate further, the disinterest, aversion, or inhibition may be traceable eventually to one (or more) of three causes: (1) there is no promise of commitment (of resources), (2) there is a likelihood of undesirable commitment (e.g., to a man with inadequate resources), or (3) there is a risk of loss of interest by a man with greater resources than the one involved […] A man behaving so as to avoid pregnancies, and who derives from an evolutionary background of avoiding pregnancies, should be expected to favor copulation with women who are for age or other reasons incapable of pregnancy. A man derived from an evolutionary process in which securing of pregnancies typically was favored, may be expected to be most interested sexually in women most likely to become pregnant and near the height of the reproductive probability curve […] This means that men should usually be expected to anticipate the greatest sexual pleasure with young, healthy, intelligent women who show promise of providing superior parental care. […] In sexual competition, the alternatives of a man without resources are to present himself as a resource (i.e., as a mimic of one with resources or as one able and likely to secure resources because of his personal attributes […]), to obtain sex by force (rape), or to secure resources through a woman (e.g., allow himself to be kept by a relatively undesired woman, perhaps as a vehicle to secure liaisons with other women). […] in nonhuman species of higher animals, control of the essential resources of parenthood by females correlates with lack of parental behavior by males, promiscuous polygyny, and absence of long-term pair bonds. There is some evidence of parallel trends within human societies (cf. Flinn, 1981).” [It’s of some note that quite a few good books have been written on these topics since Alexander first published his book, so there are many places to look for detailed coverage of topics like these if you’re curious to know more – I can recommend both Kappeler & van Schaik (a must-read book on sexual selection, in my opinion) & Bobby Low. I didn’t think too highly of Miller or Meston & Buss, but those are a few other books on these topics which I’ve read – US].

“The reason that evolutionary knowledge has no moral content is [that] morality is a matter of whose interests one should, by conscious and willful behavior, serve, and how much; evolutionary knowledge contains no messages on this issue. The most it can do is provide information about the reasons for current conditions and predict some consequences of alternative courses of action. […] If some biologists and nonbiologists make unfounded assertions into conclusions, or develop pernicious and fallible arguments, then those assertions and arguments should be exposed for what they are. The reason for doing this, however, is not […should not be..? – US] to prevent or discourage any and all analyses of human activities, but to enable us to get on with a proper sort of analysis. Those who malign without being specific; who attack people rather than ideas; who gratuitously translate hypotheses into conclusions and then refer to them as “explanations,” “stories,” or “just-so-stories”; who parade the worst examples of argument and investigation with the apparent purpose of making all efforts at human self-analysis seem silly and trivial, I see as dangerously close to being ideologues at least as worrisome as those they malign. I cannot avoid the impression that their purpose is not to enlighten, but to play upon the uneasiness of those for whom the approach of evolutionary biology is alien and disquieting, perhaps for political rather than scientific purposes. It is more than a little ironic that the argument of politics rather than science is their own chief accusation with respect to scientists seeking to analyze human behavior in evolutionary terms (e.g. Gould and Lewontin, 1979 […]).”

“[C]urrent selective theory indicates that natural selection has never operated to prevent species extinction. Instead it operates by saving the genetic materials of those individuals or families that outreproduce others. Whether species become extinct or not (and most have) is an incidental or accidental effect of natural selection. An inference from this is that the members of no species are equipped, as a direct result of their evolutionary history, with traits designed explicitly to prevent extinction when that possibility looms. […] Humans are no exception: unless their comprehension of the likelihood of extinction is so clear and real that they perceive the threat to themselves as individuals, and to their loved ones, they cannot be expected to take the collective action that will be necessary to reduce the risk of extinction.”

“In examining ourselves […] we are forced to use the attributes we wish to analyze to carry out the analysis, while resisting certain aspects of the analysis. At the very same time, we pretend that we are not resisting at all but are instead giving perfectly legitimate objections; and we use our realization that others will resist the analysis, for reasons as arcane as our own, to enlist their support in our resistance. And they very likely will give it. […] If arguments such as those made here have any validity it follows that a problem faced by everyone, in respect to morality, is that of discovering how to subvert or reduce some aspects of individual selfishness that evidently derive from our history of genetic individuality.”

“Essentially everyone thinks of himself as well-meaning, but from my viewpoint a society of well-meaning people who understand themselves and their history very well is a better milieu than a society of well-meaning people who do not.”

September 22, 2017 Posted by | Anthropology, Biology, Books, Evolutionary biology, Genetics, Philosophy, Psychology, Religion | Leave a comment

The fall of Rome

“According to the conventional view of things, the military and political disintegration of Roman power in the West precipitated the end of a civilization. Ancient sophistication died, leaving the western world in the grip of a ‘Dark Age’ of material and intellectual poverty, out of which it was only slowly to emerge. […] a much more comfortable vision of the end of empire [has been] spreading in recent years through the English-speaking world. […] There has been a sea change in the language used to describe post-Roman times. Words like ‘decline’ and ‘crisis’ […] have largely disappeared from historians’ vocabularies, to be replaced by neutral terms, like ‘transition’, ‘change’, and ‘transformation’. […] some historians in recent decades have also questioned the entire premiss that the dissolution of the Roman empire in the West was caused by hostile and violent invasion. […] some recent works […] present the theory of peaceful accommodation as a universally applicable model to explain the end of the Roman empire.”

Ward-Perkins’ book is a work which sets out to show why he thinks those people are wrong, presenting along the way much evidence for widespread violence and disruption throughout the Western Empire towards the end. Despite the depressing topics covered therein I really enjoyed the book; Ward-Perkins spends a lot of time on material culture aspects and archaeological remains – it’s perhaps a telling fact that the book’s appendix deals with the properties of pottery and potsherds, and how important these kinds of material remains might be in terms of helping to make sense of things which happened in the far past. A general problem in a collapse setting is that when conditions deteriorate a lot, the sort of high-quality evidence that historians and archaeologists love to look at tends to disappear; censuses stop being taken (so you have to guess at how many people were around, instead of knowing it reasonably well – which can be particularly annoying if the disrupting factor was also killing people), innumeracy and illiteracy increase (translating to fewer written sources available), and so on. I should perhaps interpose that these sorts of issues do not just pertain to historical sources from the past; similar problems also arise in various analytical contexts today. Countries in a state of crisis (war, epidemics) tend to produce poor and questionable data, if any data can be gathered at all, a point I recall being covered in Newman & DeRouen’s book; related topics were also discussed in M’ikanatha & Iskander’s book, as people working in public health sometimes face these problems as well (that work was of course focused on disease surveillance aspects, and in that context I might note that the authors mentioned that poor data availability does not necessarily mean that no data is ‘available’; for example in such settings (cheap) proxy data of various kinds may sometimes be usefully employed to inform resource allocation decisions, even if the use of such data would not be cost-effective or meaningful in a different setting). Another point of relevance is of course that some types of evidence survive the passage of time much better than others; pottery is much harder to destroy than low-quality parchment.

The point of looking at things like pottery and coins (a related topic I recall Webster covering in some detail in his book about The Roman Invasion of Britain) is not mainly that it’s super interesting to look at different types of pottery or coins – the point is that these types of material remains tend to be extremely informative about many things besides the artifacts themselves. Pottery was used for storing goods, and those goods aren’t around any longer but the pottery still is. And ‘pottery’ is not just ‘pottery’; different types of pottery required different levels of skill, and an important variable here is the level of standardization – Roman pottery was in general of high quality and was highly standardized; by examining e.g. clay content you can actually often tell where the pottery was made; specific producers produced pottery that was easily date-able. Coins were used for purchasing things and widespread use of them implies the existence of trading networks not relying on barter trade. Different coins had different values and there are important insights to be gathered from the properties of these artifacts; Joseph Tainter e.g. talks in his book about how the silver content of Roman coins gradually decreased over time, indicating at some periods that the empire was apparently undergoing quite severe inflation (the Roman military was compensated in coin, not goods, so by tweaking the amount of copper or silver in those coins the emperors could save a bit of money – which many of them did). If the amount of low-denomination coins drops a lot this might be an indication that people were reverting to barter trade. And so on. If you find some Roman coins in a field in Britain, it might mean that there used to be a Roman garrison there. If people used to use roof tiles and build buildings out of stone, rather than wood, and you observe that they stopped doing that, that’s also a clue that something changed.

A lot of the kind of evidence Ward-Perkins looks at in his book is to some extent indirect evidence, but the point is that there’s a lot of it, and if different sources tell roughly similar stories it sort of starts getting hard to argue against. To give a sense of the scale of the material remains available, one single source in Rome, Monte Testaccio, is made up entirely of broken oil amphorae imported to Rome from south-western Spain during the 2nd and 3rd century and is estimated to contain the remains of 53 million amphorae. An image of how the remains of one particular pottery manufacturer operating in Oxford in the 3rd and 4th century are distributed throughout Britain yields something like 100 different English sites where that pottery has been found. Again, the interesting thing here is not only the pottery itself, but also all the things people transported using those vessels, and all those other things (lost from the archaeological record) that might have been transported from A to B if they were willing to transport brittle pottery vessels that far around. And it’s very interesting to see distributions like that and then start comparing them with the sort of distributions you’ll get if you look for stuff produced, say, 200 years later. Coins, pottery, roof tiles, amphorae, animal bones (there’s evidence that Roman cows were larger than their Early Medieval counterparts), new construction (e.g. temples) – look at what people left behind, compare the evidence you get from the time of the Empire with what came after; this is a very big part of what Ward-Perkins does in his book.

While looking at the evidence it becomes obvious that some regions were more severely affected than others, and Ward-Perkins goes into those details as well. In general it seems that Britain was the most severely affected region, with other regions doing somewhat better; the timing also varied greatly. Greece (and much of the Eastern Empire) actually experienced a period of expansion (increased density of settlements, new churches and monasteries, stone rural houses) during the fifth century, but around 600 AD the Aegean was hit hard and experienced severe disruption, with former great cities becoming little more than abandoned ghost towns. Ward-Perkins also takes care to deal with the ‘barbarians’ in at least some detail (Peter Heather covers that stuff in a lot more detail in his book Empires and Barbarians, if people are curious to know more about these topics), not lumping them all together into One Great Alliance to Take Down the Empire (quite often these guys were at war with each other). The evidence is presented in some detail, which also means that if you walk away from the book still thinking Ward-Perkins hasn’t made a good case for his beliefs, well, you’ll at least know where the author is coming from and why he holds the views he does.

I’ve added some more quotes from the book below. If you’re interested in these topics this book is a must read.

“The Germanic invaders of the western empire seized or extorted through the threat of force the vast majority of the territories in which they settled, without any formal agreement on how to share resources with their new Roman subjects. The impression given by some recent historians that most Roman territory was formally ceded to them as part of a treaty is quite simply wrong. Whenever the evidence is moderately full, as it is from the Mediterranean provinces, conquest or surrender to the threat of force was definitely the norm, not peaceful settlement. […] The experience of conquest was, of course, very varied across the empire. Some regions were overrun brutally but swiftly. […] Other regions, particularly those near the frontiers of the empire, suffered much more prolonged violence. […] Even those few regions that eventually passed relatively peacefully into Germanic control had all previously experienced invasion and devastation.”

“Throughout the time that the Roman empire existed, the soldiery of many towns were maintained at public expense for the defence of the frontier. When this practice fell into abeyance, both these troops and the frontier disappeared. […] It has rightly been observed that the deposition in 476 of the last emperor resident in Italy, Romulus Augustulus, caused remarkably little stir: the great historian of Antiquity, Momigliano, called it the ‘noiseless fall of an empire’.39 But the principal reason why this event passed almost unnoticed was because contemporaries knew that the western empire, and with it autonomous Roman power, had already disappeared in all but name. […] The story of the loss of the West is not a story of great set-piece battles, like Hadrianopolis, heroically lost by the Romans in the field. […] The West was lost mainly through failure to engage the invading forces successfully and to drive them back. This caution in the face of the enemy, and the ultimate failure to drive him out, are best explained by the severe problems that there were in putting together armies large enough to feel confident of victory. Avoiding battle led to a slow attrition of the Roman position, but engaging the enemy on a large scale would have risked immediate disaster […] Roman military dominance over the Germanic peoples was considerable, but never absolute and unshakable. […] even at the best of times, the edge that the Romans enjoyed over their enemies, through their superior equipment and organization, was never remotely comparable, say, to that of Europeans in the nineteenth century […] although normally the Romans defeated barbarians when they met them in battle, they could and did occasionally suffer disasters.”

“Italy suffered from the presence of large hostile armies in 401-2 (Alaric and the Goths), in 405-6 (Radagaisus), and again from 408 to 412 (Alaric, for the second time); Gaul was devastated in the years 407-9 by the Vandals, Alans, and Sueves; and the Iberian peninsula by the same peoples, from 409. The only regions of the western empire that had not been profoundly affected by violence by 410 were Africa and the islands of the Mediterranean […] Radagaisus’ incursion was successfully crushed, but it was immediately followed by a disastrous sequence of events: the crossing of the Rhine by Vandals, Sueves, and Alans at the very end of 406; the usurpation of Constantine III in 407, taking with him the resources of Britain and much of Gaul; and the Goths’ return to Italy in 408. […] Some of the lost territories were temporarily recovered in the second decade of the century; but much (the whole of Britain and a large part of Gaul and Spain) was never regained, and even reconquered provinces took many years to get back to full health […] the imperial recovery was only short-lived; in 429 it was brought definitely to an end by the successful crossing of the Vandals into Africa, and the devastation of the western empire’s last remaining secure tax base. […] There was, of course, a close connection between failure ‘abroad’ and the usurpations and rebellions ‘at home’. […] As in other periods of history, failure against foreign enemies and civil war were very closely linked, indeed feeding off each other.”

“Some accounts of the invasions [and maps of them] […] seem to be describing successive campaigns in a single war, with the systematic and progressive seizure of territory by the various armies of a united German coalition. If this had really been the case, the West would almost certainly have fallen definitely in the very early fifth century, and far less of the structures of imperial times would have survived into the post-Roman period. The reality was very much more messy and confused […] The different groups of incomers were never united, and fought each other, sometimes bitterly, as often as they fought the ‘Romans’ – just as the Roman side often gave civil strife priority over warfare against the invaders.35 When looked at in detail, the ‘Germanic invasions’ of the fifth century break down into a complex mosaic of different groups, some imperial, some local, and some Germanic, each jockeying for position against or in alliance with the others, with the Germanic groups eventually coming out on top. [As already mentioned, Heather is the book to read if you’re interested in these topics – US] […] Because the military position of the imperial government in the fifth century was weak, and because the Germanic invaders could be appeased, the Romans on occasion made treaties with particular groups, formally granting them territory on which to settle in return for their alliance. […] The interests of the centre when settling Germanic peoples, and those of the locals who had to live with the arrangements, certainly did not always coincide. […] The imperial government was entirely capable of selling its provincial subjects downriver, in the interests of short-term political and military gain. […] Sidonius Apollinaris, bishop of Clermont and a leader of the resistance to the Visigoths, recorded his bitterness: ‘We have been enslaved, as the price of other people’s security.’41”

“[A]rchaeological evidence now available […] shows a startling decline in western standards of living during the fifth to seventh centuries.1 […] Ceramic vessels, of different shapes and sizes, play an essential part in the storage, preparation, cooking, and consumption of foodstuffs. They certainly did so in Roman times […] amphorae, not barrels, were the normal containers for transport and domestic storage of liquids. […] Pots are low-value, high-bulk items, with the additional disadvantage of being brittle […] and they are difficult and expensive to pack and transport, being heavy, bulky, and easy to break. If, despite these disadvantages, vessels (both fine tableware and more functional items) were being made to a high standard and in large quantities, and if they were travelling widely and percolating through even the lower levels of society – as they were in the Roman period – then it is much more likely than not that other goods, whose distribution we cannot document with the same confidence, were doing the same. […] There is, for instance, no reason to suppose that the huge markets in clothing, footwear, and tools were less sophisticated than that in pottery. […] In the post-Roman West, almost all this material sophistication disappeared. Specialized production and all but the most local distribution became rare, unless for luxury goods; and the impressive range and quantity of high-quality functional goods, which had characterized the Roman period, vanished, or, at the very least, were drastically reduced. The middle and lower markets, which under the Romans had absorbed huge quantities of basic, but good-quality, items, seem to have almost entirely disappeared. […] There is no area of the post-Roman West that I know of where the range of pottery available in the sixth and seventh centuries matches that of the Roman period, and in most areas the decline in quality is startling. Furthermore, it was not only quality and diversity that declined; the overall quantities of pottery in circulation also fell dramatically. […] what had once been widely diffused products had become luxury items.”

“What we observe at the end of the Roman world is not a ‘recession’ […] with an essentially similar economy continuing to work at a reduced pace. Instead, what we see is a remarkable qualitative change, with the disappearance of entire industries and commercial networks. The economy of the post-Roman West is not that of the fourth century reduced in scale, but a very different and far less sophisticated entity.43 This is at its starkest and most obvious in Britain. A number of basic skills disappeared entirely during the fifth century, to be reintroduced only centuries later. […] All over Britain the art of making pottery on a wheel disappeared in the early fifth century, and was not reintroduced for almost 300 years. The potter’s wheel is not an instrument of cultural identity. Rather, it is a functional innovation that facilitates the rapid production of thin-walled ceramics; and yet it disappeared from Britain. […] post-Roman Britain in fact sank to a level of economic complexity well below that of the pre-Roman Iron Age. Southern Britain, in the years before the Roman conquest of AD 43, was importing quantities of Gaulish wine and Gaulish pottery; it had its own native pottery industries with regional distribution of their wares; it even had native silver coinages […] The settlement pattern of later iron-age Britain also reflects emerging economic complexity, with substantial coastal settlements […] which were at least partly dependent on trade. None of these features can be found reliably in fifth- and sixth-century post-Roman Britain. It is really only in about AD 700, three centuries after the disintegration of the Romano-British economy, that southern Britain crawled back to the level of economic complexity found in the pre-Roman Iron Age, with evidence of pots imported from the Continent, the first substantial and wheel-turned Anglo-Saxon pottery industry […], the striking of silver coins, and the emergence of coastal trading towns […] In the western Mediterranean, the economic regression was by no means as total as it was in Britain. […] But it must be remembered that in the Mediterranean world the level of economic complexity and sophistication reached in the Roman period was very considerably higher than anything ever attained in Britain. The fall in economic complexity may in fact have been as remarkable as that in Britain; but, since in the Mediterranean it started from a much higher point, it also bottomed out at a higher level. […] in some areas at least a very similar picture can be found to that sketched out above – of a regression, taking the economy way below levels of complexity reached in the pre-Roman period.”

“The enormity of the economic disintegration that occurred at the end of the empire was almost certainly a direct result of […] specialization. The post-Roman world reverted to levels of economic simplicity […] with little movement of goods, poor housing, and only the most basic manufactured items. The sophistication of the Roman period, by spreading high-quality goods widely in society, had destroyed the local skills and local networks that, in pre-Roman times, had provided lower-level economic complexity. It took centuries for people in the former empire to reacquire the skills and the regional networks that would take them back to these pre-Roman levels of sophistication. […] The Roman period is sometimes seen as enriching only the elite, rather than enhancing the standard of living of the population at large. […] I think this, and similar views, are mistaken. For me, what is most striking about the Roman economy is precisely the fact that it was not solely an elite phenomenon, but one that made basic good-quality items available right down the social scale. […] good-quality pottery was widely available, and in regions like Italy even the comfort of tiled roofs. I would also seriously question the romantic assumption that economic simplicity meant a freer or more equal society.”

“There was no single moment, nor even a single century of collapse. The ancient economy disappeared at different times and at varying speeds across the empire. […] It was […] the fifth-century invasions that […] brought down the ancient economy in the West. However, this does not mean that the death of the sophisticated ancient world was intended by the Germanic peoples. The invaders entered the empire with a wish to share in its high standard of living, not to destroy it […] But, although the Germanic peoples did not intend it, their invasions, the disruptions these caused, and the consequent dismembering of the Roman state were undoubtedly the principal cause of death of the Roman economy.”

“Reading and writing (and a grounding in classical literature) were in Roman times an essential mark of status. […] illiterates amongst the Roman upper classes were very rare indeed. […] In a much simpler world, the urgent need to read and write declined, and with it went the social pressure on the secular elite to be literate. Widespread literacy in the post-Roman West definitely became confined to the clergy. […] It is a striking fact, and a major contrast with Roman times, that even great rulers could be illiterate in the early Middle Ages.”

“The changing perspectives of scholarship are always shaped in part by wider developments in modern society. There is inevitably a close connection between the way we view our own world and the way we interpret the past. […] [T]here is a real danger for the present day in a vision of the past that explicitly sets out to eliminate all crisis and all decline. The end of the Roman West […] destroyed a complex civilization, throwing the inhabitants of the West back to a standard of living typical of prehistoric times. Romans before the fall were as certain as we are today that their world would continue for ever substantially unchanged. They were wrong.”


September 18, 2017 Posted by | Archaeology, Books, History | Leave a comment

A few diabetes papers of interest

i. Glycated Hemoglobin and All-Cause and Cause-Specific Mortality in Singaporean Chinese Without Diagnosed Diabetes: The Singapore Chinese Health Study.

“Previous studies have reported that elevated levels of HbA1c below the diabetes threshold (<6.5%) are associated with an increased risk for cardiovascular morbidity and mortality (3–12). Yet, this research base is not comprehensive, and data from Chinese populations are scant, especially in those without diabetes. This gap in the literature is important since Southeast Asian populations are experiencing epidemic rates of type 2 diabetes and related comorbidities with a substantial global health impact (13–16).

Overall, there are few cohort studies that have examined the etiologic association between HbA1c levels and all-cause and cause-specific mortality. There is even lesser insight on the nature of the relationship between HbA1c and significant clinical outcomes in Southeast Asian populations. Therefore, we examined the association between HbA1c and all-cause and cause-specific mortality in the Singapore Chinese Health Study (SCHS).”

“The design of the SCHS has been previously summarized (17). Briefly, the cohort was drawn from men and women, aged 45–74 years, who belonged to one of the major dialect groups (Hokkien or Cantonese) of Chinese in Singapore. […] Between April 1993 and December 1998, 63,257 individuals completed an in-person interview that included questions on usual diet, demographics, height and weight, use of tobacco, usual physical activity, menstrual and reproductive history (women only), medical history including history of diabetes diagnosis by a physician, and family history of cancer. […] At the follow-up interview (F1), which occurred in 1999–2004, subjects were asked to update their baseline interview information. […] The study population derived from 28,346 participants of the total 54,243 who were alive and participated at F1, who provided consent at F1 to collect subsequent blood samples (a consent rate of ∼65%). The participants for this study were a random selection of individuals from the full study population who did not report a history of diabetes or CVD at the baseline or follow-up interview and reported no history of cancer.”

“During 74,890 person-years of follow-up, there were 888 total deaths, of which 249 were due to CVD, 388 were due to cancer, and 169 were recorded as respiratory mortality. […] There was a positive association between HbA1c and age, BMI, and prevalence of self-reported hypertension, while an inverse association was observed between educational attainment and HbA1c. […] The crude mortality rate was 1,186 deaths per 100,000 person-years. The age- and sex-standardized mortality rates for all-cause, CVD, and cerebrovascular each showed a J-shaped pattern according to HbA1c level. The CHD and cancer mortality rates were higher for HbA1c ≥6.5% (≥48 mmol/mol) and otherwise displayed no apparent pattern. […] There was no association between any level of HbA1c and respiratory causes of death.”
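The crude rate quoted here can be checked directly from the figures given in the same paragraph (888 deaths over 74,890 person-years):

```python
# Sanity check of the reported crude mortality rate.
deaths = 888
person_years = 74_890
crude_rate = deaths / person_years * 100_000
print(f"{crude_rate:.0f} deaths per 100,000 person-years")  # -> 1186
```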

“Chinese men and women with no history of cancer, reported diabetes, or CVD with an HbA1c level ≥6.5% (≥48 mmol/mol) were at a significant increased risk of mortality during follow-up relative to their peers with an HbA1c of 5.4–5.6% (36–38 mmol/mol). No other range of HbA1c was significantly associated with risk of mortality during follow-up, and in secondary analyses, when the HbA1c level ≥6.5% (≥48 mmol/mol) was divided into four categories, this increased risk was observed in all four categories; thus, these data represent a clear threshold association between HbA1c and mortality in this population. These results are consistent with previous prospective cohort studies identifying chronically high HbA1c, outside of diabetes, to be associated with increased risk for all-cause and CVD-related mortality (3–12,22).”

“Hyperglycemia is a known risk factor for CVD, not limited to individuals with diabetes. This may be in part due to the vascular damage caused by oxidative stress in periods of hypo- and hyperglycemia (23,24). For individuals with impaired fasting glucose and impaired glucose tolerance, increased oxidative stress and endothelial dysfunction are present before the onset of diabetes (25). The association between chronically high levels of HbA1c and development of and death from cancer is not as well defined (9,26–30). Abnormal metabolism may play a role in cancer development and death. This is important, considering cancer is the leading cause of death in Singapore for adults 15–59 years of age (31). Increased risk for cancer mortality was found in individuals with impaired glucose tolerance (30). […] Hyperinsulinemia and IGF-I are associated with increased cancer risk, possibly through mitogenic effects and tumor formation (27,28,37). This is the basis for the insulin-cancer hypothesis. Simply put, chronic levels of hyperinsulinemia reduce the production of IGF binding proteins 1 and 2. The absence of these proteins results in excess bioactive IGF-I, supporting tumor development (38). Chronic hyperglycemia, indicating high levels of insulin and IGF-I, may explain inhibition of cell apoptosis, increased cell proliferation, and increased cancer risk (39).”

ii. The Cross-sectional and Longitudinal Associations of Diabetic Retinopathy With Cognitive Function and Brain MRI Findings: The Action to Control Cardiovascular Risk in Diabetes (ACCORD) Trial.

“Brain imaging studies suggest that type 2 diabetes–related microvascular disease may affect the central nervous system in addition to its effects on other organs, such as the eye and kidney. Histopathological evidence indicates that microvascular disease in the brain can lead to white matter lesions (WMLs) visible with MRI of the brain (1), and risk for them is often increased by type 2 diabetes (2–6). Type 2 diabetes also has recently been associated with lower brain volume, particularly gray matter volume (7–9).

The association between diabetic retinopathy and changes in brain tissue is of particular interest because retinal and cerebral small vessels have similar anatomy, physiology, and embryology (10). […] the preponderance of evidence suggests diabetic retinopathy is associated with increased WML burden (3,12–14), although variation exists. While cross-sectional studies support a correlation between diabetic retinopathy and WMLs (2,3,6,15), diabetic retinopathy and brain atrophy (16), diabetic retinopathy and psychomotor speed (17,18), and psychomotor speed and WMLs (5,19,20), longitudinal evidence demonstrating the assumed sequence of disease development, for example, vascular damage of eye and brain followed by cognitive decline, is lacking.

Using Action to Control Cardiovascular Risk in Diabetes (ACCORD) data, in which a subset of participants received longitudinal measurements of diabetic retinopathy, cognition, and MRI variables, we analyzed the 1) cross-sectional associations between diabetic retinopathy and evidence of brain microvascular disease and 2) determined whether baseline presence or severity of diabetic retinopathy predicts 20- or 40-month changes in cognitive performance or brain microvascular disease.”

“The ACCORD trial (21) was a multicenter randomized trial examining the effects of intensive glycemic control, blood pressure, and lipids on cardiovascular disease events. The 10,251 ACCORD participants were aged 40–79 years, had poorly controlled type 2 diabetes (HbA1c > 7.5% [58.5 mmol/mol]), and had or were at high risk for cardiovascular disease. […] The ACCORD-Eye sample comprised 3,472 participants who did not report previous vitrectomy or photocoagulation surgery for proliferative diabetic retinopathy at baseline […] ACCORD-MIND included a subset of 2,977 ACCORD participants who completed a 30-min cognitive testing battery, 614 of whom also had useable scans from the MRI substudy (23,24). […] ACCORD-MIND had visits at three time points: baseline, 20 months, and 40 months. MRI of the brain was completed at baseline and the 40-month time point.”

“Baseline diabetic retinopathy was associated with more rapid 40-month declines in DSST and MMSE [Mini-Mental State Examination] when adjusting for demographics and lifestyle factors in model 1 […]. Moreover, increasing severity of diabetic retinopathy was associated with increased amounts of decline in DSST [Digit Symbol Substitution Test] performance (−1.30, −1.76, and −2.81 for no, mild, and moderate/severe NPDR, respectively; P = 0.003) […Be careful about how to interpret that p-value – see below, US]. The associations remained virtually unchanged after further adjusting for vascular and diabetes risk factors, depression, and visual acuity using model 2.”

“This longitudinal study provides new evidence that diabetic retinopathy is associated with future cognitive decline in persons with type 2 diabetes and confirms the finding from the Edinburgh Type 2 Diabetes Study derived from cross-sectional data that lifetime cognitive decline is associated with diabetic retinopathy (32). We found that the presence of diabetic retinopathy, independent of visual acuity, predicts greater declines in global cognitive function measured with the MMSE and that the magnitude of decline in processing speed measured with the DSST increased with increasing severity of baseline diabetic retinopathy. The association with psychomotor speed is consistent with prior cross-sectional findings in community-based samples of middle-aged (18) and older adults (17), as well as prospective studies of a community-based sample of middle-aged adults (33) and patients with type 1 diabetes (34) showing that retinopathy with different etiologies predicted a subsequent decline in psychomotor speed. This study extends these findings to patients with type 2 diabetes.”

“we tested a number of different associations but did not correct P values for multiple testing” [Aargh!, US.]
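Since the authors admit they ran many tests without any multiplicity adjustment, it's worth recalling what such an adjustment does. Below is a minimal sketch of the standard Bonferroni and Holm procedures; the p-values in the example are hypothetical placeholders (the paper only reports the P = 0.003 trend quoted above), so this illustrates the mechanics only:

```python
# Minimal sketch of Bonferroni and Holm corrections for multiple testing.
# The p-values below are hypothetical placeholders -- the paper reports
# P = 0.003 for the DSST trend but does not list all tested associations.

def bonferroni(pvals, alpha=0.05):
    """Reject H0 for each p-value below alpha divided by the number of tests."""
    m = len(pvals)
    return [p < alpha / m for p in pvals]

def holm(pvals, alpha=0.05):
    """Holm step-down: compare sorted p-values to alpha/(m - rank), stop at first failure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] < alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject

pvals = [0.003, 0.02, 0.04, 0.30, 0.65]  # hypothetical
print(bonferroni(pvals))  # [True, False, False, False, False]
print(holm(pvals))        # [True, False, False, False, False]
```

The point is simply that a nominal p-value of, say, 0.04 stops looking impressive once the number of tests run is accounted for — hence the “Aargh!” above.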

iii. Incidence of Remission in Adults With Type 2 Diabetes: The Diabetes & Aging Study.

(Note to self before moving on to the paper: these people identified type 1 diabetes by self-report, or by diabetes onset at <30 years of age combined with treatment with insulin only and never with oral agents.)

“It is widely believed that type 2 diabetes is a chronic progressive condition, which at best can be controlled, but never cured (1), and that once treatment with glucose-lowering medication is initiated, it is required indefinitely and is intensified over time (2,3). However, a growing body of evidence from clinical trials and case-control studies (4–6) has reported the remission of type 2 diabetes in certain populations, most notably individuals who received bariatric surgery. […] Despite the clinical relevance and importance of remission, little is known about the incidence of remission in community settings (11,12). Studies to date have focused largely on remission after gastric bypass or relied on data from clinical trials, which have limited generalizability. Therefore, we conducted a retrospective cohort study to describe the incidence rates and variables associated with remission among adults with type 2 diabetes who received usual care, excluding bariatric surgery, in a large, ethnically diverse population. […] 122,781 individuals met our study criteria, yielding 709,005 person-years of total follow-up time.”

“Our definitions of remission were based on the 2009 ADA consensus statement (10). “Partial remission” of diabetes was defined as having two or more consecutive subdiabetic HbA1c measurements, all of which were in the range of 5.7–6.4% [39–46 mmol/mol] over a period of at least 12 months. “Complete remission” was defined as having two or more consecutive normoglycemic HbA1c measurements, all of which were <5.7% [<39 mmol/mol] over a period of at least 12 months. “Prolonged remission” was defined as having two or more consecutive normoglycemic HbA1c measurements, all of which were <5.7% [<39 mmol/mol] over a period of at least 60 months. Each definition of remission requires the absence of pharmacologic treatment during the defined observation period.”
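To make the quoted definitions concrete, here's a minimal sketch of how they might be operationalized in code. The data layout — a chronologically sorted series of (day, HbA1c %, on-medication) tuples — and the function name are my own assumptions for illustration, not taken from the paper:

```python
# A sketch of the 2009 ADA remission definitions quoted above. The data layout
# -- a chronologically sorted list of (day, hba1c_percent, on_medication)
# tuples -- and the function name are illustrative assumptions, not from the paper.

def classify_remission(series):
    """Return the strongest remission category supported by the series, or None."""
    best = None
    for i in range(len(series)):
        for j in range(i + 1, len(series)):          # every window of >= 2 consecutive tests
            window = series[i:j + 1]
            if any(on_med for _, _, on_med in window):
                continue                              # no pharmacologic treatment allowed
            span_days = window[-1][0] - window[0][0]
            if all(a1c < 5.7 for _, a1c, _ in window):
                if span_days >= 5 * 365:              # ~60 months, all normoglycemic
                    return 'prolonged'
                if span_days >= 365:                  # ~12 months, all normoglycemic
                    best = 'complete'
            elif (all(5.7 <= a1c <= 6.4 for _, a1c, _ in window)
                  and span_days >= 365 and best is None):
                best = 'partial'                      # ~12 months, all subdiabetic
    return best

series = [(0, 6.0, False), (200, 6.2, False), (400, 6.1, False)]
print(classify_remission(series))  # 'partial': subdiabetic HbA1c for > 12 months, off medication
```

Note how much work the medication requirement does here: any pharmacologic treatment inside the window disqualifies it, which is part of why the remission rates reported below are so low among medication users.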

“The average age of participants was 62 years, 47.1% were female, and 51.6% were nonwhite […]. The mean (SD) interval between HbA1c tests in the remission group was 256 days (139 days). The mean interval (SD) between HbA1c tests among patients not in the remission group was 212 days (118 days). The median time since the diagnosis of diabetes in our cohort was 5.9 years, and the average baseline HbA1c level was 7.4% [57 mmol/mol]. The 18,684 individuals (15.2%) in the subset with new-onset diabetes, defined as ≤2 years since diagnosis, were younger, were more likely to have their diabetes controlled by diet, and had fewer comorbidities […] The incidence densities of partial, complete, and prolonged remission in the full cohort were 2.8 (95% CI 2.6–2.9), 0.24 (95% CI 0.20–0.28), and 0.04 (95% CI 0.01–0.06) cases per 1,000 person-years, respectively […] The 7-year cumulative incidences of partial, complete, and prolonged remission were 1.5% (95% CI 1.4–1.5%), 0.14% (95% CI 0.12–0.16%), and 0.01% (95% CI 0.003–0.02%), respectively. The 7-year cumulative incidence of any remission decreased with longer time since diagnosis from a high of 4.6% (95% CI 4.3–4.9%) for individuals diagnosed with diabetes in the past 2 years to a low of 0.4% (95% CI 0.3–0.5%) in those diagnosed >10 years ago. The 7-year cumulative incidence of any remission was much lower for individuals using insulin (0.05%; 95% CI 0.03–0.1%) or oral agents (0.3%; 95% CI 0.2–0.3%) at baseline compared with diabetes patients not using medication at baseline (12%; 95% CI 12–13%).”

“In this large cohort of insured adults with type 2 diabetes not treated with bariatric surgery, we found that 1.5% of individuals with recent evidence of clinical diabetes achieved at least partial remission over a 7-year period. If these results were generalized to the 25.6 million U.S. adults living with type 2 diabetes in 2010 (25), they would suggest that 384,000 adults could experience remission over the next 7 years. However, prolonged remission was extremely rare (0.007%), translating into only 1,800 adults in the U.S. experiencing remission lasting at least 5 years. To provide context, 1.7% of the cohort died, while only 0.8% experienced any level of remission, during the calendar year 2006. Thus, the chances of dying were higher than the chances of any remission. […] Although remission of type 2 diabetes is uncommon, it does occur in patients who have not undergone surgical interventions. […] Our analysis shows that remission is rare and variable. The likelihood of remission is higher among individuals with early-onset diabetes and those not treated with glucose-lowering medications at the point of diabetes diagnosis. Although rare, remission can also occur in individuals with more severe diabetes and those previously treated with insulin.”
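The extrapolation arithmetic in that quote is easy to verify — a quick sanity check (the back-calculation from 2.8/1,000 person-years at the end is my own, not a figure reported in the paper):

```python
# Reproducing the extrapolation arithmetic quoted above.
us_adults_with_t2d = 25.6e6   # U.S. adults living with type 2 diabetes in 2010

partial_7yr = 0.015           # 7-year cumulative incidence, at least partial remission
prolonged_7yr = 0.00007       # 0.007%, prolonged remission

print(round(us_adults_with_t2d * partial_7yr))     # 384000 -- the paper's 384,000
print(round(us_adults_with_t2d * prolonged_7yr))   # 1792   -- the paper's "~1,800"

# Incidence density is cases per 1,000 person-years; with 709,005 person-years
# of follow-up, the reported 2.8/1,000 PY for partial remission back-calculates to:
print(round(2.8 / 1000 * 709005))                  # ~1985 partial-remission cases
```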

iv. Blood pressure control for diabetic retinopathy (Cochrane review).

“Diabetic retinopathy is a common complication of diabetes and a leading cause of visual impairment and blindness. Research has established the importance of blood glucose control to prevent development and progression of the ocular complications of diabetes. Simultaneous blood pressure control has been advocated for the same purpose, but findings reported from individual studies have supported varying conclusions regarding the ocular benefit of interventions on blood pressure. […] The primary aim of this review was to summarize the existing evidence regarding the effect of interventions to control or reduce blood pressure levels among diabetics on incidence and progression of diabetic retinopathy, preservation of visual acuity, adverse events, quality of life, and costs. A secondary aim was to compare classes of anti-hypertensive medications with respect to the same outcomes.”

“We included 15 RCTs, conducted primarily in North America and Europe, that had enrolled 4157 type 1 and 9512 type 2 diabetic participants, ranging from 16 to 2130 participants in individual trials. […] Study designs, populations, interventions, and lengths of follow-up (range one to nine years) varied among the included trials. Overall, the quality of the evidence for individual outcomes was low to moderate.”

“The evidence from these trials supported a benefit of more intensive blood pressure control intervention with respect to 4- to 5-year incidence of diabetic retinopathy (estimated risk ratio (RR) 0.80; 95% confidence interval (CI) 0.71 to 0.92) and the combined outcome of incidence and progression (estimated RR 0.78; 95% CI 0.63 to 0.97). The available evidence provided less support for a benefit with respect to 4- to 5-year progression of diabetic retinopathy (point estimate was closer to 1 than point estimates for incidence and combined incidence and progression, and the CI overlapped 1; estimated RR 0.88; 95% CI 0.73 to 1.05). The available evidence regarding progression to proliferative diabetic retinopathy or clinically significant macular edema or moderate to severe loss of best-corrected visual acuity did not support a benefit of intervention on blood pressure: estimated RRs and 95% CIs 0.95 (0.83 to 1.09) and 1.06 (0.85 to 1.33), respectively, after 4 to 5 years of follow-up. Findings within subgroups of trial participants (type 1 and type 2 diabetics; participants with normal blood pressure levels at baseline and those with elevated levels) were similar to overall findings.”
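For readers less used to these summary statistics: risk ratios and their confidence intervals are conventionally computed on the log scale, and a 95% CI that overlaps 1 (as for progression above) means the data are compatible with no effect. A minimal sketch, with hypothetical 2×2 counts chosen only so the RR comes out at 0.80 — they are not taken from the review:

```python
# Minimal sketch of how a risk ratio and its 95% CI are computed on the log
# scale (Katz method). The 2x2 counts are hypothetical -- not from the review.
import math

def risk_ratio_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    risk_tx, risk_ctl = events_tx / n_tx, events_ctl / n_ctl
    rr = risk_tx / risk_ctl
    # Standard error of log(RR):
    se = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctl - 1/n_ctl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

rr, lo, hi = risk_ratio_ci(events_tx=160, n_tx=1000, events_ctl=200, n_ctl=1000)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# RR = 0.80, 95% CI (0.66, 0.97) -- this CI excludes 1, unlike the progression result above
```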

“The available evidence supports a beneficial effect of intervention to reduce blood pressure with respect to preventing diabetic retinopathy for up to 4 to 5 years. However, the lack of evidence to support such intervention to slow progression of diabetic retinopathy or to prevent other outcomes considered in this review, along with the relatively modest support for the beneficial effect on incidence, weakens the conclusion regarding an overall benefit of intervening on blood pressure solely to prevent diabetic retinopathy.”

v. Early Atherosclerosis Relates to Urinary Albumin Excretion and Cardiovascular Risk Factors in Adolescents With Type 1 Diabetes: Adolescent Type 1 Diabetes cardio-renal Intervention Trial (AdDIT).

“Children with type 1 diabetes are at greatly increased risk for the development of both renal and cardiovascular disease in later life (1,2). Evidence is accumulating that these two complications may have a common pathophysiology, with endothelial dysfunction a key early event.

Microalbuminuria is a recognized marker of endothelial damage (3) and predicts progression to proteinuria and diabetic nephropathy, as well as to atherosclerosis (4) and increased cardiovascular risk (5). It is, however, rare in adolescents with type 1 diabetes who more often have higher urinary albumin excretion rates within the normal range, which are associated with later progression to microalbuminuria and proteinuria (6).”

“The Adolescent Type 1 Diabetes cardio-renal Intervention Trial (AdDIT) (10) is designed to examine the impact of minor differences in albumin excretion in adolescents on the initiation and progression of cardiovascular and renal disease. The primary cardiovascular end point in AdDIT is carotid intima-media thickness (cIMT). Subclinical atherosclerosis can be detected noninvasively using high-resolution ultrasound to measure the intima-media thickness (IMT) of the carotid arteries, which predicts cardiovascular morbidity and mortality (11,12). […] The primary aim of this study was to examine the relationship of increased urinary albumin excretion and cardiovascular risk factors in adolescents with type 1 diabetes with structural arterial wall changes. We hypothesized that even minor increases in albumin excretion would be associated with early atherosclerosis but that this would be detectable only in the abdominal aorta. […] A total of 406 adolescents, aged 10–16 years, with type 1 diabetes for more than 1 year, recruited in five centers across Australia, were enrolled in this cross-sectional study”.

“Structural changes in the aorta and carotid arteries could be detected in >50% of adolescents with type 1 diabetes […] The difference in aIMT [aortic intima-media thickness] between type 1 diabetic patients and age- and sex-matched control subjects was equivalent to that seen with a 5- to 6-year age increase in the type 1 diabetic patients. […] Aortic IMT was […] able to better differentiate adolescents with type 1 diabetes from control subjects than were carotid wall changes. Aortic IMT enabled detection of the very early wall changes that are present with even small differences in urinary albumin excretion. This not only supports the concept of early intervention but provides a link between renal and cardiovascular disease.

The independent relationship between aIMT and urinary albumin excretion extends our knowledge of the pathogenesis of cardiovascular and renal disease in type 1 diabetes by showing that the first signs of the development of cardiovascular disease and diabetic nephropathy are related. The concept that microalbuminuria is a marker of a generalized endothelial damage, as well as a marker of renal disease, has been recognized for >20 years (3,20,21). Endothelial dysfunction is the first critical step in the development of atherosclerosis (22). Early rises in urinary albumin excretion precede the development of microalbuminuria and proteinuria (23). It follows that the first structural changes of atherosclerosis could relate to the first biochemical changes of diabetic nephropathy. To our knowledge, this is the first study to provide evidence of this.”

“In conclusion, atherosclerosis is detectable from early adolescence in type 1 diabetes. Its early independent associations are male sex, age, systolic blood pressure, LDL cholesterol, and, importantly, urinary albumin excretion. […] Early rises in urinary albumin excretion during adolescence not only are important for determining risk of progression to microalbuminuria and diabetic nephropathy but also may alert the clinician to increased risk of cardiovascular disease.”

vi. Impact of Islet Autoimmunity on the Progressive β-Cell Functional Decline in Type 2 Diabetes.

“Historically, type 2 diabetes (T2D) has not been considered to be immune mediated. However, many notable discoveries in recent years have provided evidence to support the concept of immune system involvement in T2D pathophysiology (1–5). Immune cells have been identified in the pancreases of phenotypic T2D patients (3–5). Moreover, treatment with interleukin-1 receptor agonist improves β-cell function in T2D patients (6–8). These studies suggest that β-cell damage/destruction mediated by the immune system may be a component of T2D pathophysiology.

Although the β-cell damage and destruction in autoimmune diabetes is most likely T-cell mediated, immune markers of autoimmune diabetes have primarily centered on the presence of circulating autoantibodies (Abs) to various islet antigens (9–15). Abs commonly positive in type 1 diabetes (T1D), especially GAD antibody (GADA) and islet cell Abs (ICA), have been shown to be more common in patients with T2D than in nondiabetic control populations, and the presence of multiple islet Abs, such as GADA, ICA, and tyrosine phosphatase-2 (insulinoma-associated protein 2 [IA-2]), has been demonstrated to be associated with an earlier need for insulin treatment in adult T2D patients (14,16–20).”

“In this study, we observed development of islet autoimmunity, measured by islet Abs and islet-specific T-cell responses, in 61% of the phenotypic T2D patients. We also observed a significant association between positive islet-reactive T-cell responses and a more rapid decline in β-cell function as assessed by FCP and glucagon-SCP responses. […] The results of this pilot study led us to hypothesize that islet autoimmunity is present or will develop in a large portion of phenotypic T2D patients and that the development of islet autoimmunity is associated with a more rapid decline in β-cell function. Moreover, the prevalence of islet autoimmunity in most previous studies is grossly underestimated because these studies have not tested for islet-reactive T cells in T2D patients but have based the presence of autoimmunity on antibody testing alone […] The results of this pilot study suggest important changes to our understanding of T2D pathogenesis by demonstrating that islet autoimmune development is not only more prevalent in T2D patients than previously estimated but may also play an important role in β-cell dysfunction in the T2D disease process.”

September 18, 2017 Posted by | Cancer/oncology, Cardiology, Diabetes, Epidemiology, Immunology, Medicine, Nephrology, Neurology, Ophthalmology, Studies | Leave a comment

Ophthalmology – National EM Board Review Course

The lecture covers a lot of different stuff. Some links:

Blepharitis.
Dacryocystitis.
Dacryoadenitis.
Chalazion.
Orbital Cellulitis.
Cranial Nerves III, IV, and VI: The Oculomotor System.
Argyll Robertson pupil.
Marcus Gunn pupil.
Horner syndrome.
Third nerve palsy.
Homonymous hemianopsia.
Central Retinal Artery Occlusion.
Central Retinal Vein Occlusion.
Optic Neuritis.
Retinal detachment.
Temporal Arteritis.
Conjunctivitis.
Epidemic Keratoconjunctivitis (EKC).
Uveitis.
Hypopyon.
Keratitis.
Herpes Zoster Ophthalmicus.
Subconjunctival Hemorrhage.
Corneal Abrasion.
Corneal Laceration.
Globe Rupture.
Acute Angle-Closure Glaucoma.
Hyphema.
Endophthalmitis.
Retrobulbar hemorrhage.

September 15, 2017 Posted by | Lectures, Medicine, Ophthalmology, Pharmacology | Leave a comment