Econstudentlog

A few diabetes papers of interest

i. Impact of Parental Socioeconomic Status on Excess Mortality in a Population-Based Cohort of Subjects With Childhood-Onset Type 1 Diabetes.

“Numerous reports have shown that individuals with lower SES during childhood have increased morbidity and all-cause mortality at all ages (10–14). Although recent epidemiological studies have shown that all-cause mortality in patients with T1D increases with lower SES in the individuals themselves (15,16), the association between parental SES and mortality among patients with childhood-onset T1D has not been reported to the best of our knowledge. Our hypothesis was that low parental SES additionally increases mortality in subjects with childhood-onset T1D. In this study, we used large population-based Swedish databases to 1) explore in a population-based study how parental SES affects mortality in a patient with childhood-onset T1D, 2) describe and compare how the effect differs among various age-at-death strata, and 3) assess whether the adult patient’s own SES affects mortality independently of parental SES.”

“The Swedish Childhood Diabetes Registry (SCDR) is a dynamic population-based cohort reporting incident cases of T1D since 1 July 1977, which to date has collected >16,000 prospective cases. […] All patients recorded in the SCDR from 1 January 1978 to 31 December 2008 were followed until death or 31 December 2010. The cohort was subjected to crude analyses and stratified analyses by age-at-death groups (0–17, 18–24, and ≥25 years). Time at risk was calculated from date of birth until death or 31 December 2010. Kaplan-Meier analyses and log-rank tests were performed to compare the effect of low maternal educational level, low paternal educational level, and family income support (any/none). Cox regression analyses were performed to estimate and compare the hazard ratios (HRs) for the socioeconomic variables and to adjust for the potential confounding variables age at onset and sex.”
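
For readers who like to see what this kind of analysis looks like in practice, here is a rough sketch in Python (using the lifelines library) of the Kaplan-Meier/log-rank/Cox workflow described above. The data frame, variable names and simulated values are all made up for illustration; this is not the authors’ code.

    import numpy as np
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter
    from lifelines.statistics import logrank_test

    # Simulated stand-in data: time at risk runs from birth to death or end of follow-up
    # (as in the paper), and 'died' marks whether a death was observed (1) or the
    # observation was censored (0). All values below are random, for illustration only.
    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "age_at_onset": rng.uniform(0, 15, n),
        "male": rng.integers(0, 2, n),
        "low_maternal_edu": rng.integers(0, 2, n),
        "income_support": rng.integers(0, 2, n),
    })
    df["time_at_risk"] = rng.uniform(2, 47, n)
    df["died"] = (rng.uniform(0, 1, n) < 0.05).astype(int)

    # Kaplan-Meier estimate and log-rank test, split on parental income support
    support = df["income_support"] == 1
    km = KaplanMeierFitter()
    km.fit(df.loc[support, "time_at_risk"], df.loc[support, "died"], label="any income support")
    lr = logrank_test(df.loc[support, "time_at_risk"], df.loc[~support, "time_at_risk"],
                      df.loc[support, "died"], df.loc[~support, "died"])
    print(lr.p_value)

    # Cox regression: hazard ratios for the SES variables, adjusted for age at onset and sex
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time_at_risk", event_col="died")
    cph.print_summary()  # prints HRs with 95% CIs for each covariate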

“The study included 14,647 patients with childhood-onset T1D. A total of 238 deaths (male 154, female 84) occurred in 349,762 person-years at risk. The majority of mortalities occurred among the oldest age-group (≥25 years of age), and most of the deceased subjects had onset of T1D at the ages of 10–14.99 years […]. Mean follow-up was 23.9 years and maximum 46.5 years. The overall standardized mortality ratio up to the age of 47 years was 2.3 (95% CI 1.35–3.63); for females, it was 2.6 (1.28–4.66) and for males, 2.1 (1.27–3.49). […] Analyses on the effect of low maternal educational level showed an increased mortality for male patients (HR 1.43 [95% CI 1.01–2.04], P = 0.048) and a nonsignificant increased mortality for female patients (1.21 [0.722–2.018], P = 0.472). Paternal educational level had no significant effect on mortality […] Having parents who ever received income support was associated with an increased risk of death in both males (HR 1.89 [95% CI 1.36–2.64], P < 0.001) and females (2.30 [1.43–3.67], P = 0.001) […] Excluding the 10% of patients with the highest accumulated income support to parents during follow-up showed that having parents who ever received income support still was a risk factor for mortality.”
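
The standardized mortality ratio quoted here is conceptually simple: observed deaths in the cohort divided by the deaths one would expect if general-population age-specific mortality rates had applied to the cohort’s person-years. A tiny sketch of the arithmetic (the age bands and reference rates below are invented; only the observed deaths and the total person-years are taken from the quote):

    # SMR = observed deaths / deaths expected under reference (general-population) rates.
    observed_deaths = 238

    # Person-years at risk and reference mortality rates by age band; the split of the
    # 349,762 person-years and the rates themselves are invented for illustration.
    person_years   = {"0-17": 180_000, "18-24": 90_000, "25-47": 79_762}
    reference_rate = {"0-17": 0.0001, "18-24": 0.0004, "25-47": 0.0006}  # deaths per person-year

    expected_deaths = sum(person_years[a] * reference_rate[a] for a in person_years)
    print(round(observed_deaths / expected_deaths, 2))  # roughly 2.3 with these made-up rates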

“A Cox model including maternal educational level together with parental income support, adjusting for age at onset and sex, showed that having parents who received income support was associated with a doubled mortality risk (HR 1.96 [95% CI 1.49–2.58], P < 0.001) […] In a Cox model including the adult patient’s own SES, having parents who received income support was still an independent risk factor in the younger age-at-death group (18–24 years). Among those who died at age ≥25 years of age, the patient’s own SES was a stronger predictor for mortality (HR 2.46 [95% CI 1.54–3.93], P < 0.001)”

“Despite a well-developed health-care system in Sweden, overall mortality up to the age of 47 years is doubled in both males and females with childhood-onset T1D. These results are in accordance with previous Swedish studies and reports from other comparable countries […] Previous studies indicated that low SES during childhood is associated with low glycemic control and diabetes-related morbidity in patients with T1D (8,9), and the current study implies that mortality in adulthood is also affected by parental SES. […] The findings, when stratified by age-at-death group, show that adult patients’ own need of income support independently predicted mortality in those who died at ≥25 years of age, whereas among those who died in the younger age-group (18–24 years), parental requirement of income support was still a strong independent risk factor. None of the present SES measures seem to predict mortality in the ages 0–17 years perhaps due to low numbers and, thus, power.”

ii. Exercise Training Improves but Does Not Normalize Left Ventricular Systolic and Diastolic Function in Adolescents With Type 1 Diabetes.

“Adults and adolescents with type 1 diabetes have reduced exercise capacity (8–10), which increases their risk for cardiovascular morbidity and mortality (11). The causes for this reduced exercise capacity are unclear. However, recent studies have shown that adolescents with type 1 diabetes have lower stroke volume during exercise, which has been attributed to alterations in left ventricular function (9,10). Reduced left ventricular compliance resulting in an inability to fill the left ventricle appropriately during exercise has been shown to contribute to the lower stroke volume during exercise in both adults and adolescents with type 1 diabetes (12).

Exercise training is recommended as part of the management of type 1 diabetes. However, the effects of exercise training on left ventricular function at rest and during exercise in adolescents with type 1 diabetes have not been investigated. In particular, it is unclear whether exercise training improves cardiac hemodynamics during exercise in adolescents with diabetes. Therefore, we aimed to assess left ventricular volumes at rest and during exercise in a group of adolescents with type 1 diabetes compared with adolescents without diabetes before and after a 20-week exercise-training program. We hypothesized that exercise training would improve exercise capacity and exercise stroke volume in adolescents with diabetes.”

“RESEARCH DESIGN AND METHODS Fifty-three adolescents with type 1 diabetes (aged 15.6 years) were divided into two groups: exercise training (n = 38) and nontraining (n = 15). Twenty-two healthy adolescents without diabetes (aged 16.7 years) were included and, with the 38 participants with type 1 diabetes, participated in a 20-week exercise-training intervention. Assessments included VO2max and body composition. Left ventricular parameters were obtained at rest and during exercise using MRI.

RESULTS Exercise training improved aerobic capacity (10%) and stroke volume (6%) in both trained groups, but the increase in the group with type 1 diabetes remained lower than trained control subjects. […]

CONCLUSIONS These data demonstrate that in adolescents, the impairment in left ventricular function seen with type 1 diabetes can be improved, although not normalized, with regular intense physical activity. Importantly, diastolic dysfunction, a common mechanism causing heart failure in older subjects with diabetes, appears to be partially reversible in this age group.”

“This study confirms that aerobic capacity is reduced in [diabetic] adolescents and that this, at least in part, can be attributed to impaired left ventricular function and a blunted cardiac response to exercise (9). Importantly, although an aerobic exercise-training program improved the aerobic capacity and cardiac function in adolescents with type 1 diabetes, it did not normalize them to the levels seen in the training group without diabetes. Both left ventricular filling and contractility improved after exercise training in adolescents with diabetes, suggesting that aerobic fitness may prevent or delay the well-described impairment in left ventricular function in diabetes (9,10).

The increase in peak aerobic capacity (∼12%) seen in this study was consistent with previous exercise interventions in adults and adolescents with diabetes (14). However, the baseline peak aerobic capacity was lower in the participants with diabetes and improved with training to a level similar to the baseline observed in the participants without diabetes; therefore, trained adolescents with diabetes remained less fit than equally trained adolescents without diabetes. This suggests there are persistent differences in the cardiovascular function in adolescents with diabetes that are not overcome by exercise training.”

“Although regular exercise potentially could improve HbA1c, the majority of studies have failed to show this (31–34). Exercise training improved aerobic capacity in this study without affecting glucose control in the participants with diabetes, suggesting that the effects of glycemic status and exercise training may work independently to improve aerobic capacity.”

….

iii. Change in Medical Spending Attributable to Diabetes: National Data From 1987 to 2011.

“Diabetes care has changed substantially in the past 2 decades. We examined the change in medical spending and use related to diabetes between 1987 and 2011. […] Using the 1987 National Medical Expenditure Survey and the Medical Expenditure Panel Surveys in 2000–2001 and 2010–2011, we compared per person medical expenditures and uses among adults ≥18 years of age with or without diabetes at the three time points. Types of medical services included inpatient care, emergency room (ER) visits, outpatient visits, prescription drugs, and others. We also examined the changes in unit cost, defined by the expenditure per encounter for medical services.”
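
The ‘unit cost’ framing used here just means that per-person spending on a service category is the number of encounters per person times the spending per encounter, so a change in spending can be split into a volume component and a price component. A trivial sketch of one possible decomposition convention, with invented numbers:

    # Per-person spending on a service category = encounters per person * cost per encounter.
    # The change in spending between two periods can then be split into a volume effect and
    # a unit-cost (price) effect. All numbers are invented for illustration.
    def decompose(encounters_0, unit_cost_0, encounters_1, unit_cost_1):
        total_change = encounters_1 * unit_cost_1 - encounters_0 * unit_cost_0
        volume_effect = (encounters_1 - encounters_0) * unit_cost_0   # more use, at old prices
        price_effect = (unit_cost_1 - unit_cost_0) * encounters_1     # higher prices, at new volume
        return total_change, volume_effect, price_effect

    total, volume, price = decompose(encounters_0=20, unit_cost_0=40,
                                     encounters_1=30, unit_cost_1=75)
    print(total, volume, price)  # 1450 = 400 (volume) + 1050 (unit cost)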

“RESULTS The excess medical spending attributed to diabetes was $2,588 (95% CI, $2,265 to $3,104), $4,205 ($3,746 to $4,920), and $5,378 ($5,129 to $5,688) per person, respectively, in 1987, 2000–2001, and 2010–2011. Of the $2,790 increase, prescription medication accounted for 55%; inpatient visits accounted for 24%; outpatient visits accounted for 15%; and ER visits and other medical spending accounted for 6%. The growth in prescription medication spending was due to the increase in both the volume of use and unit cost, whereas the increase in outpatient expenditure was almost entirely driven by more visits. In contrast, the increase in inpatient and ER expenditures was caused by the rise of unit costs. […] The increase was observed across all components of medical spending, with the greatest absolute increase in the spending on prescription medications ($1,528 increase), followed by inpatient visits ($680 increase) and outpatient visits ($430 increase). The absolute change in the spending on ER and other medical services use was relatively small. In relative terms, the spending on ER visits grew more than five times, faster than that of prescription medication and other medical components. […] Among the total annual diabetes-attributable medical spending, the spending on inpatient and outpatient visits dropped from 40% and 23% to 31% and 19%, respectively, between 1987 and 2011, whereas spending on prescription medication increased from 27% to 41%.”

“The unit costs rose universally in all five measures of medical care in adults with and without diabetes. For each hospital admission, diabetes patients spent significantly more than persons without diabetes. The gap increased from $1,028 to $1,605 per hospital admission between 1987 and 2001, and dropped slightly to $1,360 per hospital admission in 2011. Diabetes patients also had higher spending per ER visit and per purchase of prescription medications.”

“From 1999 to 2011, national data suggest that growth in the use and price of prescription medications in the general population is 2.6% and 3.6% per year, respectively; and the growth has decelerated in recent years (22). Our analysis suggests that the growth rates in the use and prices of prescription medications for diabetes patients are considerably higher. The higher rate of growth is likely, in part, due to the growing emphasis on achieving glycemic targets, the use of newer medications, and the use of multidrug treatment strategies in modern diabetes care practice (23,24). In addition, the growth of medication spending is fueled by the rising prices per drug, particularly the drugs that are newly introduced in the market. For example, the prices for newer drug classes such as glitazones, dipeptidyl peptidase-4 inhibitors, and incretins have been 8 to 10 times those of sulfonylureas and 5 to 7 times those of metformin (9).”

“Between 1987 and 2011, medical spending increased both in persons with and in persons without diabetes; and the increase was substantially greater among persons with diabetes. As a result, the medical spending associated with diabetes nearly doubled. The growth was primarily driven by the spending in prescription medications. Further studies are needed to assess the cost-effectiveness of increased spending on drugs.”

iv. Determinants of Adherence to Diabetes Medications: Findings From a Large Pharmacy Claims Database.

“Adults with type 2 diabetes are often prescribed multiple medications to treat hyperglycemia, diabetes-associated conditions such as hypertension and dyslipidemia, and other comorbidities. Medication adherence is an important determinant of outcomes in patients with chronic diseases. For those with diabetes, adherence to medications is associated with better control of intermediate risk factors (1–4), lower odds of hospitalization (3,5–7), lower health care costs (5,7–9), and lower mortality (3,7). Estimates of rates of adherence to diabetes medications vary widely depending on the population studied and how adherence is defined. One review found that adherence to oral antidiabetic agents ranged from 36 to 93% across studies and that adherence to insulin was ∼63% (10).”

“Using a large pharmacy claims database, we assessed determinants of adherence to oral antidiabetic medications in >200,000 U.S. adults with type 2 diabetes. […] We selected a cohort of members treated for diabetes with noninsulin medications (oral agents or GLP-1 agonists) in the second half of 2010 who had continuous prescription benefits eligibility through 2011. Each patient was followed for 12 months from their index diabetes claim date identified during the 6-month targeting period. From each patient’s prescription history, we collected the date the prescription was filled, how many days the supply would last, the National Drug Code number, and the drug name. […] Given the difficulty in assessing insulin adherence with measures such as medication possession ratio (MPR), we excluded patients using insulin when defining the cohort.”

“We looked at a wide range of variables […] Predictor variables were defined a priori and grouped into three categories: 1) patient factors including age, sex, education, income, region, past exposure to therapy (new to diabetes therapy vs. continuing therapy), and concurrent chronic conditions; 2) prescription factors including refill channel (retail vs. mail order), total pill burden per day, and out of pocket costs; and 3) prescriber factors including age, sex, and specialty. […] Our primary outcome of interest was adherence to noninsulin antidiabetic medications. To assess adherence, we calculated an MPR for each patient. The ratio captures how often patients refill their medications and is a standard metric that is consistent with the National Quality Forum’s measure of adherence to medications for chronic conditions. MPR was defined as the proportion of days a patient had a supply of medication during a calendar year or equivalent period. We considered patients to be adherent if their MPR was 0.8 or higher, implying that they had their medication supplies for at least 80% of the days. An MPR of 0.8 or above is a well-recognized index of adherence (11,12). Studies have suggested that patients with chronic diseases need to achieve at least 80% adherence to derive the full benefits of their medications (13). […] [W]e [also] determined whether a patient was persistent, that is whether they had not discontinued or had at least a 45-day gap in their targeted therapy.”
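
For concreteness, here is a rough sketch of how an MPR and a 45-day-gap persistence flag could be computed from fill records. The fill dates are invented, and the handling of overlapping fills is deliberately simpler than what an actual claims analysis would do:

    from datetime import date, timedelta

    def mpr_and_persistence(fills, period_days=365, max_gap=45):
        """fills: list of (fill_date, days_supplied) tuples, sorted by fill date."""
        covered = sum(days for _, days in fills)
        mpr = min(covered / period_days, 1.0)  # crude: ignores overlaps and early refills

        persistent = True
        supply_ends = None
        for fill_date, days in fills:
            if supply_ends is not None and (fill_date - supply_ends).days > max_gap:
                persistent = False  # ran out more than 45 days before the next fill
                break
            start = fill_date if supply_ends is None or fill_date > supply_ends else supply_ends
            supply_ends = start + timedelta(days=days)
        return mpr, mpr >= 0.8, persistent  # (MPR, adherent?, persistent?)

    fills = [(date(2011, 1, 5), 30), (date(2011, 2, 10), 30), (date(2011, 5, 1), 90)]
    print(mpr_and_persistence(fills))  # low MPR, not adherent, and a 50-day gap breaks persistence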

“Previous exposure to diabetes therapy had a significant impact on adherence. Patients new to therapy were 61% less likely to be adherent to their diabetes medication. There was also a clear age effect. Patients 25–44 years of age were 49% less likely to be adherent when compared with patients 45–64 years of age. Patients aged 65–74 years were 27% more likely to be adherent, and those aged 75 years and above were 41% more likely to be adherent when compared with the 45–64 year age-group. Men were significantly more likely to be adherent than women […I dislike the use of the word ‘significant’ in such contexts; there is a difference in the level of adherence, but it is not large in absolute terms; the male vs female OR is 1.14 (CI 1.12-1.16) – US]. Education level and household income were both associated with adherence. The higher the estimated academic achievement, the more likely the patient was to be adherent. Patients completing graduate school were 41% more likely to be adherent when compared with patients with a high school equivalent education. Patients with an annual income >$60,000 were also more likely to be adherent when compared with patients with a household income <$30,000.”

“The largest effect size was observed for patients obtaining their prescription antidiabetic medications by mail. Patients using the mail channel were more than twice as likely to be adherent to their antidiabetic medications when compared with patients filling their prescriptions at retail pharmacies. Total daily pill burden was positively associated with antidiabetic medication adherence. For each additional pill a patient took per day, adherence to antidiabetic medications increased by 22%. Patient out-of-pocket costs were negatively associated with adherence. For each additional $15 in out-of-pocket costs per month, diabetes medication adherence decreased by 11%. […] We found few meaningful differences in patient adherence according to prescriber factors.”

“In our study, characteristics that suggest a “healthier” patient (being younger, new to diabetes therapy, and taking few other medications) were all associated with lower odds of adherence to antidiabetic medications. This suggests that acceptance of a chronic illness diagnosis and the potential consequences may be an important, but perhaps overlooked, determinant of medication-taking behavior. […] Our findings regarding income and costs are important reminders that prescribers should consider the impact of medication costs on patients with diabetes. Out-of-pocket costs are an important determinant of adherence to statins (26) and a self-reported cause of underuse of medications in one in seven insured patients with diabetes (27). Lower income has previously been shown to be associated with poor adherence to diabetes medications (15) and a self-reported cause of cost-related medication underuse (27).”

v. The Effect of Alcohol Consumption on Insulin Sensitivity and Glycemic Status: A Systematic Review and Meta-analysis of Intervention Studies.

“Moderate alcohol consumption, compared with abstaining and heavy drinking, is related to a reduced risk of type 2 diabetes (1,2). Although the risk is reduced with moderate alcohol consumption in both men and women, the association may differ for men and women. In a meta-analysis, consumption of 24 g alcohol/day reduced the risk of type 2 diabetes by 40% among women, whereas consumption of 22 g alcohol/day reduced the risk by 13% among men (1).

The association of alcohol consumption with type 2 diabetes may be explained by increased insulin sensitivity, anti-inflammatory effects, or effects of adiponectin (3). Several intervention studies have examined the effect of moderate alcohol consumption on these potential underlying pathways. A meta-analysis of intervention studies by Brien et al. (4) showed that alcohol consumption significantly increased adiponectin levels but did not affect inflammatory factors. Unfortunately, the effect of alcohol consumption on insulin sensitivity has not been summarized quantitatively. A review of cross-sectional studies by Hulthe and Fagerberg (5) suggested a positive association between moderate alcohol consumption and insulin sensitivity, although the three intervention studies included in their review did not show an effect (6–8). Several other intervention studies also reported inconsistent results (9,10). Consequently, consensus is lacking about the effect of moderate alcohol consumption on insulin sensitivity. Therefore, we aimed to conduct a systematic review and meta-analysis of intervention studies investigating the effect of alcohol consumption on insulin sensitivity and other relevant glycemic measures.”

“22 articles met criteria for inclusion in the qualitative synthesis. […] Of the 22 studies, 15 used a crossover design and 7 a parallel design. The intervention duration of the studies ranged from 2 to 12 weeks […] Of the 22 studies, 2 were excluded from the meta-analysis because they did not include an alcohol-free control group (14,19), and 4 were excluded because they did not have a randomized design […] Overall, 14 studies were included in the meta-analysis”

“A random-effects model was used because heterogeneity was present (P < 0.01, I2 = 91%). […] For HbA1c, a random-effects model was used because the I2 statistic indicated evidence for some heterogeneity (I2 = 30%).” [Cough, you’re not supposed to make these decisions that way, cough – US. This is not the first time I’ve seen this approach applied, and I don’t like it; it’s bad practice to allow the results of (frequently under-powered) heterogeneity tests to influence model selection decisions. As Borenstein and Hedges point out in their book, “A report should state the computational model used in the analysis and explain why this model was selected. A common mistake is to use the fixed-effect model on the basis that there is no evidence of heterogeneity. As [already] explained […], the decision to use one model or the other should depend on the nature of the studies, and not on the significance of this test”]
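
To make the quantities in that exchange concrete, here is a small sketch (with invented study data) of Cochran’s Q, I2, and a DerSimonian–Laird random-effects pooled estimate next to the fixed-effect estimate. The Borenstein & Hedges point is precisely that the choice between the two pooled estimates should rest on the nature of the studies, not on whether Q happens to reach significance:

    import numpy as np

    effects = np.array([0.10, -0.25, 0.40, 0.05, -0.10])  # per-study effect estimates (invented)
    variances = np.array([0.02, 0.03, 0.05, 0.01, 0.04])  # within-study variances (invented)

    w = 1.0 / variances                       # inverse-variance (fixed-effect) weights
    fixed = np.sum(w * effects) / np.sum(w)

    Q = np.sum(w * (effects - fixed) ** 2)    # Cochran's Q
    df = len(effects) - 1
    I2 = max(0.0, (Q - df) / Q) * 100         # I2: share of variability beyond chance

    # DerSimonian-Laird estimate of the between-study variance tau^2
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)

    w_re = 1.0 / (variances + tau2)           # random-effects weights
    random_effect = np.sum(w_re * effects) / np.sum(w_re)
    se_random = np.sqrt(1.0 / np.sum(w_re))

    print(f"Q={Q:.2f}, I2={I2:.0f}%, fixed={fixed:.3f}, random={random_effect:.3f} (SE {se_random:.3f})")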

“This meta-analysis shows that moderate alcohol consumption did not affect estimates of insulin sensitivity or fasting glucose levels, but it decreased fasting insulin concentrations and HbA1c. Sex-stratified analysis suggested that moderate alcohol consumption may improve insulin sensitivity and decrease fasting insulin concentrations in women but not in men. The meta-regression suggested no influence of dosage and duration on the results. However, the number of studies may have been too low to detect influences by dosage and duration. […] The primary finding that alcohol consumption does not influence insulin sensitivity concords with the intervention studies included in the review of Hulthe and Fagerberg (5). This is in contrast with observational studies suggesting a significant association between moderate alcohol consumption and improved insulin sensitivity (34,35). […] We observed lower levels of HbA1c in subjects consuming moderate amounts of alcohol compared with abstainers. This has also been shown in several observational studies (39,43,44). Alcohol may decrease HbA1c by suppressing the acute rise in blood glucose after a meal and increasing the early insulin response (45). This would result in lower glucose concentrations over time and, thus, lower HbA1c concentrations. Unfortunately, the underlying mechanism of glycemic control by alcohol is not clearly understood.”

vi. Predictors of Lower-Extremity Amputation in Patients With an Infected Diabetic Foot Ulcer.

“Infection is a frequent complication of diabetic foot ulcers, with up to 58% of ulcers being infected at initial presentation at a diabetic foot clinic, increasing to 82% in patients hospitalized for a diabetic foot ulcer (1). These diabetic foot infections (DFIs) are associated with poor clinical outcomes for the patient and high costs for both the patient and the health care system (2). Patients with a DFI have a 50-fold increased risk of hospitalization and 150-fold increased risk of lower-extremity amputation compared with patients with diabetes and no foot infection (3). Among patients with a DFI, ∼5% will undergo a major amputation and 20–30% a minor amputation, with the presence of peripheral arterial disease (PAD) greatly increasing amputation risk (4–6).”

“As infection of a diabetic foot wound heralds a poor outcome, early diagnosis and treatment are important. Unfortunately, systemic signs of inflammation such as fever and leukocytosis are often absent even with a serious foot infection (10,11). As local signs and symptoms of infection are also often diminished, because of concomitant peripheral neuropathy and ischemia (12), diagnosing and defining resolution of infection can be difficult.”

“The system developed by the International Working Group on the Diabetic Foot (IWGDF) and the Infectious Diseases Society of America (IDSA) provides criteria for the diagnosis of infection of ulcers and classifies it into three categories: mild, moderate, or severe. The system was validated in three relatively small cohorts of patients […] The European Study Group on Diabetes and the Lower Extremity (Eurodiale) prospectively studied a large cohort of patients with a diabetic foot ulcer (17), enabling us to determine the prognostic value of the IWGDF system for clinically relevant lower-extremity amputations. […] We prospectively studied 575 patients with an infected diabetic foot ulcer presenting to 1 of 14 diabetic foot clinics in 10 European countries. […] Among these patients, 159 (28%) underwent an amputation. […] Patients were followed monthly until healing of the foot ulcer(s), major amputation, or death — up to a maximum of 1 year.”

“One hundred and ninety-nine patients had a grade 2 (mild) infection, 338 a grade 3 (moderate), and 38 a grade 4 (severe). Amputations were performed on 159 (28%) patients (126 minor and 33 major) within the year of follow-up; 103 patients (18%) underwent amputations proximal to and including the hallux. […] The independent predictors of any amputation were as follows: periwound edema, HR 2.01 (95% CI 1.33–3.03); foul smell, HR 1.74 (1.17–2.57); purulent and nonpurulent exudate, HR 1.67 (1.17–2.37) and 1.49 (1.02–2.18), respectively; deep ulcer, HR 3.49 (1.84–6.60); positive probe-to-bone test, HR 6.78 (3.79–12.15); pretibial edema, HR 1.53 (1.02–2.31); fever, HR 2.00 (1.15–3.48); elevated CRP levels but less than three times the upper limit of normal, HR 2.74 (1.40–5.34); and elevated CRP levels more than three times the upper limit, HR 3.84 (2.07–7.12). […] In comparison with mild infection, the presence of a moderate infection increased the hazard for any amputation by a factor of 2.15 (95% CI 1.25–3.71) and 3.01 (1.51–6.01) for amputations excluding the lesser toes. For severe infection, the hazard for any amputation increased by a factor of 4.12 (1.99–8.51) and for amputations excluding the lesser toes by a factor of 5.40 (2.20–13.26). Larger ulcer size and presence of PAD were also independent predictors of both any amputation and amputations excluding the lesser toes, with HRs between 1.81 and 3 (and 95% CIs between 1.05 and 6.6).”

“Previously published studies that have aimed to identify independent risk factors for lower-extremity amputation in patients with a DFI have noted an association with older age (5,22), the presence of fever (5), elevated acute-phase reactants (5,22,23), higher HbA1c levels (24), and renal insufficiency (5,22).”

“The new risk scores we developed for any amputation and amputations excluding the lesser toes had higher prognostic capability, based on the area under the ROC curve (0.80 and 0.78, respectively), than the IWGDF system (0.67) […] which is currently the only one in use for infected diabetic foot ulcers. […] these Eurodiale scores were developed based on the available data of our cohort, and they will need to be validated in other populations before any firm conclusions can be drawn. The advantage of these newly developed scores is that they are easier for clinicians to perform […] These newly developed risk scores can be readily used in daily clinical practice without the necessity of obtaining additional laboratory testing.”
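
The area under the ROC curve referred to here measures how often a score ranks a patient who went on to amputation above one who did not (0.5 is no better than chance, 1.0 is perfect discrimination). A toy sketch of the comparison with invented scores and outcomes, just to show the mechanics:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n = 575
    amputation = rng.binomial(1, 0.28, n)                     # invented outcomes (~28% amputations)
    existing_score = amputation * 0.7 + rng.normal(0, 1, n)   # stand-in for a weaker risk score
    new_score = amputation * 1.6 + rng.normal(0, 1, n)        # stand-in for a stronger risk score

    print(round(roc_auc_score(amputation, existing_score), 2))
    print(round(roc_auc_score(amputation, new_score), 2))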


September 12, 2017 Posted by | Cardiology, Diabetes, Economics, Epidemiology, Health Economics, Infectious disease, Medicine, Microbiology, Statistics

Gastroenterology – Amal Mattu

If I hadn’t just read Horowitz & Samsom’s book, I’m fairly sure this lecture would have been difficult to follow, but a lot of the stuff covered here is (naturally) closely related to the material in that book; this is mostly a revision lecture, aimed at reminding you of stuff you already (supposedly?) know and/or dealing with topics closely related to it, so I don’t think it’s the right lecture for someone who knows very little about gastroenterology. I like Mattu’s approach to lecturing; this lecture was enjoyable to watch, despite (?) including a lot of information.

A few links to stuff covered/mentioned in the lecture:

Mediastinitis.
Boerhaave syndrome.
Does This Patient Have a Severe Upper Gastrointestinal Bleed? (JAMA).
Acute Liver Failure (NEJM review article).
Charcot’s cholangitis triad.
Ranson criteria.
Volvulus.
Crohn’s disease.
Ulcerative colitis.
Abdominal aortic aneurysm.
Mesenteric ischemia.
Shigella infection.
Amebiasis.
Clostridium perfringens.
Pseudomembranous colitis.

September 11, 2017 Posted by | Gastroenterology, Lectures, Medicine, Microbiology

The Ageing Immune System and Health (I)

“as we age, we observe a greater heterogeneity of ability and health. The variation in, say, walking speed is far greater in a group of 70 year olds, than in a group of 20 year olds. This makes the study of ageing and the factors driving that heterogeneity of health and functional ability in old age vital. […] The study of the immune system across the lifespan has demonstrated that as we age the immune system undergoes a decline in function, termed immunosenescence. […] the decline in function is not universal across all aspects of the immune system, and neither is the magnitude of functional loss similar between individuals. The theory of inflammageing, which represents a chronic low grade inflammatory state in older people, has been described as a major consequence of immunosenescence, though lifestyle factors such as reduced physical activity and increased adiposity also play a major role […] In poor health, older people accumulate disease, described as multimorbidity. This in turn means traditional single system based health care becomes less valid as each system affected by disease impacts on other systems. This leads some older people to be at greater risk of adverse events such as disability and death. The syndrome of this increased vulnerability is described as frailty, and increasing fundamental evidence is emerging that suggests immunosenescence and inflammageing may underpin frailty […] Thus frailty is seen as one clinical manifestation of immunosenescence.”

The above quotes are from the book’s preface. I gave it 3 stars on goodreads. I should probably, considering that this topic is mentioned in the preface, mention explicitly that the book doesn’t actually go into a lot of detail about the downsides of ‘traditional single system based health care’; the book is mainly about immunology and related topics, and although it provides coverage of intervention studies etc., it doesn’t really provide detailed coverage of issues like the optimization of organizational structures, systems analysis, and so on. The book I was reading when I started writing this post – Integrated Diabetes Care – A Multidisciplinary Approach (blog coverage here) – is incidentally pretty much exclusively devoted to these sorts of topics (and it did a fine job).

If you have never read any sort of immunology text before, the book will probably be unreadable to you – “It is aimed at fundamental scientists and clinicians with an interest in ageing or the immune system.” In my coverage below I have not made any effort to pick out quotes which would be particularly easy for the average reader to read and understand; this is another way of saying that the post is mainly written for my own benefit, perhaps even more so than is usually the case, not for the benefit of potential readers reading along here.

“Physiological ageing is associated with significant re-modelling of the immune system. Termed immunosenescence, age-related changes have been described in the composition, phenotype and function of both the innate and adaptive arms of the immune system. […] Neutrophils are the most abundant leukocyte in circulation […] The first step in neutrophil anti-microbial defence is their extravasation from the bloodstream and migration to the site of infection. Whilst age appears to have no effect upon the speed at which neutrophils migrate towards chemotactic signals in vitro [15], the directional accuracy of neutrophil migration to inflammatory agonists […] as well as bacterial peptides […] is significantly reduced [15]. […] neutrophils from older adults clearly exhibit defects in several key defensive mechanisms, namely chemotaxis […], phagocytosis of opsonised pathogens […] and NET formation […]. Given this near global impairment in neutrophil function, alterations to a generic signalling element rather than defects in molecules specific to each anti-microbial defence strategy is likely to explain the aberrations in neutrophil function that occur with age. In support of this idea, ageing in rodents is associated with a significant increase in neutrophil membrane fluidity, which coincides with a marked reduction in neutrophil function […] ageing results in a reduction in NK cell production and proliferation […] Numerous studies have examined the impact of age […], with the general consensus that at the single cell level, NK cell cytotoxicity (NKCC) is reduced with age […] retrospective and prospective studies have reported relationships between low NK cell activity in older adults and (1) a past history of severe infection, (2) an increased risk of future infection, (3) a reduced probability of surviving infectious episodes and (4) infectious morbidity [49–51]. Related to this increased risk of infection, reduced NKCC prior to and following influenza vaccination in older adults has been shown to be associated with reduced protective anti-hemagglutinin titres, worsened health status and an increased incidence of respiratory tract infection […] Whilst age has no effect upon the frequency or absolute number of monocytes [54, 55], the composition of the monocyte pool is markedly different in older adults, who present with an increased frequency of non-classical and intermediate monocytes, and fewer classical monocytes when compared to their younger counterparts”.

“Via their secretion of growth factors, pro-inflammatory cytokines, and proteases, senescent cells compromise tissue homeostasis and function, and their presence has been causally implicated in the development of such age-associated conditions as sarcopenia and cataracts [92]. Several studies have demonstrated a role for innate immune cells in the recognition and clearance of senescent cells […] ageing is associated with a low-grade systemic up-regulation of circulating inflammatory mediators […] Results from longitudinal-based studies suggest inflammageing is deleterious to human health with studies in older cohorts demonstrating that low-grade increases in the circulating levels of TNF-α [103], IL-6 […] and CRP [105] are associated with both all-cause […] and cause-specific […] mortality. Furthermore, inflammageing is a predictor of frailty [106] and is considered a major factor in the development of several age-related pathologies, such as atherosclerosis [107], Alzheimer’s disease [100] and sarcopenia [108].”

“Persistent viral infections, reduced vaccination responses, increased autoimmunity, and a rise in inflammatory syndromes all typify immune ageing. […] These changes can be in part attributed to the accumulation of highly differentiated senescent T cells, characterised by their decreased proliferative capacity and the activation of senescence signaling pathways, together with alterations in the functional competence of regulatory cells, allowing inflammation to go unchecked. […] Immune senescence results from defects in different leukocyte populations, however the dysfunction is most profound in T cells [6, 7]. The responses of T cells from aged individuals are typically slower and of a lower magnitude than those of young individuals […] while not all equally affected by age, the overall T cell number does decline dramatically as a result of thymic atrophy […] T cell differentiation is a highly complex process controlled not only by costimulation but also by the strength and duration of T cell receptor (TCR) signalling [34]. Nearly all TCR signalling pathways have been found altered during ageing […] two phenotypically distinct subsets of B cells […] have been demonstrated to exert immunosuppressive functions. The frequency and function of both these Breg subsets declines with age”.

“The immune impairments in patients with chronic hyperglycemia resemble those seen during ageing, namely poor control of infections and reduced vaccination response [99].” [This is hardly surprising. ‘Hyperglycemia -> accelerated ageing’ seems generally to be a good (over-)simplified model in many contexts. To give another illustrative example from Czernik & Fowlkes’ text, “approximately 4–6 years of diabetes exposure in some children may be sufficient to increase skin AGEs to levels that would naturally accumulate only after ~25 years of chronological aging”].

“The term “immunosenescence” is commonly taken to mean age-associated changes in immune parameters hypothesized to contribute to increased susceptibility and severity of the older adult to infectious disease, autoimmunity and cancer. In humans, it is characterized by lower numbers and frequencies of naïve T and B cells and higher numbers and frequencies of late-differentiated T cells, especially CD8+ T cells, in the peripheral blood. […] Low numbers of naïve cells render the aged highly susceptible to pathogens to which they have not been previously exposed, but are not otherwise associated with an “immune risk profile” predicting earlier mortality. […] many of the changes, or most often, differences, in immune parameters of the older adult relative to the young have not actually been shown to be detrimental. The realization that compensatory changes may be developing over time is gaining ground […] Several studies have now shown that lower percentages and absolute numbers of naïve CD8+ T cells are seen in all older subjects whereas the accumulation of very large numbers of CD8+ late-stage differentiated memory cells is seen in a majority but not in all older adults [2]. The major difference between this majority of subjects with such accumulations of memory cells and those without is that the former are infected with human herpesvirus 5 (Cytomegalovirus, CMV). Nevertheless, the question of whether CMV is associated with immunosenescence remains so far uncertain as no causal relationship has been unequivocally established [5]. Because changes are seen rapidly after primary infection in transplant patients [6] and infants [7], it is highly likely that CMV does drive the accumulation of CD8+ late-stage memory cells, but the relationship of this to senescence remains unclear. […] In CMV-seropositive people, especially older people, a remarkably high fraction of circulating CD8+ T lymphocytes is often found to be specific for CMV. However, although the proportion of naïve CD8+ T cells is lower in the old than the young whether or not they are CMV-infected, the gross accumulation of late-stage differentiated CD8+ T cells only occurs in CMV-seropositive individuals […] It is not clear whether this is adaptive or pathological […] The total CMV-specific T-cell response in seropositive subjects constitutes on average approximately 10 % of both the CD4+ and CD8+ memory compartments, and can be far greater in older people. […] there are some published data suggesting that in young humans or young mice, CMV may improve immune responses to some antigens and to influenza virus, probably by way of increased pro-inflammatory responses […] observations suggest that the effect of CMV on the immune system may be highly dependent also on an individual’s age and circumstances, and that what is viewed as ageing is in fact later collateral damage from immune reactivity that was beneficial in earlier life [47, 48]. This is saying nothing more than that the same immune pathology that always accompanies immune responses to acute viruses is also caused by CMV, but over a chronic time scale and usually subclinical. […] data suggest that the remodeling of the T-cell compartment in the presence of a latent infection with CMV represents a crucial adaptation of the immune system towards the chronic challenge of lifelong CMV.”

The authors take issue with using the term ‘senescence’ to describe some of the changes discussed above, because this term by definition should be employed only in the context of changes that are demonstrably deleterious to health. It should be kept in mind in this context that the consequences of insufficient immunological protection against CMV in old age could easily be much worse than the secondary inflammatory effects, harmful though these may well be; CMV in the context of AIDS, organ transplantation (“CMV is the most common and single most important viral infection in solid organ transplant recipients” – medscape) and other disease states involving compromised immune systems can be really bad news (“Disease caused by human herpesviruses tends to be relatively mild and self-limited in immunocompetent persons, although severe and quite unusual disease can be seen with immunosuppression.” Holmes et al.).

“The role of CMV in the etiology of […] age-associated diseases is currently under intensive investigation […] in one powerful study, the impact of CMV infection on mortality was investigated in a cohort of 511 individuals aged at least 65 years at entry, who were then followed up for 18 years. Infection with CMV was associated with an increased mortality rate in healthy older individuals due to an excess of vascular deaths. It was estimated that those elderly who were CMV-seropositive at the beginning of the study had a near 4-year reduction in lifespan compared to those who were CMV-seronegative, a striking result with major implications for public health [59]. Other data, such as those from the large US NHANES-III survey, have shown that CMV seropositivity together with higher than median levels of the inflammatory marker CRP correlate with a significantly lower 10-year survival rate of individuals who were mostly middle-aged at the start of the study [63]. Further evidence comes from a recently published Newcastle 85+ study of the immune parameters of 751 octogenarians investigated for their power to predict survival during a 65-month follow-up. It was documented that CMV-seropositivity was associated with increased 6-year cardiovascular mortality or death from stroke and myocardial infarction. It was therefore concluded that CMV-seropositivity is linked to a higher incidence of coronary heart disease in octogenarians and that senescence in both the CD4+ and CD8+ T-cell compartments is a predictor of overall cardiovascular mortality”.

“The incidence and severity of many infections are increased in older adults. Influenza causes approximately 36,000 deaths and more than 100,000 hospitalizations in the USA every year […] Vaccine uptake differs tremendously between European countries with more than 70 % of the older population being vaccinated against influenza in The Netherlands and the United Kingdom, but below 10 % in Poland, Latvia and Estonia during the 2012–2013 season […] several systematic reviews and meta-analyses have estimated the clinical efficacy and/or effectiveness of a given influenza vaccine, taking into consideration not only randomized trials, but also cohort and case-control studies. It can be concluded that protection is lower in the old than in young adults […] [in one study including “[m]ore than 84,000 pneumococcal vaccine-naïve persons above 65 years of age”] the effect of age on vaccine efficacy was studied and the statistical model showed a decline of vaccine efficacy for vaccine-type CAP and IPD [Invasive Pneumococcal Disease] from 65 % (95 % CI 38–81) in 65-year old subjects, to 40 % (95 % CI 17–56) in 75-year old subjects […] The most effective measure to prevent infectious disease is vaccination. […] Over the last 20–30 years tremendous progress has been achieved in developing novel/improved vaccines for children, but a lot of work still needs to be done to optimize vaccines for the elderly.”

December 12, 2016 Posted by | Books, Cardiology, Diabetes, Epidemiology, Immunology, Infectious disease, Medicine, Microbiology

Water Supply in Emergency Situations (II)

Here’s my first post about the book. In this post I’ve added a few more quotes from a couple of the last chapters of the book:

“Due to the high complexity of the [water supply] systems, and the innumerable possible points of contaminant insertion, complete prevention of all possible terror attacks (chemical, biological, or radiological) on modern drinking water supplying systems […] seems to be an impossible goal. For example, in the USA there are about 170,000 water systems, with about 8,100 very large systems that serve 90% of the population who get water from a community water system […] The prevailing approach to the problem of drinking water contamination is based on the implementation of surveillance measures and technologies for “risk reduction” such as improvement of physical security measures of critical assets (high-potential vulnerability to attacks), [and] installation of online contaminant monitoring systems (OCMS) with capabilities to detect and warn in real time on relevant contaminants, as part of standard operating procedures for quality control (QC) and supervisory control and data acquisition (SCADA) systems. […] Despite the impressive technical progress in online water monitoring technologies […] detection with complete certainty of pollutants is expensive, and remains problematic.”

“A key component of early warning systems is the availability of a mathematical model for predicting the transport and fate of the spill or contaminant so that downstream utilities can be warned. […] Simulation tools (i.e. well-calibrated hydraulic and water quality models) can be linked to SCADA real-time databases allowing for continuous, high-speed modeling of the pressure, flow, and water quality conditions throughout the water distribution network. Such models provide the operator with computed system status data within the distribution network. These “virtual sensors” complement the measured data. Anomalies between measured and modeled data are automatically observed, and computed values that exceed predetermined alarm thresholds are automatically flagged by the SCADA system.”
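
The ‘virtual sensor’ idea boils down to comparing what the sensors measure with what the calibrated model says the values should be, and raising an alarm when the deviation exceeds a predetermined threshold. A toy sketch (the parameter, values and threshold are all invented):

    def flag_anomalies(measured, modeled, threshold):
        """Return indices where |measured - modeled| exceeds the alarm threshold."""
        return [i for i, (m, s) in enumerate(zip(measured, modeled)) if abs(m - s) > threshold]

    measured_chlorine = [0.52, 0.49, 0.11, 0.50, 0.48]  # mg/L, hypothetical sensor readings
    modeled_chlorine  = [0.50, 0.50, 0.49, 0.50, 0.49]  # mg/L, hypothetical model output

    print(flag_anomalies(measured_chlorine, modeled_chlorine, threshold=0.2))  # [2]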

“Any given tap receives water, which arrives through a number of pipes in the supply network, the transport route, and ultimately comes from a source […] in order to achieve maximum supply security in case of pipe failures or unusual demand patterns (e.g. fire flows) water supply networks are generally designed as complicated, looped systems, where each tap typically can receive water from several sources and intermediate storage facilities. This means that the water from any given tap can arrive through several different routes and can be a mixture of water from several sources. The routes and sources for a given tap can vary over time […] A model can show:
*Which sources (well-fields, reservoirs, and tanks) contribute to the supply of which parts of the city?
*Where does the water come from (percentage distribution) at any specific location in the system (any given tap or pipe)?
*How long has the water been traveling in the pipe system, before it reaches a specific location?
One way to reduce the risk – and simplify the response to incidents – is by compartmentalizing the water supply system. If each tap receives water from one and only one reservoir, pollution of one reservoir will affect one well-defined and relatively smaller part of the city. Compartmentalizing the water supply system will reduce the spreading of toxic substances. On the flip side, it may increase the concentration of the toxic substance. It is also likely to have a negative impact on the supply of water for fire flow and on the robustness of the water supply network in case of failures of pipes or other elements.”
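
The ‘which sources feed which tap’ question is essentially a reachability question on the network graph; in a real model the flow directions change over time with demand, so the sketch below (using networkx, with an invented little network) is a simplification:

    import networkx as nx

    # Invented toy network; edges point in the (assumed) direction of flow.
    g = nx.DiGraph()
    g.add_edges_from([
        ("reservoir_A", "junction_1"), ("reservoir_B", "junction_2"),
        ("junction_1", "junction_2"), ("junction_1", "junction_3"),
        ("junction_2", "junction_3"), ("junction_3", "tap_X"),
    ])

    sources = ["reservoir_A", "reservoir_B"]
    print([s for s in sources if nx.has_path(g, s, "tap_X")])  # both reservoirs can reach tap_X

    # Compartmentalization in the sense described above would amount to cutting links so
    # that each tap remains reachable from exactly one source.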

An important point in the context of that part of the coverage is that if you want online (i.e. continuous, all-the-time) monitoring of drinking water, well, that’s going to be expensive regardless of how precisely you go about doing it. Another related problem is that it’s actually not a simple matter to figure out what it even makes sense to test for when you’re analyzing the water (you can’t test for ‘everything’ all the time), and so the leading approach in the monitoring systems employed today is, according to the authors, based on the idea of using ‘surrogate parameters’ which may be particularly informative about any significant changes in the quality of the drinking water.

“After the collapse of the Soviet Union, the countries of the South Caucasus gained their independence. However, they faced problems associated with national and transboundary water management. Transboundary water management remains one of the key issues leading to conflict in the region today. The scarcity of water especially in downstream areas is a major problem […] The fresh surface water resources of the South Caucasus mainly consist of runoff from the Kura–Araz River basins. […] Being a water-poor region, water supply over the Azerbaijan Republic territory totals about 100,000 m3/km2, which amounts to an average of about 1,000 m3 of water per person per year. Accordingly, Azerbaijan Republic occupies one of the lowest places in the world in water availability. Water resources of the Republic are distributed very irregularly over administrative districts.”

“Water provision [in Azerbaijan] […] is carried out by means of active hydrotechnical constructions, which are old-fashioned and many water intake facilities and treatment systems cannot operate during high flooding, water turbidity, and extreme pollution. […] Tap water satisfies [the] needs of only 50% of the population, and some areas experience lack of drinking water. Due to the lack of water supply networks and deteriorated conditions of those existing, about half of the water is lost within the distribution system. […] The sewage system of the city of Baku covers only 70% of its territory and only about half of sewage is treated […] Owing to rapid growth of turbidity of Kura (and its inflows) during high water the water treatment facilities are rendered inoperable thus causing failures in the water supply of the population of the city of Baku. Such situations mainly take place in autumn and spring on the average 3–5 times a year for 1–2 days. In the system of centralized water supply of the city of Baku about 300 emergency cases occur annually […] Practically nobody works with the population to promote efficient water use practices.”

October 31, 2016 Posted by | Books, Engineering, Geography, Microbiology

Water Supply in Emergency Situations (I)

I didn’t think much of this book (here’s my goodreads review), but I did learn some new things from reading it. Some of the coverage in the book overlapped a little with stuff I’d read before, e.g. coverage provided in publications such as Rodricks and Fong and Alibek, but I read those books in 2013 and 2014 respectively (so I’ve already forgotten a great deal), and most of the stuff in the book was new to me. Below I’ve added a few observations and data from the first half of the publication.

“Mediterranean basin demands for water are high. Today, the region uses around 300 billion cubic meters per year. Two thirds of Mediterranean countries now use over 500 m3 per year per inhabitant mainly because of heavy use of irrigation. But these per capita demands are irregular and vary across a wide range – from a little over 100 to more than 1,000 m3 per year. Globally, demand has doubled since the beginning of the 20th century and increased by 60% over the last 25 years. […] the Middle East ecosystems […] populate some 6% of the world population, but have only some 1% of its renewable fresh water. […] Seasonality of both supply and demand due to tourism […] aggravate water resource problems. During the summer months, water shortages become more frequent. Distribution networks left unused during the winter period face overload pressures in the summer. On the other hand, designing the system with excess capability to satisfy tourism-related summer peak demands raises construction and maintenance costs significantly.”

“There are over 30,000 km of mains within London and over 30% of these are over 150 years old; they serve 7.5 million people with 2,500 million liters of water a day.”

“A major flooding of the Seine River would have tremendous consequences and would impact very significantly the daily life of the 10 million people living in the Parisian area. A deep study of the impacts of such a catastrophic natural hazard has recently been initiated by the French authorities. […] The rise of the water level in the Seine during the last two major floods occurred slowly over several weeks which may explain their low number of fatalities: 50 deaths in 1658 and only one death in 1910. The damage and destruction to buildings and infrastructure, and the resulting effect on economic activity were, however, of major proportions […] Dams have been constructed on the rivers upstream from Paris, but their capacity to stock water is only 830 million cubic meters, which would be insufficient when compared to the volume of 4 billion cubic meters of water produced by a big flood. […] The drinkable water supply system in Paris, as well as that of the sewer network, is still constrained by the decisions and orientations taken during the second half of the 19th century during the large public works projects realized under Napoleon III. […] two of the three water plants which treat river water and supply half of Paris with drinkable water existed in 1910. Water treatment technology has radically changed, but the production sites have remained the same. New reservoirs for potable water have been added, but the principles of distribution have not changed […] The average drinking water production in Paris is 615,000 m3/day.”

They note in the chapter from which the above quotes are taken that a flood comparable to that which took place in 1910 would in 2005 have resulted in 20% of the surface of Paris being flooded, and 600,000 people being without electricity, among other things. The water distribution system currently in place would also be unable to deal with the load; however, a plan for how to deal with this problem in an emergency setting does exist. In that context it’s perhaps worth noting that Paris is hardly unique in terms of the structure of the distribution system – elsewhere in the book it is observed that: “The water infrastructure developed in Europe during the 19th century and still applied, is almost completely based on options of centralized systems: huge supply and disposal networks with few, but large waterworks and sewage treatment plants.” Having both centralized and decentralized systems working at the same time/in the same area tends to increase costs, but may also lower risk; it’s observed in the book during the coverage of an Indonesian case-study that in that region the centralized service provider may take a long time to repair broken water pipes, which is … not very nice if you live in a tropical climate and prefer to have drinking water available to you.

“Water resources management challenges differ enormously in Romania, depending on the type of human settlement. The spectrum of settlement types stretches from the very low-density scattered single dwellings found in rural areas, through villages and small towns, to the much more dense and crowded cities. […] Water resources management will always face the challenge of balancing the needs of different water users. This is the case both in large urban or relatively small rural communities. The water needs of the agricultural production, energy and industrial sectors are often in competition. […] Romania’s water resources are relatively poor and unequally distributed in time and space […] There is a vast differential between urban and rural settlements when it comes to centralized drinking water systems; all the 263 municipalities and towns have such systems, while only 17% of rural communities benefit from this service. […] In Braila and Harghita counties, no village has a sewage network, and Giurgiu and Ialomita counties have only one apiece. Around 47 of the largest cities which do not have wastewater treatment plants (Bucharest, Braila, Craiova, Turnu Severin, Tulcea, etc.) produce ∼20 cubic meters per second of wastewater, which is directly discharged untreated into surface water.”

There is a difference in quality between water from centralized and decentralized supply systems [in the Ukraine (and likely elsewhere as well)]. Water quality in decentralized systems is the worst (some 30% of samples fail to meet standards, compared to 5.7% in the centralized supply). […] The Sanitary epidemiological stations draw random samples from 1,139 municipal, 6,899 departmental, and 8,179 rural pipes, and from 158,254 points of decentralized water supply, including 152,440 wells, 996 springs, and 4,818 artesian wells. […] From the first day following the accident at Chernobyl Nuclear Power Plant (ChNPP), one of the most serious problems was to prevent general contamination of the Dnieper water system and to guarantee safe water consumption for people living in the affected zone. The water protection and development of monitoring programs for the affected water bodies were among the most important post-accident countermeasures taken by the Government Bodies in Ukraine. […] To solve the water quality problem for Kiev, an emergency water intake at the Desna River was constructed within a very short period. […] During 1986 and the early months of 1987, over 130 special filtration dams […] with sorbing screens containing zeolite (klinoptilolite) were installed for detaining radionuclides while letting the water through. […] After the spring flood of 1987, the construction of new dams was terminated and the decision was made to destroy most of the existing dams. It was found that the 90Sr concentration reduction by the dams studied was insignificant […] Although some countermeasures and cleanup activities applied to radionuclides sources on catchments proved to have positive effects, many other actions were evaluated as ineffective and even useless. […] The most effective measures to reduce radioactivity in drinking water are those, which operate at the water treatment and distribution stage.

“Diversification and redundancy are important technical features to make infrastructure systems less vulnerable to natural and social (man-made) hazards. […] risk management does not only encompass strategies to avoid the occurrence of certain events which might lead to damages or catastrophes, but also strategies of adaptation to limit damages.

The loss of potable water supply typically leads to waterborne diseases, such as typhus and cholera.”

“Water velocity in a water supply system is about 1 m/s. Therefore, time is a primordial factor in contamination spread along the system. In order to minimize the damage caused by contamination of water, it is essential to act with maximum speed to achieve minimum spread of the contaminant.”
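
To make that ~1 m/s figure a bit more concrete: at bulk velocities of that order, a contaminant introduced at one point in the network reaches taps a few kilometres downstream within the hour, which is why response time matters so much. A minimal sketch (the pipe lengths are made-up illustrations, not figures from the book):

```python
# Time for water (and hence a dissolved contaminant) to travel through mains
# at the ~1 m/s bulk velocity quoted above. Distances are illustrative only.

FLOW_VELOCITY_M_PER_S = 1.0

def travel_time_minutes(distance_m, velocity_m_per_s=FLOW_VELOCITY_M_PER_S):
    """Minutes needed for water moving at velocity_m_per_s to cover distance_m."""
    return distance_m / velocity_m_per_s / 60.0

for distance_m in (500, 2_000, 10_000):  # hypothetical pipe lengths in metres
    print(f"{distance_m:>6} m  ->  ~{travel_time_minutes(distance_m):5.1f} minutes")
```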

October 21, 2016 Posted by | Books, Engineering, Geography, Infectious disease, Microbiology | Leave a comment

Photosynthesis in the Marine Environment (III)

This will be my last post about the book. After having spent a few hours on the post I started to realize the post would become very long if I were to cover all the remaining chapters, and so in the end I decided not to discuss material from chapter 12 (‘How some marine plants modify the environment for other organisms’) here, even though I actually thought some of that stuff was quite interesting. I may decide to talk briefly about some of the stuff in that chapter in another blogpost later on (but most likely I won’t). For a few general remarks about the book, see my second post about it.

Some stuff from the last half of the book below:

“The light reactions of marine plants are similar to those of terrestrial plants […], except that pigments other than chlorophylls a and b and carotenoids may be involved in the capturing of light […] and that special arrangements between the two photosystems may be different […]. Similarly, the CO2-fixation and -reduction reactions are also basically the same in terrestrial and marine plants. Perhaps one should put this the other way around: Terrestrial-plant photosynthesis is similar to marine-plant photosynthesis, which is not surprising since plants have evolved in the oceans for 3.4 billion years and their descendants on land for only 350–400 million years. […] In underwater marine environments, the accessibility to CO2 is low mainly because of the low diffusivity of solutes in liquid media, and for CO2 this is exacerbated by today’s low […] ambient CO2 concentrations. Therefore, there is a need for a CCM also in marine plants […] CCMs in cyanobacteria are highly active and accumulation factors (the internal vs. external CO2 concentrations ratio) can be of the order of 800–900 […] CCMs in eukaryotic microalgae are not as effective at raising internal CO2 concentrations as are those in cyanobacteria, but […] microalgal CCMs result in CO2 accumulation factors as high as 180 […] CCMs are present in almost all marine plants. These CCMs are based mainly on various forms of HCO3 [bicarbonate] utilisation, and may raise the intrachloroplast (or, in cyanobacteria, intracellular or intra-carboxysome) CO2 to several-fold that of seawater. Thus, Rubisco is in effect often saturated by CO2, and photorespiration is therefore often absent or limited in marine plants.”

“we view the main difference in photosynthesis between marine and terrestrial plants as the former’s ability to acquire Ci [inorganic carbon] (in most cases HCO3) from the external medium and concentrate it intracellularly in order to optimise their photosynthetic rates or, in some cases, to be able to photosynthesise at all. […] CO2 dissolved in seawater is, under air-equilibrated conditions and given today’s seawater pH, in equilibrium with a >100 times higher concentration of HCO3, and it is therefore not surprising that most marine plants utilise the latter Ci form for their photosynthetic needs. […] any plant that utilises bulk HCO3 from seawater must convert it to CO2 somewhere along its path to Rubisco. This can be done in different ways by different plants and under different conditions”

“The conclusion that macroalgae use HCO3 stems largely from results of experiments in which concentrations of CO2 and HCO3 were altered (chiefly by altering the pH of the seawater) while measuring photosynthetic rates, or where the plants themselves withdrew these Ci forms as they photosynthesised in a closed system as manifested by a pH increase (so-called pH-drift experiments) […] The reason that the pH in the surrounding seawater increases as plants photosynthesise is first that CO2 is in equilibrium with carbonic acid (H2CO3), and so the acidity decreases (i.e. pH rises) as CO2 is used up. At higher pH values (above ∼9), when all the CO2 is used up, then a decrease in HCO3 concentrations will also result in increased pH since the alkalinity is maintained by the formation of OH […] some algae can also give off OH to the seawater medium in exchange for HCO3 uptake, bringing the pH up even further (to >10).”
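
The pH dependence described in that passage can be made concrete with a standard carbonate-speciation calculation. The sketch below uses rounded, assumed seawater dissociation constants (pK1 ≈ 6.0, pK2 ≈ 9.1 – textbook values, not numbers taken from the book) to show why HCO3 dominates over dissolved CO2 at today’s seawater pH of ~8.1, and why the remaining inorganic carbon shifts towards carbonate as photosynthesis drives the pH up:

```python
# Relative proportions of CO2(aq), HCO3- and CO3-- as a function of pH,
# using rounded seawater dissociation constants (assumptions, not book values).

K1 = 10 ** -6.0   # CO2(aq) + H2O <-> H+ + HCO3-
K2 = 10 ** -9.1   # HCO3- <-> H+ + CO3--

def ci_fractions(pH):
    """Fractions of total dissolved inorganic carbon as (CO2, HCO3-, CO3--)."""
    h = 10.0 ** -pH
    co2, hco3, co3 = 1.0, K1 / h, K1 * K2 / h ** 2   # relative to CO2(aq) = 1
    total = co2 + hco3 + co3
    return co2 / total, hco3 / total, co3 / total

for pH in (7.0, 8.1, 9.0, 10.0):
    co2, hco3, co3 = ci_fractions(pH)
    print(f"pH {pH:4.1f}:  CO2 {co2:6.2%}   HCO3- {hco3:6.2%}   CO3-- {co3:6.2%}")
```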

Carbonic anhydrase (CA) is a ubiquitous enzyme, found in all organisms investigated so far (from bacteria, through plants, to mammals such as ourselves). This may be seen as remarkable, since its only function is to catalyse the inter-conversion between CO2 and HCO3 in the reaction CO2 + H2O ↔ H2CO3; we can exchange the latter Ci form to HCO3 since this is spontaneously formed by H2CO3 and is present at a much higher equilibrium concentration than the latter. Without CA, the equilibrium between CO2 and HCO3 is a slow process […], but in the presence of CA the reaction becomes virtually instantaneous. Since CO2 and HCO3 generate different pH values of a solution, one of the roles of CA is to regulate intracellular pH […] another […] function is to convert HCO3 to CO2 somewhere en route towards the latter’s final fixation by Rubisco.”

“with very few […] exceptions, marine macrophytes are not C 4 plants. Also, while a CAM-like [Crassulacean acid metabolism-like, see my previous post about the book for details] feature of nightly uptake of Ci may complement that of the day in some brown algal kelps, this is an exception […] rather than a rule for macroalgae in general. Thus, virtually no marine macroalgae are C 4 or CAM plants, and instead their CCMs are dependent on HCO3 utilization, which brings about high concentrations of CO2 in the vicinity of Rubisco. In Ulva, this type of CCM causes the intra-cellular CO2 concentration to be some 200 μM, i.e. ∼15 times higher than that in seawater.“

“deposition of calcium carbonate (CaCO3) as either calcite or aragonite in marine organisms […] can occur within the cells, but for macroalgae it usually occurs outside of the cell membranes, i.e. in the cell walls or other intercellular spaces. The calcification (i.e. CaCO3 formation) can sometimes continue in darkness, but is normally greatly stimulated in light and follows the rate of photosynthesis. During photosynthesis, the uptake of CO2 will lower the total amount of dissolved inorganic carbon (Ci) and, thus, increase the pH in the seawater surrounding the cells, thereby increasing the saturation state of CaCO3. This, in turn, favours calcification […]. Conversely, it has been suggested that calcification might enhance the photosynthetic rate by increasing the rate of conversion of HCO3 to CO2 by lowering the pH. Respiration will reduce calcification rates when released CO2 increases Ci and/but lowers intercellular pH.”

“photosynthesis is most efficient at very low irradiances and increasingly inefficient as irradiances increase. This is most easily understood if we regard ‘efficiency’ as being dependent on quantum yield: At low ambient irradiances (the light that causes photosynthesis is also called ‘actinic’ light), almost all the photon energy conveyed through the antennae will result in electron flow through (or charge separation at) the reaction centres of photosystem II […]. Another way to put this is that the chances for energy funneled through the antennae to encounter an oxidised (or ‘open’) reaction centre are very high. Consequently, almost all of the photons emitted by the modulated measuring light will be consumed in photosynthesis, and very little of that photon energy will be used for generating fluorescence […] the higher the ambient (or actinic) light, the less efficient is photosynthesis (quantum yields are lower), and the less likely it is for photon energy funnelled through the antennae (including those from the measuring light) to find an open reaction centre, and so the fluorescence generated by the latter light increases […] Alpha (α), which is a measure of the maximal photosynthetic efficiency (or quantum yield, i.e. photosynthetic output per photons received, or absorbed […] by a specific leaf/thallus area, is high in low-light plants because pigment levels (or pigment densities per surface area) are high. In other words, under low-irradiance conditions where few photons are available, the probability that they will all be absorbed is higher in plants with a high density of photosynthetic pigments (or larger ‘antennae’ […]). In yet other words, efficient photon absorption is particularly important at low irradiances, where the higher concentration of pigments potentially optimises photosynthesis in low-light plants. In high-irradiance environments, where photons are plentiful, their efficient absorption becomes less important, and instead it is reactions downstream of the light reactions that become important in the performance of optimal rates of photosynthesis. The CO2-fixing capability of the enzyme Rubisco, which we have indicated as a bottleneck for the entire photosynthetic apparatus at high irradiances, is indeed generally higher in high-light than in low-light plants because of its higher concentration in the former. So, at high irradiances where the photon flux is not limiting to photosynthetic rates, the activity of Rubisco within the CO2-fixation and -reduction part of photosynthesis becomes limiting, but is optimised in high-light plants by up-regulation of its formation. […] photosynthetic responses have often been explained in terms of adaptation to low light being brought about by alterations in either the number of ‘photosynthetic units’ or their size […] There are good examples of both strategies occurring in different species of algae”.
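
One common way to summarise the trade-off described in that passage is a photosynthesis–irradiance (P–I) curve. The hyperbolic-tangent form below (after Jassby & Platt) is a standard parameterisation rather than the specific model used in the book, and the α and Pmax values are illustrative assumptions only:

```python
import math

# P-I curve: P = Pmax * tanh(alpha * I / Pmax), a standard parameterisation
# (Jassby & Platt). Parameter values are illustrative assumptions, not book data.

def photosynthetic_rate(irradiance, alpha, p_max):
    """Gross photosynthetic rate at a given irradiance (arbitrary consistent units)."""
    return p_max * math.tanh(alpha * irradiance / p_max)

low_light_plant = dict(alpha=0.05, p_max=5.0)    # large antennae, little Rubisco
high_light_plant = dict(alpha=0.02, p_max=15.0)  # smaller antennae, more Rubisco

for irradiance in (10, 50, 200, 800):  # e.g. umol photons m-2 s-1
    lo = photosynthetic_rate(irradiance, **low_light_plant)
    hi = photosynthetic_rate(irradiance, **high_light_plant)
    print(f"I = {irradiance:>4}:  low-light plant {lo:5.2f}   high-light plant {hi:5.2f}")
```

The low-light plant (high α, low Pmax) outperforms at dim irradiances, while the high-light plant (more Rubisco, higher Pmax) wins once photons stop being limiting – which is the point the quoted passage makes in words.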

“In general, photoinhibition can be defined as the lowering of photosynthetic rates at high irradiances. This is mainly due to the rapid (sometimes within minutes) degradation of […] the D1 protein. […] there are defense mechanisms [in plants] that divert excess light energy to processes different from photosynthesis; these processes thus cause a downregulation of the entire photosynthetic process while protecting the photosynthetic machinery from excess photons that could cause damage. One such process is the xanthophyll cycle. […] It has […] been suggested that the activity of the CCM in marine plants […] can be a source of energy dissipation. If CO2 levels are raised inside the cells to improve Rubisco activity, some of that CO2 can potentially leak out of the cells, and so raising the net energy cost of CO2 accumulation and, thus, using up large amounts of energy […]. Indirect evidence for this comes from experiments in which CCM activity is down-regulated by elevated CO2

“Photoinhibition is often divided into dynamic and chronic types, i.e. the former is quickly remedied (e.g. during the day[…]) while the latter is more persistent (e.g. over seasons […] the mechanisms for down-regulating photosynthesis by diverting photon energies and the reducing power of electrons away from the photosynthetic systems, including the possibility of detoxifying oxygen radicals, is important in high-light plants (that experience high irradiances during midday) as well as in those plants that do see significant fluctuations in irradiance throughout the day (e.g. intertidal benthic plants). While low-light plants may lack those systems of down-regulation, one must remember that they do not live in environments of high irradiances, and so seldom or never experience high irradiances. […] If plants had a mind, one could say that it was worth it for them to invest in pigments, but unnecessary to invest in high amounts of Rubisco, when growing under low-light conditions, and necessary for high-light growing plants to invest in Rubisco, but not in pigments. Evolution has, of course, shaped these responses”.

“shallow-growing corals […] show two types of photoinhibition: a dynamic type that remedies itself at the end of each day and a more chronic type that persists over longer time periods. […] Bleaching of corals occurs when they expel their zooxanthellae to the surrounding water, after which they either die or acquire new zooxanthellae of other types (or clades) that are better adapted to the changes in the environment that caused the bleaching. […] Active Ci acquisition mechanisms, whether based on localised active H+ extrusion and acidification and enhanced CO2 supply, or on active transport of HCO3, are all energy requiring. As a consequence it is not surprising that the CCM activity is decreased at lower light levels […] a whole spectrum of light-responses can be found in seagrasses, and those are often in co-ordinance with the average daily irradiances where they grow. […] The function of chloroplast clumping in Halophila stipulacea appears to be protection of the chloroplasts from high irradiances. Thus, a few peripheral chloroplasts ‘sacrifice’ themselves for the good of many others within the clump that will be exposed to lower irradiances. […] While water is an effective filter of UV radiation (UVR)2, many marine organisms are sensitive to UVR and have devised ways to protect themselves against this harmful radiation. These ways include the production of UV-filtering compounds called mycosporine-like amino acids (MAAs), which is common also in seagrasses”.

“Many algae and seagrasses grow in the intertidal and are, accordingly, exposed to air during various parts of the day. On the one hand, this makes them amenable to using atmospheric CO2, the diffusion rate of which is some 10 000 times higher in air than in water. […] desiccation is […] the big drawback when growing in the intertidal, and excessive desiccation will lead to death. When some of the green macroalgae left the seas and formed terrestrial plants some 400 million years ago (the latter of which then ‘invaded’ Earth), there was a need for measures to evolve that on the one side ensured a water supply to the above-ground parts of the plants (i.e. roots1) and, on the other, hindered the water entering the plants to evaporate (i.e. a water-impermeable cuticle). Macroalgae lack those barriers against losing intracellular water, and are thus more prone to desiccation, the rate of which depends on external factors such as heat and humidity and internal factors such as thallus thickness. […] the mechanisms of desiccation tolerance in macroalgae is not well understood on the cellular level […] there seems to be a general correlation between the sensitivity of the photosynthetic apparatus (more than the respiratory one) to desiccation and the occurrence of macroalgae along a vertical gradient in the intertidal: the less sensitive (i.e. the more tolerant), the higher up the algae can grow. This is especially true if the sensitivity to desiccation is measured as a function of the ability to regain photosynthetic rates following rehydration during re-submergence. While this correlation exists, the mechanism of protecting the photosynthetic system against desiccation is largely unknown”.

July 28, 2015 Posted by | Biology, Books, Botany, Chemistry, Evolutionary biology, Microbiology | Leave a comment

Photosynthesis in the Marine Environment (II)

Here’s my first post about the book. I gave the book four stars on goodreads – here’s a link to my short goodreads review of the book.

As pointed out in the review, ‘it’s really mostly a biochemistry text.’ At least there’s a lot of that stuff in there (‘it gets better towards the end’, would be one way to put it – the last chapters deal mostly with other topics, such as measurement and brief notes on some not-particularly-well-explored ecological dynamics of potential interest), and if you don’t want to read a book which deals in some detail with topics and concepts like alkalinity, crassulacean acid metabolism, photophosphorylation, photosynthetic reaction centres, Calvin cycle (also known straightforwardly as the ‘reductive pentose phosphate cycle’…), enzymes with names like Ribulose-1,5-bisphosphate carboxylase/oxygenase (‘RuBisCO’ among friends…) and phosphoenolpyruvate carboxylase (‘PEP-case’ among friends…), mycosporine-like amino acid, 4,4′-Diisothiocyanatostilbene-2,2′-disulfonic acid (‘DIDS’ among friends), phosphoenolpyruvate, photorespiration, carbonic anhydrase, C4 carbon fixation, cytochrome b6f complex, … – well, you should definitely not read this book. If you do feel like reading about these sorts of things, having a look at the book seems to me a better idea than reading the wiki articles.

I’m not a biochemist but I could follow a great deal of what was going on in this book, which is perhaps a good indication of how well written the book is. This stuff’s interesting and complicated, and the authors cover most of it quite well. The book has way too much stuff for it to make sense to cover all of it here, but I do want to cover some more stuff from the book, so I’ve added some quotes below.

“Water velocities are central to marine photosynthetic organisms because they affect the transport of nutrients such as Ci [inorganic carbon] towards the photosynthesising cells, as well as the removal of by-products such as excess O2 during the day. Such bulk transport is especially important in aquatic media since diffusion rates there are typically some 10 000 times lower than in air […] It has been established that increasing current velocities will increase photosynthetic rates and, thus, productivity of macrophytes as long as they do not disrupt the thalli of macroalgae or the leaves of seagrasses”.

Photosynthesis is the process by which the energy of light is used in order to form energy-rich organic compounds from low-energy inorganic compounds. In doing so, electrons from water (H2O) reduce carbon dioxide (CO2) to carbohydrates. […] The process of photosynthesis can conveniently be separated into two parts: the ‘photo’ part in which light energy is converted into chemical energy bound in the molecule ATP and reducing power is formed as NADPH [another friend with a long name], and the ‘synthesis’ part in which that ATP and NADPH are used in order to reduce CO2 to sugars […]. The ‘photo’ part of photosynthesis is, for obvious reasons, also called its light reactions while the ‘synthesis’ part can be termed CO2-fixation and -reduction, or the Calvin cycle after one of its discoverers; this part also used to be called the ‘dark reactions’ [or light-independent reactions] of photosynthesis because it can proceed in vitro (= outside the living cell, e.g. in a test-tube) in darkness provided that ATP and NADPH are added artificially. […] ATP and NADPH are the energy source and reducing power, respectively, formed by the light reactions, that are subsequently used in order to reduce carbon dioxide (CO2) to sugars (synonymous with carbohydrates) in the Calvin cycle. Molecular oxygen (O2) is formed as a by-product of photosynthesis.”
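
The division of labour described here is conveniently summarised by the familiar overall equation (a textbook summary, not a quotation from the book), in which the light reactions supply the ATP and NADPH and release O2 from water, while the Calvin cycle uses them to reduce CO2:

\[
6\,\mathrm{CO_2} + 12\,\mathrm{H_2O} \;\xrightarrow{\text{light}}\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} + 6\,\mathrm{H_2O}
\]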

“In photosynthetic bacteria (such as the cyanobacteria), the light reactions are located at the plasma membrane and internal membranes derived as invaginations of the plasma membrane. […] most of the CO2-fixing enzyme ribulose-bisphosphate carboxylase/oxygenase […] is here located in structures termed carboxysomes. […] In all other plants (including algae), however, the entire process of photosynthesis takes place within intracellular compartments called chloroplasts which, as the name suggests, are chlorophyll-containing plastids (plastids are those compartments in cells that are associated with photosynthesis).”

“Photosynthesis can be seen as a process in which part of the radiant energy from sunlight is ‘harvested’ by plants in order to supply chemical energy for growth. The first step in such light harvesting is the absorption of photons by photosynthetic pigments[1]. The photosynthetic pigments are special in that they not only convert the energy of absorbed photons to heat (as do most other pigments), but largely convert photon energy into a flow of electrons; the latter is ultimately used to provide chemical energy to reduce CO2 to carbohydrates. […] Pigments are substances that can absorb different wavelengths selectively and so appear as the colour of those photons that are less well absorbed (and, therefore, are reflected, or transmitted, back to our eyes). (An object is black if all photons are absorbed, and white if none are absorbed.) In plants and animals, the pigment molecules within the cells and their organelles thus give them certain colours. The green colour of many plant parts is due to the selective absorption of chlorophylls […], while other substances give colour to, e.g. flowers or fruits. […] Chlorophyll is a major photosynthetic pigment, and chlorophyll a is present in all plants, including all algae and the cyanobacteria. […] The molecular sub-structure of the chlorophyll’s ‘head’ makes it absorb mainly blue and red light […], while green photons are hardly absorbed but, rather, reflected back to our eyes […] so that chlorophyll-containing plant parts look green. […] In addition to chlorophyll a, all plants contain carotenoids […] All these accessory pigments act to fill in the ‘green window’ generated by the chlorophylls’ non-absorbance in that band […] and, thus, broaden the spectrum of light that can be utilized […] beyond that absorbed by chlorophyll.”

“Photosynthesis is principally a redox process in which carbon dioxide (CO2) is reduced to carbohydrates (or, in a shorter word, sugars) by electrons derived from water. […] since water has an energy level (or redox potential) that is much lower than that of sugar, or, more precisely, than that of the compound that finally reduces CO2 to sugars (i.e. NADPH), it follows that energy must be expended in the process; this energy stems from the photons of light. […] Redox reactions are those reactions in which one compound, B, becomes reduced by receiving electrons from another compound, A, the latter then becomes oxidised by donating the electrons to B. The reduction of B can only occur if the electron-donating compound A has a higher energy level, or […] has a redox potential that is higher, or more negative in terms of electron volts, than that of compound B. The redox potential, or reduction potential, […] can thus be seen as a measure of the ease by which a compound can become reduced […] the greater the difference in redox potential between compounds B and A, the greater the tendency that B will be reduced by A. In photosynthesis, the redox potential of the compound that finally reduces CO2, i.e. NADPH, is more negative than that from which the electrons for this reduction stems, i.e. H2O, and the entire process can therefore not occur spontaneously. Instead, light energy is used in order to boost electrons from H2O through intermediary compounds to such high redox potentials that they can, eventually, be used for CO2 reduction. In essence, then, the light reactions of photosynthesis describe how photon energy is used to boost electrons from H2O to an energy level (or redox potential) high (or negative) enough to reduce CO2 to sugars.”
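
The size of that energetic ‘boost’ can be put into rough numbers using standard midpoint redox potentials (approximately +0.82 V for the O2/H2O couple and −0.32 V for NADP+/NADPH) and the usual ΔG = −nFΔE bookkeeping; these values and the calculation are textbook approximations, not taken from the book:

```python
# Rough free-energy cost of moving electrons from H2O to NADP+ (textbook
# midpoint potentials, approximate values; not figures from the book).

F = 96_485           # Faraday constant, C per mol of electrons
E_O2_H2O = +0.82     # V, O2/H2O couple (the electron donor)
E_NADP = -0.32       # V, NADP+/NADPH couple (the electron acceptor)

n = 2                                    # electrons transferred per NADPH formed
delta_E = E_NADP - E_O2_H2O              # -1.14 V: electrons must move "uphill"
delta_G = -n * F * delta_E               # J/mol; positive, so light energy is required

print(f"delta E = {delta_E:.2f} V")
print(f"energy input needed ~ {delta_G / 1000:.0f} kJ per mol NADPH formed")
```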

“Fluorescence in general is the generation of light (emission of photons) from the energy released during de-excitation of matter previously excited by electromagnetic energy. In photosynthesis, fluorescence occurs as electrons of chlorophyll undergo de-excitation, i.e. return to the original orbital from which they were knocked out by photons. […] there is an inverse (or negative) correlation between fluorescence yield (i.e. the amount of fluorescence generated per photons absorbed by chlorophyll) and photosynthetic yield (i.e. the amount of photosynthesis performed per photons similarly absorbed).”
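
This inverse relationship is what pulse-amplitude-modulated (PAM) fluorometry exploits in practice: the widely used Genty parameter takes the effective quantum yield of photosystem II as ΦPSII = (Fm′ − F)/Fm′. The readings below are invented numbers included only to show how the formula behaves, not measurements from the book:

```python
# Effective PSII quantum yield from chlorophyll fluorescence (Genty parameter).
# The fluorescence readings below are invented for illustration.

def phi_psii(f_steady, fm_prime):
    """Effective quantum yield of PSII: (Fm' - F) / Fm'."""
    return (fm_prime - f_steady) / fm_prime

readings = [
    ("low actinic light", 300.0, 1200.0),    # steady-state F, saturating-pulse Fm'
    ("high actinic light", 700.0, 900.0),
]
for label, f, fm in readings:
    print(f"{label:18s}:  phi_PSII = {phi_psii(f, fm):.2f}")
```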

“In some cases, more photon energy is received by a plant than can be used for photosynthesis, and this can lead to photo-inhibition or photo-damage […]. Therefore, many plants exposed to high irradiances possess ways of dissipating such excess light energy, the most well known of which is the xanthophyll cycle. In principle, energy is shuttled between various carotenoids collectively called xanthophylls and is, in the process, dissipated as heat.”

“In order to ‘fix’ CO2 (= incorporate it into organic matter within the cell) and reduce it to sugars, the NADPH and ATP formed in the light reactions are used in a series of chemical reactions that take place in the stroma of the chloroplasts (or, in prokaryotic autotrophs such as cyanobacteria, the cytoplasm of the cells); each reaction is catalysed by its specific enzyme, and the bottleneck for the production of carbohydrates is often considered to be the enzyme involved in its first step, i.e. the fixation of CO2 [this enzyme is RubisCO] […] These CO2-fixation and -reduction reactions are known as the Calvin cycle […] or the C3 cycle […] The latter name stems from the fact that the first stable product of CO2 fixation in the cycle is a 3-carbon compound called phosphoglyceric acid (PGA): Carbon dioxide in the stroma is fixed onto a 5-carbon sugar called ribulose-bisphosphate (RuBP) in order to form 2 molecules of PGA […] It should be noted that this reaction does not produce a reduced, energy-rich, carbon compound, but is only the first, ‘CO2– fixing’, step of the Calvin cycle. In subsequent steps, PGA is energized by the ATP formed through photophosphorylation and is reduced by NADPH […] to form a 3-carbon phosphorylated sugar […] here denoted simply as triose phosphate (TP); these reactions can be called the CO2-reduction step of the Calvin cycle […] 1/6 of the TPs formed leave the cycle while 5/6 are needed in order to re-form RuBP molecules in what we can call the regeneration part of the cycle […]; it is this recycling of most of the final product of the Calvin cycle (i.e. TP) to re-form RuBP that lends it to be called a biochemical ‘cycle’ rather than a pathway.”
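
The 1/6-exported, 5/6-recycled bookkeeping above can be checked against the standard per-‘turn’ stoichiometry of the cycle (3 CO2 fixed onto 3 RuBP gives 6 PGA, reduced to 6 TP for a total of 9 ATP and 6 NADPH). This is textbook accounting rather than a calculation taken from the book:

```python
# Carbon bookkeeping for one 'turn' of the Calvin cycle fixing 3 CO2
# (standard textbook stoichiometry, shown as a consistency check only).

CO2_FIXED = 3                          # one carboxylation per RuBP
RUBP_USED = 3                          # 5-carbon acceptor molecules consumed
PGA_FORMED = 2 * RUBP_USED             # 6 x 3-carbon PGA
TP_FORMED = PGA_FORMED                 # 6 TP after reduction (6 ATP + 6 NADPH)
TP_EXPORTED = 1                        # the 1/6 that leaves the cycle as net product
TP_RECYCLED = TP_FORMED - TP_EXPORTED  # the 5/6 that regenerate RuBP (3 more ATP)

assert CO2_FIXED + 5 * RUBP_USED == 3 * TP_EXPORTED + 5 * RUBP_USED  # carbon balances
assert 3 * TP_RECYCLED == 5 * RUBP_USED  # 5 TP (15 C) rebuild 3 RuBP (15 C)

print(f"net product per 3 CO2 fixed: {TP_EXPORTED} triose phosphate "
      f"({TP_EXPORTED}/{TP_FORMED} of those formed), cost: 9 ATP and 6 NADPH")
```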

“Rubisco […] not only functions as a carboxylase, but […] also acts as an oxygenase […] When Rubisco reacts with oxygen instead of CO2, only 1 molecule of PGA is formed together with 1 molecule of the 2-carbon compound phosphoglycolate […] Not only is there no gain in organic carbon by this reaction, but CO2 is actually lost in the further metabolism of phosphoglycolate, which comprises a series of reactions termed photorespiration […] While photorespiration is a complex process […] it is also an apparently wasteful one […] and it is not known why this process has evolved in plants altogether. […] Photorespiration can reduce the net photosynthetic production by up to 25%.”

“Because of Rubisco’s low affinity to CO2 as compared with the low atmospheric, and even lower intracellular, CO2 concentration […], systems have evolved in some plants by which CO2 can be concentrated at the vicinity of this enzyme; these systems are accordingly termed CO2 concentrating mechanisms (CCM). For terrestrial plants, this need for concentrating CO2 is exacerbated in those that grow in hot and/or arid areas where water needs to be saved by partly or fully closing stomata during the day, thus restricting also the influx of CO2 from an already CO2-limiting atmosphere. Two such CCMs exist in terrestrial plants: the C4 cycle and the Crassulacean acid metabolism (CAM) pathway. […] The C 4 cycle is called so because the first stable product of CO2-fixation is not the 3-carbon compound PGA (as in the Calvin cycle) but, rather, malic acid (often referred to by its anion malate) or aspartic acid (or its anion aspartate), both of which are 4-carbon compounds. […] C4 [terrestrial] plants are […] more common in areas of high temperature, especially when accompanied with scarce rains, than in areas with higher rainfall […] While atmospheric CO2 is fixed […] via the C4 cycle, it should be noted that this biochemical cycle cannot reduce CO2 to high energy containing sugars […] since the Calvin cycle is the only biochemical system that can reduce CO2 to energy-rich carbohydrates in plants, it follows that the CO2 initially fixed by the C4 cycle […] is finally reduced via the Calvin cycle also in C4 plants. In summary, the C 4 cycle can be viewed as being an additional CO2 sequesterer, or a biochemical CO2 ‘pump’, that concentrates CO2 for the rather inefficient enzyme Rubisco in C4 plants that grow under conditions where the CO2 supply is extremely limited because partly closed stomata restrict its influx into the photosynthesising cells.”

“Crassulacean acid metabolism (CAM) is similar to the C 4 cycle in that atmospheric CO2 […] is initially fixed via PEP-case into the 4-carbon compound malate. However, this fixation is carried out during the night […] The ecological advantage behind CAM metabolism is that a CAM plant can grow, or at least survive, under prolonged (sometimes months) conditions of severe water stress. […] CAM plants are typical of the desert flora, and include most cacti. […] The principal difference between C 4 and CAM metabolism is that in C4 plants the initial fixation of atmospheric CO2 and its final fixation and reduction in the Calvin cycle is separated in space (between mesophyll and bundle-sheath cells) while in CAM plants the two processes are separated in time (between the initial fixation of CO2 during the night and its re-fixation and reduction during the day).”

July 20, 2015 Posted by | Biology, Botany, Chemistry, Ecology, Microbiology | Leave a comment

Wikipedia articles of interest

i. Motte-and-bailey castle (‘good article’).

“A motte-and-bailey castle is a fortification with a wooden or stone keep situated on a raised earthwork called a motte, accompanied by an enclosed courtyard, or bailey, surrounded by a protective ditch and palisade. Relatively easy to build with unskilled, often forced labour, but still militarily formidable, these castles were built across northern Europe from the 10th century onwards, spreading from Normandy and Anjou in France, into the Holy Roman Empire in the 11th century. The Normans introduced the design into England and Wales following their invasion in 1066. Motte-and-bailey castles were adopted in Scotland, Ireland, the Low Countries and Denmark in the 12th and 13th centuries. By the end of the 13th century, the design was largely superseded by alternative forms of fortification, but the earthworks remain a prominent feature in many countries. […]

Various methods were used to build mottes. Where a natural hill could be used, scarping could produce a motte without the need to create an artificial mound, but more commonly much of the motte would have to be constructed by hand.[19] Four methods existed for building a mound and a tower: the mound could either be built first, and a tower placed on top of it; the tower could alternatively be built on the original ground surface and then buried within the mound; the tower could potentially be built on the original ground surface and then partially buried within the mound, the buried part forming a cellar beneath; or the tower could be built first, and the mound added later.[25]

Regardless of the sequencing, artificial mottes had to be built by piling up earth; this work was undertaken by hand, using wooden shovels and hand-barrows, possibly with picks as well in the later periods.[26] Larger mottes took disproportionately more effort to build than their smaller equivalents, because of the volumes of earth involved.[26] The largest mottes in England, such as Thetford, are estimated to have required up to 24,000 man-days of work; smaller ones required perhaps as little as 1,000.[27] […] Taking into account estimates of the likely available manpower during the period, historians estimate that the larger mottes might have taken between four and nine months to build.[29] This contrasted favourably with stone keeps of the period, which typically took up to ten years to build.[30] Very little skilled labour was required to build motte and bailey castles, which made them very attractive propositions if forced peasant labour was available, as was the case after the Norman invasion of England.[19] […]

The type of soil would make a difference to the design of the motte, as clay soils could support a steeper motte, whilst sandier soils meant that a motte would need a more gentle incline.[14] Where available, layers of different sorts of earth, such as clay, gravel and chalk, would be used alternatively to build in strength to the design.[32] Layers of turf could also be added to stabilise the motte as it was built up, or a core of stones placed as the heart of the structure to provide strength.[33] Similar issues applied to the defensive ditches, where designers found that the wider the ditch was dug, the deeper and steeper the sides of the scarp could be, making it more defensive. […]

Although motte-and-bailey castles are the best known castle design, they were not always the most numerous in any given area.[36] A popular alternative was the ringwork castle, involving a palisade being built on top of a raised earth rampart, protected by a ditch. The choice of motte and bailey or ringwork was partially driven by terrain, as mottes were typically built on low ground, and on deeper clay and alluvial soils.[37] Another factor may have been speed, as ringworks were faster to build than mottes.[38] Some ringwork castles were later converted into motte-and-bailey designs, by filling in the centre of the ringwork to produce a flat-topped motte. […]

In England, William invaded from Normandy in 1066, resulting in three phases of castle building in England, around 80% of which were in the motte-and-bailey pattern. […] around 741 motte-and-bailey castles [were built] in England and Wales alone. […] Many motte-and-bailey castles were occupied relatively briefly and in England many were being abandoned by the 12th century, and others neglected and allowed to lapse into disrepair.[96] In the Low Countries and Germany, a similar transition occurred in the 13th and 14th centuries. […] One factor was the introduction of stone into castle building. The earliest stone castles had emerged in the 10th century […] Although wood was a more powerful defensive material than was once thought, stone became increasingly popular for military and symbolic reasons.”

ii. Battle of Midway (featured). Lots of good stuff in there. One aspect I had not been aware of beforehand was that Allied codebreakers played a key role here as well (I was already quite familiar with the work of Turing and others at Bletchley Park):

“Admiral Nimitz had one priceless advantage: cryptanalysts had partially broken the Japanese Navy’s JN-25b code.[45] Since the early spring of 1942, the US had been decoding messages stating that there would soon be an operation at objective “AF”. It was not known where “AF” was, but Commander Joseph J. Rochefort and his team at Station HYPO were able to confirm that it was Midway; Captain Wilfred Holmes devised a ruse of telling the base at Midway (by secure undersea cable) to broadcast an uncoded radio message stating that Midway’s water purification system had broken down.[46] Within 24 hours, the code breakers picked up a Japanese message that “AF was short on water.”[47] HYPO was also able to determine the date of the attack as either 4 or 5 June, and to provide Nimitz with a complete IJN order of battle.[48] Japan had a new codebook, but its introduction had been delayed, enabling HYPO to read messages for several crucial days; the new code, which had not yet been cracked, came into use shortly before the attack began, but the important breaks had already been made.[49][nb 8]

As a result, the Americans entered the battle with a very good picture of where, when, and in what strength the Japanese would appear. Nimitz knew that the Japanese had negated their numerical advantage by dividing their ships into four separate task groups, all too widely separated to be able to support each other.[50][nb 9] […] The Japanese, by contrast, remained almost totally unaware of their opponent’s true strength and dispositions even after the battle began.[27] […] Four Japanese aircraft carriers — Akagi, Kaga, Soryu and Hiryu, all part of the six-carrier force that had attacked Pearl Harbor six months earlier — and a heavy cruiser were sunk at a cost of the carrier Yorktown and a destroyer. After Midway and the exhausting attrition of the Solomon Islands campaign, Japan’s capacity to replace its losses in materiel (particularly aircraft carriers) and men (especially well-trained pilots) rapidly became insufficient to cope with mounting casualties, while the United States’ massive industrial capabilities made American losses far easier to bear. […] The Battle of Midway has often been called “the turning point of the Pacific”.[140] However, the Japanese continued to try to secure more strategic territory in the South Pacific, and the U.S. did not move from a state of naval parity to one of increasing supremacy until after several more months of hard combat.[141] Thus, although Midway was the Allies’ first major victory against the Japanese, it did not radically change the course of the war. Rather, it was the cumulative effects of the battles of Coral Sea and Midway that reduced Japan’s ability to undertake major offensives.[9]

One thing which really strikes you (well, struck me) when reading this stuff is how incredibly capital-intensive the war at sea really was; this was one of the most important sea battles of the Second World War, yet the total Japanese death toll at Midway was just 3,057. To put that number into perspective, it is significantly smaller than the average number of people killed each day in Stalingrad (according to one estimate, the Soviets alone suffered 478,741 killed or missing during those roughly 5 months (~150 days), which comes out at roughly 3000/day).

iii. History of time-keeping devices (featured). ‘Exactly what it says on the tin’, as they’d say on TV Tropes.

[Figure: diagram of a clepsydra (water clock)]
It took a long time to get from where we were to where we are today; the horologists of the past faced a lot of problems you’ve most likely never even thought about. What do you do, for example, if your ingenious water clock has trouble keeping time because variation in water temperature causes issues? Well, you use mercury instead of water, of course! (“Since Yi Xing’s clock was a water clock, it was affected by temperature variations. That problem was solved in 976 by Zhang Sixun by replacing the water with mercury, which remains liquid down to −39 °C (−38 °F).”).

iv. Microbial metabolism.

Microbial metabolism is the means by which a microbe obtains the energy and nutrients (e.g. carbon) it needs to live and reproduce. Microbes use many different types of metabolic strategies and species can often be differentiated from each other based on metabolic characteristics. The specific metabolic properties of a microbe are the major factors in determining that microbe’s ecological niche, and often allow for that microbe to be useful in industrial processes or responsible for biogeochemical cycles. […]

All microbial metabolisms can be arranged according to three principles:

1. How the organism obtains carbon for synthesising cell mass: autotrophic (carbon is obtained from carbon dioxide) or heterotrophic (carbon is obtained from organic compounds).

2. How the organism obtains reducing equivalents used either in energy conservation or in biosynthetic reactions: lithotrophic (reducing equivalents are obtained from inorganic compounds) or organotrophic (reducing equivalents are obtained from organic compounds).

3. How the organism obtains energy for living and growing: chemotrophic (energy is obtained from chemical compounds) or phototrophic (energy is obtained from light).

In practice, these terms are almost freely combined. […] Most microbes are heterotrophic (more precisely chemoorganoheterotrophic), using organic compounds as both carbon and energy sources. […] Heterotrophic microbes are extremely abundant in nature and are responsible for the breakdown of large organic polymers such as cellulose, chitin or lignin which are generally indigestible to larger animals. Generally, the breakdown of large polymers to carbon dioxide (mineralization) requires several different organisms, with one breaking down the polymer into its constituent monomers, one able to use the monomers and excreting simpler waste compounds as by-products, and one able to use the excreted wastes. There are many variations on this theme, as different organisms are able to degrade different polymers and secrete different waste products. […]

Biochemically, prokaryotic heterotrophic metabolism is much more versatile than that of eukaryotic organisms, although many prokaryotes share the most basic metabolic models with eukaryotes, e. g. using glycolysis (also called EMP pathway) for sugar metabolism and the citric acid cycle to degrade acetate, producing energy in the form of ATP and reducing power in the form of NADH or quinols. These basic pathways are well conserved because they are also involved in biosynthesis of many conserved building blocks needed for cell growth (sometimes in reverse direction). However, many bacteria and archaea utilize alternative metabolic pathways other than glycolysis and the citric acid cycle. […] The metabolic diversity and ability of prokaryotes to use a large variety of organic compounds arises from the much deeper evolutionary history and diversity of prokaryotes, as compared to eukaryotes. […]

Many microbes (phototrophs) are capable of using light as a source of energy to produce ATP and organic compounds such as carbohydrates, lipids, and proteins. Of these, algae are particularly significant because they are oxygenic, using water as an electron donor for electron transfer during photosynthesis.[11] Phototrophic bacteria are found in the phyla Cyanobacteria, Chlorobi, Proteobacteria, Chloroflexi, and Firmicutes.[12] Along with plants these microbes are responsible for all biological generation of oxygen gas on Earth. […] As befits the large diversity of photosynthetic bacteria, there are many different mechanisms by which light is converted into energy for metabolism. All photosynthetic organisms locate their photosynthetic reaction centers within a membrane, which may be invaginations of the cytoplasmic membrane (Proteobacteria), thylakoid membranes (Cyanobacteria), specialized antenna structures called chlorosomes (Green sulfur and non-sulfur bacteria), or the cytoplasmic membrane itself (heliobacteria). Different photosynthetic bacteria also contain different photosynthetic pigments, such as chlorophylls and carotenoids, allowing them to take advantage of different portions of the electromagnetic spectrum and thereby inhabit different niches. Some groups of organisms contain more specialized light-harvesting structures (e.g. phycobilisomes in Cyanobacteria and chlorosomes in Green sulfur and non-sulfur bacteria), allowing for increased efficiency in light utilization. […]

Most photosynthetic microbes are autotrophic, fixing carbon dioxide via the Calvin cycle. Some photosynthetic bacteria (e.g. Chloroflexus) are photoheterotrophs, meaning that they use organic carbon compounds as a carbon source for growth. Some photosynthetic organisms also fix nitrogen […] Nitrogen is an element required for growth by all biological systems. While extremely common (80% by volume) in the atmosphere, dinitrogen gas (N2) is generally biologically inaccessible due to its high activation energy. Throughout all of nature, only specialized bacteria and Archaea are capable of nitrogen fixation, converting dinitrogen gas into ammonia (NH3), which is easily assimilated by all organisms.[14] These prokaryotes, therefore, are very important ecologically and are often essential for the survival of entire ecosystems. This is especially true in the ocean, where nitrogen-fixing cyanobacteria are often the only sources of fixed nitrogen, and in soils, where specialized symbioses exist between legumes and their nitrogen-fixing partners to provide the nitrogen needed by these plants for growth.

Nitrogen fixation can be found distributed throughout nearly all bacterial lineages and physiological classes but is not a universal property. Because the enzyme nitrogenase, responsible for nitrogen fixation, is very sensitive to oxygen which will inhibit it irreversibly, all nitrogen-fixing organisms must possess some mechanism to keep the concentration of oxygen low. […] The production and activity of nitrogenases is very highly regulated, both because nitrogen fixation is an extremely energetically expensive process (16–24 ATP are used per N2 fixed) and due to the extreme sensitivity of the nitrogenase to oxygen.” (A lot of the stuff above was of course for me either review or closely related to stuff I’ve already read in the coverage provided in Beer et al., a book I’ve talked about before here on the blog).
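
To put ‘extremely energetically expensive’ into rough numbers: taking the 16–24 ATP per N2 quoted above together with an assumed in vivo free energy of ATP hydrolysis of roughly 50 kJ/mol (a rounded textbook figure, not from the article), fixing a single mole of N2 costs on the order of 1,000 kJ:

```python
# Back-of-the-envelope energy cost of biological nitrogen fixation.
# The ATP-per-N2 range is quoted in the article; ~50 kJ per mol ATP under
# cellular conditions is an assumed, rounded textbook value.

KJ_PER_ATP = 50.0

for atp_per_n2 in (16, 24):
    print(f"{atp_per_n2} ATP per N2  ->  ~{atp_per_n2 * KJ_PER_ATP:.0f} kJ per mol N2 fixed")
```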

v. Uranium (featured). It’s hard to know what to include here as the article has a lot of stuff, but I found this part in particular, well, interesting:

“During the Cold War between the Soviet Union and the United States, huge stockpiles of uranium were amassed and tens of thousands of nuclear weapons were created using enriched uranium and plutonium made from uranium. Since the break-up of the Soviet Union in 1991, an estimated 600 short tons (540 metric tons) of highly enriched weapons grade uranium (enough to make 40,000 nuclear warheads) have been stored in often inadequately guarded facilities in the Russian Federation and several other former Soviet states.[12] Police in Asia, Europe, and South America on at least 16 occasions from 1993 to 2005 have intercepted shipments of smuggled bomb-grade uranium or plutonium, most of which was from ex-Soviet sources.[12] From 1993 to 2005 the Material Protection, Control, and Accounting Program, operated by the federal government of the United States, spent approximately US $550 million to help safeguard uranium and plutonium stockpiles in Russia.[12] This money was used for improvements and security enhancements at research and storage facilities. Scientific American reported in February 2006 that in some of the facilities security consisted of chain link fences which were in severe states of disrepair. According to an interview from the article, one facility had been storing samples of enriched (weapons grade) uranium in a broom closet before the improvement project; another had been keeping track of its stock of nuclear warheads using index cards kept in a shoe box.[45]

Some other observations from the article below:

“Uranium is a naturally occurring element that can be found in low levels within all rock, soil, and water. Uranium is the 51st element in order of abundance in the Earth’s crust. Uranium is also the highest-numbered element to be found naturally in significant quantities on Earth and is almost always found combined with other elements.[10] Along with all elements having atomic weights higher than that of iron, it is only naturally formed in supernovae.[46] The decay of uranium, thorium, and potassium-40 in the Earth’s mantle is thought to be the main source of heat[47][48] that keeps the outer core liquid and drives mantle convection, which in turn drives plate tectonics. […]

“Natural uranium consists of three major isotopes: uranium-238 (99.28% natural abundance), uranium-235 (0.71%), and uranium-234 (0.0054%). […] Uranium-238 is the most stable isotope of uranium, with a half-life of about 4.468×10⁹ years, roughly the age of the Earth. Uranium-235 has a half-life of about 7.13×10⁸ years, and uranium-234 has a half-life of about 2.48×10⁵ years.[82] For natural uranium, about 49% of its alpha rays are emitted by each of 238U atom, and also 49% by 234U (since the latter is formed from the former) and about 2.0% of them by the 235U. When the Earth was young, probably about one-fifth of its uranium was uranium-235, but the percentage of 234U was probably much lower than this. […]

Worldwide production of U3O8 (yellowcake) in 2013 amounted to 70,015 tonnes, of which 22,451 t (32%) was mined in Kazakhstan. Other important uranium mining countries are Canada (9,331 t), Australia (6,350 t), Niger (4,518 t), Namibia (4,323 t) and Russia (3,135 t).[55] […] Australia has 31% of the world’s known uranium ore reserves[61] and the world’s largest single uranium deposit, located at the Olympic Dam Mine in South Australia.[62] There is a significant reserve of uranium in Bakouma a sub-prefecture in the prefecture of Mbomou in Central African Republic. […] Uranium deposits seem to be log-normal distributed. There is a 300-fold increase in the amount of uranium recoverable for each tenfold decrease in ore grade.[75] In other words, there is little high grade ore and proportionately much more low grade ore available.”
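
The claim that 238U and 234U each account for about 49% of natural uranium’s alpha activity (and 235U for about 2%) follows directly from the abundances and half-lives quoted above, since a nuclide’s activity scales as its atom fraction divided by its half-life; the sketch below simply redoes that arithmetic:

```python
# Share of natural uranium's alpha activity by isotope. Activity is proportional
# to (atom fraction / half-life); abundances and half-lives are those quoted above.

isotopes = {
    "U-238": (0.9928, 4.468e9),    # atom fraction, half-life in years
    "U-235": (0.0071, 7.13e8),
    "U-234": (0.000054, 2.48e5),
}

relative_activity = {name: frac / t_half for name, (frac, t_half) in isotopes.items()}
total = sum(relative_activity.values())

for name, activity in relative_activity.items():
    print(f"{name}: {activity / total:5.1%} of the total alpha activity")
```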

vi. Radiocarbon dating (featured).

Radiocarbon dating (also referred to as carbon dating or carbon-14 dating) is a method of determining the age of an object containing organic material by using the properties of radiocarbon (14C), a radioactive isotope of carbon. The method was invented by Willard Libby in the late 1940s and soon became a standard tool for archaeologists. Libby received the Nobel Prize for his work in 1960. The radiocarbon dating method is based on the fact that radiocarbon is constantly being created in the atmosphere by the interaction of cosmic rays with atmospheric nitrogen. The resulting radiocarbon combines with atmospheric oxygen to form radioactive carbon dioxide, which is incorporated into plants by photosynthesis; animals then acquire 14C by eating the plants. When the animal or plant dies, it stops exchanging carbon with its environment, and from that point onwards the amount of 14C it contains begins to reduce as the 14C undergoes radioactive decay. Measuring the amount of 14C in a sample from a dead plant or animal such as piece of wood or a fragment of bone provides information that can be used to calculate when the animal or plant died. The older a sample is, the less 14C there is to be detected, and because the half-life of 14C (the period of time after which half of a given sample will have decayed) is about 5,730 years, the oldest dates that can be reliably measured by radiocarbon dating are around 50,000 years ago, although special preparation methods occasionally permit dating of older samples.

The idea behind radiocarbon dating is straightforward, but years of work were required to develop the technique to the point where accurate dates could be obtained. […]

The development of radiocarbon dating has had a profound impact on archaeology. In addition to permitting more accurate dating within archaeological sites than did previous methods, it allows comparison of dates of events across great distances. Histories of archaeology often refer to its impact as the “radiocarbon revolution”.”

I’ve read about these topics before in a textbook setting (e.g. here), but/and I should note that the article provides quite detailed coverage and I think most people will encounter some new information by having a look at it even if they’re superficially familiar with this topic. The article has a lot of stuff about e.g. ‘what you need to correct for’, which some of you might find interesting.
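
The age calculation at the heart of the method is simple exponential decay: with a half-life of about 5,730 years, an (uncalibrated) age follows from the measured fraction of 14C remaining as t = (T½/ln 2)·ln(N0/N). A minimal sketch that ignores calibration, reservoir effects and all the corrections the article goes into:

```python
import math

# Uncalibrated radiocarbon age from the fraction of 14C remaining.
# Ignores calibration and the many corrections discussed in the article.

HALF_LIFE_YEARS = 5730.0

def radiocarbon_age(fraction_remaining):
    """Age in years from N/N0, assuming simple exponential decay."""
    return HALF_LIFE_YEARS / math.log(2) * math.log(1.0 / fraction_remaining)

for fraction in (0.5, 0.25, 0.1, 0.01, 0.002):
    print(f"{fraction:6.3f} of the original 14C left  ->  ~{radiocarbon_age(fraction):7,.0f} years")
```

With only ~0.2% of the original 14C left the computed age comes out around 51,000 years, which matches the ~50,000-year practical limit mentioned in the quoted passage.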

vii. Raccoon (featured). One interesting observation from the article:

“One aspect of raccoon behavior is so well known that it gives the animal part of its scientific name, Procyon lotor; “lotor” is neo-Latin for “washer”. In the wild, raccoons often dabble for underwater food near the shore-line. They then often pick up the food item with their front paws to examine it and rub the item, sometimes to remove unwanted parts. This gives the appearance of the raccoon “washing” the food. The tactile sensitivity of raccoons’ paws is increased if this rubbing action is performed underwater, since the water softens the hard layer covering the paws.[126] However, the behavior observed in captive raccoons in which they carry their food to water to “wash” or douse it before eating has not been observed in the wild.[127] Naturalist Georges-Louis Leclerc, Comte de Buffon, believed that raccoons do not have adequate saliva production to moisten food thereby necessitating dousing, but this hypothesis is now considered to be incorrect.[128] Captive raccoons douse their food more frequently when a watering hole with a layout similar to a stream is not farther away than 3 m (10 ft).[129] The widely accepted theory is that dousing in captive raccoons is a fixed action pattern from the dabbling behavior performed when foraging at shores for aquatic foods.[130] This is supported by the observation that aquatic foods are doused more frequently. Cleaning dirty food does not seem to be a reason for “washing”.[129] Experts have cast doubt on the veracity of observations of wild raccoons dousing food.[131]

And here’s another interesting set of observations:

“In Germany—where the racoon is called the Waschbär (literally, “wash-bear” or “washing bear”) due to its habit of “dousing” food in water—two pairs of pet raccoons were released into the German countryside at the Edersee reservoir in the north of Hesse in April 1934 by a forester upon request of their owner, a poultry farmer.[186] He released them two weeks before receiving permission from the Prussian hunting office to “enrich the fauna.” [187] Several prior attempts to introduce raccoons in Germany were not successful.[188] A second population was established in eastern Germany in 1945 when 25 raccoons escaped from a fur farm at Wolfshagen, east of Berlin, after an air strike. The two populations are parasitologically distinguishable: 70% of the raccoons of the Hessian population are infected with the roundworm Baylisascaris procyonis, but none of the Brandenburgian population has the parasite.[189] The estimated number of raccoons was 285 animals in the Hessian region in 1956, over 20,000 animals in the Hessian region in 1970 and between 200,000 and 400,000 animals in the whole of Germany in 2008.[158][190] By 2012 it was estimated that Germany now had more than a million raccoons.[191]

June 14, 2015 Posted by | Archaeology, Biology, Botany, Engineering, Geology, History, Microbiology, Physics, Wikipedia, Zoology

Photosynthesis in the Marine Environment (I)

I’m currently reading this book. Below some observations from part 1.

“The term autotroph is usually associated with the photosynthesising plants (including algae and cyanobacteria) and heterotroph with animals and some other groups of organisms that need to be provided high-energy containing organic foods (e.g. the fungi and many bacteria). However, many exceptions exist: Some plants are parasitic and may be devoid of chlorophyll and, thus, lack photosynthesis altogether, and some animals contain chloroplasts or photosynthesising algae or cyanobacteria and may function, in part, autotrophically; some corals rely on the photosynthetic algae within their bodies to the extent that they don’t have to eat at all […] If some plants are heterotrophic and some animals autotrophic, what then differentiates plants from animals? It is usually said that what differs the two groups is the absence (animals) or presence (plants) of a cell wall. The cell wall is deposited outside the cell membrane in plants, and forms a type of exo-skeleton made of polysaccharides (e.g. cellulose or agar in some red algae, or silica in the case of diatoms) that renders rigidity to plant cells and to the whole plant.”

“For the autotrophs, […] there was an advantage if they could live close to the shores where inorganic nutrient concentrations were higher (because of mineral-rich runoffs from land) than in the upper water layer of off-shore locations. However, living closer to shore also meant greater effects of wave action, which would alter, e.g. the light availability […]. Under such conditions, there would be an advantage to be able to stay put in the seawater, and under those conditions it is thought that filamentous photosynthetic organisms were formed from autotrophic cells (ca. 650 million years ago), which eventually resulted in macroalgae (some 450 million years ago) featuring holdfast tissues that could adhere them to rocky substrates. […] Very briefly now, the green macroalgae were the ancestors of terrestrial plants, which started to invade land ca. 400 million years ago (followed by the animals).”

“Marine ‘plants’ (= all photoautotrophic organisms of the seas) can be divided into phytoplankton (‘drifters’, mostly unicellular) and phytobenthos (connected to the bottom, mostly multicellular/macroscopic). The phytoplankton can be divided into cyanobacteria (prokaryotic) and microalgae (eukaryotic) […]. The phytobenthos can be divided into macroalgae and seagrasses (marine angiosperms, which invaded the shallow seas some 90 million years ago). The micro- and macro-algae are divided into larger groups as based largely on their pigment composition [e.g. ‘red algae’, ‘brown algae’, …]

There are some 150 currently recognised species of marine cyanobacteria, ∼20 000 species of eukaryotic microalgae, several thousand species of macroalgae and 50(!) species of seagrasses. Altogether these marine plants are accountable for approximately half of Earth’s photosynthetic (or primary) production.

The abiotic factors that are conducive to photosynthesis and plant growth in the marine environment differ from those of terrestrial environments mainly with regard to light and inorganic carbon (Ci) sources. Light is strongly attenuated in the marine environment by absorption and scatter […] While terrestrial plants rely on atmospheric CO2 for their photosynthesis, marine plants utilise largely the >100 times higher concentration of HCO3− as the main Ci source for their photosynthetic needs. Nutrients other than CO2, that may limit plant growth in the marine environment include nitrogen (N), phosphorus (P), iron (Fe) and, for the diatoms, silica (Si).”

“The conversion of the plentiful atmospheric N2 gas (∼78% in air) into bio-available N-rich cellular constituents is a fundamental process that sustains life on Earth. For unknown reasons this process is restricted to selected representatives among the prokaryotes: archaea and bacteria. N2 fixing organisms, also termed diazotrophs (dia = two; azo = nitrogen), are globally wide-spread in terrestrial and aquatic environments, from polar regions to hot deserts, although their abundance varies widely. [Why is nitrogen important, I hear you ask? Well, when you hear the word ‘nitrogen’ in biology texts, think ‘protein’ – “Because nitrogen is relatively easy to measure and protein is not, protein content is often estimated by assaying organic nitrogen, which comprises from 15 to 18% of plant proteins” (Herrera et al.) – see this post.] […] Cyanobacteria dominate marine diazotrophs and occupy large segments of marine open waters […] sustained N2 fixation […] is a highly energy-demanding process. […] in all diazotrophs, the nitrogenase enzyme complex […] of marine cyanobacteria requires high Fe levels […] Another key nutrient is phosphorus […] which has a great impact on growth and N2 fixation in marine cyanobacteria. […] Recent model-based estimates of N2 fixation suggest that unicellular cyanobacteria contribute significantly to global ocean N budgets.”

“For convenience, we often divide the phytoplankton into different size classes, the pico-phytoplankton (0.2–2 μm effective cell diameter, ECD); the nanophytoplankton (2–20 μm ECD) and the microphytoplankton (20–200 μm ECD). […] most of the major marine microalgal groups are found in all three size classes […] a 2010 paper estimates that these plants utilise 46 Gt carbon yearly, which can be divided into 15 Gt for the microphytoplankton, 20 Gt for the nanophytoplankton and 11 Gt for the picophytoplankton. Thus, the very small (nano- + pico-forms) of phytoplankton (including cyanobacterial forms) contribute 2/3 of the overall planktonic production (which, again, constitutes about half of the global production).”
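As a quick sanity check of that 2/3 figure (my own back-of-the-envelope arithmetic on the numbers quoted above, not something taken from the book):

$$\frac{20\,\text{Gt} + 11\,\text{Gt}}{46\,\text{Gt}} = \frac{31}{46} \approx 0.67,$$

i.e. the nano- and picophytoplankton together do account for roughly two-thirds of the estimated 46 Gt of carbon fixed yearly by the phytoplankton, with the microphytoplankton contributing the remaining third.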

“Many primarily non-photosynthetic organisms have developed symbioses with microalgae and cyanobacteria; these photosynthetic intruders are here referred to as photosymbionts. […] Most photosymbionts are endosymbiotic (living within the host) […] In almost all cases, these micro-algae are in symbiosis with invertebrates. Here the alga provides the animal with organic products of photosynthesis, while the invertebrate host can supply CO2 and other inorganic nutrients including nitrogen and phosphorus to the alga […]. In cases where cyanobacteria form the photosymbiont, their ‘caloric’ nutritional value is more questionable, and they may instead produce toxins that deter other animals from eating the host […] Many reef-building […] corals contain symbiotic zooxanthellae within the digestive cavity of their polyps, and in general corals that have symbiotic algae grow much faster than those without them. […] The loss of zooxanthellae from the host is known as coral bleaching […] Certain sea slugs contain functional chloroplasts that were ingested (but not digested) as part of larger algae […]. After digesting the rest of the alga, these chloroplasts are imbedded within the slugs’ digestive tract in a process called kleptoplasty (the ‘stealing’ of plastids). Even though this is not a true symbiosis (the chloroplasts are not organisms and do not gain anything from the association), the photosynthetic activity aids in the nutrition of the slugs for up to several months, thus either complementing their nutrition or carrying them through periods when food is scarce or absent.”

“90–100 million years ago, when there was a rise in seawater levels, some of the grasses that grew close to the seashores found themselves submerged in seawater. One piece of evidence that supports [the] terrestrial origin [of marine angiosperms] can be seen in the fact that residues of stomata can be found at the base of the leaves. In terrestrial plants, the stomata restrict water loss from the leaves, but since seagrasses are principally submerged in a liquid medium, the stomata became absent in the bulk parts of the leaves. These marine angiosperms, or seagrasses, thus evolved from those coastal grasses that successfully managed to adapt to being submerged in saline waters. Another theory has it that the ancestors of seagrasses were freshwater plants that, therefore, only had to adapt to water of a higher salinity. In both cases, the seagrasses exemplify a successful readaptation to marine life […] While there may exist some 20 000 or more species of macroalgae […], there are only some 50 species of seagrasses, most of which are found in tropical seas. […] the ability to extract nutrients from the sediment renders the seagrasses at an advantage over (the root-less) macroalgae in nutrient-poor waters. […] one of the basic differences in habitat utilisation between macroalgae and seagrasses is that the former usually grow on rocky substrates where they are held in place by their holdfasts, while seagrasses inhabit softer sediments where they are held in place by their root systems. Unlike macroalgae, where the whole plant surface is photosynthetically active, large proportions of seagrass plants are comprised of the non-photosynthetic roots and rhizomes. […] This means […] that seagrasses need more light in order to survive than do many algae […] marine plants usually contain less structural tissues than their terrestrial counterparts”.

“if we define ‘visible light’ as the electromagnetic wave upon which those energy-containing particles called quanta ‘ride’ that cause vision in higher animals (those quanta are also called photons) and compare it with light that causes photosynthesis, we find, interestingly, that the two processes use approximately the same wavelengths: While mammals largely use the 380–750 nm (nm = 10−9 m) wavelength band for vision, plants use the 400–700-nm band for photosynthesis; the latter is therefore also termed photosynthetically active radiation (PAR […] If a student asks “but how come that animals and plants use almost identical wavelengths of radiation for so very different purposes?”, my answer is “sorry, but we don’t have the time to discuss that now”, meaning that while I think it has to do with too high and too low quantum energies below and above those wavelengths, I really don’t know.”

“energy (E) of a photon is inversely proportional to its wavelength […] a blue photon of 400 nm wavelength contains almost double the energy of a red one of 700 nm, while the photons of PAR between those two extremes carry decreasing energies as wavelengths increase. Accordingly, low-energy photons (i.e. of high wavelengths, e.g. those of reddish light) are absorbed to a greater extent by water molecules along a depth gradient than are photons of higher energy (i.e. lower wavelengths, e.g. bluish light), and so the latter penetrate deeper down in clear oceanic waters […] In water, the spectral distribution of PAR reaching a plant is different from that on land. This is because water not only attenuates the light intensity (or, more correctly, the photon flux, or irradiance […]), but, as mentioned above and detailed below, the attenuation with depth is wavelength dependent; therefore, plants living in the oceans will receive different spectra of light dependent on depth […] The two main characteristics of seawater that determine the quantity and quality of the irradiance penetrating to a certain depth are absorption and scatter. […] Light absorption in the oceans is a property of the water molecules, which absorb photons according to their energy […] Thus, red photons of low energy are more readily absorbed than, e.g. blue ones; only <1% of the incident red photons (calculated for 650 nm) penetrate to 20 m depth in clear waters while some 60% of the blue photons (450 nm) remain at that depth. […] Scatter […] is mainly caused by particles suspended in the water column (rather than by the water molecules themselves, although they too scatter light a little). Unlike absorption, scatter affects short-wavelength photons more than long-wavelength ones […] in turbid waters, photons of decreasing wavelengths are increasingly scattered. Since water molecules are naturally also present, they absorb the higher wavelengths, and the colours penetrating deepest in turbid waters are those between the highly scattered blue and highly absorbed red, e.g. green. The greenish colour of many coastal waters is therefore often due not only to the presence of chlorophyll-containing phytoplankton, but because, again, reddish photons are absorbed, bluish photons are scattered, and the midspectrum (i.e. green) fills the bulk part of the water column.”
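Two of the quantitative claims in that passage are easy to check with standard formulas. Neither calculation is from the book: the photon-energy comparison just uses E = hc/λ, and for the depth figures I’m assuming the usual exponential (Beer–Lambert-type) attenuation model, which the excerpt doesn’t state explicitly.

$$\frac{E_{400\,\text{nm}}}{E_{700\,\text{nm}}} = \frac{hc/400\,\text{nm}}{hc/700\,\text{nm}} = \frac{700}{400} = 1.75,$$

so a 400 nm (blue) photon indeed carries almost twice the energy of a 700 nm (red) one. And if irradiance falls off with depth as $I(z) = I_0 e^{-K_d z}$, the quoted depth figures imply attenuation coefficients of roughly

$$K_d^{650\,\text{nm}} \approx -\frac{\ln(0.01)}{20\,\text{m}} \approx 0.23\ \text{m}^{-1}, \qquad K_d^{450\,\text{nm}} \approx -\frac{\ln(0.6)}{20\,\text{m}} \approx 0.026\ \text{m}^{-1}$$

for clear oceanic water – i.e. red light is attenuated roughly an order of magnitude faster than blue light.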

“the open ocean, several kilometres or miles from the shore, almost always appears as blue. The reason for this is that in unpolluted, particle-free, waters, the preferential absorption of long-wavelength (low-energy) photons is what mainly determines the spectral distribution of light attenuation. Thus, short-wavelength (high-energy) bluish photons penetrate deepest and ‘fill up’ the bulk of the water column with their colour. Since water molecules also scatter a small proportion of those photons […], it follows that these largely water-penetrating photons are eventually also reflected back to our eyes. Or, in other words, out of the very low scattering in clear oceanic waters, the photons available to be scattered and, thus, reflected to our eyes, are mainly the bluish ones, and that is why the clear deep oceans look blue. (It is often said that the oceans are blue because the blue sky is reflected by the water surface. However, sailors will testify to the truism that the oceans are also deep blue in heavily overcast weathers, and so that explanation of the general blueness of the oceans is not valid.)”

“Although marine plants can be found in a wide range of temperature regimes, from the tropics to polar regions, the large bodies of water that are the environment for most marine plants have relatively constant temperatures, at least on a day-to-day basis. […] For marine plants that are found in intertidal regions, however, temperature variation during a single day can be very high as the plants find themselves alternately exposed to air […] Marine plants from tropical and temperate regions tend to have distinct temperature ranges for growth […] and growth optima. […] among most temperate species of microalgae, temperature optima for growth are in the range 18–25 °C, while some Antarctic diatoms show optima at 4–6 °C with no growth above a critical temperature of 7–12 °C. By contrast, some tropical diatoms will not grow below 15–17 °C. Similar responses are found in macroalgae and seagrasses. However, although some marine plants have a restricted temperature range for growth (so-called stenothermal species; steno = narrow and thermal relates to temperature), most show some growth over a broad range of temperatures and can be considered eurythermal (eury = wide).”

June 4, 2015 Posted by | Biology, Books, Botany, Ecology, Evolutionary biology, Microbiology, Physics, Zoology

Chlamydia and gonorrhea…

Below some observations from Holmes et al.’s chapters about the sexually transmitted bacterial infections chlamydia and gonorrhea. A few of these chapters covered some really complicated stuff, but I’ve tried to keep the coverage reasonably readable by avoiding many of the technical details. I’ve also tried to make the excerpts easier to read by adding relevant links and by adding brief explanations of specific terms in brackets where this approach seemed like it might be helpful.

“Since the early 1970s, Chlamydia trachomatis has been recognized as a genital pathogen responsible for an increasing variety of clinical syndromes, many closely resembling infections caused by Neisseria gonorrhoeae […]. Because many practitioners have lacked access to facilities for laboratory testing for chlamydia, these infections often have been diagnosed and treated without benefit of microbiological confirmation. Newer, molecular diagnostic tests have in part now addressed this problem […] Unfortunately, many chlamydial infections, particularly in women, are difficult to diagnose clinically and elude detection because they produce few or no symptoms and because the symptoms and signs they do produce are nonspecific. […] chlamydial infections tend to follow a fairly self-limited acute course, resolving into a low-grade persistent infection which may last for years. […] The disease process and clinical manifestations of chlamydial infections probably represent the combined effects of tissue damage from chlamydial replication and inflammatory responses to chlamydiae and the necrotic material from destroyed host cells. There is an abundant immune response to chlamydial infection (in terms of circulating antibodies or cell-mediated responses), and there is evidence that chlamydial diseases are diseases of immunopathology. […] A common pathologic end point of chlamydial infection is scarring of the affected mucous membranes. This is what ultimately leads to blindness in trachoma and to infertility and ectopic pregnancy after acute salpingitis. There is epidemiologic evidence that repeated infection results in higher rates of sequelae.”

“The prevalence of chlamydial urethral infection has been assessed in populations of men attending general medical clinics, STD clinics, adolescent medicine clinics, and student health centers and ranges from 3–5% of asymptomatic men seen in general medical settings to 15–20% of all men seen in STD clinics. […] The overall incidence of C. trachomatis infection in men has not been well defined, since in most countries these infections are not officially reported, are not microbiologically confirmed, and often may be asymptomatic, thus escaping detection. […] The prevalence of chlamydial infection has been studied in pregnant women, in women attending gynecology or family planning clinics, in women attending STD clinics, in college students, and in women attending general medicine or family practice clinics in school-based clinics and more recently in population-based studies. Prevalence of infection in these studies has ranged widely from 3% in asymptomatic women in community-based surveys to over 20% in women seen in STD clinics.[31–53] During pregnancy, 3–7% of women generally have been chlamydia positive […] Several studies in the United States indicate that approximately 5% of neonates acquire chlamydial infection perinatally, yet antibody prevalence in later childhood before onset of sexual activity may exceed 20%.”

“Clinically, chlamydia-positive and chlamydia-negative NGU [Non-Gonococcal Urethritis] cannot be differentiated on the basis of signs or symptoms.[76] Both usually present after a 7–21-day incubation period with dysuria and mild-to-moderate whitish or clear urethral discharge. Examination reveals no abnormalities other than the discharge in most cases […] Clinical recognition of chlamydial cervicitis depends on a high index of suspicion and a careful cervical examination. There are no genital symptoms that are specifically correlated with chlamydial cervical infection. […] Although urethral symptoms may develop in some women with chlamydial infection, the majority of female STD clinic patients with urethral chlamydial infection do not have dysuria or frequency. […] the majority of women with chlamydial infection cannot be distinguished from uninfected women either by clinical examination or by […] simple tests and thus require the use of specific diagnostic testing. […] Since many chlamydial infections are asymptomatic, it has become clear that effective control must involve periodic testing of individuals at risk.[168] As the cost of extensive screening may be prohibitive, various approaches to defining target populations at increased risk of infection have been evaluated. One strategy has been to designate patients attending specific high prevalence clinic populations for universal testing. Such clinics would include STD, juvenile detention, and some family planning clinics. This approach, however, fails to account for the majority of asymptomatic infections, since attendees at high prevalence clinics often attend because of symptoms or suspicion of infection. Consequently, selective screening criteria have been developed for use in various clinical settings.[204–208] Among women, young age (generally,

If you’re a woman who’s decided not to have children and so aren’t terribly worried about infertility, it should be emphasized that untreated chlamydia can cause other really unpleasant stuff as well, like chronic pelvic pain from pelvic inflammatory disease, or ectopic pregnancy, which may be life-threatening. This is the sort of infection you’ll want to get treated even if you’re not bothered by symptoms.

“Neisseria gonorrhoeae (gonococci) is the etiologic agent of gonorrhea and its related clinical syndromes (urethritis, cervicitis, salpingitis, bacteremia, arthritis, and others). It is closely related to Neisseria meningitidis (meningococci), the etiologic agent of one form of bacterial meningitis, and relatively closely to Neisseria lactamica, an occasional human pathogen. The genus Neisseria includes a variety of other relatively or completely nonpathogenic organisms that are principally important because of their occasional diagnostic confusion with gonococci and meningococci. […] Many dozens of specific serovars have been defined […] By a combination of auxotyping and serotyping […] gonococci can be divided into over 70 different strains; the number may turn out to be much larger.”

“Humans are the only natural host for gonococci. Gonococci survive only a short time outside the human body. Although gonococci can be cultured from a dried environment such as a toilet seat up to 24 hours after being artificially inoculated in large numbers onto such a surface, there is virtually no evidence that natural transmission occurs from toilet seats or similar objects. Gonorrhea is a classic example of an infection spread by contact: immediate physical contact with the mucosal surfaces of an infected person, usually a sexual partner, is required for transmission. […] Infection most often remains localized to initial sites of inoculation. Ascending genital infections (salpingitis, epididymitis) and bacteremia, however, are relatively common and account for most of the serious morbidity due to gonorrhea.”

“Consideration of clinical manifestations of gonorrhea suggests many facets of the pathogenesis of the infection. Since gonococci persist in the male urethra despite hydrodynamic forces that would tend to wash the organisms from the mucosal surface, they must be able to adhere effectively to mucosal surfaces. Similarly, since gonococci survive in the urethra despite close attachment to large numbers of neutrophils, they must have mechanisms that help them to survive interactions with polymorphonuclear neutrophils. Since some gonococci are able to invade and persist in the bloodstream for many days at least, they must be able to evade killing by normal defense mechanisms of plasma […] Invasion of the bloodstream also implies that gonococci are able to invade mucosal barriers in order to gain access to the bloodstream. Repeated reinfections of the same patient by one strain strongly suggest that gonococci are able to change surface antigens frequently and/or to escape local immune mechanisms […] The considerable tissue damage of fallopian tubes consequent to gonococcal salpingitis suggests that gonococci make at least one tissue toxin or gonococci trigger an immune response that results in damage to host tissues.[127] There is evidence to support many of these inferences. […] Since the mid-1960s, knowledge of the molecular basis of gonococcal–host interactions and of gonococcal epidemiology has increased to the point where it is amongst the best described of all microbial pathogens. […] Studies of pathogenesis are [however] complicated by the absence of a suitable animal model. A variety of animal models have been developed, each of which has certain utility, but no animal model faithfully reproduces the full spectrum of naturally acquired disease of humans.”

“Gonococci are inherently quite sensitive to antimicrobial agents, compared with many other gram-negative bacteria. However, there has been a gradual selection for antibiotic-resistant mutants in clinical practice over the past several decades […] The consequence of these events has been to make penicillin and tetracycline therapy ineffective in most areas. Antibiotics such as spectinomycin, ciprofloxacin, and ceftriaxone generally are effective but more expensive than penicillin G and tetracycline. Resistance to ciprofloxacin emerged in SE Asia and Africa in the past decade and has spread gradually throughout much of the world […] Streptomycin (Str) is not frequently used for therapy of gonorrhea at present, but many gonococci exhibit high-level resistance to Str. […] Resistance to fluoroquinolones is increasing, and now has become a general problem in many areas of the world.”

“The efficiency of gonorrhea transmission depends on anatomic sites infected and exposed as well as the number of exposures. The risk of acquiring urethral infection for a man following a single episode of vaginal intercourse with an infected woman is estimated to be 20%, rising to an estimated 60–80% following four exposures.[16] The prevalence of infection in women named as secondary sexual contacts of men with gonococcal urethritis has been reported to be 50–90%,[16,17] but no published studies have carefully controlled for number of exposures. It is likely that the single-exposure transmission rate from male to female is higher than that from female to male […] Previous reports saying that 80% of women with gonorrhea were asymptomatic were most often based on studies of women who were examined in screening surveys or referred to STD clinics because of sexual contact with infected men.[23] Symptomatic infected women who sought medical attention were thus often excluded from such surveys. However […] more than 75% of women with gonorrhea attending acute care facilities such as hospital emergency rooms are symptomatic.[24] The true proportion of infected women who remain asymptomatic undoubtedly lies between these extremes […] Asymptomatic infections occur in men as well as women […] Asymptomatically infected males and females contribute disproportionately to gonorrhea transmission, because symptomatic individuals are more likely to cease sexual activity and seek medical care.”
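A quick note on how those transmission numbers fit together (my own calculation, assuming a constant and independent per-exposure transmission probability, which the chapter does not claim): with a single-exposure risk of about 20%, the cumulative risk after four exposures would be

$$1 - (1 - 0.2)^4 = 1 - 0.8^4 \approx 0.59,$$

which matches the lower end of the quoted 60–80% range; the upper end implies either a higher per-act probability or that the exposures are not independent.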

“the incidence of asymptomatic urethral gonococcal infection in the general population also has been estimated at approximately 1–3%.[27] The prevalence of asymptomatic infection may be much higher, approaching 5% in some studies, because untreated asymptomatic infections may persist for considerable periods. […] The prevalence of gonorrhea within communities tends to be dynamic, fluctuating over time, and influenced by a number of interactive factors. Mathematical models for gonorrhea within communities suggest that gonorrhea prevalence is sustained not only through continued transmission by asymptomatically infected patients but also by “core group” transmitters who are more likely than members of the general population to become infected and transmit gonorrhea to their sex partners. […] At present, gonorrhea prevention and control efforts are heavily invested in the concept of vigorous pursuit and treatment of infected core-group members and asymptomatically infected individuals.”

“Relatively large numbers (>50) of gonococcal A/S [auxotype/serotype] classes usually are present in most communities simultaneously […] and new strains can be detected over time. The distribution of isolates within A/S classes tends to be uneven, with a few A/S classes contributing disproportionately to the total number of isolates. These predominant A/S classes generally persist within communities for months or years. […] Interviews of the patients infected by [a specific] strain early in [an] outbreak identified one infected female who acknowledged over 100 different sexual partners over the preceding 2 months, suggesting that she may have played an important role in the introduction and establishment of this gonococcal strain in the community. Thus the Proto/IB-3 strain may have become common in Seattle not because of specific biologic factors but because of its chance of transmission to members of a core population by a high-frequency transmitter.” [100+ partners over a 2 month period! I was completely dumbstruck when I’d read that.]

“clinical gonorrhea is manifested by a broad spectrum of clinical presentations including asymptomatic and symptomatic local infections, local complicated infections, and systemic dissemination. […] Acute anterior urethritis is the most common manifestation of gonococcal infection in men. The incubation period ranges from 1 to 14 days or even longer; however, the majority of men develop symptoms within 2–5 days […] The predominant symptoms are urethral discharge or dysuria [pain on urination]. […] Without treatment, the usual course of gonococcal urethritis is spontaneous resolution over a period of several weeks, and before the development of effective antimicrobial therapy, 95% of untreated patients became asymptomatic within 6 months.[43] […] The incubation period for urogenital gonorrhea in women is less certain and probably more variable than in men, but most who develop local symptoms apparently do so within 10 days of infection.[51,52] The most common symptoms are those of most lower genital tract infections in women […] and include increased vaginal discharge, dysuria, intermenstrual uterine bleeding, and menorrhagia [abnormally heavy and prolonged menstrual period], each of which may occur alone or in combination and may range in intensity from minimal to severe. […] The clinical assessment of women for gonorrhea is often confounded […] by the nonspecificity of these signs and symptoms and by the high prevalence of coexisting cervical or vaginal infections with Chlamydia trachomatis, Trichomonas vaginalis, Candida albicans, herpes simplex virus, and a variety of other organisms […] Among coinfecting agents for patients with gonorrhea in the United States, C. trachomatis [chlamydia] is preeminent. Up to 10–20% of men and 20–30% of women with acute urogenital gonorrhea are coinfected with C. trachomatis.[10,46,76,139–141] In addition, substantial numbers of women with acute gonococcal infection have simultaneous T. vaginalis infections.”

“Among patients with gonorrhea, pharyngeal infection occurs in 3–7% of heterosexual men, 10–20% of heterosexual women, and 10–25% of homosexually active men. […] Gonococcal infection is transmitted to the pharynx by orogenital sexual contact and is more efficiently acquired by fellatio than by cunnilingus.[63]”

“In men, the most common local complication of gonococcal urethritis is epididymitis […], a syndrome that occurred in up to 20% of infected patients prior to the availability of modern antimicrobial therapy. […] Postinflammatory urethral strictures were common complications of untreated gonorrhea in the preantibiotic era but are now rare […] In acute PID [pelvic inflammatory disease], the clinical syndrome comprised primarily of salpingitis, and frequently including endometritis, tubo-ovarian abscess, or pelvic peritonitis is the most common complication of gonorrhea in women, occurring in an estimated 10–20% of those with acute gonococcal infection.[75,76] PID is the most common of all complications of gonorrhea, as well as the most important in terms of public-health impact, because of both its acute manifestations and its long-term sequelae (infertility, ectopic pregnancy, and chronic pelvic pain).”

“A major impediment to use of culture for gonorrhea diagnosis in many clinical settings are the time, expense, and logistical limitations such as specimen transport to laboratories for testing, a process that may take several days and result in temperature variation or other circumstances that can jeopardize culture viability.[111] In recent years, reliable nonculture assays for gonorrhea detection have become available and are being used increasingly. […] recently, nucleic acid amplification tests (NAATs) for gonorrhea diagnosis have become widely available.[116,117] Assays based on polymerase chain reaction (PCR), transcription-mediated amplification (TMA), and other nucleic acid amplification technologies have been developed. As a group, commercially available NAATs are more sensitive than culture for gonorrhea diagnosis and specificities are nearly as high as for culture. […] Emerging data suggest that most currently available NAATs are substantially more sensitive for gonorrhea detection than conventional culture.”

“Prior to the mid-1930s, when sulfanilamide was introduced, gonorrhea therapy involved local genital irrigation with antiseptic solutions such as silver nitrate […] By 1944 […] many gonococci had become sulfanilamide resistant […] Fortunately, in 1943 the first reports of the near 100% utility of penicillin for gonorrhea therapy were published,[127] and by the end of World War II, as penicillin became available to the general public, it quickly became the therapy of choice. Since then, continuing development of antimicrobial resistance by N. gonorrhoeae[128,129] led to regular revisions of recommended gonorrhea therapy. From the 1950s until the mid-1970s, gradually increasing chromosomal penicillin resistance led to periodic increases in the amount of penicillin required for reliable therapy. […] by the late 1980s, penicillins and tetracyclines were no longer recommended for gonorrhea therapy.
In addition to resistance to penicillin, tetracyclines, and erythromycin, in 1987, clinically significant chromosomally mediated resistance to spectinomycin — another drug recommended for gonorrhea therapy — was described in U.S. military personnel in Korea.[132] In Korea, because of the high prevalence of PPNG [penicillinase-producing Neisseria gonorrhoeae, i.e. penicillin-resistant strains], in 1981, spectinomycin had been adopted as the drug of choice for gonorrhea therapy. By 1983, however, spectinomycin treatment failures were beginning to occur in patients with gonorrhea […] Following recognition of the outbreak of spectinomycin-resistant gonococci in Korea, ceftriaxone became the drug of choice for treatment of gonorrhea in U.S. military personnel in that country.[132] […] Beginning in 1993, fluoroquinolone antibiotics were recommended for therapy of uncomplicated gonorrhea in the United States […] [However] in 2007 the CDC opted to no longer recommend fluoroquinolone antibiotics for therapy of uncomplicated gonorrhea. This change meant that ceftriaxone and other cephalosporin antibiotics had become the sole class of antibiotics recommended as first-line therapy for gonorrhea. […] For over two decades, ceftriaxone — a third-generation cephalosporin — has been the most reliable single-dose regimen used for gonorrhea worldwide. […] there are currently few well-studied therapeutic alternatives to ceftriaxone for gonorrhea treatment.”

November 1, 2014 Posted by | Books, Epidemiology, Immunology, Infectious disease, Medicine, Microbiology, Pharmacology

100 Cases in Clinical Pathology

This book is another publication from the 100 Cases … series which I’ve talked about before – I refer to these posts for some general comments about what this series is like and some talk about the other books in the series which I’ve read. The book is much like the others, though of course the specific topics covered are different in the various publications. I liked this book and gave it 3 stars on goodreads. The book has three sections: a section dealing with ‘chemical pathology, immunology and genetics’; a section dealing with ‘histopathology’; and a section dealing with ‘haematology’. As usual I knew a lot more about some of the topics covered than I did about some of the others. Some cases were quite easy, others were not. Some of the stuff covered in Greenstein & Wood’s endocrinology text came in handy along the way and enabled me for example to easily identify a case of Cushing’s syndrome and a case of Graves’ disease. I don’t think I’ll spoil anything by noting that two of the cases in this book involved these disorders, but if you plan on reading it later on you may want to skip the coverage below, as I have included some general comments from the answer sections of the book in this post.

As someone who’s not working in the medical field and who will almost certainly never need to know how to interpret a water deprivation test (also covered in detail in Greenstein and Wood, incidentally), I find that some parts of books like this one are not particularly ‘relevant’ to me; however, I’d argue that far from all of the material included is ‘stuff you don’t need to know’. There are, for example, a lot of neat observations about how specific symptoms (and symptom complexes) are linked to specific disorders, about which other medical conditions might cause similar health problems, and about which risk factors are potentially important to have in mind in specific contexts. If you’ve had occasional fevers, night sweats and weight loss over the last few months, you should probably have seen a doctor a while ago – knowledge of the kind included in books like this one may make the reader a bit less likely to overlook an important and potentially treatable health problem, and/or more aware of potentially modifiable risk factors in specific contexts. A problem, however, is that the book will be hard to read if you have not read any medical textbooks before, and in that case I would probably advise against reading it, as it’s almost certainly not worth the effort.

I have added a few observations from the book below.

“After a bone marrow transplant (and any associated chemotherapy), the main risks are infection (from low white cell counts and the use of immunosuppressants, such as cyclosporin), bleeding (from low platelet counts) and graft versus host disease (GVHD). […] An erythematous rash that develops on the palms or soles of the feet of a patient 10–30 days after a bone marrow transplant is characteristic of GVHD. […] GVHD is a potentially life-threatening problem that can occur in up to 80% of successful allogeneic bone marrow transplants. […] Clinically, GVHD manifests like an autoimmune disease with a macular-papular rash, jaundice and hepatosplenomegaly and ultimately organ fibrosis. It classically involves the skin, gastrointestinal tract and the liver. […] Depending on severity, treatment of acute GVHD may involve topical and intravenous steroid therapy, immunosuppression (e.g. cyclosporine), or biologic therapies targeting TNF-α […], a key inflammatory cytokine. […] Prognosis is related to response to treatment. The mortality of patients who completely respond can still be around 20%, and the mortality in those who do not respond is as high as 75%.”

“The leading indication for a liver transplant is alcoholic cirrhosis in adults and biliary atresia in children. […] The overall one-year survival of a liver transplant is over 90%, with 10-year survival of around 70%. […] Transplant rejection can be classified by time course, which relates to the underlying immune mechanism: • Hyperacute organ rejection occurs within minutes of the graft perfusion in the operating theatre. […] The treatment for hyperacute rejection is immediate removal of the graft. • Acute organ rejection takes place a number of weeks after the transplant […] The treatment for acute rejection includes high dose steroids. • Chronic organ rejection can take place months to years after the transplant. […] As it is irreversible, treatment for chronic rejection is difficult, and may include re-transplantation.”

“Chronic kidney disease (CKD) is characterized by a reduction in GFR over a period of 3 or more months (normal GFR is >90–120 mL/min). It arises from a progressive impairment of renal function with a decrease in the number of functioning nephrons; generally, patients remain asymptomatic until GFR reduces to below 15 mL/min (stage V CKD). Common causes of CKD are (1) diabetes mellitus, (2) hypertension, (3) glomerulonephritis, (4) renovascular disease, (5) chronic obstruction or interstitial nephritis, and (6) hereditary or cystic renal disease”

“The definition of an aneurysm is an abnormal permanent focal dilatation of all the layers of a blood vessel. An AAA [abdominal aortic aneurysm] is defined when the aortic diameter, as measured below the level of the renal arteries, is one and a half times normal. Women have smaller aortas, but for convenience, more than 3 cm qualifies as aneurysmal. The main risk factors for aneurysm formation are male gender, smoking, hypertension, Caucasian/European descent and atherosclerosis. Although atherosclerosis is a risk factor and both diseases share common predisposing factors, there are also differences. Atherosclerosis is primarily a disease of the intima, the innermost layer of the vessel wall, whereas in aneurysms there is degeneration of the media, the middle layer. […] The annual risk of rupture equals and begins to outstrip the risk of dying from surgery when the aneurysm exceeds 5.5 cm. This is the size above which surgical repair is recommended, comorbidities permitting. […] Catastrophic rupture, as in this case, presents with hypovolaemic shock and carries a dismal prognosis.” [The patient in the case history died soon after having arrived at the hospital]

“Stroke refers to an acquired focal neurological deficit caused by an acute vascular event. The neurological deficit persists beyond 24 hours, in contrast to a transient ischaemic attack (TIA) where symptoms resolve within 24 hours, although the distinction is now blurred with the advent of thrombolysis. […] Strokes are broadly categorized into ischaemic and haemorrhagic types, the majority being ischaemic. The pathophysiology in a haemorrhagic stroke is rupture of a blood vessel causing extravasation of blood into the brain substance with tissue damage and disruption of neuronal connections. The resulting haematoma also compresses surrounding normal tissue. In most ischaemic strokes, there is thromboembolic occlusion of vessels due to underlying atherosclerosis of the aortic arch and carotid arteries. In 15–20% of cases, there is atherosclerotic disease of smaller intrinsic blood vessels within the brain[…]. A further 15–20% are due to emboli from the heart. […] The territory and the extent of the infarct influences the prognosis; [for example] expressive dysphasia and right hemiparesis are attributable to infarcts in Broca’s area and the motor cortex, both frontal lobe territories supplied by the left middle cerebral artery.”

“The stereotypical profile of a gallstone patient is summed up by the 4Fs: female, fat, fertile and forty. However, while gallstones are twice as common in females, increasing age is a more important risk factor. Above the age of 60, 10–20% of the Western population have gallstones. […] Most people with cholelithiasis are asymptomatic, but there is a 1–4% annual risk of developing symptoms or complications. […] Complications depend on the size of the stones. Smaller stones may escape into the common bile duct, but may lodge at the narrowing of the hepatopancreatic sphincter (sphincter of Oddi), obstructing the common bile duct and pancreatic duct, leading to obstructive jaundice and pancreatitis respectively. […] In most series, alcohol and gallstones each account for 30–35% of cases [of acute pancreatitis]. […] Once symptomatic, the definitive treatment of gallstone disease is generally surgical via a cholecystectomy.”

“Breast cancer affects 1 in 8 women (lifetime risk) in the UK. […] Between 10 and 40% of women who are found to have a mass by mammography will have breast cancer. […] The presence of lymphovascular invasion indicates the likelihood of spread of tumour cells beyond the breast, thereby conferring a poorer outlook. Without lymph node involvement, the 10-year disease-free survival is close to 70–80% but falls progressively with the number of involved nodes.”

“Melanoma is a cancer of melanocytes, the pigmented cells in the skin, and is caused by injury to lightly pigmented skin by excessive exposure to ultraviolet (UV) radiation […] The change in colour of a pre-existing pigmented lesion with itching and bleeding and irregular margins on examination are indicators of transformation to melanoma. Melanomas progress through a radial growth phase to a vertical growth phase. In the radial growth phase, the lesion expands horizontally within the epidermis and superficial dermis often for a long period of time. Progression to the vertical phase is characterized by downward growth of the lesion into the deeper dermis and with absence of maturation of cells at the advancing front. During this phase, the lesion acquires the potential to metastasize through lymphovascular channels. The probability of this happening increases with increasing depth of invasion (Breslow thickness) by the melanoma cells. […] The ABCDE mnemonic aids in the diagnosis of melanoma: Asymmetry – melanomas are likely to be irregular or asymmetrical. Border – melanomas are more likely to have an irregular border with jagged edges. Colour – melanomas tend to be variegated in colour […]. Diameter – melanomas are usually more than 7 mm in diameter. Evolution – look for changes in the size, shape or colour of a mole.”

“CLL [chronic lymphocytic leukaemia] is the most common leukaemia in the Western world. Typically, it is picked up via an incidental lymphocytosis in an asymptomatic individual. […] The disease is staged according to the Binet classification. Typically, patients with Binet stage A disease require no immediate treatment. Symptomatic stage B and all stage C patients receive chemotherapy. […] cure is rare and the aim is to achieve periods of remission and symptom control. […] The median survival in CLL is between four and six years, though some patients survive a decade or more. […] There is […] a tendency of CLL to transform into a more aggressive leukaemia, typically a prolymphocytic transformation (in 15–30% of patients) or, less commonly (<10% of cases), transformation into a diffuse large B-cell lymphoma (a so-called Richter transformation). Appearance of transformative disease is an ominous sign, with few patients surviving for more than a year with such disease.”

“Pain, swelling, warmth, tenderness and immobility are the five cardinal signs of acute inflammation.”

“Osteomyelitis is an infection of bone that is characterized by progressive inflammatory destruction with the formation of sequestra (dead pieces of bone within living bone), which if not treated leads to new bone formation occurring on top of the dead and infected bone. It can affect any bone, although it occurs most commonly in long bones. […] Bone phagocytes engulf the bacteria and release osteolytic enzymes and toxic oxygen free radicals, which lyse the surrounding bone. Pus raises intraosseus pressure and impairs blood flow, resulting in thrombosis of the blood vessels. Ischaemia results in bone necrosis and devitalized segments of bone (known as sequestra). These sequestra are important in the pathogenesis of non-resolving infection, acting as an ongoing focus of infection if not removed. Osteomyelitis is one of the most difficult infections to treat. Treatment may require surgery in addition to antibiotics, especially in chronic osteomyelitis where sequestra are present. […] Poorly controlled diabetics are at increased risk of infections, and having an infection leads to poor control of diabetes via altered physiology occurring during infection. Diabetics are prone to developing foot ulcers, which in turn are prone to becoming infected, which then act as a source of bacteria for infecting the contiguous bones of the feet. This process is exacerbated in patients with peripheral neuropathy, poor diabetic control and peripheral vascular disease, as these all increase the risk of development of skin breakdown and subsequent osteomyelitis.” [The patient was of course a diabetic…]

“Recent onset fever and back pain suggest an upper UTI [urinary tract infection]. UTIs are classified by anatomy into lower and upper UTIs. Lower UTIs refer to infections at or below the level of the bladder, and include cystitis, urethritis, prostatitis, and epididymitis (the latter three being more often sexually transmitted). Upper UTIs refer to infection above the bladder, and include the ureters and kidneys. Infection of the urinary tract above the bladder is known as pyelonephritis [which] may be life threatening or lead to permanent kidney damage if not promptly treated. UTIs are also classified as complicated or uncomplicated. UTIs in men, the elderly, pregnant women, those who have an indwelling catheter, and anatomic or functional abnormality of the urinary tract are considered to be complicated. A complicated UTI will often receive longer courses of broader spectrum antibiotics. Importantly, the clinical history alone of dysuria and frequency (without vaginal discharge) is associated with more than 90% probability of a UTI in healthy women. […] In women, a UTI develops when urinary pathogens from the bowel or vagina colonize the urethral mucosa, and ascend via the urethra into the bladder. During an uncomplicated symptomatic UTI in women, it is rare for infection to ascend via the ureter into the kidney to cause pyelonephritis. […] Up to 40% of uncomplicated lower UTIs in women will resolve spontaneously without antimicrobial therapy. The use of antibiotics in this cohort is controversial when taking into account the side effects of antibiotics and their effect on normal flora. If prescribed, antibiotics for uncomplicated lower UTIs should be narrow-spectrum […] Most healthcare-associated UTIs are associated with the use of urinary catheters. Each day the catheter remains in situ, the risk of UTI rises by around 5%. Thus inserting catheters only when absolutely needed, and ensuring they are removed as soon as possible, can prevent these.”
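For what it’s worth, here is a rough illustration of what a ~5% per catheter-day risk adds up to (my own arithmetic; it assumes the quoted figure is a daily incidence and that catheter-days can be treated as independent, which the text does not spell out):

$$1 - 0.95^{7} \approx 0.30,$$

i.e. roughly a 30% cumulative risk of catheter-associated UTI after a week with the catheter in place – which is part of why prompt removal matters so much.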

September 24, 2014 Posted by | alcohol, Books, Cancer/oncology, Cardiology, Diabetes, Immunology, Medicine, Microbiology, Nephrology, Neurology

The Emergence of Animals: The Cambrian Breakthrough (II)

I decided to write one more post (this one) about the book and leave it at that. Go here for my first post about the book, which has some general remarks about the book, as well as a lot of relevant links to articles from wikipedia which cover topics also covered in the book. Below I have added some observations from the second half of the book.

“Use of bedrock geology to reconstruct ancient continental positions relies on the idea that if two separated continents were once joined to form a single, larger continent, then there ought to be distinctive geological terranes (such as mineral belts, mountain chains, bodies of igneous rock of similar age, and other roughly linear to irregularly-shaped large-scale geologic features) that were once contiguous but are now separated. Matching of these features can provide clues to the positions of continents that were once together. […] The main problem with using bedrock geology features to match continental puzzle pieces together is that many of the potentially most useful linear geologic features on the continents (such as volcanic arcs or chains of volcanoes, and continental margin fold belts or parallel mountain chains formed by compression of strata) are parallel to the edge of the continent. Therefore, these features generally run parallel to rift fractures, and are less likely to continue and be recognizable on any continent that was once connected to the continent in question.

Paleomagnetic evidence is an important tool for the determination of ancient continent positions and for the reconstruction of supercontinents. Nearly all rock types, be they sedimentary or igneous, contain minerals that contain the elements iron or titanium. Many of these iron- and titanium-bearing minerals are magnetic. […] The magnetization of a crystal of a magnetic mineral (such as magnetite) is established immediately after the mineral crystallizes from a volcanic melt (lava) but before it cools below the Curie point temperature. Each magnetic mineral has its own specific Curie point. […] As the mineral grain passes through the Curie point, the ambient magnetic field is “frozen” into the crystal and will remain unchanged until the crystal is destroyed by weathering or once again heated above the Curie point. This “locking in” of the magnetic signal in igneous rock crystals is the crucial event for paleomagnetism, for it indicates the direction of magnetic north at the time the crystal cooled (sometime in the distant geologic past for most igneous rocks). The ancient latitudinal position of the rock (and the continent of which it is a part) can be determined by measuring the direction of the crystal’s magnetization. For ancient rocks, this direction can be quite different from the direction of present day magnetic north. […] Paleomagnetic reconstruction is a form of geological analysis that is, unfortunately, fraught with uncertainties. The original magnetization is easily altered by weathering and metamorphism, and can confuse or obliterate the original magnetic signal. An inherent limitation of paleomagnetic reconstruction of ancient continental positions is that the magnetic remanence only gives information concerning the rocks’ latitudinal position, and gives no clue as to the original longitudinal position of the rocks in question. For example, southern Mexico and central India, although nearly half a world apart, are both at about 20 degrees North latitude, and, therefore, lavas cooling in either country would have essentially the same primary magnetic remanence. One of the few ways to get information about the ancient longitudinal positions of continents is to use comparison of life forms on different continents. The study of ancient distributions of organisms is called paleobiogeography.”
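The excerpt doesn’t give the actual relation used to recover latitude from the ‘frozen-in’ magnetization, but for readers who want it: under the standard geocentric-axial-dipole assumption (my addition, not the book’s wording), the inclination I of the remanent magnetization (its dip relative to horizontal) and the paleolatitude λ of the site are related by

$$\tan I = 2 \tan \lambda,$$

so, for example, a measured inclination of about 49° would correspond to a paleolatitude of roughly 30°. Note that this only constrains latitude, which is precisely why, as the authors point out, longitude has to come from other lines of evidence such as paleobiogeography.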

“Photosynthesis is generally considered to be a characteristic of plants in the traditional usage of the term “plant.” Nonbiologists are sometimes surprised to learn that [some] animals are photosynthetic […] One might argue that marine animals with zooxanthellae (symbiotic protists) are not truly photosynthetic because it is the protists that do the photosynthesis, not the animal. The protists just happen to be inside the animal. We would argue that this is not an important consideration, since photosynthesis in all eukaryotic (nucleated) cells is accomplished by chloroplasts, tiny organelles that are the cell’s photosynthesis factories. Chloroplasts are now thought by many biologists to have arisen by a symbiosis event in which a small, photosynthetic moneran took up symbiotic residence within a larger microbe […]. The symbiotic relationship eventually became so well established that it became an obligatory relationship for both the host microbe and the smaller symbiont moneran. Reproductive provisions were made to pass the genetic material of the symbiont, as well as the host, on to succeeding generations. It would sound strange to describe an oak as a “multicellular alga invaded by photosynthetic moneran symbionts,” but that is — in essence — what a tree is. Animals with photosynthetic protists in their bodies are able to create food internally, in the same way that an oak tree can, so we feel that these animals can be correctly called photosynthetic. […] Many of the most primitive types of living metazoa contain photosymbiotic microbes or chloroplasts derived from microbes.”

“The most obvious reason for any organism, regardless of what kingdom it belongs to, to evolve a leaf-shaped body is to maximize its surface area. Leaf shape evolves in response to factors in addition to surface area requirement, but the surface area requirement, in all cases we are aware of, is the most important factor. […] Leaves of modern plants and Ediacaran animals probably evolved similar shapes for the same reason, namely, maximization of surface area. […] Photosymbiosis is not the only possible departure from heterotrophic feeding, the usual method of food acquisition for modern animals. Seilacher (1984) notes that flat bodies are good for absorption of simple compounds such as hydrogen sulfide, needed for one type of chemosymbiosis. In chemosymbiosis as in photosymbiosis, microbes (in this case bacteria) are held within an animal’s tissues as paying guests. The bacteria are able to use the energy stored in hydrogen sulphide molecules that diffuse into the host animal’s tissues. The bacteria use the hydrogen sulfide to create food, using biochemical reactions that would be impossible for animals to do by themselves. The bacteria use some of the food for themselves, but great excesses are produced and passed on to the host animal’s tissues. […] There may be important similarities between the ecologies of […] flattened Ediacaran creatures and the modern deep sea vent faunas. […] A form of chemotrophy (feeding on chemicals) that does not involve symbiosis is simple absorption of nutrients dissolved in sea water. Although this might not seem a particularly efficient way of obtaining food, there are tremendous amounts of “unclaimed” organic material dissolved in sea water. Monerans allow these nutrients to diffuse into their cells, a fact well known to microbiologists. Less well known is the fact that larger organisms can feed in this way also. Benthic foraminifera up to 38 millimeters long from McMurdo Sound, Antarctica, take up dissolved organic matter largely as a function of the surface area of their branched bodies”

“Although there is as of yet no unequivocal proof, it seems reasonable to infer from their shapes that members of the Ediacaran fauna used photosymbiosis, chemosymbiosis, and direct nutrient absorption to satisfy their food needs. Since these methods do not involve killing, eating, and digesting other living things, we will refer to them as “soft path” feeding strategies. Heterotrophic organisms use “hard path” feeding strategies because they need to use up the bodies of other organisms for energy. The higher in the food pyramid, the “harder” the feeding strategy, on up to the keystone predator (top carnivore) at the top of any particular ecosystem’s trophic pyramid. It is important to note that the term “hard,” as used here, does not necessarily imply that autotrophic organisms have any easier a time obtaining their food than do heterotrophic organisms. Green plants are not very efficient at converting sunlight to food; sunlight can be thought of as an elusive prey because it is not a concentrated energy source […]. Low food concentrations are a major difficulty encountered by organisms employing soft path feeding strategies. Deposit feeding is intermediate between hard and soft paths. […] Filter feeding, or capturing food suspended in the water, also has components of both hard and soft paths because suspension feeders can take both living and nonliving food from the water.”

“Probing deposit feeders […] began to excavate sediments to depths of several centimeters at the beginning of the Cambrian. Dwelling burrows several centimeters in length, such as Skolithos, first appeared in the Cambrian, and provided protection for filter-feeding animals. If a skeleton is broadly defined as a rigid body support, a burrow is in essence a skeleton formed of sediment […] Movement of metazoans into the substrate had profound implications for sea floor marine ecology. One aspect of the environment that controls the number and types of organisms living in the environment is called its dimensionality […]. Two-dimensional (or Dimension 2) environments tend to be flat, whereas three-dimensional environments (Dimension 3) have, to a greater or lesser degree, a third dimension. This third dimension can be either in an upward or a downward direction, or a combination of both directions. The Vendian sea floor was essentially a two-dimensional environment. […] With the probable exception of some of the stalked frond fossils, most Vendian soft-bodied forms hugged the sea floor. Deep burrowers added a third dimension to the benthos (sea floor communities), creating a three-dimensional environment where a two-dimensional situation had prevailed. The greater the dimensionality in any given environment, the longer the food chain and the taller the trophic pyramid can be […]. If the appearance of abundant predators is any indication, lengthening of the food chain seems to be an important aspect of the Cambrian explosion. Changes in animal anatomy and intelligence can be linked to this lengthening of the food chain. Most Cambrian animals are three-dimensional creatures, not flattened like many of their Vendian predecessors. Animals like mollusks and worms, even if they lack mineralized skeletons, are able to rigidify their bodies with the use of a water-filled internal skeleton called a coelom […] This fluid-filled cavity gives an animal’s body stiffness, and acts much like a turgid, internal, water balloon. A coelom allows animals to burrow in sediment in ways that a flattened animal (such as, for instance, a flatworm) cannot. It is most likely that a coelom first evolved in those Vendian shallow scribble-trail makers that were contemporaries of the large soft-bodied fossils. Some of these Ediacaran burrows show evidence of peristaltic burrowing. Inefficient peristaltic burrowing can be done without a coelom, but with a coelom it becomes dramatically more effective.”

“Bilateral symmetry is important when considering the behavior of […] early coelomate animals. The most likely animal to evolve a brain is one with bilateral symmetry. Concomitant with the emergence of animals during the Vendian was the origin of brains. The Cambrian explosion was the first cerebralization or encephalization event. As part of the increase in the length of the food chain discussed above, higher-level consumers such as top or keystone predators established a mode of life that requires the seeking out and attacking of prey. These activities are greatly aided by having a brain able to organize and control complex behavior. […] Specialized light receptors seem to be a characteristic of all animals and many other types of organisms; […] photoreceptors have originated independently in at least forty and perhaps as many as sixty groups. Most animal phyla have at a minimum several pigmented eye spots. But advanced vision (i.e., compound or image-forming eyes) tied directly into a centralized brain is not common or well developed until the Cambrian. The tendency to have eyes is more pronounced for bilateral than for radial animals. […] some of the earliest trilobites had large compound eyes. Trilobites were probably not particularly smart by modern standards, but chances are that their behavioral capabilities far outstripped any that had existed during the early Vendian. […] Actively moving or vagile predators are, as a rule, smarter than their prey, because of the more rigorous requirements of information processing in a predatory life mode. Anomalocaris as a seek-and-destroy top predator may have been the brainiest Early Cambrian animal.”

“why didn’t brains and advanced predation develop much earlier than they did? A simple thought experiment may help address this problem. Consider a jellyfish 1 mm in length and a cylindrical worm 1 mm in length. Increase the size (linear dimension) of each (by growth of the individual or by evolutionary change over thousands of generations) one hundred times. […] The worm will need internal plumbing because of its cylindrical body. The jellyfish won’t be as dependent on plumbing because its body has a higher surface area relative to its volume. […] Our enlarged, 10 cm long worm will possess a brain which has a volume one million times greater than the brain of its 1 mm predecessor (assuming that the shape of the brain remains constant). The jellyfish will also get more nerve tissue as it enlarges. But its nervous system is spread out in a netlike fashion; at most, its nerve tissue will be concentrated at a few radially symmetric points. The potential for complex and easily reprogrammed behavior, as well as sophisticated processing of sensory input data, is much greater in the animal with the million times larger brain (containing at least a million times as many brain cells as its tiny predecessor). Complex neural pathways are more likely to form in the larger brain. This implies no mysterious tendency for animals to grow larger brains; perfectly successful, advanced animals (echinoderms) and even slow-moving predators (sea spiders) get along fine without much brain. But centralized nerve tissue can process information better than a nerve net and control more complex responses to stimuli. Once brains were used to locate food, the world would never again be the same. This can be thought of as a “brain revolution” that permanently changed the world a half billion years ago.”
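
The arithmetic behind the thought experiment is just cubic scaling: if the linear dimension grows by a factor of 100 and the brain keeps its shape, brain volume (and so, roughly, the number of brain cells) grows by a factor of 100³. A trivial sketch of the numbers quoted above:

```python
linear_scale = 100                # a 1 mm worm scaled up to 10 cm
volume_scale = linear_scale ** 3  # volume grows with the cube of the linear dimension
print(volume_scale)               # 1000000 -- the "million times larger brain"
```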

“There is little doubt that organisms produced oxygen before 2 billion years ago, but this oxygen was unable to accumulate as a gas because iron dissolved in seawater combined with the oxygen to form rust (iron oxide), a precipitate that sank, chemically inactive, to accumulate on the sea floor. Just as salt has accumulated in the oceans over billions of years, unoxidized (or reduced) iron was abundant in the seas before 2 billion years ago, and was available to “neutralize” the waste oxygen. Thus, dissolved iron performed an important oxygen disposal service; oxygen is a deadly toxin to organisms that do not have special enzymes to limit its reactivity. Once the reduced iron was removed from sea water (and precipitated on the sea floor as Precambrian iron formations; much of the iron mined for our automobiles is derived from these formations), oxygen began to accumulate in water and air. Life in the seas was either restricted to environments where oxygen remained rare, or was forced to develop enzymes […] capable of detoxifying oxygen. Oxygen could also be used by heterotrophic organisms to “burn” the biologic fuel captured in the form of the bodies of their prey. […] Much research has focused on lowered levels of atmospheric oxygen during the Precambrian. The other alternative, that oxygen levels were higher at times during the Precambrian than at present, has not been much discussed. Once the “sinks” for free oxygen, such as dissolved iron, were saturated, there is little that would have prevented oxygen levels in the Precambrian from getting much higher than they are today. This is particularly so since there is no evidence for the presence of Precambrian land plants which could have acted as a negative feedback for continued increases in oxygen levels” [Here’s a recent-ish paper on the topic; do note that there’s an important distinction to be made between atmospheric oxygen levels and the oxygen levels of the oceans].

August 4, 2014 Posted by | Biology, Books, Botany, Ecology, Evolutionary biology, Geology, Microbiology, Paleontology, Zoology

Infectious Agents and Cancer

“[H]uman papilloma virus, hepatitis B virus, hepatitis C virus, Epstein-Barr virus, human herpes virus 8, human T-cell lymphotropic virus 1, human immunodeficiency virus, Merkel cell polyomavirus, Helicobacter pylori, Opisthorchis viverrini, Clonorchis sinensis, Schistosoma haematobium […] are recognized as carcinogens and probable carcinogens by [the] International Agency for Research on Cancer (IARC). They are not considered in this book […] The aim of this monograph is to analyze associations of other infectious agents with cancer risk […] virology is not considered in our monograph: although there are some viruses that can be connected with cancer but are not included into the IARC list (John Cunningham virus, herpes simplex virus-1 and -2, human cytomegalovirus, simian virus 40, xenotropic murine leukemia virus-related virus), we decided to leave them for the virologists and to concentrate our efforts on other infectious agents (bacteria, protozoa, helminths and fungi) […] To the best of our knowledge, this is the first book devoted to this problem”

Here’s what I wrote on goodreads:

“This book is written by three Russian researchers, and you can tell; the language is occasionally hilariously bad, but it’s not too difficult to figure out what they’re trying to say. The content partially made up for the poor language, as the book covers quite a bit of ground considering the low page count.”

I gave the book two stars. I’m glad they wrote the book, because it covered some stuff I didn’t know much about. I think I’m closer to one star than three, but it’s mostly because it’s terribly written, not because I have major objections to the coverage as such. What I mean by this is that they talk about a lot of studies and they include a lot of data – they’re scientists who write about scientific research, they just happen to be Russian scientists who are not very good at English. It’s terribly written, but the stuff is interesting.

As mentioned above there are quite a few viruses which we know may lead to cancer in humans. I’ve recently read a lot of stuff about this topic, as it was covered both in Boffetta et al. and, rather extensively, in part 5 of the Sexually Transmitted Diseases text, which covered sexually transmitted viral pathogens (that section of the book was, at 230 pages, actually a ‘book-length section’; it was significantly longer than this book is..). I’ve even covered some of that stuff here on the blog, e.g. here. I may incidentally write more about these things and related stuff later, as I’m quite far behind in terms of my intended coverage of the STD book at the moment.

Anyway, viruses aren’t the only bad guys around. So these guys decided to write a book about some other infectious diseases affecting humans, and how these infectious diseases may relate to cancer risk. As they point out in the book, “there is only one bacterium, Helicobacter pylori, which is recognized by IARC as an established human carcinogen.” After reading this book you’ll realize that there are some others which perhaps look a bit suspicious as well. In some cases a lot of studies have been done and you have animal models, lab analyses, case-control studies, cohort studies, … In other cases you have just a few small studies to judge from. As is always the case when people have a close look at epidemiological research, this stuff is messy. Sometimes studies that looked really convincing turn out not to replicate in larger samples, sometimes dramatically different effect sizes are found in different areas of the world (which may of course be interpreted either as an indicator that the ‘true’ effect sizes are different in the different subpopulations, or as the result of e.g. faulty study design which makes those Swedish data look really fishy..), sometimes different results can be explained by differences in data quality/type of data applied/etc. (classic cases are different effects based on whether you rely on self-reports or biological disease markers, and different results from analyses of bacterial cultures vs PCR analyses), and so on and so forth. There are a lot of details, and they cover them in the book. I occasionally see people criticize epidemiological research online on the grounds that many (‘all?’) results published in this area are just random correlations without any deeper meaning. Sometimes this criticism may well be warranted, and the authors of this book certainly in some cases seem to go quite a bit further than I would do based on the same data. But there’s another part of the story here. When you start out with a couple of case-control studies indicating that guys with cancer type X are more likely to have positive lab cultures for this specific micro-organism, that may not be a big deal. But perhaps then a few microbiologists show up and tell you that it would actually make a lot of sense if there was a connection here (and they might start talking about fancy stuff like various ‘modulations of host immune responses’, ‘inflammatory markers’, ‘the role of nitric oxides’, …). They conduct some studies as well and perhaps one of the things they find is that the observed cancer grades in the patients seem to depend quite a lot upon which of the pathogen subtypes the individual happens to be infected with (perhaps suddenly also providing an explanation for some previously surprising negative results in specific cases). And then perhaps you get a couple of animal studies that show that these animals get cancer when you infect them with these bugs and don’t treat the infection. Perhaps you have a few more studies as well in different populations, because Chinese people get cancer too, and you start seeing that people around the world who happen to be infected with these bugs are all more likely to get cancer, compared to the locals who are not infected (…or perhaps not, and then it just gets more fun…).
This process goes on for a while, until at some point it starts getting really hard to think these positive correlations are all just the result of random p-value hunting done by bored researchers who don’t know what else to do with their time, and you start asking yourself if perhaps this idea is not as stupid as it was when you first encountered it. Most of the time the process stops before then because the proposed link isn’t there, but modern epidemiology is not just a random collection of correlations.

In the context of the specific infectious diseases covered in the book, the people who in some sense have the final say in these things (the IARC) think we’re not quite there yet, but there are some cases where different lines of evidence all seem to indicate that a link may be present and relevant. It would be highly surprising to me if in 20 years’ time we’d have realized that none of the infectious diseases they talk about in this book are at all involved in cancer pathogenesis. A related point is that most likely we’ve missed some ‘true connections’ along the way, and will continue to do so in the future, because even if a link is there, it’s sometimes really hard to find it and easy to overlook it, for many different reasons.

I have quoted a bit from the book below and added some comments here and there. I have corrected some of the spelling/language errors the authors made to ease reading; if a word is placed in brackets, it’s an indicator that I’ve replaced a misspelled word with the correct one (the one ‘they meant to use’). The authors do not even have any clue how and when to use the word ‘the’, often using it when it’s not needed and forgetting to use it in cases where it is needed, which made quoting from the book painful. Read it for the content.

“Chronic inflammation substantially increases the probability of neoplastic transformation of the surrounding cells, inducing mutations and epigenetic alterations by the activity of inflammatory molecules […] through the formation of free radicals and DNA damage […] Since infectious agents persisting in the organism may cause chronic inflammation, they can also promote local carcinogenesis. […] Chronic inflammation can also specifically affect the functioning of [an] organ, for instance, promoting cholelithiasis and urolithiasis that increase the time of exposure of the gallbladder, bile ducts, urinary bladder and ureters to chemical carcinogens and carcinogenic bacteria. […] In addition to […] metabolic and immune mechanisms, a number of bacteria […] and protozoa […] [produce] or [contain] in their cell wall their own toxins […] possessing [carcinogenic] activity, affecting cell-cell interactions, intracellular signal transduction or induction of mutations and epigenetic alterations that can influence vital cell processes (apoptosis, proliferation, survival, growth, differentiation, invasion). Intracellular protozoan (Toxoplasma gondii) may induce resistance to multiple mechanisms of apoptosis […]. So, bacterial and [protozoan toxins] may function like initiating or like promoting agents.”

“Typhoid fever, which is a systemic infection caused by Salmonella enterica serovar Typhi (S. typhi), is a major health problem in developing countries. There are approximately 21.6 million cases of typhoid fever worldwide and an estimated 200,000 deaths every year. It is known that S. typhi may colonize the gallbladder, causing […] chronic inflammation. Welton et al. (1979) were the very first to [establish] an association between the typhoid-carrier state and death due to malignancies of the hepatobiliary tract. They recruited 471 U.S. carriers of [S. typhi], matched them with 942 controls and demonstrated that chronic typhoid carriers died of hepatobiliary cancer six times more often than the controls. […] The absence of basic research analyzing the carcinogenic properties of S. typhi does not allow placing it in the short list of the infectious agents that may be a cause of cancer development but are not included in the IARC roster, but this bacterium undoubtedly should be [on] the extended list. If [basic] studies on cell lines and animal models [support] the results of [the] epidemiological investigations, S. typhi can be placed [on] the short list.” [I included this in part because it is one of several examples in the book of how even strong correlations and high relative risks are not considered sufficient on their own by epidemiologists to settle matters. Some relative risks in other studies have been even higher – a study on gallbladder cancer found an RR of 12.7].

“Tuberculosis (TB), a destructive disease [affecting] the lungs […] is a major global health burden, with about nine million new cases and 1.1 million deaths annually. When the host protective immunity fails to control M. tuberculosis growth, progression to active disease occurs. […] According to the data of the latest comprehensive systematic review and [meta-analysis], published by Brenner et al. (2011), there were 30 studies […] conducted in North America, Europe and Asia, which investigated the association of tuberculosis with lung cancer risk with adjustment for smoking. The relative risk (RR) of lung cancer development among patients with TB history was 1.76 (95% CI = 1.49–2.08).”

“22 studies from North America, Europe and East Asia [have] investigated the association between pneumonia and lung cancer risk while adjusting for smoking […] A significant increase in lung cancer risk was observed among all studies (RR = 1.43, 95% CI = 1.22–1.68). […] To sum up, there is basic as well as extensive epidemiological evidence that C. pneumoniae may cause lung cancer” [However, effect sizes seem to differ between countries. I was skeptical about this one in part because a non-smoker’s absolute risk of getting lung cancer is very low, meaning that relative risks in the neighbourhood reported above, although statistically significant, are probably clinically insignificant. How pneumonia and smoking interact seems to me a much more important question. Then again we haven’t got an explanation for all of the non-smoking-related lung cancers yet, and they are caused by something, so it’s also not like researching this is a complete waste of time.]

“Primary infection with C. trachomatis [Chlamydia], the most prevalent sexually transmitted bacterium worldwide with an estimated 90 million new cases occurring each year, is often asymptomatic and may persist for several months or years. The first study analyzing possible association of C. trachomatis with cervical cancer was carried out by Schachter et al. (1975) who assessed the prevalence of antibodies to TRIC (trachoma-inclusion conjunctivitis) agents in women with cervical dysplasia and in women attending selected clinics […]. According to this investigation, antibodies to chlamydiae were identified in 77.6% of the women with dysplasia or cervical cancer whereas antichlamydial antibodies were less prevalent in the other clinic populations. Four years later, Paavonen et al. (1979) obtained [similar] results in 93 of patients with cervical dysplasia comparing them to the controls. […] Smith et al. (2001, 2002) examined 499 women with incident invasive cervical cancer and 539 control patients from Brazil and the Philippines, detecting that C. trachomatis increased risk of squamous cervical cancer among HPV-positive women (OR=2.1; 95% CI=1.1–4.0). The results were similar in both countries.” [As I recently pointed out elsewhere, “Chronic infection with HPV is a necessary cause of cervical cancer. Using sensitive molecular techniques, virtually all tumours are positive for the virus.” But as this finding (and other related findings) indicate, other infectious processes may play a role as well in HPV-related cancers. Synergistic effects are common in this area (recall for example also the herpes simplex virus-HIV link).]

“Trichomonas vaginalis (T. vaginalis), a protozoan parasite, is the causative agent of trichomoniasis, the most common nonviral sexually transmitted disease in humans. This parasite has a worldwide distribution and it infects 250–350 million people worldwide. [wiki says ~150 mil, but these guesstimates should always be taken with a grain of salt. Either way it affects a lot of people] […] Zhang et al. (1995) observed a relationship between T. vaginalis infection and cervical cancer in [their] prospective study in a cohort of 16,797 Chinese women. T. vaginalis infection correlated with higher cervical cancer risk (RR=3.3, 95% CI=1.5–7.4). In a large cohort study conducted in Finland by Viikki et al. (2000), T. vaginalis was associated with a high RR of cervical cancer, 6.4 (95% CI = 3.7–10) and SIR [standardized incidence ratio]=5.5 (95% CI=4.2–7.2), respectively. […] Mekki and Ivić (1979) detected that T. vaginalis were of a significantly smaller diameter in invasive carcinoma and carcinoma in situ in comparison with dysplasia. In the control group with trichomoniasis alone, the diameter of T. vaginalis was twice as large as that in carcinoma and larger compared to dysplasia, indicating that small forms of T. vaginalis are more carcinogenic than large ones. […] To sum up, there is basic as well as epidemiological evidence that T. vaginalis may be a cause of cervical and prostate cancer […] For cervical cancer it is evident, for prostate cancer it is arguable. According to our criteria, it is possible to include it in the short list of the infectious agents that may be a cause of cancer development but are not placed in the IARC roster.”

“At the moment of publication, IARC [recognizes] Schistosoma haematobium [and] [S. mansoni], Opisthorchis viverrini, and Clonorchis sinensis as causative agents of cancer, leaving a possibility to enlarge this list by [Schistosoma japonicum] [and] Opisthorchis felineus.” [The authors think the list should be enlarged even more, but I did not find their helminth data/coverage very convincing (not much research has been done in this area), so I decided not to cover these things here].

July 12, 2014 Posted by | Books, Cancer/oncology, Epidemiology, Immunology, Infectious disease, Medicine, Microbiology

Sexually Transmitted Diseases (4th edition) (IV)

Here’s a link to a previous post about the book, which includes links to the first two posts I wrote about it.

I was not super impressed with the coverage in part 3, although there was a lot of interesting stuff as well. However, the level of coverage and the amount of detail included is high in parts four and five. There were a lot of details which evaded me in some of the recent chapters, but I also learned a great deal. There’s quite a lot of coverage of various ‘related topics’ (microbiology, biochemistry, immunology, oncology) in the parts of the book I’ve read recently, and like many other medical texts this book will help you realize that many things you had thought of as unrelated are actually connected in various interesting ways. It’s worth noting that given how many aspects of these things the book covers (again, 2000+ pages…) you actually get to know a lot of stuff about a lot of other things besides just ‘classic STDs’. It turns out that in Jamaica and Trinidad, over 70% of all lymphoid malignancies are attributable to exposure to a specific retrovirus most people probably haven’t heard of, HTLV-1 (prevalence is also high in other parts of the world, e.g. southern Japan). I didn’t expect to learn this from a book about sexually transmitted diseases, but there we are.

I hope that I’ve picked out stuff from this part of the coverage which is also intelligible to people who didn’t read the 95+% of those chapters I didn’t quote (I always like feedback on such aspects).

“At the simplest level, infection of a cell by a virus or bacterium may lead to cell death. In the case of viruses, specific disease syndromes may be caused by destruction of certain subsets of cells that express essential differentiated functions. A classic example of this is the development of AIDS following HIV-1-mediated depletion of the CD4 lymphocyte population. Virus-induced cell death may result from one or more specific mechanisms. Many viruses express specific proteins that have as their major function the induction of a blockade in normal host cell metabolism (cellular translation and transcription) such that the metabolic machinery of the cell is subverted preferentially to viral replication. For obvious reasons, the expression of such proteins is usually highly toxic to the cell. Cellular destruction or “direct cytopathic effect” is considered responsible for the disease manifestations of many lytic viruses, including, for example, HSV and poliovirus. On the other hand, many cells may respond to the presence of an invading virus by the induction of apoptosis and the initiation of programmed cell death. Some viruses appear to have evolved mechanisms to prevent or delay apoptosis, thus potentially prolonging productive infection and maximizing replication. For example, HSV-1 infection induces apoptosis at multiple metabolic checkpoints but has also evolved mechanisms to block apoptosis at each point.28 Importantly, the inhibition of apoptosis by HSV-1 also prevents apoptosis induced by virus-specific cytotoxic T lymphocytes, thereby conferring on the infected cell a certain measure of resistance to the host’s cell-mediated immune responses.29

However, many viruses are not intrinsically cytopathic. HBV is a prime example, as many infected HBsAg carriers are asymptomatic and without overt evidence of active liver disease. Despite this, such carriers may be very infectious […] The presence or absence of liver disease is largely determined by the T-cell response to the virus.30 Thus, chronic hepatitis B results from a relatively vigorous but unsuccessful attempt on the part of the host to eliminate the infection. […] chronic liver inflammation and the occurrence of hepatocellular carcinoma reflect the immune response to the virus, rather than specific virus effects. Similar indirect mechanisms may contribute to the progressive immune destruction of infected CD4-positive lymphocytes in patients with HIV-1 infection.

Some bacterial disease processes may also be caused largely by immunopathologic responses. For instance, there is substantial evidence that complications of genital chlamydia infections (salpingitis, Reiter’s syndrome) are correlated with and may be owing to stimulation of antibodies against a heat-shock protein (hsp60).33,34 […] In contrast, gonococcal tissue damage appears to be caused by the direct toxic effects of lipid A and peptidoglycan fragments”

“Some viruses are capable of altering differentiated cellular functions, resulting in the production of disease by mechanisms that do not exist among bacteria. A prime example is the altered cellular growth that follows infections by molluscum contagiosum virus (MCV) […]. A more extreme example is the proliferation of epithelial cells that is induced by infection with HPVs. HPV-related epithelial malignancies and cellular transformation are related to the expression of two specific HPV proteins, the E6 and E7 oncoproteins, by high-risk HPV subtypes.22 These proteins interact with p53 and pRb, both promoting cellular proliferation and cell survival. Oncogenic transformation is usually associated with high-level expression of E7 from integrated HPV DNA. The Kaposi’s sarcoma-associated herpes virus (KSHV) also expresses a number of proteins that mimic important host regulators of cellular proliferation and survival […] Expression of these proteins may result in deregulation of cell growth, with changes in the cellular morphology and/or acquisition of the ability of the cells to form colonies in soft agar, changes that are indicative of transformation.

On the other hand, hepatocellular cancers occurring in the context of chronic viral hepatitis are likely to have an alternative explanation. Although it is possible that integration of HBV DNA may be responsible for altered cellular growth control in some hepatitis B-associated cases, liver cancer in this setting may be primarily immunopathogenic.30,32 Chronic inflammation accompanied by oxidative stress and cellular DNA damage are likely to pla[y] important roles.”

“The human immunodeficiency viruses (HIV-1 and HIV-2) and the simian immunodeficiency viruses (SIV) (with a subscript indicating the species of origin) are members of the lentivirus genus of the Retroviridae family, commonly called retroviruses. […] Retroviruses are divided into two subfamilies: Orthoretrovirinae and Spumaretrovirinae […] The spumaretroviruses have distinctive features of their replication cycle that require this more distant classification. They have been isolated from primates, but not humans, and are not associated with any known disease. The orthoretroviruses are divided into six genera and represent viruses that infect snakes, fish, birds, and mammals. […] Human infections occur with viruses from two of these genera. The Deltaretrovirus genus includes human T-cell leukemia virus type I (HTLV-I), the causative agent of adult T-cell leukemia,5, 6, 7 and human T-cell leukemia virus type II (HTLV-II), which is not known to be associated with any disease syndrome. HTLV-I is also associated with another syndrome called HTLV-associated myelopathy (HAM). HTLV-I and HTLV-II are related to viruses found in primates and more distantly related to bovine leukemia virus. The lentivirus genus includes HIV-1 and HIV-2 as well as viruses found in a variety of mammals ranging from primates to sheep. Viruses within these different genera vary widely in the diseases they cause and the mechanisms of disease induction, in contrast to the many common features of their replication cycle. […] In its DNA form the viral genome is inserted into the host genome […]. This step in the virus life cycle has important implications for several features of virus-host interactions. For example, viral DNA that integrates into the genome of a cell but is not expressed becomes silently carried in the descendants of that cell. When this happens in a germline cell, or in the cell of an early embryo that becomes a germline cell, this copy of viral DNA becomes a linked physical part of the host genome, is present in every cell in the body, and is passed on to subsequent generations. Such a genetic element is called an endogenous retrovirus. Most of the elements that become fixed are defective, as there is probably a strong selective pressure against elements that can activate to produce infectious virus. Thus, they represent an archive within the host genome of previous waves of retroviral infections. In fact, the human genome carries a record of retroviral infections over the last 40 million years of primate evolution. These are viruses that we do not recognize as active in the human population at present but are represented by 110,000 genomic inserts of gammaretroviruses, 10,000 inserts of betaretroviruses, and 80,000 inserts of a genus that may be distantly related to spumaretroviruses or may represent an uncharacterized lineage.10 Most of these elements contain large deletions; however, if these deletions had been retained, our genomes would be 40% endogenous retroviruses by mass and outnumber our normal genes 7 to 1.”

“Most histories of retroviruses start with the dramatic discovery by Peyton Rous in 1911 that a virus, Rous sarcoma virus (RSV), could cause cancer. […] The isolation of other tumor-causing retroviruses followed and in time it became apparent that there were two broad classes of agents: one class of viruses caused cancer after a long latency period […], while the other class caused tumors that appeared rapidly […]. We now know that the acutely transforming retroviruses carry a cell-derived oncogene that is responsible for the transforming activity,14 while the slowly transforming retroviruses act by the chance integration of viral DNA near these cellular oncogenes in the host genome to induce their expression and promote tumor formation.15,16 Importantly, many of these same genes can be mutated or overexpressed in human cancers, and the proteins they encode are now the targets of new generations of specific antitumor therapies […] One can confidently surmise that the remnants of the beta- and gammaretroviruses littered in our genomes had such oncogenic effects when they were active. Ironically, for the active human retroviruses, HTLV-I causes tumors by a different but still poorly understood mechanism, and HIV is involved in tumor formation only indirectly through immune suppression. […] There are two fundamental differences between lentiviruses and most other retroviruses: Lentiviruses do not cause cancer [directly…] and they establish chronic infections that result in a long incubation period followed by a chronic symptomatic disease. The “slow” (lenti is Latin for slow), chronic nature of these viral infections was first appreciated for a disease of sheep called maedi-visna (maedi = labored breathing, visna = paralysis and wasting).”

“Using the current sequence diversity in the HIV-1 population, the 1959 sequence, and estimates of the rate of sequence change per year, it has been possible to suggest that the cross-species transmission event that gave rise to the M group of HIV-1 occurred early in the twentieth century.38 If we accept that SIVcpz [HIV in chimps…] has entered the human population three times in the last century (the three groups N, O, and M), then it follows that this virus likely has been transmitted to humans any number of times over the last 10,000 years. Only in the last century have the human institutions of large cities and efficient transportation corridors given these transmission events access to a human environment that could support an epidemic.”

“Over 100 herpesviruses have been identified, with at least eight infecting humans [I had no idea there were that many of them, and I had no clue some of the ones mentioned were actually herpes viruses…]. All human herpesviruses are well adapted to their natural host, being endemic in all human populations studied and carried by a significant fraction of persons in each population. The human herpesviruses include herpes simplex viruses types 1 and 2 (HSV-1 and HSV-2), varicella-zoster virus (VZV), Epstein-Barr virus (EBV), cytomegalovirus (CMV), human herpesvirus 6 (HHV-6), human herpesvirus 7 (HHV-7), and human herpesvirus 8 (HHV-8) or Kaposi’s sarcoma (KS)-associated herpesvirus. Disease caused by human herpesviruses tends to be relatively mild and self-limited in immunocompetent persons, although severe and quite unusual disease can be seen with immunosuppression. […] all herpesviruses share biologic traits. These include expression of a large number of viral enzymes, assembly of the nucleocapsid in the cell nucleus, cytopathic effects on the cell during productive infection, and ability to establish latent infections in an infected host.”

“Vaccine development poses great challenges in the case of herpesviruses because recovery from natural disease is not associated with elimination of virus and does not always protect against another episode of disease.
Live-attenuated, killed, and recombinant subunit herpesvirus vaccines have all been studied. Whole-virus vaccines have the advantage of exposing the immune system to all viral antigens. Live-attenuated vaccines have tended to produce longer-lasting immunity than killed preparations. However, live-attenuated herpesvirus vaccines may be capable of establishing latent infections. The risks are not clear and there is concern that vaccine recipients who subsequently become immunosuppressed may develop disease caused by reactivated virus. Two avirulent HSV strains have been shown to generate lethal recombinants in mice.127 Thus, recombination between an attenuated vaccine strain and a superinfecting wild-type strain could occur. Because several herpesviruses have been associated with malignancies in humans, the long-term safety of any live-attenuated vaccine needs careful study.”

“In the most recent data from NHANES, the prevalence of HSV-1 appears to have fallen slightly from 62% in the years 1988-1994 to 57.7% in the years 1999-2004 in the general population.30 In Western Europe, the prevalence of HSV-1 infection in young adults remains 10-20% higher than that in the United States.31 In STD clinics in the United States, about 60% of attendees have HSV-1 antibodies. In Asia and Africa, HSV-1 infection remains almost universal […] The cumulative lifetime incidence of HSV-2 reaches 25% in white women, 20% in white men, 80% in African American women and 60% in African American men […] Transmission of HSV between sexual partners has been addressed most often in prospective studies of serologically discordant couples, i.e., in couples in whom one partner has and the other does not have HSV-2. Longitudinal studies of such couples have shown that the transmission rate varies from 3% to 12% per year. […] Unlike other STDs, persons usually acquire genital HSV-1 and genital HSV-2 in the context of a steady rather than casual relationship.91 Women have higher rates of acquisition than men; in one study the attack rate among seronegative women approached 30% per year.88 […] Subclinical or asymptomatic viral shedding is an important aspect of the clinical and epidemiologic understanding of genital herpes, as most episodes of sexual and vertical transmission appear to occur during such shedding. […] the risk of HSV transmission is likely similar regardless of the presence of lesions, supporting the epidemiologic observation that most HSV is acquired from asymptomatic partners. […] Subclinical HSV reactivation is highest in the first year after acquisition of infection. During this time period, HSV can be detected from genital sites by PCR on a mean of 25-30% of days […]. This is about 1.5 times higher than patients sampled later in their disease course.”

“The major morbidity of recurrent genital herpes is its frequent reactivation rate. Most likely, all HSV-2 seropositive persons reactivate HSV-2 in the genital region. Moreover, because of the extensive area innervated by the sacral nerve root ganglia, reactivation of HSV-2 is widespread over a large anatomic area.

A prospective study of 457 patients with documented first-episode genital herpes infection has shown that 90% of patients with genital HSV-2 developed recurrences in the first 12 months of infection.93 The median recurrence rate was 0.33 recurrences/month. Most patients experienced multiple clinical reactivations. After primary HSV-2 infection, 38% of patients had at least 6 recurrences and 20% had more than 10 recurrences in the first year of infection. Men had slightly more frequent recurrences than women, median 5 per year compared with 4 recurrences per year [it’s important to note that the recurrence rate is substantial even in patients on suppressive therapy: “About 25% of persons on suppressive therapy will develop a breakthrough recurrence each 3-month period”] […] Recently, long-term cohort studies indicate that the frequency of symptomatic recurrences gradually decreases over time. In the initial years of infection, reported recurrence rate decreases by a median of 1 recurrence per year. […] subclinical shedding episodes account for one-third to one-half of the total episodes of HSV reactivation as measured by viral isolation and for 50-75% of reactivations as measured by PCR. […] Rather than regarding HSV-2 as a predominantly silent infection with occasional clinical outbreaks with marked viral shedding, HSV is a dynamic infection, with very frequent reactivation, mostly subclinical, and active effort on the part of the immune system of the host is required to control mucosal viral replication. […] Immunocompromised patients have frequent and prolonged mucocutaneous HSV infections.226, 227, 228 Over 70% of renal and bone marrow transplant recipients who have serologic evidence of HSV infection reactivate HSV infection clinically within the first month after transplantation […] Recurrent genital herpes in immunosuppressed patients often results in the development of large numbers of vesicles which coalesce into extensive deep, often necrotic, ulcerative lesions.228 […] about 70% of HIV-infected persons in the developed world and 95% in the developing world have HSV-2 antibody. […] The epidemiologic interactions between HIV and HSV-2 have led to calculation of potential population-level impact of these intersecting epidemics. […] The population attributable risk will depend on the prevalence of HSV-2 in the population at risk; at 50% HSV-2 prevalence, common among MSM [Males who have Sex with Males, US], or African Americans in the United States, or general population in sub-Saharan Africa, 35% of HIV infections will be attributable to HSV-2. […] the risk of transmitting HSV [from the mother] to the neonate is 30-50% in women with newly acquired HSV [during the last part of the pregnancy] versus <1% in women with established infection.” [This is relevant not only because herpes sucks, but also because it sucks even more when a newborn child gets it].
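
The 35% figure for the population attributable risk quoted above is consistent with Levin's formula, PAR = p(RR − 1)/(1 + p(RR − 1)). The sketch below assumes a relative risk of roughly 2.1 for HIV acquisition among HSV-2-infected individuals; that RR value is my own assumption for illustration, as the quoted passage does not state which estimate the calculation was based on:

```python
def population_attributable_risk(prevalence, relative_risk):
    """Levin's formula: the fraction of cases attributable to the exposure."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Assumed RR of ~2.1 for HIV acquisition given HSV-2 infection (illustrative only).
print(round(population_attributable_risk(0.5, 2.1), 2))  # ~0.35 at 50% HSV-2 prevalence
```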

“More than 50% of individuals in most populations throughout the world demonstrate serological evidence of prior CMV infection.6 The coevolution with and adaptation to its human host over millions of years may account for the observation that in most cases, CMV infection causes few if any symptoms.5 However, in immunocompromised individuals, primary infection or reactivation of latent virus can be life-threatening. As well, congenital infections are common and can result in serious lifelong sequelae. […] Although CMV does not typically come to medical attention as a result of genital tract lesions or disease, it can be transmitted sexually and has important consequences for the sexually active, child-bearing population. […] As with many viruses that cause chronic infection, CMV seems to have coevolved with humans to a balanced state in which the virus persists but generally causes little clinical illness. The host’s innate and adaptive immune responses are usually successful at limiting CMV infection as is evident by the clear association of immune system dysfunction with CMV disease. In the absence of prophylactic antiviral treatment, CMV often reactivates in seropositive individuals who undergo hematopoietic stem cell transplantation (HSCT).41 Immunosuppression resulting from drugs used to treat cancer and autoimmune disorders, and from impaired T-cell function that occurs with advanced AIDS, is also associated with reactivation of CMV. […] The development of primary CMV infection has been noted in up to 79% of liver transplants and 58% of kidney or heart transplants in which the donor is seropositive and the recipient is seronegative.134,135 In the setting of HSCT, several studies have documented that CMV seropositivity of the recipient results in significantly increased overall posttransplant mortality compared to CMV seronegative recipients with a seronegative donor.136 When the recipient is CMV seronegative, overall mortality is increased when the donor is seropositive compared to the situation where the donor is seronegative.137 […] the transplant recipient is at particularly high risk of CMV reactivation during periods of potent immunosuppression that accompany graft rejection or graft-versus-host disease.”

July 7, 2014 Posted by | Books, Epidemiology, Evolutionary biology, Genetics, Immunology, Infectious disease, Medicine, Microbiology

Sexually Transmitted Diseases (4th edition) (III)

I read the first nine chapters of this very long book a while back, and I decided to have another go at it. I have now read chapters 10-18, the first seven of which deal with ‘Profiles of Vulnerable Populations’ (including chapters about: Gender and Sexually Transmitted Diseases (10), Adolescents and STDs Including HIV Infection (11), Female Sex Workers and Their Clients in the Epidemiology and Control of Sexually Transmitted Diseases (12), Homosexual and Bisexual Behavior in Men in Relation to STDs and HIV Infection (13), Lesbian Sexual Behavior in Relation to STDs and HIV Infection (14) (some surprising stuff in that chapter, but I won’t cover that here), HIV and Other Sexually Transmitted Infections in Injection Drug Users and Crack Cocaine Smokers (15), and STDs, HIV/AIDS, and Migrant Populations (16)), and the last two of which deal with ‘Host Immunity and Molecular Pathogenesis and STD’ (chapters about: ‘Genitourinary Immune Defense’ (17) and ‘Normal Genital Flora’ (18), as well as ‘Pathogenesis of Sexually Transmitted Viral and Bacterial Infections’ (19) – I have only read the first two chapters in that section so far, and so I won’t cover the last chapter here. I also won’t cover the content of the first of these chapters, but for different reasons). The book has 108 chapters and more than 2000 pages, so although I’ve started reading it again I’m sure I won’t finish the book this time either. My interest in the things covered in this book is purely academic in the first place.

You can read my first two posts about the book here and here.

Some observations and comments below…

“A major problem when assessing the risk of men and women of contracting an STI [sexually transmitted infection], is the differential reporting of sexual behavior between men and women. It is believed that women tend to underreport sexual activity, whereas men tend to over-report. This has been highlighted by studies assessing changes in reported age at first sexual intercourse between successive birth cohorts15 and by studies that compared the numbers of sex partners reported by men and by women.10,13,16, 17, 18 […] There is widespread agreement that women are more frequently and severely affected by STIs than men. […] In the studies in the general population that have assessed the prevalence of gonorrhea, chlamydial infection, and active syphilis, the prevalence was generally higher in women than in men […], with differences in prevalence being more marked in the younger age groups. […] HIV infection is also strikingly more prevalent in women than in men in most populations where the predominant mode of transmission is heterosexual intercourse and where the HIV epidemic is mature […] It is generally accepted that the male-to-female transmission of STI pathogens is more efficient than female-to-male transmission. […] The high vulnerability to STIs of young women compared to young men is [however] the result of an interplay between psychological, sociocultural, and biological factors.33

“Complications of curable STIs, i.e., STIs caused by bacteria or protozoa, can be avoided if infected persons promptly seek care and are managed appropriately. However, a prerequisite to seeking care is that infected persons are aware that they are infected and that they seek treatment. A high proportion of men and of women infected with N. gonorrhoeae, C. trachomatis, or T. vaginalis, however, never experience symptoms. Women are asymptomatic more often than men. It has been estimated that 55% of episodes of gonorrhea in men and 86% of episodes in women remain asymptomatic; 89% of men with chlamydial infection remain asymptomatic and 94% of women.66 For chlamydial infection, it has been well documented that serious complications, including infertility due to tubal occlusion, can occur in the absence of a history of symptoms of pelvic inflammatory disease.65

“Most population-based STD rates underestimate risk for sexually active adolescents because the rate is inappropriately expressed as cases of disease divided by the number of individuals in this age group. Yet only those who have had intercourse are truly at risk for STDs. For rates to reflect risk among those who are sexually experienced, appropriate denominators should include only the number of individuals in the demographic group who have had sexual intercourse. […] In general, when rates are corrected for those who are sexually active, the youngest adolescents have the highest STD rates of any age group.5
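
The denominator correction described in that passage is simple arithmetic; here's a minimal sketch with made-up numbers (purely illustrative, not taken from the book):

```python
def rate_per_100k(cases, population):
    """Cases per 100,000 population."""
    return 100_000 * cases / population

cases = 300                    # hypothetical STD cases among 15-year-old girls in a year
all_in_age_group = 100_000     # conventional denominator: everyone in the age group
sexually_experienced = 20_000  # corrected denominator: only those actually at risk

print(rate_per_100k(cases, all_in_age_group))      # 300.0 per 100,000
print(rate_per_100k(cases, sexually_experienced))  # 1500.0 per 100,000 among the sexually active
```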

“Although risk of HPV acquisition increases with number of partners,67,74,75 prevalence of infection is substantial even with limited sexual exposure. Numerous clinic-based studies,76,77 supported by population-based data, indicate that HPV prevalence typically exceeds 10% among young women with only one or two partners.71

“while 100 years ago young men in the United States spent approximately 7 years between [sexual] maturation and marriage, more recently the interval was 13 years, and increasing; for young women, the interval between menarche and marriage has increased from 8 years to 14. […] In 1970, only 5% of women in the United States had had premarital intercourse by age 15, whereas in 1988, 26% had engaged in intercourse by this age. However, in 1988, 37% of never married 15-17-year-olds had engaged in intercourse but in 2002, only 30% had. Comparable data from males demonstrated even greater declines — 50% of never married 15-17-year-olds reported having had intercourse in 1988, compared with only 31% in 2002.99“

“Infection with herpes simplex type 2 (HSV-2) is extremely common among FSWs [female sex workers], and because HSV-2 infection increases the likelihood of both HIV acquisition in HIV-uninfected individuals, and HIV transmission in HIV-infected individuals, HSV-2 infection plays a key role in HIV transmission dynamics.100 Studies of FSWs in Kenya,67 South Africa,101 Tanzania,36 and Mexico72 have found HSV-2 prevalences ranging from 70% to over 80%. In a prospective study of HIV seronegative FSWs in Nairobi, Kenya, 72.7% were HSV-2 seropositive at baseline.67 Over the course of over two years of observation […] HSV-2 seropositive FSWs were over six times more likely to acquire HIV infection than women who were HSV-2 seronegative.”

“Surveys in the UK133 and New Zealand134 found that approximately 7% of men reported ever paying for sex. A more recent telephone survey in Australia found that almost 16% of men reported having ever paid for sex, with 1.9% reporting that they had paid for sex in the past 12 months.135 Two national surveys in Britain found that the proportion of men who reported paying women for sex in the previous 5 years increased from 2.0% in 1990 to 4.2% in 2000.14 A recent review article summarizing the findings of various surveys in different global regions found that the median proportion of men who reported “exchanging gifts or money for sex” in the past 12 months was approximately 9-10%, whereas the proportion of men who reported engaging in “paid sex” or sex with a sex worker was 2-3%.136“

“There are currently around 175-200 million people documented as living outside their countries of birth.3 This number includes both voluntary migrants, people who have chosen to leave their country of origin, and forced migrants, including refugees, trafficked people, and internally displaced people.4 […] Each year about 700 million people travel internationally with an estimated 50 million originating in developed countries traveling to developing ones.98 […] Throughout history, infectious diseases of humans have followed population movements. The great drivers of population mobility including migration, economic changes, social change, war, and travel have been associated with disease acquisition and spread at individual and population levels. There have been particularly strong associations of these key modes of population mobility and mixing for sexually transmitted diseases (STDs), including HIV/AIDS. […] Epidemiologists elucidated early in the HIV/AIDS epidemic that there was substantial geographic variability in incidence, as well as different risk factors for disease spread. As researchers better understood the characteristics of HIV transmission, its long incubation time, relatively low infectivity, and chronic disease course, it became clear that mobility of infected persons was a key determinant for further spread to new populations.6 […] mobile populations are more likely to exhibit high-risk behaviors”

“Studies conducted over the past decade have relied on molecular techniques to identify previously noncultivable organisms in the vagina of women with “normal” and “abnormal” flora. […] These studies have confirmed that the microflora of some women is predominated by species belonging to the genus Lactobacillus, while women having BV [bacterial vaginosis] have a broad range of aerobic and anaerobic microorganisms. It has become increasingly clear that even with these more advanced tools to characterize the microbial ecology of the vagina the full range of microorganisms present has yet to be fully described. […] the frequency and concentration of many facultative organisms depends upon whether the woman has BV or Lactobacillus-predominant microflora.36 However, even if “normal” vaginal microflora is restricted to those women having a Lactobacillus-dominant flora as defined by Gram stain, 46% of women are colonized by G. vaginalis, 78% are colonized by Ureaplasma urealyticum, and 31% are colonized by Candida albicans.36 […] Nearly all women are vaginally colonized by obligately anaerobic gram-negative rods and cocci,36 and several species of anaerobic bacteria, which are not yet named, are also present. While some species of anaerobes are present at higher frequencies or concentrations among women with BV, it is clear that the microbial flora is complex and cannot be defined simply by the presence or absence of lactobacilli, Gardnerella, mycoplasmas, and anaerobes. This observation has been confirmed with molecular characterization of the microflora.26, 27, 28, 29, 30, 31, 32, 33, 34, 35

Vaginal pH, which is in some sense an indicator of vaginal health, varies over the lifespan (I did not know this..): In premenarchal girls vaginal pH is around 7, whereas it drops to 4.0-4.5 in healthy women of reproductive age. It increases again in postmenopausal women, but postmenopausal women receiving hormone replacement therapy have lower average vaginal pH and higher numbers of lactobacilli in their vaginal floras than do postmenopausal women not receiving hormone replacement therapy, one of several findings indicating that vaginal pH is under hormonal control (estrogen is important). Lactobacilli play an important role because they produce lactic acid, which lowers pH, and women with a reduced number of lactobacilli in their vaginal floras have higher vaginal pH. Stuff like sexual intercourse, menses, and breastfeeding all affect vaginal pH and microflora, as does antibiotic usage, and such things may play a role in disease susceptibility. Aside from lowering pH, some species of lactobacilli also play other helpful roles which are likely to be important in terms of disease susceptibility, such as producing hydrogen peroxide in their microenvironments, which is the kind of stuff a lot of (other) bacteria really don’t like to be around: “Several clinical studies conducted in populations of pregnant and nonpregnant women in the United States and Japan have shown that the prevalence of BV is low (4%) among women colonized with H2O2-producing strains of lactobacilli. By comparison, approximately one third of women who are vaginally colonized by Lactobacillus that do not produce H2O2 have BV.45, 46, 47”.

My interest in the things covered in this book is as mentioned purely academic, but I’m well aware that some of the stuff may not be as ‘irrelevant’ to other people reading along here as it is to me. One particularly relevant observation I came across which I thought I should include here is this:

“The lack of reliable phenotypic methods for identification of lactobacilli has led to a broad misunderstanding of the species of lactobacilli present in the vagina, and the common misperception that dairy and food derived lactobacilli are similar to those found in the vagina. […] Acidophilus in various forms have been used to treat yeast vaginitis.144 Some investigators have gone so far as to suggest that ingestion of yogurt containing acidophilus prevents recurrent Candida vaginitis.145 Nevertheless, clinical studies of women with acute recurrent vulvovaginitis have demonstrated that women who have recurrent yeast vaginitis have the same frequency and concentration of Lactobacillus as women without recurrent infections.146 […] many women who seek medical care for chronic vaginal symptoms report using Lactobacillus-containing products orally or vaginally to restore the vaginal microflora in the mistaken belief that this will prevent recurrent vaginitis.147 Well-controlled trials have failed to document any decrease in vaginal candidiasis whether orally or vaginally applied preparations of lactobacilli are used by women.148 Microbial interactions in the vagina probably are much more complex than have been appreciated in the past.”

As illustrated above, there seem to be some things ‘we’ know which ‘people’ (including some doctors..) don’t know. But there are also some really quite relevant things ‘we’ don’t know a lot about yet. One example would be whether/how hygiene products mediate the impact of menses on vaginal flora: “It is unknown whether the use of tampons, which might absorb red blood cells during menses, may minimize the impact of menses on colonization by lactobacilli. However, some observational data suggests that women who routinely use tampons for catamenial protection are more likely to maintain colonization by lactobacilli compared to women who use pads for catamenial protection”. Just to remind you, colonization by lactobacilli is desirable. On a related and more general note: “Many young women use vaginal products including lubricants, contraceptives, antifungals, and douches. Each of these products can alter the vaginal ecosystem by changing vaginal pH, altering the vaginal fluid by direct dilution, or by altering the capacity of organisms to bind to the vaginal epithelium.” There are a lot of variables at play here, and my reading of the results indicates that it’s not always obvious what is actually the best advice. For example, a (by the standards of this literature) large prospective study (n=235) of the effect of N-9, a compound widely used in spermicidal contraceptives, on the vaginal flora “demonstrated that N-9 did have a dose-dependent impact on the prevalence of anaerobic gram-negative rods, and was associated with a twofold increase in BV (OR 2.3, 95% CI 1.1-4.7).” Using spermicides like these may on the one hand decrease the likelihood of getting pregnant and perhaps lower the risk of contracting a sexually transmitted disease during intercourse, but on the other hand such preparations may also affect the vaginal flora in a way which makes users more vulnerable to sexually transmitted diseases, by promoting E. coli colonization of the vaginal flora. On a more general note, “The impact of contraceptives on the vaginal ecosystem, including their impact on susceptibility to infection, has not been adequately investigated to date.” The book does cover various studies on different types of contraceptives, but most of the studies are small and probably underpowered, so I decided not to go into this stuff in more detail. An important point to take away here is, however, that there’s no doubt that the vaginal flora is important for disease susceptibility: “longitudinal studies [have] showed a consistent link between increased incidence of HIV, HSV-2 and HPV and altered vaginal microflora […] there is a strong interaction between the health of the vaginal ecosystem and susceptibility to viral STIs.” Unfortunately, “use of probiotic products for treatment of BV has met with limited success.”
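As an aside – and this is purely my own illustration, not anything from the book – an odds ratio like the one quoted above is simply computed from a 2×2 table of exposure (N-9 use) versus outcome (BV). The counts below are made up; they are chosen only so that they sum to 235 and produce a result in the same ballpark as the quoted estimate, and they are not the actual study data:

```python
import math

# Hypothetical 2x2 table (made-up counts, NOT the study's real data):
#                    BV    no BV
exposed_bv, exposed_no = 28, 90      # N-9 users
unexposed_bv, unexposed_no = 14, 103 # non-users

# Odds ratio: (odds of BV among exposed) / (odds of BV among unexposed)
odds_ratio = (exposed_bv / exposed_no) / (unexposed_bv / unexposed_no)

# Wald 95% confidence interval, computed on the log-odds-ratio scale
se_log_or = math.sqrt(1/exposed_bv + 1/exposed_no + 1/unexposed_bv + 1/unexposed_no)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.1f}-{ci_high:.1f})")
```

With these invented counts the script prints an OR of roughly 2.3 with a confidence interval of roughly (1.1-4.6), which is just meant to show the mechanics behind a reported figure of that kind.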

I should note that although multiple variables and interactions are involved in ‘this part of the equation’, it is of course only part of the bigger picture. One way in which it’s only part of the bigger picture is that the vaginal flora plays other roles besides the one which relates to susceptibility to sexually transmitted disease – one example: “Studies have established that some organisms considered to be part of the normal vaginal microflora are associated with an increased risk of preterm and/or low birth weight delivery when they are present at high-density concentrations in the vaginal fluid”. (And once again the lactobacilli in particular may play a role: “high-density vaginal colonization by Lactobacillus species has been linked with a decreased risk of most adverse outcomes of pregnancy”). Another major way in which this stuff is only part of the equation is that human females have a lot of other ways to defend themselves as well besides relying on bacterial colonists. If you don’t like immunology there are some chapters in here which you’d be well-advised to skip.

July 5, 2014 Posted by | Books, Data, Demographics, Epidemiology, Immunology, Infectious disease, Medicine, Microbiology | Leave a comment

100 Cases in Clinical Medicine

This is another book in the same series as the 100 Cases in Acute Medicine book. Here’s part of the preface:

“Most doctors think that the most memorable way to learn medicine is to see patients. It is easier to recall information based on a real person than a page in a textbook. Another important element in the retention of information is the depth of learning. Learning that seeks to understand problems is more likely to be accessible later than superficial factual accumulation. This is the basis of problem-based learning, whereby students explore problems with the help of a facilitator. The cases in this book are designed to provide another useful approach, parallel to seeing patients and giving an opportunity for self-directed exploration of clinical problems. They are based on the findings of history taking and examination, together with the need to evaluate initial investigations such as blood investigations, X-rays and electrocardiograms.

These cases are no substitute for clinical experience with real patients, but they provide a safe environment for students to explore clinical problems and their own approach to diagnosis and management. Most are common problems that might present to a general practitioner’s surgery, a medical outpatients clinic or a session on call in hospital. There are a few more unusual cases to illustrate specific points and to emphasize that rare things do present, even if they are uncommon. The cases are written to try to interest students in clinical problems and to enthuse them to find out more. They try to explore thinking about diagnosis and management of real clinical situations.”

As for the aim to ‘interest students in clinical problems and to enthuse them to find out more’, they certainly succeeded in my case, but I approached this book in a slightly different manner than I did the first one in the series. When I read the acute medicine book, I’d occasionally think to myself while reading the patient history and/or the reported lab results that ‘hey, this sounds a bit like…’, and I’d look up the diagnosis/condition I was considering in order to decide if I wanted to ‘guess’ at that before moving on to the answer part of the case. I did this a few times here as well, but actually most of the pre-answer wiki peeks were related to the interpretation of specific lab results (‘how to interpret some of the arterial blood gas test results’). The reason why I tried to limit how much I looked up before reading the answers was that I wanted to know a little bit about how much of the pathophysiology text (and the stuff covered in related texts, such as Hall’s Handbook and Rogers et al., as well as various medical lectures, e.g. from Khan Academy) I could remember. I actually realized when reading the first 100 cases book that there were very few conditions covered there which I had not already read about, or at least seen mentioned, elsewhere; the problem was figuring out which patients had which specific problems. Part of the reason why I often had trouble with that part is incidentally related to the fact that there are some other relevant books I have not read – books such as this one or this one (I’m not planning on reading these, just in case you were wondering). A related point is that doctors have a lot more information available to them than the people who are sick do, and this is certainly a (small) part of the explanation for why they are better at figuring out what’s wrong – symptoms can be non-specific, but if so, lab results will often tell you more about where to look and what to look for. I decided beforehand that I’d try, for fun, to keep score and figure out in how many cases I guessed the correct diagnosis; it turned out that I guessed the right diagnosis in roughly one-fifth of the cases, and in a few other cases the guess I made was a very plausible differential diagnosis which needed to be ruled out anyway. In a few of them I didn’t get ‘the complete picture’, and I learned something from many of the cases where I did know the ‘diagnosis’ part of the right answer. I feel quite certain I would have guessed more of them if I’d spent more time on individual cases; I read the entire book yesterday, and this is not a book you can read in a few hours (I think it took me 12 hours, at least, but I’m not really sure as I didn’t keep track and took breaks occasionally. Ratios like these – me spending easily 5-10 hours or more on stuff which leads to a post you’ll read in perhaps 10-15 minutes – are incidentally one reason why I sometimes feel that people reading along here are ‘cheating’ in a way. On the other hand I really can’t complain as long as I’m enabling such ‘cheating’ in the first place…). The large majority of the guesses I did make were correct, as in many cases I didn’t guess at all because I wasn’t completely sure what was going on. Of course I didn’t ‘guess at’ the treatment and management aspects, and those are an important part of the book as well.
The conditions I recognized spanned a rather broad range: from colon cancer and HIV seroconversion illness (the main differential was malaria – I knew this as well) to COPD, rheumatoid arthritis, bacterial meningitis, obstructive sleep apnea, peripheral neuropathy secondary to undiagnosed type 2 diabetes mellitus, dementia, small cell lung cancer with associated paraneoplastic syndrome, and Parkinson’s disease. As you can probably tell from those diagnoses, like the acute medicine book this one also has some rather depressing cases. Some cases, e.g. a case of cerebral toxoplasmosis secondary to HIV infection (this is actually an AIDS-defining illness, so she had AIDS at the time of admission) and a diet-related vitamin B12 deficiency, were really obvious in retrospect, but in medicine there’s a lot of stuff to remember.

I’ve added some quotes, observations and key points from the book below.

“Cystic fibrosis should always be considered when there is a story of repeated chest infections in a young person. Although it presents most often below the age of 20 years, diagnosis may be delayed until the 20s, 30s, 40s or later in milder cases.”

“Patients with a chronic persistent cough of unexplained cause should have a chest X-ray. When the X-ray is clear the cough is likely to be produced by one of three main causes in non-smokers. Around half of such cases have asthma or will go on to develop asthma over the next few years. Half of the rest have rhinitis or sinusitis with a postnasal drip. In around 20 per cent the cough is related to gastro-oesophageal reflux […] Cough is a common side effect in patients treated with angiotensin-converting enzyme (ACE) inhibitors.”

“This man has signs of chronic liver disease with ascites and oedema. […] The most common cause of chronic liver disease is alcohol. […] However, his alcohol intake is too low to be consistent with the diagnosis of alcoholic liver disease [15-20 units/week, according to the patient history. This was why I initially rejected alcohol-related pathology in this case and (very briefly) considered other causes instead, without coming up with anything (this was another one of those aforementioned obvious ones in retrospect)…]. When the provisional diagnosis is discussed with him, though, he eventually admits that his alcohol intake has been at least 40–50 units per week for the last 20 years. His alcohol intake has increased further during the last year after his marriage had ended.” [Patients sometimes lie to their doctors. This one did. In case you were wondering he died three years later from an esophageal variceal bleed.]

“Patients often become symptomatic due to renal failure only when their glomerular filtration rate (GFR) is less than 15 mL/min [normal range is 90+, US] and thus may present with end-stage renal failure.” [This is an example of a more general point in many medical contexts; our bodies often have a lot of ‘excess capacity’ and redundancies implemented in order to make us less likely to get sick/get symptoms which may decrease our likelihood of survival even if things aren’t optimal. The book actually has other examples illustrating this point, e.g. this: “Patients with central diabetes insipidus typically describe an abrupt onset of polyuria and polydipsia. This is because urinary concentration can be maintained fairly well until the number of AVP-secreting neurones in the hypothalamus decreases to 10–15 per cent of the normal number, after which AVP levels decrease to a range where urine output increases dramatically.”]

“Petechiae are small capillary haemorrhages that characteristically develop in crops in areas of increased venous pressure, such as the dependent parts of the body. Petechiae are the smallest bleeding lesions (pinhead in size), and suggest problems with platelet number or function. Purpura are larger in size than petechiae with variable shape and involve bleeding into subcutaneous tissues. Purpura can be seen in a variety of bleeding disorders […] AML is the most common acute leukaemia in adults with a mean age at presentation of 65 years. Patients with AML generally present with symptoms related to complications of pancytopenia (eg, anemia, neutropenia, and thrombocytopenia), including weakness, breathlessness and easy fatigability, infections of variable severity, and/or haemorrhagic findings such as gingival bleeding, ecchymoses, epistaxis, or menorrhagia.”

“Vegans who omit all animal products from their diet often have subclinical vitamin B12 deficiency […] Vitamin B12 deficiency may occur in strict vegetarians who eat no dairy products. […] Typical neurological signs are position and vibration sense impairment in the legs, absent reflexes and extensor plantars.”

“Malaria prophylaxis is often not taken regularly. Even when it is, it does not provide complete protection against malaria […] A traveller returning from a malaria endemic region who develops a fever has malaria until proven otherwise.”

“Peripheral oedema may occur due to local obstruction of lymphatic or venous outflow or because of cardiac, renal, pulmonary or liver disease. Unilateral oedema is most likely to be due to a local problem […] Bilateral oedema may be due to cardiac, liver or renal disease. […] Pitting oedema needs to be distinguished from lymphoedema, which is characteristically non-pitting. This is tested by firm pressure with the thumb for approximately 10 s. If the oedema is pitting, an indentation will be present after pressure is removed. […] frothy urine is a clue to the diagnosis of nephrotic syndrome and is commonly noted by patients with heavy proteinuria.”

“80% of C. difficile infections occur in people aged over 65 years since a lower density and fewer species of gut bacteria make them more susceptible to colonisation by C. difficile […] 20% of hospital patients and those in long-term care facilities are colonised with C. difficile. […] C. difficile infection should be suspected in any hospital patient who develops diarrhoea.”

“ADPKD [Autosomal Dominant Polycystic Kidney Disease] is the most common inherited renal disease, occurring in approximately 1:600 to 1:1000 individuals. Although the name ‘ADPKD’ is derived from renal manifestations of cyst growth leading to enlarged kidneys and renal failure, this is a systemic disorder manifested by the presence of hepatic cysts, diverticular disease, inguinal hernias, mitral valve prolapse, intracranial aneurysms and hypertension. […] Patients with ADPKD are often asymptomatic. […] Flank pain is the most common symptom […] Hypertension occurs early in the course of this disease, affecting 60% of patients with normal renal function. Approximately 50% of ADPKD patients will develop end-stage renal failure.”

“Transient small nodes in the neck or groin are common benign findings. However, a 3 × 4 cm mass of nodes for 2 months is undoubtedly abnormal. […] Lymph nodes are normally barely palpable, if at all. The character of enlarged lymph nodes is very important. In acute infections the nodes are tender, and the overlying skin may be red. Carcinomatous nodes are usually very hard, fixed and irregular. The nodes of chronic leukaemias and lymphomas are non-tender, firm and rubbery.  […] The typical systemic symptoms of lymphoma are malaise, fever, night sweats, pruritus, weight loss, anorexia and fatigue.”

“Colonic diverticula are small outpouchings that are most commonly found in the left colon. […] Inflammation in a diverticulum is termed diverticulitis. […] Diverticular disease is a common finding in the elderly Western population and may be asymptomatic or cause irritable bowel syndrome-type symptoms. […] Diverticular disease is a common condition; its presence can distract the unwary doctor from pursuing a coincident condition.”

“Tension type headaches are the commonest headaches in the general population. The typical presentation is of mild to moderate headache, nonthrobbing, bilateral with no associated symptoms. Cluster headaches are characterised by attacks of severe unilateral orbital or temporal pain, accompanied by autonomic features such as nasal congestion, lacrimation and rhinorrhoea. Migraines are often preceded by characteristic symptoms such as flashing lights and are often unilateral. Nausea and photophobia may occur during an attack. Brain tumours cause headaches by causing raised intracranial pressure. The headache is worse after coughing and is often associated with nausea and vomiting. […] The sudden onset of a headache within seconds or a few minutes is characteristic of a subarachnoid haemorrhage (SAH). […] Patients with SAH often describe the pain as ‘the worst headache in my life’. [And in many cases it’s also the last headache they will ever have:] SAH is associated with a mortality rate of up to 50%.”

“He drinks 35 units of alcohol per week and smokes 30 cigarettes per day.” [Aargh! Another one of those! But at least this one didn’t lie about his drinking habits. … But it gets worse:] “No history was available from the patient [she’s in a coma], but her partner volunteered the information that they are both intravenous heroin addicts. She is unemployed, smokes 25 cigarettes per day, drinks 40 units of alcohol per week and has used heroin for the past 4 years.” [Dammit! Some of these histories are depressing in more than one way. The woman had been found unconscious by her partner. My first thought when reading the case story about the woman and the lab results was: ‘This reminds me of that movie I saw a while back, what’s it called..?’ – I can’t remember the name of the movie, but it’s not important. I want to quote a bit more extensively from the answer part of this case because I thought it was sort of fascinating in a way; it illustrates how a drug overdose isn’t always just a problem because of the drug overdose:]

“This patient has acute renal failure as a result of rhabdomyolysis. Severe muscle damage causes a massively elevated serum creatine kinase (CK) level and a rise in serum potassium and phosphate levels. In this case, she has lain unconscious on her left arm for many hours due to an overdose of alcohol and intravenous heroin. As a result, she has developed severe ischaemic muscle damage, causing release of myoglobin, which is toxic to the kidneys. […] Acute renal failure due to rhabdomyolysis causes profound hypocalcaemia in the oliguric phase due to calcium sequestration in muscle and reduced 1,25-dihydroxycalciferol levels, often with rebound hypercalcaemia in the recovery phase. This woman’s consciousness level is still depressed as a result of opiate and alcohol toxicity, and she has clinical and radiological evidence of aspiration pneumonia. She has mixed metabolic and respiratory acidosis (low pH, bicarbonate) due to acute renal failure and respiratory depression (pCO2 elevated). Her arterial oxygenation is reduced due to hypoventilation and pneumonia. She also has compartment syndrome in her arm due to massive swelling of her damaged muscles. This patient has life-threatening hyperkalaemia with electrocardiogram (ECG) changes. […] Emergency treatment involves intravenous calcium gluconate, which stabilizes cardiac conduction, and intravenous insulin/glucose, intravenous sodium bicarbonate and nebulized salbutamol, all of which temporarily lower the plasma potassium by increasing the cellular uptake of potassium. However, these steps should be regarded as holding measures while urgent dialysis is being organized. The chest X-ray and clinical findings indicate consolidation of the left lower lobe. This patient should initially be managed on an intensive care unit. She will require antibiotics for her pneumonia and will require a naloxone infusion or mechanical ventilation for her respiratory failure. The patient should have vigorous rehydration with monitoring of her central venous pressure. If a good urinary flow can be maintained, urinary pH should be kept greater than 7.0 by bicarbonate infusion, which prevents the renal toxicity of myoglobin. This patient also needs to be considered urgently for surgical fasciotomy to relieve the compartment syndrome in her arm.”

Back when I read the Acute Muscle Injuries text, compartment syndrome was sort of a worst-case scenario. Here it’s just one of multiple problems, each of which on its own would be quite terrible. In case you were wondering whether all of the patients in this book are alcoholics, I should incidentally note that most of them are not – but the authors do mention in the coverage that: “In some surveys alcohol is linked directly to around 25% of acute medical admissions.” I looked around very briefly for those numbers because they sounded very high to me, but I didn’t find much. This paper had an estimate of 6%, but that’s out of all hospital admissions, and you’d expect the proportion of all admissions involving alcohol to be significantly lower than the proportion of acute admissions. Note in that context that ‘the true number’ in the former case is to some extent unknowable – though you can try to estimate it, as people do – because e.g. alcohol’s role in certain cancers is quite difficult to figure out in general, and impossible to figure out at the individual level; it makes sense to say that drinking alcohol increases your risk of breast cancer (and perhaps that’s not even the best example, as we’re quite sure alcohol has a role there, a level of certainty we do not have in other areas of oncology), but deciding with certainty whether patient X’s specific case of breast cancer was alcohol-related or not is impossible – ‘it may have been a contributing factor’ is probably the closest you can get; we don’t have a test for that. Same goes for a cardiovascular event – ‘alcohol may have played a role’, but that’s it. It is perhaps also worth remembering that some epidemiological findings which try to take a closer look at precisely these sorts of things may be partially explained by statistical artifacts unrelated to the ‘true’ associated risk: a smoker who drinks a lot is highly likely to die from various alcohol- and smoking-related causes at a relatively early age. Such early deaths may well make people with such habits less likely to get old enough to get prostate cancer (the risk of which increases dramatically with age), even if alcohol and smoking on their own actually increase the risk of prostate cancer, in the sense that the effects of both may be to make those cells more likely to turn malignant (I don’t know whether this is actually the case or not; it’s just the sort of thing you need to watch out for). There are ways around such problems – a competing risks framework is important to have in mind here – but problems of this sort are sometimes hard to avoid and/or deal with (the toy simulation below illustrates the basic issue).
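To make the competing-risks point a bit more concrete, here is a minimal toy simulation – entirely my own sketch, with made-up hazard rates, not anything from the book. The ‘high-risk’ group is given a slightly higher true cancer hazard but a much higher hazard of dying from other causes first, and the observed cancer incidence nonetheless comes out lower in that group:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # simulated individuals per group

def observed_cancer_incidence(other_cause_hazard, cancer_hazard, years=60):
    """Competing risks with constant annual hazards over a fixed follow-up.
    Returns the fraction of people ever *observed* to develop the cancer."""
    # Independent exponential times to death from other causes and to cancer.
    t_other = rng.exponential(1.0 / other_cause_hazard, n)
    t_cancer = rng.exponential(1.0 / cancer_hazard, n)
    # Cancer is only observed if it occurs before the competing death
    # and within the follow-up window.
    observed = (t_cancer < t_other) & (t_cancer < years)
    return observed.mean()

# Made-up hazards, chosen only to illustrate the masking effect:
low_risk  = observed_cancer_incidence(other_cause_hazard=0.01, cancer_hazard=0.004)
high_risk = observed_cancer_incidence(other_cause_hazard=0.05, cancer_hazard=0.005)

print(f"Observed cancer incidence, low-risk group:  {low_risk:.1%}")
print(f"Observed cancer incidence, high-risk group: {high_risk:.1%}")
# Despite the higher true cancer hazard in the high-risk group, its observed
# incidence is lower, because competing mortality removes people first.
```

With these (invented) numbers the low-risk group ends up with an observed incidence of roughly 16% and the high-risk group roughly 9%, even though the high-risk group’s underlying cancer hazard is higher – which is exactly the kind of artifact a naive comparison would misread.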

They don’t talk about these things in the book, but they talk about a lot of other interesting stuff, and I can’t cover all of it. One thing I have yet to cover which I thought I should include as a small favour to a friend reading along is this part, from the very last pages of the book:

“Traditional Chinese medicine includes herbal therapy, acupuncture, massage and dietary therapy. There is potential for developing novel treatments for diseases such as asthma and food allergies with Chinese herbs. However, there is concern over the lack of standardization and controlled clinical trials. Chinese herbal medicines containing aristolochic acid have been implicated in a specific nephropathy characterised by extensive interstitial fibrosis with atrophy and loss of the tubules, with thickening of the walls of the interlobular and afferent arterioles. Blood pressure is generally normal or only modestly elevated. Patients presenting with a creatinine < 200 will generally stabilise their renal function after stopping the Chinese medications, but patients with worse kidney function will generally progress to end-stage kidney failure.”

I liked the book and gave it three stars on goodreads. You need to be fluent in ‘medical textbook’ in order to get much out of this book, but if you have some medical knowledge I believe you’ll be quite likely to find the books in this series interesting.

June 8, 2014 Posted by | alcohol, Books, Cancer/oncology, Epidemiology, Infectious disease, Medicine, Microbiology, Nephrology | Leave a comment

Bioterrorism and Infectious Agents

I read this book over the weekend. I gave it three stars on goodreads but seriously considered giving it four stars.

The book is from 2005, meaning that some parts of it – particularly, I assume, those related to diagnostics (genotyping etc.) – are presumably a bit dated; progress in vaccine development may also have occurred in the meantime (I wouldn’t know, but some of the authors assumed such developments would be likely in their coverage). Most of the stuff covered is, I think, still as relevant today as it was when it was written.

The book is a Springer publication and contains 10 chapters on various topics related to bioterrorism and specific infectious disease agents which may be used for that purpose. Most chapters deal with specific agents or classes of agents which have the potential to be used in a bioterrorism setting, and only the last two chapters deal with more general topics – the first one of these addresses the bioterrorism setting more generally than do the previous chapters (“When the agent used in a biological attack is known, response to such an attack is considerably simplified. The first eight chapters of this text deal with agent-specific concerns and strategies for dealing with infections due to the intentional release of these agents. A larger problem arises when the identity of an agent is not known. […] in some cases, an attack may be threatened or suspected, but it may remain unclear as to whether such an attack has actually occurred. Moreover, it may be unclear whether casualties are due to a biological agent, a chemical agent, or even a naturally occurring infectious disease process or toxic exposure […] This chapter provides a framework for dealing with outbreaks of unknown origin and etiology. Furthermore, it addresses several related concerns and topics not covered elsewhere in this text.”), whereas the last one very briefly addresses ‘The Economics of Planning and Preparing for Bioterrorism’.

An implicit assumption I’d made before reading this book was that in a bioterrorism setting we’d know that bioterrorism was taking place – it would be obvious because of all those sick people. But it is far from clear that this would always be the case. Most of the agents have incubation periods measured in days or weeks, and even after symptoms present it may be difficult to realize what’s going on, because these diseases are not commonly seen in clinical practice and may be confused with other more common conditions. An aerosolized agent introduced into an environment with a large number of people could infect a lot of people who would not display symptoms until much later, and it would be difficult to figure out what was going on. A long incubation period incidentally doesn’t necessarily mean the disease isn’t severe; it may well mean that once you get symptoms severe enough to lead you to seek medical attention, you’re already screwed. An example:

“Symptoms and physical findings are nonspecific in the beginning of [anthrax] infection. The clinical presentation is usually biphasic. The initial stage begins with the onset of myalgia, malaise, fatigue, nonproductive cough, occasional sensation of retrosternal pressure, and fever. […] anthrax symptoms insidiously mimic flu-like symptoms in the beginning […] In some patients, a brief period of apparent recovery follows. Other patients may progress directly to the second, fulminant stage of illness. The second stage develops suddenly with the onset of acute respiratory distress, hypoxemia, and cyanosis. Death sometimes occurs within hours […] The disease progression from the first manifestation of symptoms until death appears to have a considerable range from a few hours […] to 11 days”

While reading the book, especially in the beginning, I was a bit surprised that more effort was not put into covering the topics briefly addressed in chapter 9 (the ‘unknown etiology’ chapter mentioned above), but the coverage that was chosen actually matches what the authors state they set out to do quite well. The book is written for health professionals: “this volume will provide health care workers with up-to-date important reviews by world-renowned experts on infectious and biological agents that could be used for bioterrorism”. Mostly the book is about the infectious agents, how people affected by these agents may present, and what can and should be done in terms of treatment/monitoring/isolation etc., so it makes sense that this work does not include a lot of stuff on what might be termed more general risk-management aspects, response modelling, coordination problems and so on; there is a little bit on that stuff in the last chapters, but not much. I’d be very surprised if there are not other books/works published which deal with the risk- and decision-management aspects of this kind of stuff in much more detail (especially given the existence of books like this one).

The fact that the book is written for health professionals (“Emergency physicians, Public Health personnel, Internists, Infectious Disease specialists, Microbiologists, Critical care specialists, and even General practitioners”) means that if you’re not a health professional, some of it will go over your head. Patients will not be described as having double vision (they’ll have diplopia), and they won’t be described as ‘sweating a lot’ (they’ll be diaphoretic). The authors assume that when they tell you that the suggested treatment may result in hemolytic anemia you’ll know what that implies, and that you know what G-CSF stands for in the context of adjunctive melioidosis treatment. Usage of abbreviations/acronyms which are not explained is incidentally part of the reason why this book would never get five stars from me; using acronyms without telling you once what the letters stand for is a capital offence in my book. Even if you don’t know much about medicine you’ll learn about the exposure routes of various substances/diseases (is person-to-person transmission something I should be worried about? Is it airborne?), symptoms (to some extent – you’ll understand some of the words without looking up the medical terms), prognosis in case of exposure, the existence (or lack thereof) of a vaccine/treatment, etc. You’ll also learn a little about the history of some of the substances in question; some of them have been used in warfare before, and extensive research was conducted on quite a few of them during the Cold War, when both the US and the Soviet Union worked on weaponizing some of these substances.

The 8 chapters on specific biological agents/diseases deal with anthrax, plague, tularemia, melioidosis and glanders, smallpox, hemorrhagic fever viruses, botulism, and ricin. None of these things are nice, and you can certainly justify covering them in a book like this. The US Centers for Disease Control and Prevention classifies 6 biological agents as ‘Category A’ biowarfare agents, which is the highest risk category and includes agents which “can be easily disseminated or transmitted from person-to-person, can cause high mortality, and have the potential for major public health impact. This category includes agents like smallpox, anthrax, plague, botulinum toxin, and Ebola hemorrhagic fever.” All category A agents are covered in this book, as are a few category B agents. The fact that agents such as ricin (“A dose the size of a few grains of table salt can kill an adult human”) are included in the B category, rather than category A, provides you with a bit of context as to how awful the agents belonging in the A category are. Many of the agents are not just terrible because they kill a lot of people; some of them will also cause really severe and prolonged morbidity in those who survive. A few examples:

“Patients who require mechanical ventilation, respectively, need average periods of 58 days (type A) and 26 days (type B) for weaning (Hughes et al., 1981). Recovery may not begin for as long as 100 days (Colerbatch et al., 1989).” (Botulism. You may not be able to breathe on your own for a month or two.)

“Smallpox is disfiguring. Older texts suggested removing mirrors from patients’ rooms (Dixon, 1962).”

“Deafness is a very common and often permanent result of LASV [Lassa virus] infection, occurring in approximately 30% of patients (Cummins et al., 1990a).”

“Following parenteral treatment, prolonged oral antibiotics are needed to prevent relapse […] The proportion of patients who relapse can be reduced to less than 10%, and probably less than 5%, if appropriate antibiotics are given for 20 weeks.” (They’re talking about melioidosis victims. You may need to treat these people for months to prevent them from relapsing, and some will relapse even if you do. Melioidosis isn’t unique in this respect: “all persons exposed to a bioterrorist incident involving anthrax should be administered one of these [post-exposure prophylaxis] regimens at the earliest possible opportunity. Adherence to the antibiotic prophylaxis program must be strict, as disease can result at any point within 30–60 days after exposure if antibiotics are stopped.”)

Even the class A agents may be said to some extent to belong on a spectrum. Anthrax doesn’t really transmit from person to person, so the total death toll would mostly be limited to people directly exposed to the agent during an attack (‘mostly’ because e.g. people handling the bodies may be exposed to anthrax spores as well). Pneumonic plague is, well, different. Sometimes the very high virulence of an agent may actually implicitly be an argument against using the agent as a biological weapon in some contexts: “F. tularensis is less desirable than other organisms as a weapon because it does not have a stable spore phase and is difficult to handle without infecting those processing and dispersing the pathogen (Cunha, 2002).”

Especially disconcerting in the context of an attack is the idea of widespread panic following the release of one of these agents, causing health services to become overextended and unable to help actual victims – they do address this topic in the book:

“An announced or threatened bioterrorism attack can provoke fear, uncertainty, and anxiety in the population, resulting in overwhelming numbers of patients seeking medical evaluation for unexplained symptoms, and demanding antidotes for feared exposure. Such a scenario could also follow a covert release when the resulting epidemic is characterized as the consequence of a bioterror attack. Symptoms due to anxiety and autonomic arousal, and side effects of postexposure antibiotic prophylaxis may suggest prodromal disease due to biological agent exposure, and pose challenges in differential diagnosis. This “behavioral contagion” is best prevented by risk communication from health and government authorities that includes a realistic assessment of the risk of exposure, information about the resulting disease, and what to do and whom to contact for suspected exposure. Risk communication must be timely, accurate, consistent, and well coordinated.”

One thing I should perhaps note in this context is that anthrax is not the only one of these agents which ‘for practical purposes’ does not transmit from person to person (e.g., “Only two well-documented instances of person-to-person spread are recorded in the [melioidosis] literature”), and that some of those that do transmit actually require quite a bit of exposure to transfer successfully – the ‘everybody who stands next to someone with the Incurable Cough of Death disease and gets coughed at will die horribly within 24 hours and we have no cure’ situation will never happen, because such diseases don’t exist. On a related note, the faster a disease kills/incapacitates you, the less time the infected individual has to actively transfer it to other people; so even severe and fast-acting diseases will often be self-limiting to some extent. Also worth noting: “With the exception of smallpox, pneumonic plague, and, to a lesser degree, certain viral hemorrhagic fevers, the agents in the Centers for Disease Control and Prevention’s (CDC’s) categories A and B […] are not contagious via the respiratory route.”

I could cover this book in a lot of detail, but I decided to limit my coverage to the stuff above plus a few remarks about smallpox and plague, because I figure these two sort of deserve to be covered when dealing with a book like this.

First, plague. This is not just a disease of the past:

“Improved sanitation, hygiene, and modern disease control methods have, since the early 20th century, steadily diminished the impact of plague on public health, to the point that an average of 2,500 cases is now reported annually […] The plague bacillus is, however, entrenched in rodent populations in scattered foci on all inhabited continents except Australia […] and eliminating these natural transmission cycles is unfeasible. Furthermore, although treatment with antimicrobials has reduced the case fatality ratio of bubonic plague to 10% or less, the fatality ratio for pneumonic plague remains high. A review of 420 reported plague cases in the US in the period 1949–2000 identified a total of 55 cases of plague pneumonia, of which 22 (40.0%) were fatal”

Note that even though the annual number of cases is relatively low, you don’t have to go back to Medieval times to find a rather severe outbreak costing millions of lives:

“The third (Modern) pandemic began in southwestern China in the mid-19th century, struck Hong Kong in 1894, and was soon carried by rat-infested steamships to port cities on all inhabited continents, including several in the United States (US) (Link, 1955; Pollitzer, 1954). By 1930, the third pandemic had caused more than 26 million cases and 12 million deaths.”

This is a terrible disease, so of course people have thought about weaponizing it:

“Biological warfare research programs begun by the Soviet Union (USSR) and the US during the Second World War intensified during the Cold War, and in the 1960s both nations had active programs to “weaponize” Y. pestis. In 1970, a World Health Organization (WHO) expert committee on biological warfare warned of the dangers of plague as a weapon, noting that the causative agent was highly infective, that it could be easily grown in large quantities and stored for later use, and that it could be dispersed in a form relatively resistant to desiccation and other adverse environmental conditions […] Models developed by this expert committee predicted that the intentional release of 50 kg of aerosolized Y. pestis over a city of 5 million would, in its primary effects, cause 150,000 cases of pneumonic plague and 36,000 deaths. It was further postulated that, without adequate precautions, an initial outbreak of pneumonic plague involving 50% of a population could result in infection of 90% of the rest of the population in 20–30 days and could cause a case fatality ratio of 60–70%. The work of this committee provided a basis for the 1972 international Biological Weapons and Toxins Convention prohibiting biological weapons development and maintenance, and that went into effect in 1975 […] It is now known that, despite signing this accord, the USSR continued an aggressive clandestine program of research and development that had begun decades earlier, stockpiling battle-ready plague weapons (Alibek, 1999). The Soviets prepared Y. pestis in liquid and dry forms as aerosols to be released by bomblets, and plague was considered by them as one of the most important strategic weapons in their arsenal. […] It is assumed that a terrorist attack would most likely use a Y. pestis aerosol, possibly resulting in large numbers of severe and fatal primary and secondary pneumonic plague cases. Especially given plague’s notoriety, even a limited event would likely cause public panic, create large numbers of the “worried-well,” foster irrational evasive behavior, and quickly place an overwhelming stress on medical and other emergency response elements working to save lives and bring about control of its spread”
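For a sense of scale – this is just my own back-of-the-envelope arithmetic based on the quoted figures, not anything from the book – the WHO model numbers imply a primary attack rate of roughly 3% and a case fatality ratio of roughly 24%:

```python
# Back-of-the-envelope check of the quoted WHO model figures:
# 50 kg of aerosolized Y. pestis released over a city of 5 million.
population = 5_000_000
primary_cases = 150_000
primary_deaths = 36_000

print(f"Implied primary attack rate:  {primary_cases / population:.1%}")    # ~3%
print(f"Implied case fatality ratio:  {primary_deaths / primary_cases:.1%}")  # ~24%
```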

“Several simulations of a plague attack have been conducted in the US […] these have involved all levels of government, numerous agencies, and a wide range of first responders […] Two of these […] were based on coordinated national and local responses to simulated plague attacks. During these simulations, critical deficiencies in emergency response became obvious, including the following: problems in leadership, authority, and decision-making; difficulties in prioritization and distribution of scarce resources; failures to share information; and overwhelmed health care facilities and staff. The need to formulate in advance sound principles of disease containment, and the administrative and legal authority to carry them out without creating confusing new government procedures were glaringly obvious […] In the US, several “sniffing devices” to detect aerosolized microbial pathogens have been developed and tested. The Department of Homeland Security and the Environmental Protection Agency have deployed a PCR-based detection system named BioWatch to continuously monitor filtered air in major cities for Y. pestis and other select agents.”

One of the ‘interesting’ aspects is how the effect of such an attack might be magnified by a simultaneous attack with conventional weapons targeting the likely first responders. Imagine the bombing of local hospitals combined with a plague outbreak, widespread panic, and a lack of coordination at the higher decision-making levels – societal collapse combined with pneumonic plague seems like a combination that could really elevate the body count.

Okay, lastly: Smallpox. Before going into the details I have to express my opinion on this matter: If a person works towards releasing smallpox in order to infect other human beings (and so reintroduce the disease), that person is in my book an enemy of the human race who should be shot on sight. No trial, just kill him (or her).

“Smallpox […] is one of the six pathogens considered a serious threat for biological terrorism […] Smallpox has several attributes that make it a potential threat. It can be grown in large amounts. It spreads via the respiratory route. It has a 30% mortality rate. […] In summary, variola has several virologic attributes that make it attractive as a terrorist weapon. It is easy to grow. It can be lyophilized to protect it from heat. It can be aerosolized. Its genome is large and theoretically amenable to modification.”

“The clinical illness and fatality rate roughly parallel the density of the skin lesions. When lesions are sparse, cases are unlikely to die and probably are not efficient transmitters. However, their mobility may allow them to have enough social interaction to result in transmission […] As lesions become denser and confluent, the fatality rate increases, the amount of virus in the respiratory secretions increases, and patients are more infectious […] Hemorrhagic smallpox has a fatality rate of nearly 100%, and patients are highly infectious. About 1–5% of unvaccinated patients with V. major get hemorrhagic smallpox […] They are usually very sick, usually unable to get out of bed and thus may not transmit efficiently. The clinical presentation (from mild to discrete to confluent to hemorrhagic) is a function of the host response, not the virus. The clinical types do not breed true, in that transmission from any patient can give rise to any of the clinical presentations, and the virus is the same.”

“The individual lesions undergo a slow and predictable evolution. […] By about the 3rd day, the macules become papular, and the papules progress to fluid-filled vesicles by about the 5th day. These vesicles become large, hard, tense pustules by about the seventh or eighth day. […] The pustules are “in” the skin, not just “on” the skin. They are deep-seated […] About the 8th or 9th day, the lesions begin to dry up and umbilicate. By about 2 weeks after the onset of the rash, lesions are scabbing. About 3 weeks after onset, the scabs begin to separate, leaving pitted and depigmented scars. The causes of death from smallpox are not well elucidated. Massive viral toxemia probably causes a sepsis cascade. Cardiovascular shock may be part of the agonal syndrome. In hemorrhagic cases, disseminated intravascular coagulation probably occurs. Antibacterial agents are not helpful. Loss of fluid and proteins from the exudative rash probably contribute to death. Modern medical care might reduce the fatality rate, but there is no way to prove that contention […] There is no proven therapy. No data exist to show whether modern supportive care could reduce the death rate.”

“When smallpox is known to be circulating, the clinical presentation and characteristic rash make diagnosis fairly easy. Diagnosis can be difficult when smallpox is not high on the index of suspicion. Initial cases after a covert bioterrorist attack will probably be missed, at least until the 4th or 5th day of the rash. Transmission may have already taken place by this time. […] Smallpox does not ordinarily spread rapidly. Transmission requires prolonged face-to-face contact, such as that which occurs among family members or caregivers. Transmission is most efficient when the index patient is less than 6 feet from the recipient, so that the large-droplet respiratory secretions can be inhaled […] Since virus is not secreted from the respiratory tract until the end of the prodrome, patients are usually bedridden when they become infectious and usually do not transmit the disease widely. […] No historical evidence exists that smallpox was an effective bioweapon […] what has been written into historical texts and some medical journals may have been fueled more by fear than plausibility.”

“Smallpox virus currently exists legally in only two laboratories: the CDC in Atlanta and at the State Research Center for Virology and Biotechnology in the Novosibirsk region of Russia. Possession of smallpox virus in any place other than these two laboratories is illegal by international convention. A former Deputy Director of the Soviet Union’s bioweapons program has written that, during the cold war, their laboratories produced smallpox in large amounts, and made efforts to adapt it for loading into intercontinental missiles (Alibek, 1999). Scientists defecting from the former Soviet Union, or leaving Russia seeking work in other nations, may have illegally carried stocks of the virus to “rogue” nations (Alibek, 1999; Gellman, 2002; Mangold et al., 1998; Warrick, 2002). There is no publicly accessible proof that such defectors actually transported smallpox out of Russia, but no way of disproving that they did. […] Terrorists with access to a modern virus laboratory might genetically modify smallpox in ways similar to the published manipulations of ectromelia [mousepox] […] Genetically altered strains might pose problems of transmission; alteration of pathogenicity might have unknown effects on the transmissibility of the virus. Experienced intelligence observers feel that terrorists would avoid creating a strain with enhanced virulence. Such strains could devastate developing countries with poor public health systems, and a widespread outbreak would quickly spread to such countries (Johnson et al., 2003). Natural smallpox could similarly boomerang. Terrorists with the ability to manufacture it would realize that an effective attack might cause widespread disease in nations harboring their colleagues. Many such nations have poor public health systems and little vaccine, and would be more devastated than the nation initially attacked”

“The United States stopped routine vaccination in 1972. It could be resumed if the threat of smallpox becomes considerable. Only in a scenario where smallpox becomes widespread would it be wise to resume mass vaccination. […] The current CDC smallpox response strategy is based on pre-exposure vaccination of carefully screened members of first response teams, epidemiologic response teams, and clinical response teams at designated facilities. […] Readiness to control an outbreak resulting from an attack entails a high index of suspicion among clinicians, a good network of diagnostic laboratory capabilities, and a plan for use of surveillance and isolation techniques to quickly contain outbreaks. […] Resumption of widespread vaccination is dangerous and unnecessary.”

The vaccinations are not dangerous because they might cause smallpox to reappear; rather, the vaccine itself carries some other risks. It’s important to note that Variola major is not the active ingredient in the vaccines used – rather, the vaccinia virus is used, a virus belonging to the same family. There’s more on this stuff here.

As implied by the goodreads rating, I liked this book.

March 31, 2014 Posted by | Books, Infectious disease, Medicine, Microbiology | Leave a comment

A few lectures

I love Crawford’s lectures, and this one is great as usual. Much of this will presumably be review if you’ve explored wikipedia a bit (lots of good astronomy stuff there), but there’ll probably be some new stuff as well and her delivery is really good.

I’m very skeptical about some of the numbers presented in this lecture, and this kind of thing – insufficiently sourced (or unsourced) numbers which are hard to look up, also on account of other information being constantly added to the mix – is an aspect of lectures which I really don’t like. Not a great lecture in my opinion, but I figured I might as well post it anyway.

I’ve linked to e.g. this article before, so some of the stuff covered in this lecture should be well known to those readers who’ve read along for a long time and follow all my links… (Ha!)

As usual it’s annoying that you can’t see where the lecturer is pointing when talking about stuff on a given slide, but the lecture has some interesting stuff and it’s worth watching it despite this problem.

March 2, 2014 Posted by | Astronomy, Infectious disease, Lectures, Mathematics, Medicine, Microbiology, Physics | Leave a comment

Antibiotic Policies: Controlling Hospital Acquired Infection

I’ve read about and blogged about this topic before, but this is the first academic text on the topic I’ve read. I liked the book and gave it four stars on goodreads. It’s a typical Springer publication, i.e. a collection of relevant studies/papers published on the topic; there are fourteen papers/chapters included in the book. Given the nature of the book there’s some overlap across chapters, but that’s to be expected and it doesn’t really matter much. The book was published in 2011, so it’s reasonably up to date even though things are happening fast in this area.

Some of the authors of the studies included in the book assume that the reader possesses a level of knowledge about microbiology which goes way beyond what you’d get from reading an intro text like Hardy, and although I’ve also previously browsed one of the books you’d actually need to have read in order to understand the details (Brooks, Butel & Morse), I’ve of course long ago forgotten much of that stuff and so occasionally felt a bit lost while reading the book. There’s some good stuff in there though, and many of the chapters I did not find that hard to read although some details eluded me. It’s my impression that you probably will not get much out of the book if you’ve never read a microbiology text before (I actually feel a bit sad having to write that as the topics covered are very important in terms of future public health, and so in a way I’d really wish as many people as possible actually read this book, or at the very least familiarized themselves in some other way with the problems covered in the book).

“This book serves a twin purpose in helping to construct a more informed evidence base for coherent policy making while, at the same time, providing practical advice for health professionals in the prevention and control of HAIs.”

The quote above is from the preface of the book. The papers included cover a wide variety of topics: one chapter deals with the ‘total scale’ of the problem of healthcare associated infections (HAIs); another deals with (among other things) how antibiotic treatment regimes relate to the development of resistant strains in the community and/or health care institutions; one chapter deals with the epidemiology of drug-resistant strains of bacteria and how to properly categorize drug resistance (which can take many forms); and quite a few chapters focus on specific HAIs (C. difficile, MRSA, VRE, ESBL-producing bacteria, CRE, Acinetobacter baumannii, and MDR (multi-drug resistant) Pseudomonas each get a chapter). Many intervention studies are covered, and the focus is not just on identifying the extent of the problem but also on finding ways to counter it; one chapter deals specifically with antibiotic stewardship, one of the main ways to try to stop the spread of antibiotic resistance, but many other chapters cover that topic as well in their specific settings. Another key element of any intervention strategy, infection control measures (hand hygiene, patient isolation, etc.), is likewise covered in many of the chapters, and as the included studies take a very ‘evidence-based medicine’ approach to these matters, important but potentially embarrassing problems such as poor compliance on the part of health care providers [it’s harder to convince doctors to wash their hands than it is to convince nurses..] are not overlooked. The book is not US-centric; countless international studies are included, and a specific chapter is devoted to MDR infections in low-resource health care settings. The institutional setting is important and is covered in a few chapters; included in that discussion are observations on how things like reimbursement methodology may affect health care provider behaviour, and how faulty incentive structures at the institutional level may aggravate resistance problems, e.g. by failing to address collective action problems in this area.

As might be inferred from the comments above, there’s way too much stuff in there for it to make sense for me to cover it all here. However, I have added some observations from the book below, emphasizing important points along the way and adding a few comments here and there.

“What is required is tackling of the problem at its root cause, namely the gross over use of antibiotics.” […let’s just start out with that one, so that people will not falsely assume that this aspect is not covered in the book.]

“In broad terms, there are two means by which patients can develop multi-resistant infections—they can either develop their own resistant pathogen, or they can acquire someone else’s strain.
Emergence of new resistant pathogens is directly related to antimicrobial selection pressure either via the mutation of new resistance genes or the alteration of bacterial ecology (e.g. in the gut) that facilitates the transfer of naturally occurring or emergent resistance genes from one bacterial class to another […] antibiotic use in food production can have the same effect as direct human antibiotic misuse, since it can select for both resistant pathogens (e.g. fluoroquinolone-resistant Campylobacter in chicken meat) or resistance genes such that food consumption results in either direct fecal colonisation or acquisition of resistance genes by routine gut flora [3,4]. Antibiotic stewardship is therefore not simply a hospital issue.” […]

“The global burden of healthcare associated infections (HAI) is currently unknown, despite international efforts to fill this gap in our knowledge. Where the size of the burden of HAI has been quantified, the greatest impact is in those countries with least resources to measure and manage them. […] 3.5–10.5% of hospitalised patients in industrialised countries may experience HAI (E.C.D.C. 2008), while greater than 25% of hospitalised patients in developing world nations may be affected (W.H.O. 2005). […] While in 2000, 70 countries did not screen donated blood for HIV, hepatitis B or hepatitis C, currently the risk of bacterial infection from transfusion is greater than the risk of acquiring these viruses. Reuse of contaminated needles or syringes during injections in limited resource settings poses a major threat for transmission of infection, accounting for an estimated 21 million hepatitis B infections, 2 million hepatitis C infections and over 95,000 HIV infections. […] Of the 8.8 million deaths in children under the age of 5 years, infectious diseases account for 5.5 million (63%) […] clinicians in developing countries tend to diagnose and prescribe medication empirically. People with undetected resistance then receive antibiotics to which their isolate is not susceptible. For example, one study in western Kenya found that more than half of the patients treated empirically for bacterial diarrhea were given ineffective antibiotics. Among patients with shigella, this number exceeded 80% (Shapiro et al. 2001). […] In developing countries, antibiotics are a scarce resource, and most clinics and hospitals can barely afford common first-line agents, much less second and third-line alternatives […] variation in prices of antibiotics is considerable. The wholesale price differential between amoxicillin and co-amoxiclav, for example, is on the order of a factor of 20 (Forster 2010). This means that where resistant bacteria necessitate the use of co-amoxiclav, only 5% of the patients can be treated for the same budget as with amoxicillin. […] In coastal Kenya, resistance to chloramphenicol, amoxicillin, cotrimoxazole, and gentamicin in Gram-negative sepsis is common, and susceptibility remains only to two rarely used drugs, ciprofloxacin and cefotaxime. The cost of treating a 15 kg child with sepsis would be $0.38–2.30 for gentamicin and chloramphenicol versus $73–108 for the effective drugs […] In Thailand, only 9% of antibiotics administered in a teaching hospital were appropriate to the patient’s condition, and 36% of patients were given antibiotics without evidence of an infection […]
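
To make the budget arithmetic in that quote explicit, here’s a minimal sketch in Python (the budget and the amoxicillin course price are invented illustration figures; only the factor-of-20 price differential and the paediatric sepsis costs come from the quote):

# If a course of co-amoxiclav costs ~20x as much as amoxicillin,
# a fixed drug budget treats ~20x fewer patients.
budget = 1000.0                 # hypothetical drug budget (arbitrary units)
price_amoxicillin = 1.0         # hypothetical price per course
price_coamoxiclav = 20.0        # factor-of-20 differential cited above

patients_amoxicillin = budget / price_amoxicillin    # 1000 patients
patients_coamoxiclav = budget / price_coamoxiclav    # 50 patients
print(patients_coamoxiclav / patients_amoxicillin)   # 0.05, i.e. the 5% figure

# Same logic for the Kenyan sepsis example (per-child costs from the quote):
cheap, effective = 2.30, 73.0
print(effective / cheap)        # ~32: the effective regimen costs >30x more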

“The underused vaccines that could have the biggest effect on antibiotic use in hospitals are against Streptococcus pneumoniae and Haemophilus influenzae type b. To these should be added one of the new vaccines against Rotavirus, the main cause of dehydrating diarrhea, which kills 400,000–500,000 infants and children in developing countries annually. Even though Rotavirus is, in fact, a virus, reducing its incidence will reduce antibiotic use. The most appropriate treatment for rotavirus and other causes of watery diarrhea is oral rehydration therapy, but since antibiotics are used inappropriately in many cases, reducing the number of cases will reduce antibiotic use.” [..vaccines against viruses may help decrease the number of bacteria resistant to antibiotics – yep, this stuff is complicated..]

“HAI are recognised as among the most common adverse outcomes from hospitalisation in the US; approximately 1.7 million HAI are reported across the US each year, which are associated with around 99,000 deaths per year. Around a third of HAI are urinary tract infections, one fifth are surgical site infections, 15% are pneumonia and 14% are bloodstream infections (C.D.C. 2010). […] Estimates in Europe are that approximately 4.1 million patients per year experience HAI, and that attributable deaths are of the order of 37,000 per year (E.C.D.C. 2005–2010).” [These estimates are somewhat uncertain and I’m not sure how much you should read into the fact that they differ in the way that they do, with fewer but more lethal HAIs in the US. Before you read a lot into it, you should certainly note that there is huge regional variation in the data here.] […]
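
A quick back-of-the-envelope check on my comment above; this just divides the quoted death counts by the quoted HAI counts, nothing more, so all the usual caveats about definitions and ascertainment apply:

# Implied deaths per reported HAI, using only the figures quoted above.
us_deaths_per_hai = 99_000 / 1_700_000     # ~0.058, i.e. roughly 5.8%
eu_deaths_per_hai = 37_000 / 4_100_000     # ~0.009, i.e. roughly 0.9%
print(us_deaths_per_hai, eu_deaths_per_hai)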

“Surgical prophylaxis is a common area of overuse as shown in many publications. Measured by total DDDs [defined daily doses], it can amount to around one third of a hospital’s total antibiotic use. This illustrates the potential for ecological damage although surgeons often ask whether 24 h or even single dose prophylaxis can really select for resistance. The simple answer is yes, but of course much of the problem is extension of prophylaxis beyond the perioperative period, often for several days in critical patients, perhaps until all lines and drains are removed. There is no evidence base in favour of such practices.” […]
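
For readers unfamiliar with the metric: a DDD is the WHO’s assumed average maintenance dose per day of a drug for its main indication in adults, so consumption is measured as grams used divided by the drug’s DDD, often normalised per 100 bed-days. Below is a minimal sketch with invented consumption figures (the DDD values are illustrative too; the current WHO ATC/DDD index should be consulted for real ones):

# DDDs = grams consumed / WHO-assigned DDD for the drug, here normalised
# per 100 bed-days. All figures are invented for illustration only.
ddd_grams = {"cefazolin": 3.0, "ciprofloxacin": 1.0}          # illustrative DDD values (g)
grams_used = {"cefazolin": 9000.0, "ciprofloxacin": 4000.0}   # hypothetical annual use
bed_days = 60_000                                             # hypothetical annual bed-days

ddds = {drug: grams_used[drug] / ddd_grams[drug] for drug in grams_used}
total_ddds = sum(ddds.values())
print(total_ddds * 100 / bed_days)        # total antibiotic use, DDDs per 100 bed-days
print(ddds["cefazolin"] / total_ddds)     # the prophylaxis agent's share of total use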

“Since 2002, increasing rates of CDI [Clostridium difficile Infection] with a more severe course, higher mortality (from 4.7 to 13.8%) and more complications (from 7.1 to 18.2%) have been reported in Canada […] Of all patients who develop CDI in the hospital setting, approximately 80–90% have used antibiotics in the previous 3 months. […] MRSA can survive for months in hospital environment […] and it can be isolated on clinical equipment, as well as on general surfaces especially close to patient’s area, such as curtains, beds, lockers and over-bed tables […] Before contact precautions are implemented, MRSA carriers may have already contaminated their environment with MRSA. […] Cross-transmission between patients may occur via HCWs [health care workers’] hands after touching contaminated environmental surfaces […] One study showed that 10% of HCWs fingertips were contaminated with MRSA after contact with MRSA positive patient’s environment […] There is now reasonable evidence that rates of MRSA, C.difficile, VRE and multi resistant Gram-negatives can be reversed by modulating use of key agents such as cephalosporins and quinolones […] The real problem for the future, of course, is how to do this without “squeezing the balloon”, transferring the resistance selection pressure to other classes of agents. This highlights another paradox, that of current antibiotic policies which tend to lead to a lack of diversity of use of different classes of antibiotics. Diversity of use is probably one of the best strategies to delay emergence of resistance, although a lack of choice of truly different drug classes makes its implementation problematic. Moreover, the holy grail, and the most difficult thing is to achieve total reduction in prescribing while not compromising patient outcomes. Again, this isn’t something current strategies are good at achieving.” […]

“ESBL-producing bacteria are not only present in hospitals from endemic nosocomial sources but are introduced into the hospital from other health care facilities (particularly high rates occur in care of the elderly homes […]) but also from individuals coming from the community (Ben-Ami et al. 2006). […] This community carriage is an important facet of ESBL control” [Again, what happens outside the hospitals matters a great deal…] […]

“Carbapenems have the broadest antimicrobial spectrum of any beta-lactam antibiotic and are frequently used as first-line agents for the treatment of severe infections caused by multiresistant Gram-negative bacteria […] The emergence and spread of carbapenem-resistant Enterobacteriaceae (CRE) are therefore a major concern for patient safety and public health. Infections due to CRE may lead to increased likelihood of treatment failure and growing reliance on third-line agents and combination therapy, with doubtful therapeutic efficacy and increased potential for toxic side-effects […] It also increases the cost of treatment […] CRE differ from most other multidrug-resistant bacterial pathogens in that there is no reliable treatment available (Schwaber and Carmeli 2008). […] two cases of panresistant CRE were recently reported from a hospital in New York […panresistant strains are basically untreatable, US.] […] Patients with CRE infection are at high risk of treatment failure and adverse outcomes, including increased mortality and morbidity, longer length of hospital stay, and higher treatment costs when compared to infections caused by susceptible strains. Several studies have reported high percentages of crude in-hospital mortality—some over 50%—among patients infected with CRE […] the magnitude of the excess mortality directly attributable to CRE is difficult to quantify […] Overall, uncertainties persist in individual patient-level analyses regarding which prior antibiotic exposures are most important as risk factors for acquisition of, transmission of and infection with CRE. Similarly, ecologic studies using aggregate data-level analyses do not show a clear-cut picture.” […]

“Antibiotic policies are crucial but they cannot be effective without active infection control program[s]. A hospital with a strong infection control program without an antibiotic stewardship component would tackle transmission of multi-resistant organisms such as VRE but would not prevent individual patients from getting colonised or infected with resistant microbes. On the other hand, strong antibiotic stewardship would be expected to control the menace of multi-resistant organism but in absence of an infection control program, transmission of organisms (even if not multiply resistant) would be easy and would adversely affect patient care.” […]

“A retrospective, risk-adjusted, cohort study of 80 patients with Acinetobacter bacteraemia conducted in Korea demonstrated that those infected with imipenem-resistant strains had a significantly higher 30-day cumulative mortality rate than those infected with imipenem-susceptible strains (57.5% versus 27.5%) […] This was mainly due to a higher rate of inappropriate antimicrobial therapy. […] Carbapenems are the mainstay of treatment for severe infections. However, carbapenem-resistant A. baumannii strains have emerged worldwide. […] A considerable proportion of multi-drug resistant A. baumannii strains are susceptible only to polymyxins, which prompted the use of an old antibiotic in recent years. […] Polymyxins are polypeptide antibiotics that act as detergents on the bacterial cell wall. They were introduced in 1940 but they were abandoned in the 1980s due to the occurrence of nephrotoxicity and neurotoxicity. […] Reported nephrotoxicity ranges between 8 and 36%. […] Reported neurotoxicity ranges between 7 and 29%, with oral and perioral paresthesias, visual disturbances and polyneuropathy […] [So basically what has happened is that doctors have been forced to restart using drugs they threw away 30 years ago because those drugs caused kidney failure and severe nerve damage. These old drugs are currently the only drugs that work against some MDR infections, and no new drugs are even close to being developed at this point]. […]

“P. aeruginosa is the second most common cause of health-care associated pneumonia, of hospital-acquired pneumonia and of ventilator-associated pneumonia (VAP). It is also reported as the cause of 9% of hospital-acquired urinary tract infections (UTIs). […] It is estimated that the rate of colonization and/or infection by MDR P. aeruginosa is 0.5 episodes/1,000 patient-days in the general ward and 29.9 to 36.7/100 patients in the ICU (Agodi et al. 2007; Peña et al. 2009). […] infections by MDR P. aeruginosa have a significant impact on mortality. A retrospective study of our group in non-neutropenic hosts in the general ward disclosed 22.2% mortality of infections by MDR P. aeruginosa compared to 0% of infections by susceptible isolates […]. For ICU infections caused by MDR P. aeruginosa mortality ranges between 22% and 77%; this ranges between 12% and 23% when ICU infections are caused by susceptible isolates (Shorr 2009).” […]
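
Since those colonization/infection figures mix two different denominators (episodes per 1,000 patient-days on the ward versus episodes per 100 patients in the ICU), here’s a trivial sketch of how such rates fall out of raw surveillance counts; the counts are invented, chosen only so the ward-style rate matches the 0.5 figure quoted above:

# Incidence expressed both ways, as in the quote. Counts are invented.
episodes = 12
patient_days = 24_000
patients_admitted = 900

print(episodes / patient_days * 1000)      # 0.5 episodes per 1,000 patient-days
print(episodes / patients_admitted * 100)  # ~1.3 episodes per 100 patients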

“Antibiotic effectiveness can be viewed as a shared resource in which current use depletes future value and imposes costs on society in the form of longer hospitalization, higher mortality rates, and the diversion of resources into the provision of newer and more expensive drugs. In making treatment decisions, prescribers should weigh the favorable effects of applying antibiotics to improve a patient’s health against the negative consequences for the public and future drug effectiveness (Laxminarayan 2003b). However, clinicians usually ignore the future therapeutic risks to society associated with antibiotic use and instead focus on the direct benefits of antibiotic treatment to their patients. […] In the absence of a good pipeline of new drugs, it is the balance between the individual patient and society as a whole, otherwise known as the ecological perspective, that has to be clearly established and debated. We need to get clever, quickly. […] In the long term, new antibiotics are needed […] However, as a gap of 10–15 years has been identified (European Centre for Disease Prevention and Control and European Medicines Agency 2009), immediate action is needed to conserve the power of the available arsenal.”

November 27, 2013 Posted by | Books, Infectious disease, Medicine, Microbiology, Nephrology, Neurology, Pharmacology | 13 Comments

A few papers

i. The Living Dead: Bacterial Community Structure of a Cadaver at the Onset and End of the Bloat Stage of Decomposition. There are a lot of questions one might ask about how the world works. Incidentally I should note that when I die I really wouldn’t mind contributing to a study like this. Here’s the abstract, with a couple of links added to ease understanding:

“Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition.”

The introduction contains a good description of how decomposition in humans proceeds:

“A cadaver is far from dead when viewed as an ecosystem for a suite of bacteria, insects, and fungi, many of which are obligate and documented only in such a context. Decomposition is a mosaic system with an intimate association between biotic factors (i.e., the individuality of the cadaver, intrinsic and extrinsic bacteria and other microbes, and insects) and abiotic factors (i.e., weather, climate, and humidity) and therefore a function of a specific ecological scenario. Slight alteration of the ecosystem, such as exclusion of insects or burial, may lead to a unique trajectory for decomposition and potentially anomalous results; therefore, it is critical to forensics that the interplay of these factors be understood. Bacteria are often credited as a major driving force for the process of decomposition but few studies cataloging the microbiome of decomposition have been published […]

A body passes through several stages as decomposition progresses driven by dehydration and discernible by characteristic gross taphonomic changes. The early stages of decomposition are wet and marked by discoloration of the flesh and the onset and cessation of bacterially-induced bloat. During early decay, intrinsic bacteria begin to digest the intestines from the inside out, eventually digesting away the surrounding tissues [3]. Enzymes from within the dead cells of the cadaver also begin to break down tissues (autolysis). During putrefaction, bacteria undergo anaerobic respiration and produce gases as by-products such as hydrogen sulfide, methane, cadaverine, and putrescine [5]. The buildup of resulting gas creates pressure, inflating the cadaver, and eventually forcing fluids out [3]. This purging event marks the shift from early decomposition to late decomposition and may not be uniform; the head may purge before the trunk, for example. Purge may also last for some period of time in some parts of the body even as other parts of the body enter the most advanced stages of decomposition. In the trunk, purge is associated with an opening of the abdominal cavity to the environment [3]. At this point, the rate of decay is reported by several authors to greatly increase as larval flies remove large portions of tissues; however, mummification may also occur, thus serving to preserve tissues [6]–[9]. The final stages of decomposition last through to skeletonization and are the driest stages [7], [10]–[13].”

It’s really quite an interesting paper, but you probably don’t want to read this while you’re having dinner. A few other interesting observations and conclusions:

“Many factors can influence the bacteria detected in and on a cadaver, including the individual’s “starting” microbiome, differences in the decomposition environments of the two cadavers, and differences in the sites sampled at end-bloat. The integrity of organs at end-bloat varied between cadavers (as decomposition varied between cadavers) and did not allow for consistent sampling of sites across cadavers. Specifically, STAFS 2011-016 no longer had a sigmoidal colon at the end-bloat sample time.” […]

“With the exception of the fecal sample from STAFS 2011-006, which was the least rich sample in the study with only 26 unique OTUs [operational taxonomic units – US] detected, fecal samples were the richest of all body sites sampled, with an average of nearly 400 OTUs detected. The stomach sample was the second least rich sample, with small intestine and mouth samples slightly richer. The body cavity, transverse colon, and sigmoidal colon samples were much richer. Overall, these data show that as one moves from the upper gastrointestinal tract (mouth, stomach, and small intestine) to the lower gastrointestinal tract (colon and rectal/fecal), microbiome richness increases.” […]
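
‘Richness’ in that passage just means the number of distinct OTUs detected in a sample. A minimal sketch of how one might tabulate it once the 16S reads have been classified into OTUs (the reads below are a made-up toy example, not data from the paper):

# OTU richness = number of distinct OTUs detected per sample site.
from collections import defaultdict

reads = [  # (sample_site, otu_id) pairs, one per classified 16S read
    ("mouth", "OTU_17"), ("mouth", "OTU_17"), ("mouth", "OTU_42"),
    ("feces", "OTU_03"), ("feces", "OTU_17"), ("feces", "OTU_99"),
]

otus_per_site = defaultdict(set)
for site, otu in reads:
    otus_per_site[site].add(otu)

richness = {site: len(otus) for site, otus in otus_per_site.items()}
print(richness)   # {'mouth': 2, 'feces': 3}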

“It is important to note that while difference in abundance seen in particular species between this study and the others noted above could be due to the discussed constraints of culturing bacteria, differences could also be due to a variety of factors such as individual variability between the cadaver microbiomes, seasonality, climate, and species of colonizing insects. Finally, abundance does not necessarily indicate metabolic significance for decomposition, a point of importance that our study cannot address.” […]

“Our data represent initial insights into the bacteria populating decomposing human cadavers and an early start to discovering successive changes through time. While our data support the findings of previous culture studies, they also demonstrate that bacteria not detected by culture-based methods comprise a large portion of the community. No definitive conclusion regarding a shift in community structure through time can be made with this data set.”

ii. Protein restriction for diabetic renal disease.

“Background

Diabetic renal disease (diabetic nephropathy) is a leading cause of end-stage renal failure. Once the process has started, it cannot be reversed by glycaemic control, but progression might be slowed by control of blood pressure and protein restriction.

Objectives

To assess the effects of dietary protein restriction on the progression of diabetic nephropathy in patients with diabetes.

Search methods

We searched The Cochrane Library, MEDLINE, EMBASE, ISI Proceedings, Science Citation Index Expanded and bibliographies of included studies.

Selection criteria

Randomised controlled trials (RCTs) and before and after studies of the effects of a modified or restricted protein diet on diabetic renal function in people with type 1 or type 2 diabetes following diet for at least four months were considered.

Data collection and analysis

Two reviewers performed data extraction and evaluation of quality independently. Pooling of results was done by means of a random-effects model.

Main results

Twelve studies were included, nine RCTs and three before and after studies. Only one study explored all-cause mortality and end-stage renal disease (ESRD) as endpoints. The relative risk (RR) of ESRD or death was 0.23 (95% confidence interval (CI) 0.07 to 0.72) for patients assigned to a low protein diet (LPD). Pooling of the seven RCTs in patients with type 1 diabetes resulted in a non-significant reduction in the decline of glomerular filtration rate (GFR) of 0.1 ml/min/month (95% CI -0.1 to 0.3) in the LPD group. For type 2 diabetes, one trial showed a small insignificant improvement in the rate of decline of GFR in the protein-restricted group and a second found a similar decline in both the intervention and control groups. Actual protein intake in the intervention groups ranged from 0.7 to 1.1 g/kg/day. One study noted malnutrition in the LPD group. We found no data on the effects of LPDs on health-related quality of life and costs.

Authors’ conclusions

The results show that reducing protein intake appears to slightly slow progression to renal failure but not statistically significantly so. However, questions concerning the level of protein intake and compliance remain. Further longer-term research on large representative groups of patients with both type 1 and type 2 diabetes mellitus is necessary.”

The paper has a lot more. Do note that due to the link between kidney disease and dietary protein intake, at least one diabetic I know has actually considered the question of whether to adjust protein intake at an even earlier point in the disease process than the one contemplated in these studies, i.e. before the lab tests show that the kidneys have started to fail – this is hardly an outrageous idea given evidence in related fields. I do think however that the evidence is much too inconclusive in the case of diabetic nephropathy for anything like this to make much sense at this point. Lowering salt intake seems to be far more likely to have positive effects. I’d be curious to know if the (very tentative..) finding that the type of dietary protein (‘chicken and fish vs red meat’) may matter for outcomes, and not just the amount of protein, holds; this seems very unclear at this point, but it’s potentially important as it also relates to the compliance/adherence problem.
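
For readers wondering where a figure like ‘RR 0.23 (95% CI 0.07 to 0.72)’ comes from mechanically, here’s a sketch of the standard log relative-risk confidence interval for a single 2×2 table, which is how a single-trial estimate like this one is typically obtained; the event counts below are invented for illustration and not taken from the review:

# Relative risk with a 95% CI from one 2x2 table, computed on the log scale.
# a events out of n1 patients in the low-protein-diet arm; c out of n2 controls.
# Counts are invented, not taken from the review.
import math

a, n1 = 4, 40     # events, total in the intervention arm
c, n2 = 13, 42    # events, total in the control arm

rr = (a / n1) / (c / n2)
se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(round(rr, 2), round(lower, 2), round(upper, 2))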


iii. Direct evidence of 1,900 years of indigenous silver production in the Lake Titicaca Basin of Southern Peru:

“Archaeological excavations at a U-shaped pyramid in the northern Lake Titicaca Basin of Peru have documented a continuous 5-m-deep stratigraphic sequence of metalworking remains. The sequence begins in the first millennium AD and ends in the Spanish Colonial period ca. AD 1600. The earliest dates associated with silver production are 1960 ± 40 BP (2-sigma cal. 40 BC to AD 120) and 1870 ± 40 BP (2-sigma cal. AD 60 to 240) representing the oldest known silver smelting in South America. Scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS) analysis of production debris indicate a complex, multistage, high temperature technology for producing silver throughout the archaeological sequence. These data hold significant theoretical implications including the following: (i) silver production occurred before the development of the first southern Andean state of Tiwanaku, (ii) the location and process of silverworking remained consistent for 1,500 years even though political control of the area cycled between expansionist states and smaller chiefly polities, and (iii) that U-shaped structures were the location of ceremonial, residential, and industrial activities.”

A little more from the paper:

“Our data establish an initial date for silverworking that is at least three centuries earlier than previous studies had indicated. […] Three independent lines of evidence establish the chronological integrity of the deposit: 1) a ceramic sequence in uninterrupted stratigraphic layers, 2) absolute radiocarbon dates, and 3) absolute ceramic thermoluminescence (TL) dates (1). […] the two absolute dating methods are internally consistent, and […] these match the relative sequence derived from analyzing the diagnostic pottery or ceramics. The unit excavated at Huajje represents a rare instance of an intact, well-demarcated stratigraphic deposit that allows us to precisely define the material changes through time in silver production. […] The steps required for silver extraction include mining, beneficiation (i.e., crushing of the ore and sorting of metal-bearing mineral), optional roasting to remove sulfur via oxidation, followed by smelting, and cupellation […] Archaeological or ethnographic evidence for most of these steps is extremely scarce, making this a very significant assemblage for our understanding of early silver production. A total of 3,457 (7,215.84 g) smelting-related artifacts were collected.”

November 1, 2013 Posted by | Archaeology, Biology, Diabetes, Medicine, Microbiology, Nephrology, Papers | Leave a comment