Econstudentlog

Endocrinology (part 5 – calcium and bone metabolism)

Some observations from chapter 6:

“*Osteoclasts – derived from the monocytic cells; resorb bone. *Osteoblasts – derived from the fibroblast-like cells; make bone. *Osteocytes – buried osteoblasts; sense mechanical strain in bone. […] In order to ensure that bone can undertake its mechanical and metabolic functions, it is in a constant state of turnover […] Bone is laid down rapidly during skeletal growth at puberty. Following this, there is a period of stabilization of bone mass in early adult life. After the age of ~40, there is a gradual loss of bone in both sexes. This occurs at the rate of approximately 0.5% annually. However, in ♀ after the menopause, there is a period of rapid bone loss. The accelerated loss is maximal in the first 2-5 years after the cessation of ovarian function and then gradually declines until the previous gradual rate of loss is once again established. The excess bone loss associated with the menopause is of the order of 10% of skeletal mass. This menopause-associated loss, coupled with higher peak bone mass acquisition in ♂, largely explains why osteoporosis and its associated fractures are more common in ♀.”

“The clinical utility of routine measurements of bone turnover markers is not yet established. […] Skeletal radiology[:] *Useful for: *Diagnosis of fracture. *Diagnosis of specific diseases (e.g. Paget’s disease and osteomalacia). *Identification of bone dysplasia. *Not useful for assessing bone density. […] Isotope bone scans are useful for identifying localized areas of bone disease, such as fracture, metastases, or Paget’s disease. […] Isotope bone scans are particularly useful in Paget’s disease to establish the extent and sites of skeletal involvement and the underlying disease activity. […] Bone biopsy is occasionally necessary for the diagnosis of patients with complex metabolic bone diseases. […] Bone biopsy is not indicated for the routine diagnosis of osteoporosis. It should only be undertaken in highly specialist centres with appropriate expertise. […] Measurement of 24h urinary excretion of calcium provides a measure of risk of renal stone formation or nephrocalcinosis in states of chronic hypercalcaemia. […] 25OH vitamin D […] is the main storage form of vitamin D, and the measurement of ‘total vitamin D’ is the most clinically useful measure of vitamin D status. Internationally, there remains controversy around a ‘normal’ or ‘optimal’ concentration of vitamin D. Levels over 50 nmol/L are generally accepted as satisfactory, with values <25 nmol/L representing deficiency. True osteomalacia occurs with vitamin D values <15 nmol/L. Low levels of 25OHD can result from a variety of causes […] Bone mass is quoted in terms of the number of standard deviations from an expected mean. […] A reduction of one SD in bone density will approximately double the risk of fracture.”

[I should perhaps add a cautionary note here that while this variable is very useful in general, it is more useful in some contexts than in others; and in some specific disease process contexts it is quite clear that it will tend to underestimate the fracture risk. Type 1 diabetes is a clear example. For more details, see this post.]
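
To make the SD metric concrete, here is a minimal sketch (all reference values invented for illustration) of how a bone density measurement is converted into an SD-based score, and of how the quoted ‘one SD reduction ≈ doubled fracture risk’ gradient translates into relative risk:

```python
def sd_score(bmd, reference_mean, reference_sd):
    """Bone density expressed as the number of SDs from the expected mean."""
    return (bmd - reference_mean) / reference_sd

def relative_fracture_risk(score):
    """Per the quote: each SD reduction roughly doubles fracture risk."""
    return 2.0 ** max(0.0, -score)

# Hypothetical example: measured BMD of 0.85 g/cm^2 against an illustrative
# reference mean of 1.05 g/cm^2 with SD 0.10 g/cm^2:
score = sd_score(0.85, 1.05, 0.10)      # -> -2.0
print(relative_fracture_risk(score))    # -> 4.0, i.e. ~4x the reference risk
```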

“Hypercalcaemia is found in 5% of hospital patients and in 0.5% of the general population. […] Many different disease states can lead to hypercalcaemia. […] In asymptomatic community-dwelling subjects, the vast majority of hypercalcaemia is the result of hyperparathyroidism. […] The clinical features of hypercalcaemia are well recognized […]; unfortunately, they are non-specific […] [They include:] *Polyuria. *Polydipsia. […] *Anorexia. *Vomiting. *Constipation. *Abdominal pain. […] *Confusion. *Lethargy. *Depression. […] Clinical signs of hypercalcaemia are rare. […] the presence of bone pain or fracture and renal stones […] indicate the presence of chronic hypercalcaemia. […] Hypercalcaemia is usually a late manifestation of malignant disease, and the primary lesion is usually evident by the time hypercalcaemia is expressed (50% of patients die within 30 days).”

“Primary hyperparathyroidism [is] [p]resent in up to 1 in 500 of the general population where it is predominantly a disease of post-menopausal ♀ […] The normal physiological response to hypocalcaemia is an increase in PTH secretion. This is termed 2° hyperparathyroidism and is not pathological in as much as the PTH secretion remains under feedback control. Continued stimulation of the parathyroid glands can lead to autonomous production of PTH. This, in turn, causes hypercalcaemia which is termed tertiary hyperparathyroidism. This is usually seen in the context of renal disease […] In [the] majority of patients [with hyperparathyroidism] without end-organ damage, disease is benign and stable. […] Investigation is, therefore, primarily aimed at determining the presence of end-organ damage from hypercalcaemia in order to determine whether operative intervention is indicated. […] It is generally accepted that all patients with symptomatic hyperparathyroidism or evidence of end-organ damage should be considered for parathyroidectomy. This would include: *Definite symptoms of hypercalcaemia. […] *Impaired renal function. *Renal stones […] *Parathyroid bone disease, especially osteitis fibrosa cystica. *Pancreatitis. […] Patients not managed with surgery require regular follow-up. […] <5% fail to become normocalcaemic [after surgery], and these should be considered for a second operation. […] Patients rendered permanently hypoparathyroid by surgery require lifelong supplements of active metabolites of vitamin D with calcium. This can lead to hypercalciuria, and the risk of stone formation may still be present in these patients. […] In hypoparathyroidism, the target serum calcium should be at the low end of the reference range. […] any attempt to raise the plasma calcium well into the normal range is likely to result in unacceptable hypercalciuria”.

“Although hypocalcaemia can result from failure of any of the mechanisms by which serum calcium concentration is maintained, it is usually the result of either failure of PTH secretion or because of the inability to release calcium from bone. […] The clinical features of hypocalcaemia are largely as a result of neuromuscular excitability. In order of severity, these include: *Tingling – especially of fingers, toes, or lips. *Numbness – especially of fingers, toes, or lips. *Cramps. *Carpopedal spasm. *Stridor due to laryngospasm. *Seizures. […] symptoms of hypocalcaemia tend to reflect the severity and rapidity of onset of the metabolic abnormality. […] there may be clinical signs and symptoms associated with the underlying condition: *Vitamin D deficiency may be associated with generalized bone pain, fractures, or proximal myopathy […] *Hypoparathyroidism can be accompanied by mental slowing and personality disturbances […] *If hypocalcaemia is present during the development of permanent teeth, these may show areas of enamel hypoplasia. This can be a useful physical sign, indicating that the hypocalcaemia is long-standing. […] Acute symptomatic hypocalcaemia is a medical emergency and demands urgent treatment whatever the cause […] *Patients with tetany or seizures require urgent IV treatment with calcium gluconate […] Care must be taken […] as too rapid elevation of the plasma calcium can cause arrhythmias. […] *Treatment of chronic hypocalcaemia is more dependent on the cause. […] In patients with mild parathyroid dysfunction, it may be possible to achieve acceptable calcium concentrations by using calcium supplements alone. […] The majority of patients will not achieve adequate control with such treatment. In those cases, it is necessary to use vitamin D or its metabolites in pharmacological doses to maintain plasma calcium.”

“Pseudohypoparathyroidism[:] *Resistance to parathyroid hormone action. *Due to defective signalling of PTH action via cell membrane receptor. *Also affects TSH, LH, FSH, and GH signalling. […] Patients with the most common type of pseudohypoparathyroidism (type 1a) have a characteristic set of skeletal abnormalities, known as Albright’s hereditary osteodystrophy. This comprises: *Short stature. *Obesity. *Round face. *Short metacarpals. […] The principles underlying the treatment of pseudohypoparathyroidism are the same as those underlying hypoparathyroidism. *Patients with the most common form of pseudohypoparathyroidism may have resistance to the action of other hormones which rely on G protein signalling. They, therefore, need to be assessed for thyroid and gonadal dysfunction (because of defective TSH and gonadotrophin action). If these deficiencies are present, they need to be treated in the conventional manner.”

“Osteomalacia occurs when there is inadequate mineralization of mature bone. Rickets is a disorder of the growing skeleton where there is inadequate mineralization of bone as it is laid down at the epiphysis. In most instances, osteomalacia leads to build-up of excessive unmineralized osteoid within the skeleton. In rickets, there is build-up of unmineralized osteoid in the growth plate. […] These two related conditions may coexist. […] Clinical features [of osteomalacia:] *Bone pain. *Deformity. *Fracture. *Proximal myopathy. *Hypocalcaemia (in vitamin D deficiency). […] The majority of patients with osteomalacia will show no specific radiological abnormalities. *The most characteristic abnormality is the Looser’s zone or pseudofracture. If these are present, they are virtually pathognomonic of osteomalacia. […] Oncogenic osteomalacia[:] Certain tumours appear to be able to produce FGF23 which is phosphaturic. This is rare […] Clinically, such patients usually present with profound myopathy as well as bone pain and fracture. […] Complete removal of the tumour results in resolution of the biochemical and skeletal abnormalities. If this is not possible […], treatment with vitamin D metabolites and phosphate supplements […] may help the skeletal symptoms.”

“Hypophosphataemia[:] Phosphate is important for normal mineralization of bone. In the absence of sufficient phosphate, osteomalacia results. […] In addition, phosphate is important in its own right for neuromuscular function, and profound hypophosphataemia can be accompanied by encephalopathy, muscle weakness, and cardiomyopathy. It must be remembered that, as phosphate is primarily an intracellular anion, a low plasma phosphate does not necessarily represent actual phosphate depletion. […] Mainstay [of treatment] is phosphate replacement […] *Long-term administration of phosphate supplements stimulates parathyroid activity. This can lead to hypercalcaemia, a further fall in phosphate, with worsening of the bone disease […] To minimize parathyroid stimulation, it is usual to give one of the active metabolites of vitamin D in conjunction with phosphate.”

“Although the term osteoporosis refers to the reduction in the amount of bony tissue within the skeleton, this is generally associated with a loss of structural integrity of the internal architecture of the bone. The combination of both these changes means that osteoporotic bone is at high risk of fracture, even after trivial injury. […] Historically, there has been a primary reliance on bone mineral density as a threshold for treatment, whereas currently there is far greater emphasis on assessing individual patients’ risk of fracture that incorporates multiple clinical risk factors as well as bone mineral density. […] Osteoporosis may arise from a failure of the body to lay down sufficient bone during growth and maturation; an earlier than usual onset of bone loss following maturity; or an increased rate of that loss. […] Early menopause or late puberty (in ♂ or ♀) is associated with increased risk of osteoporosis. […] Lifestyle factors affecting bone mass [include:] *weight-bearing exercise [increases bone mass] […] *Smoking. *Excessive alcohol. *Nulliparity. *Poor calcium nutrition. [These all decrease bone mass] […] The risk of osteoporotic fracture increases with age. Fracture rates in ♂ are approximately half of those seen in ♀ of the same age. A ♀ aged 50 has approximately a 1:2 chance [risk, surely… – US] of sustaining an osteoporotic fracture in the rest of her life. The corresponding figure for a ♂ is 1:5. […] One-fifth of hip fracture victims will die within 6 months of the injury, and only 50% will return to their previous level of independence.”

“Any fracture, other than those affecting fingers, toes, or face, which is caused by a fall from standing height or less is called a fragility (low-trauma) fracture, and underlying osteoporosis should be considered. Patients suffering such a fracture should be considered for investigation and/or treatment for osteoporosis. […] [Osteoporosis is] [u]sually clinically silent until an acute fracture. *Two-thirds of vertebral fractures do not come to clinical attention. […] Osteoporotic vertebral fractures only rarely lead to neurological impairment. Any evidence of spinal cord compression should prompt a search for malignancy or other underlying cause. […] Osteoporosis does not cause generalized skeletal pain. […] Biochemical markers of bone turnover may be helpful in the calculation of fracture risk and in judging the response to drug therapies, but they have no role in the diagnosis of osteoporosis. […] An underlying cause for osteoporosis is present in approximately 10-30% of women and up to 50% of men with osteoporosis. […] 2° causes of osteoporosis are more common in ♂ and need to be excluded in all ♂ with osteoporotic fracture. […] Glucocorticoid treatment is one of the major 2° causes of osteoporosis.”

February 22, 2018 Posted by | Books, Cancer/oncology, Diabetes, Epidemiology, Medicine, Nephrology, Neurology, Pharmacology | Leave a comment

Words

The words below are mostly words I encountered while reading Wolfe’s The Claw of the Conciliator and O’Brian’s Master and Commander. I wanted to finish off my ‘coverage’ of those books here, so I decided to include a few more words than usual (the post includes ~100 words, instead of the usual ~80).

Threnody. Noctilucent. Dell. Cariole. Rick. Campanile. Obeisance. Cerbotana. Caloyer. Mitre. Orpiment. Tribade/tribadism (NSFW words?). Thiasus. Argosy. Partridge. Cenotaph. Seneschal. Ossifrage. Faille. Calotte.

Meretrice. Bijou. Espalier. Gramary. Jennet. Algophilia/algophilist. Clerestory. Liquescent. Pawl. Lenitive. Bream. Bannister. Jacinth. Inimical. Grizzled. Trabacalo. Xebec. Suet. Stanchion. Beadle.

Philomath. Gaby. Purser. Tartan. Eparterial. Otiose. Cryptogam. Puncheon. Neume. Cully. Carronade. Becket. Belay. Capstan. Nacreous. Fug. Cosset. Roborative. Comminatory. Strake.

Douceur. Bowsprit. Orlop. Turbot. Luffing. Sempiternal. Tompion. Loblolly (boy). Felucca. Genet. Steeve. Gremial. Epicene. Quaere. Mumchance. Hance. Divertimento. Halliard. Gleet. Rapparee.

Prepotent. Tramontana. Hecatomb. Inveteracy. Davit. Vaticination/vaticinatory. Trundle. Antinomian. Scunner. Shay. Demulcent. Wherry. Cullion. Hemidemisemiquaver. Cathead. Cordage. Kedge. Clew. Semaphore. Tumblehome.

February 21, 2018 Posted by | Books, Language | Leave a comment

A few diabetes papers of interest

(I hadn’t expected to only cover two papers in this post, but the second paper turned out to include a lot of stuff I figured might be worth adding here. I might add another post later this week including some of the other studies I had intended to cover in this post.)

i. Burden of Mortality Attributable to Diagnosed Diabetes: A Nationwide Analysis Based on Claims Data From 65 Million People in Germany.

“Diabetes is among the 10 most common causes of death worldwide (2). Between 1990 and 2010, the number of deaths attributable to diabetes has doubled (2). People with diabetes have a reduced life expectancy of ∼5 to 6 years (3). The most common cause of death in people with diabetes is cardiovascular disease (3,4). Over the past few decades, a reduction of diabetes mortality has been observed in several countries (5–9). However, the excess risk of death is still higher than in the population without diabetes, particularly in younger age-groups (4,9,10). Unfortunately, in most countries worldwide, reliable data on diabetes mortality are lacking (1). In a few European countries, such as Denmark (5) and Sweden (4), mortality analyses are based on national diabetes registries that include all age-groups. However, Germany and many other European countries do not have such national registries. Until now, age-standardized hazard ratios for diabetes mortality between 1.4 and 2.6 have been published for Germany on the basis of regional studies and surveys with small respondent numbers (11–14). To the best of our knowledge, no nationwide estimates of the number of excess deaths due to diabetes have been published for Germany, and no information on older age-groups >79 years is currently available.

In 2012, changes in the regulation of data transparency enabled the use of nationwide routine health care data from the German statutory health insurance system, which insures ∼90% of the German population (15). These changes have allowed for new possibilities for estimating the burden of diabetes in Germany. Hence, this study estimates the number of excess deaths due to diabetes (ICD-10 codes E10–E14) and type 2 diabetes (ICD-10 code E11) in Germany, which is the number of deaths that could have been prevented if the diabetes mortality rate was as high as that of the population without diabetes.”

“Nationwide data on mortality ratios for diabetes and no diabetes are not available for Germany. […] the age- and sex-specific mortality rate ratios between people with diabetes and without diabetes were used from a Danish study wherein the Danish National Diabetes Register was linked to the individual mortality data from the Civil Registration System that includes all people residing in Denmark (5). Because the Danish National Diabetes Register is one of the most accurate diabetes registries in Europe, with a sensitivity of 86% and positive predictive value of 90% (5), we are convinced that the Danish estimates are highly valid and reliable. Denmark and Germany have a comparable standard of living and health care system. The diabetes prevalence in these countries is similar (Denmark 7.2%, Germany 7.4% [20]) and mortality of people with and without diabetes comparable, as shown in the European mortality database”
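
The excess-deaths arithmetic itself is simple; here is a toy sketch of the calculation for a single age/sex stratum, applying an external mortality rate ratio in the way the authors applied the Danish ones (all numbers below are invented for illustration):

```python
def excess_deaths(n_diabetes, mortality_rate_without_diabetes, rate_ratio):
    """Deaths among people with diabetes that would not have occurred if
    their mortality rate equalled that of the population without diabetes."""
    observed = n_diabetes * mortality_rate_without_diabetes * rate_ratio
    counterfactual = n_diabetes * mortality_rate_without_diabetes
    return observed - counterfactual

# E.g. 1,000,000 people with diabetes in a stratum, background mortality
# rate 2%/year, rate ratio 1.5 -> 10,000 excess deaths:
print(excess_deaths(1_000_000, 0.02, 1.5))
```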

“In total, 174,627 excess deaths (137,950 from type 2 diabetes) could have been prevented in 2010 if mortality was the same in people with and without diabetes. Overall, 21% of all deaths in Germany were attributable to diabetes, and 16% were attributable to type 2 diabetes […] Most of the excess deaths occurred in the 70- to 79- and 80- to 89-year-old age-groups (∼34% each) […]. Substantial sex differences were found in diabetes-related excess deaths. From the age of ∼40 years, the number of male excess deaths due to diabetes started to grow, but the number of female excess deaths increased with a delay. Thus, the highest number of male excess deaths due to diabetes occurred at the age of ∼75 years, whereas the peak of female excess deaths was ∼10 years later. […] The diabetes mortality rates increased with age and were always higher than in the population without diabetes. The largest differences in mortality rates between people with and without diabetes were observed in the younger age-groups. […] These results are in accordance with previous studies worldwide (3,4,7,9) and regional studies in Germany (11–13).”

“According to official numbers from the Federal Statistical Office, 858,768 people died in Germany in 2010, with 23,131 deaths due to diabetes, representing 2.7% of the all-cause mortality (26). Hence, in Germany, diabetes is not ranked among the top 10 most common causes of death […]. We found that 21% of all deaths were attributable to diabetes and 16% were attributable to type 2 diabetes; hence, we suggest that the number of excess deaths attributable to diabetes is strongly underestimated if we rely on reported causes of death from death certificates, as official statistics do. Estimating diabetes-related mortality is challenging because most people die as a result of diabetes complications and comorbidities, such as cardiovascular disease and renal failure, which often are reported as the underlying cause of death (1,23). For this reason, another approach is to focus not only on the underlying cause of death but also on the multiple causes of death to assess any mention of a disease on the death certificate (27). In a study from Italy, the method of assessing multiple causes of death revealed that in 12.3% of all studied death certificates, diabetes was mentioned, whereas only 2.9% reported diabetes as the underlying cause of death (27), corresponding to a four times higher proportion of death related to diabetes. Another nationwide analysis from Canada found that diabetes was more than twice as likely to be a contributing factor to death than the underlying cause of death from the years 2004–2008 (28). A recently published study from the U.S. that was based on two representative surveys from 1997 to 2010 found that 11.5% of all deaths were attributable to diabetes, which reflects a three to four times higher proportion of diabetes-related deaths (29). Overall, these results, together with the current calculations, demonstrate that deaths due to diabetes contribute to a much higher burden than previously assumed.”

ii. Standardizing Clinically Meaningful Outcome Measures Beyond HbA1c for Type 1 Diabetes: A Consensus Report of the American Association of Clinical Endocrinologists, the American Association of Diabetes Educators, the American Diabetes Association, the Endocrine Society, JDRF International, The Leona M. and Harry B. Helmsley Charitable Trust, the Pediatric Endocrine Society, and the T1D Exchange.

“Type 1 diabetes is a life-threatening, autoimmune disease that strikes children and adults and can be fatal. People with type 1 diabetes have to test their blood glucose multiple times each day and dose insulin via injections or an infusion pump 24 h a day every day. Too much insulin can result in hypoglycemia, seizures, coma, or death. Hyperglycemia over time leads to kidney, heart, nerve, and eye damage. Even with diligent monitoring, the majority of people with type 1 diabetes do not achieve recommended target glucose levels. In the U.S., approximately one in five children and one in three adults meet hemoglobin A1c (HbA1c) targets and the average patient spends 7 h a day hyperglycemic and over 90 min hypoglycemic (1–3). […] HbA1c is a well-accepted surrogate outcome measure for evaluating the efficacy of diabetes therapies and technologies in clinical practice as well as in research (4–6). […] While HbA1c is used as a primary outcome to assess glycemic control and as a surrogate for risk of developing complications, it has limitations. As a measure of mean blood glucose over 2 or 3 months, HbA1c does not capture short-term variations in blood glucose or exposure to hypoglycemia and hyperglycemia in individuals with type 1 diabetes; HbA1c also does not capture the impact of blood glucose variations on individuals’ quality of life. Recent advances in type 1 diabetes technologies have made it feasible to assess the efficacy of therapies and technologies using a set of outcomes beyond HbA1c and to expand definitions of outcomes such as hypoglycemia. While definitions for hypoglycemia in clinical care exist, they have not been standardized […]. The lack of standard definitions impedes and can confuse their use in clinical practice, impedes development processes for new therapies, makes comparison of studies in the literature challenging, and may lead to regulatory and reimbursement decisions that fail to meet the needs of people with diabetes. To address this vital issue, the type 1 diabetes–stakeholder community launched the Type 1 Diabetes Outcomes Program to develop consensus definitions for a set of priority outcomes for type 1 diabetes. […] The outcomes prioritized under the program include hypoglycemia, hyperglycemia, time in range, diabetic ketoacidosis (DKA), and patient-reported outcomes (PROs).”

“Hypoglycemia is a significant — and potentially fatal — complication of type 1 diabetes management and has been found to be a barrier to achieving glycemic goals (9). Repeated exposure to severe hypoglycemic events has been associated with an increased risk of cardiovascular events and all-cause mortality in people with type 1 or type 2 diabetes (10,11). Hypoglycemia can also be fatal, and severe hypoglycemic events have been associated with increased mortality (12–14). In addition to the physical aspects of hypoglycemia, it can also have negative consequences on emotional status and quality of life.

While there is some variability in how and when individuals manifest symptoms of hypoglycemia, beginning at blood glucose levels <70 mg/dL (3.9 mmol/L) (which is at the low end of the typical post-absorptive plasma glucose range), the body begins to increase its secretion of counterregulatory hormones including glucagon, epinephrine, cortisol, and growth hormone. The release of these hormones can cause moderate autonomic effects, including but not limited to shaking, palpitations, sweating, and hunger (15). Individuals without diabetes do not typically experience dangerously low blood glucose levels because of counterregulatory hormonal regulation of glycemia (16). However, in individuals with type 1 diabetes, there is often a deficiency of the counterregulatory response […]. Moreover, as people with diabetes experience an increased number of episodes of hypoglycemia, the risk of hypoglycemia unawareness, impaired glucose counterregulation (for example, in hypoglycemia-associated autonomic failure [17]), and level 2 and level 3 hypoglycemia […] all increase (18). Therefore, it is important to recognize and treat all hypoglycemic events in people with type 1 diabetes, particularly in populations (children, the elderly) that may not have the ability to recognize and self-treat hypoglycemia. […] More notable clinical symptoms begin at blood glucose levels <54 mg/dL (3.0 mmol/L) (19,20). As the body’s primary utilizer of glucose, the brain is particularly sensitive to decreases in blood glucose concentrations. Both experimental and clinical evidence has shown that, at these levels, neurogenic and neuroglycopenic symptoms including impairments in reaction times, information processing, psychomotor function, and executive function begin to emerge. These neurological symptoms correlate to altered brain activity in multiple brain areas including the prefrontal cortex and medial temporal lobe (21–24). At these levels, individuals may experience confusion, dizziness, blurred or double vision, tremors, and tingling sensations (25). Hypoglycemia at this glycemic level may also increase proinflammatory and prothrombotic markers (26). Left untreated, these symptoms can become severe to the point that an individual will require assistance from others to move or function. Prolonged untreated hypoglycemia that continues to drop below 50 mg/dL (2.8 mmol/L) increases the risk of seizures, coma, and death (27,28). Hypoglycemia that affects cognition and stamina may also increase the risk of accidents and falls, which is a particular concern for older adults with diabetes (29,30).

The glycemic thresholds at which these symptoms occur, as well as the severity with which they manifest themselves, may vary in individuals with type 1 diabetes depending on the number of hypoglycemic episodes they have experienced (31–33). Counterregulatory physiological responses may evolve in patients with type 1 diabetes who endure repeated hypoglycemia over time (34,35).”

“The Steering Committee defined three levels of hypoglycemia […] Level 1 hypoglycemia is defined as a measurable glucose concentration <70 mg/dL (3.9 mmol/L) but ≥54 mg/dL (3.0 mmol/L) that can alert a person to take action. A blood glucose concentration of 70 mg/dL (3.9 mmol/L) has been recognized as a marker of physiological hypoglycemia in humans, as it approximates the glycemic threshold for neuroendocrine responses to falling glucose levels in individuals without diabetes. As such, blood glucose in individuals without diabetes is generally 70–100 mg/dL (3.9–5.6 mmol/L) upon waking and 70–140 mg/dL (3.9–7.8 mmol/L) after meals, and any excursions beyond those levels are typically countered with physiological controls (16,37). However, individuals with diabetes who have impaired or altered counterregulatory hormonal and neurological responses do not have the same internal regulation as individuals without diabetes to avoid dropping below 70 mg/dL (3.9 mmol/L) and becoming hypoglycemic. Recurrent episodes of hypoglycemia lead to increased hypoglycemia unawareness, which can become dangerous as individuals cease to experience symptoms of hypoglycemia, allowing their blood glucose levels to continue falling. Therefore, glucose levels <70 mg/dL (3.9 mmol/L) are clinically important, independent of the severity of acute symptoms.

Level 2 hypoglycemia is defined as a measurable glucose concentration <54 mg/dL (3.0 mmol/L) that needs immediate action. At ∼54 mg/dL (3.0 mmol/L), neurogenic and neuroglycopenic hypoglycemic symptoms begin to occur, ultimately leading to brain dysfunction at levels <50 mg/dL (2.8 mmol/L) (19,20). […] Level 3 hypoglycemia is defined as a severe event characterized by altered mental and/or physical status requiring assistance. Severe hypoglycemia captures events during which the symptoms associated with hypoglycemia impact a patient to such a degree that the patient requires assistance from others (27,28). […] Hypoglycemia that sets in relatively rapidly, such as in the case of a significant insulin overdose, may induce level 2 or level 3 hypoglycemia with little warning (38).”
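
The three levels map directly onto a simple classification rule. A minimal sketch, assuming glucose measured in mg/dL (note that level 3 is defined by clinical status rather than by a glucose threshold, hence the separate flag):

```python
def hypoglycemia_level(glucose_mg_dl, requires_assistance=False):
    """Classify a reading per the consensus definitions quoted above."""
    if requires_assistance:
        return 3           # severe event: altered status, needs assistance
    if glucose_mg_dl < 54: # <3.0 mmol/L: needs immediate action
        return 2
    if glucose_mg_dl < 70: # <3.9 mmol/L: can alert a person to take action
        return 1
    return 0               # not hypoglycemic

print(hypoglycemia_level(62))  # -> 1
print(hypoglycemia_level(48))  # -> 2
```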

“The data regarding the effects of chronic hyperglycemia on long-term outcomes is conclusive, indicating that chronic hyperglycemia is a major contributor to morbidity and mortality in type 1 diabetes (41,43–45). […] Although the correlation between long-term poor glucose control and type 1 diabetes complications is well established, the impact of short-term hyperglycemia is not as well understood. However, hyperglycemia has been shown to have physiological effects and in an acute-care setting is linked to morbidity and mortality in people with and without diabetes. Short-term hyperglycemia, regardless of diabetes diagnosis, has been shown to reduce survival rates among patients admitted to the hospital with stroke or myocardial infarction (47,48). In addition to increasing mortality, short-term hyperglycemia is correlated with stroke severity and poststroke disability (49,50).

The effects of short-term hyperglycemia have also been observed in nonacute settings. Evidence indicates that hyperglycemia alters retinal cell firing through sensitization in patients with type 1 diabetes (51). This finding is consistent with similar findings showing increased oxygen consumption and blood flow in the retina during hyperglycemia. Because retinal cells absorb glucose through an insulin-independent process, they respond more strongly to increases in glucose in the blood than other cells in patients with type 1 diabetes. The effects of acute hyperglycemia on retinal response may underlie part of the development of retinopathy known to be a long-term complication of type 1 diabetes.”

“The Steering Committee defines hyperglycemia for individuals with type 1 diabetes as the following:

  • Level 1—elevated glucose: glucose >180 mg/dL (10 mmol/L) and glucose ≤250 mg/dL (13.9 mmol/L)

  • Level 2—very elevated glucose: glucose >250 mg/dL (13.9 mmol/L) […]

Elevated glucose is defined as a glucose concentration >180 mg/dL (10.0 mmol/L) but ≤250 mg/dL (13.9 mmol/L). In clinical practice, measures of hyperglycemia differ based on time of day (e.g., pre- vs. postmeal). This program, however, focused on defining outcomes for use in product development that are universally applicable. Glucose profiles and postprandial blood glucose data for individuals without diabetes suggest that 140 mg/dL (7.8 mmol/L) is the appropriate threshold for defining hyperglycemia. However, data demonstrate that the majority of individuals without diabetes exceed this threshold every day. Moreover, people with diabetes spend >60% of their day above this threshold, which suggests that 140 mg/dL (7.8 mmol/L) is too low of a threshold for measuring hyperglycemia in individuals with diabetes. Current clinical guidelines for people with diabetes indicate that peak prandial glucose should not exceed 180 mg/dL (10.0 mmol/L). As such, the Steering Committee identified 180 mg/dL (10.0 mmol/L) as the initial threshold defining elevated glucose. […]

Very elevated glucose is defined as a glucose concentration >250 mg/dL (13.9 mmol/L). Evidence examining the impact of hyperglycemia does not examine the incremental effects of increasing blood glucose. However, blood glucose values exceeding 250 mg/dL (13.9 mmol/L) increase the risk for DKA (58), and HbA1c readings at that level have been associated with a high likelihood of complications.”
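
The hyperglycemia definitions are threshold rules in the same way; a sketch, including the approximate mg/dL to mmol/L conversion (divide by roughly 18) behind the paired figures in the quote:

```python
MG_DL_PER_MMOL_L = 18.0  # approximate conversion factor for glucose

def hyperglycemia_level(glucose_mg_dl):
    """Classify a reading per the Steering Committee definitions quoted above."""
    if glucose_mg_dl > 250:   # >13.9 mmol/L: level 2, very elevated
        return 2
    if glucose_mg_dl > 180:   # >10.0 mmol/L: level 1, elevated
        return 1
    return 0

def to_mmol_l(mg_dl):
    return mg_dl / MG_DL_PER_MMOL_L

print(hyperglycemia_level(200), round(to_mmol_l(200), 1))  # -> 1 11.1
```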

“An individual whose blood glucose levels rarely extend beyond the thresholds defined for hypo- and hyperglycemia is less likely to be subject to the short-term or long-term effects experienced by those with frequent excursions beyond one or both thresholds. It is also evident that if the intent of a given intervention is to safely manage blood glucose but the intervention does not reliably maintain blood glucose within safe levels, then the intervention should not be considered effective.

The time in range outcome is distinguished from traditional HbA1c testing in several ways (4,59). Time in range captures fluctuations in glucose levels continuously, whereas HbA1c testing is done at static points in time, usually months apart (60). Furthermore, time in range is more specific and sensitive than traditional HbA1c testing; for example, a treatment that addresses acute instances of hypo- or hyperglycemia may be detected in a time in range assessment but not necessarily in an HbA1c assessment. As a percentage, time in range is also more likely to be comparable across patients than HbA1c values, which are more likely to have patient-specific variations in significance (61). Finally, time in range may be more likely than HbA1c levels to correlate with PROs, such as quality of life, because the outcome is more representative of the whole patient experience (62). Table 3 illustrates how the concept of time in range differs from current HbA1c testing. […] [V]ariation in what is considered “normal” glucose fluctuations across populations, as well as what is realistically achievable for people with type 1 diabetes, must be taken into account so as not to make the target range definition too restrictive.”

“The Steering Committee defines time in range for individuals with type 1 diabetes as the following:

  • Percentage of readings in the range of 70–180 mg/dL (3.9–10.0 mmol/L) per unit of time

The Steering Committee considered it important to keep the time in range definition wide in order to accommodate variations across the population with type 1 diabetes — including different age-groups — but limited enough to preclude the possibility of negative outcomes. The upper and lower bounds of the time in range definition are consistent with the definitions for hypo- and hyperglycemia defined above. For individuals without type 1 diabetes, 70–140 mg/dL (3.9–7.8 mmol/L) represents a normal glycemic range (66). However, spending most of the day in this range is not generally achievable for people with type 1 diabetes […] To date, there is limited research correlating time in range with positive short-term and long-term type 1 diabetes outcomes, as opposed to the extensive research demonstrating the negative consequences of excursions into hyper- or hypoglycemia. More substantial evidence demonstrating a correlation or a direct causative relationship between time in range for patients with type 1 diabetes and positive health outcomes is needed.”
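
Computed from CGM or fingerstick data, time in range is simply the share of readings falling inside the consensus window; a minimal sketch with a hypothetical set of readings:

```python
def time_in_range(readings_mg_dl, low=70, high=180):
    """Percentage of readings in the 70-180 mg/dL consensus range."""
    in_range = sum(low <= g <= high for g in readings_mg_dl)
    return 100.0 * in_range / len(readings_mg_dl)

# Hypothetical day of readings:
day = [65, 90, 110, 150, 185, 240, 170, 130, 95, 75]
print(f"{time_in_range(day):.0f}% in range")  # -> 70% in range
```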

“DKA is often associated with hyperglycemia. In most cases, in an individual with diabetes, the cause of hyperglycemia is also the cause of DKA, although the two conditions are distinct. DKA develops when a lack of glucose in cells prompts the body to begin breaking down fatty acid reserves. This increases the levels of ketones in the body (ketosis) and causes a drop in blood pH (acidosis). At its most severe, DKA can cause cerebral edema, acute respiratory distress, thromboembolism, coma, and death (69,70). […] Although the current definition for DKA includes a list of multiple criteria that must be met, not all information currently included in the accepted definition is consistently gathered or required to diagnose DKA. The Steering Committee defines DKA in individuals with type 1 diabetes in a clinical setting as the following:

  • Elevated serum or urine ketones (greater than the upper limit of the normal range), and

  • Serum bicarbonate <15 mmol/L or blood pH <7.3

Given the seriousness of DKA, it is unnecessary to stratify DKA into different levels or categories, as the presence of DKA—regardless of the differences observed in the separate biochemical tests—should always be considered serious. In individuals with known diabetes, plasma glucose values are not necessary to diagnose DKA. Further, new therapeutic agents, specifically sodium–glucose cotransporter 2 inhibitors, have been linked to euglycemic DKA, or DKA with blood glucose values <250 mg/dL (13.9 mmol/L).”
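
The two-part definition reduces to a simple boolean check; a sketch (the argument names are mine, not the report’s):

```python
def meets_dka_definition(ketones_elevated, bicarbonate_mmol_l=None, blood_ph=None):
    """Consensus clinical definition quoted above: elevated serum/urine
    ketones AND (bicarbonate <15 mmol/L OR pH <7.3). Plasma glucose is
    deliberately absent: euglycemic DKA would otherwise be missed."""
    acidosis = ((bicarbonate_mmol_l is not None and bicarbonate_mmol_l < 15)
                or (blood_ph is not None and blood_ph < 7.3))
    return ketones_elevated and acidosis

print(meets_dka_definition(True, bicarbonate_mmol_l=12))  # -> True
print(meets_dka_definition(True, blood_ph=7.38))          # -> False
```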

“In guidance released in 2009 (72), the U.S. Food and Drug Administration (FDA) defined PROs as “any report of the status of a patient’s health condition that comes directly from the patient, without interpretation of the patient’s response by a clinician or anyone else.” In the same document, the FDA clearly acknowledged the importance of PROs, advising that they be used to gather information that is “best known by the patient or best measured from the patient perspective.”

Measuring and using PROs is increasingly seen as essential to evaluating care from a patient-centered perspective […] Given that type 1 diabetes is a chronic condition primarily treated on an outpatient basis, much of what people with type 1 diabetes experience is not captured through standard clinical measurement. Measures that capture PROs can fill these important information gaps. […] The use of validated PROs in type 1 diabetes clinical research is not currently widespread, and challenges to effectively measuring some PROs, such as quality of life, continue to confront researchers and developers.”

February 20, 2018 Posted by | Cardiology, Diabetes, Medicine, Neurology, Ophthalmology, Studies | Leave a comment

Some things you need to know about machine learning but didn’t know whom to ask (the grad school version)

Some links to stuff related to the lecture’s coverage:
An overview of gradient descent optimization algorithms.
Rectifier (neural networks) [Relu].
Backpropagation.
Escaping From Saddle Points – Online Stochastic Gradient for Tensor Decomposition (Ge et al.).
How to Escape Saddle Points Efficiently (closely related to the paper above, presumably one of the ‘recent improvements’ mentioned in the lecture).
Linear classifier.
Concentration inequality.
A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks (Neyshabur et al.).
Off the convex path (the lecturer’s blog).
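
For readers who want a concrete anchor for the first link: the plain gradient descent update which the surveyed optimizers (momentum, Adam, etc.) refine looks like this on a toy one-dimensional objective:

```python
# Minimize f(w) = (w - 3)^2 with vanilla gradient descent.
def grad(w):                # f'(w) = 2(w - 3)
    return 2.0 * (w - 3.0)

w, lr = 0.0, 0.1            # initial weight and learning rate
for _ in range(100):
    w -= lr * grad(w)       # update rule: w <- w - lr * f'(w)
print(round(w, 4))          # -> 3.0 (the minimizer)
```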

February 19, 2018 Posted by | Mathematics, Lectures, Computer science | Leave a comment

Prevention of Late-Life Depression (II)

Some more observations from the book:

“In contrast to depression in childhood and youth when genetic and developmental vulnerabilities play a significant role in the development of depression, the development of late-life depression is largely attributed to its interactions with acquired factors, especially medical illness [17, 18]. An analysis of the WHO World Health Survey indicated that the prevalence of depression among medical patients ranged from 9.3 to 23.0 %, significantly higher than that in individuals without medical conditions [19]. Wells et al. [20] found in the Epidemiologic Catchment Area Study that the risk of developing lifetime psychiatric disorders among individuals with at least one medical condition was 27.9 % higher than among those without medical conditions. […] Depression and disability mutually reinforce the risk of each other, and adversely affect disease progression and prognosis [21, 25]. […] disability caused by medical conditions serves as a risk factor for depression [26]. When people lose their normal sensory, motor, cognitive, social, or executive functions, especially in a short period of time, they can become very frustrated or depressed. Inability to perform daily tasks as before decreases self-esteem, reduces independence, increases the level of psychological stress, and creates a sense of hopelessness. On the other hand, depression increases the risk for disability. Negative interpretation, attention bias, and learned hopelessness of depressed persons may increase risky health behaviors that exacerbate physical disorders or disability. Meanwhile, depression-related cognitive impairment also affects role performance and leads to functional disability [25]. For example, Egede [27] found in the 1999 National Health Interview Survey that the risk of having functional disability among patients with the comorbidity of diabetes and depression was approximately 2.5–5 times higher than those with either depression or diabetes alone. […] A leading cause of disability among medical patients is pain and pain-related fears […] Although a large proportion of pain complaints can be attributed to physiological changes from physical disorders, psychological factors (e.g., attention, interpretation, and coping skills) play an important role in perception of pain […] Bair et al. [31] indicated in a literature review that the prevalence of pain was higher among depressed patients than non-depressed patients, and the prevalence of major depression was also higher among pain patients compared to those without pain complaints.”

“Alcohol use has more serious adverse health effects on older adults than other age groups, since aging-related physiological changes (e.g. reduced liver detoxification and renal clearance) affect alcohol metabolism, increase the blood concentration of alcohol, and magnify negative consequences. More importantly, alcohol interacts with a variety of frequently prescribed medications potentially influencing both treatment and adverse effects. […] Due to age-related changes in pharmacokinetics and pharmacodynamics, older adults are a vulnerable population to […] adverse drug effects. […] Adverse drug events are frequently due to failure to adjust dosage or to account for drug–drug interactions in older adults [64]. […] Loneliness […] is considered as an independent risk factor for depression [46, 47], and has been demonstrated to be associated with low physical activity, increased cardiovascular risks, hyperactivity of the hypothalamic-pituitary-adrenal axis, and activation of immune response [for details, see Cacioppo & Patrick’s book on these topics – US] […] Hopelessness is a key concept of major depression [54], and also an independent risk factor of suicidal ideation […] Hopelessness reduces expectations for the future, and negatively affects judgment for making medical and behavioral decisions, including non-adherence to medical regimens or engaging in unhealthy behaviors.”

“Co-occurring depression and medical conditions are associated with more functional impairment and mortality than expected from the severity of the medical condition alone. For example, depression accompanying diabetes confers increased functional impairment [27], complications of diabetes [65, 66], and mortality [67–71]. Frasure-Smith and colleagues highlighted the prognostic importance of depression among persons who had sustained a myocardial infarction (MI), finding that depression was a significant predictor of mortality at both 6 and 18 months post MI [72, 73]. Subsequent follow-up studies have borne out the increased risk conferred by depression on the mortality of patients with cardiovascular disease [10, 74, 75]. Over the course of a 2-year follow-up interval, depression contributed as much to mortality as did myocardial infarction or diabetes, with the population attributable fraction of mortality due to depression approximately 13 % (similar to the attributable risk associated with heart attack at 11 % and diabetes at 9 %) [76]. […] Although the bidirectional relationship between physical disorders and depression has been well known, there are still relatively few randomized controlled trials on preventing depression among medically ill patients. […] Rates of attrition [in post-stroke depression prevention trials has been observed to be] high […] Stroke, acute coronary syndrome, cancer, and other conditions impose a variety of treatment burdens on patients so that additional interventions without direct or immediate clinical effects may not be acceptable [95]. So even with good participation rates, lack of adherence to the intervention might limit effects.”
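
The population attributable fraction mentioned here has a standard closed form (Levin’s formula); a quick sketch with illustrative inputs (not the study’s own), showing how a figure of roughly 13 % can arise:

```python
def population_attributable_fraction(prevalence, relative_risk):
    """Levin's formula: PAF = p*(RR - 1) / (1 + p*(RR - 1))."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# E.g. 15% exposure prevalence with RR = 2 gives a PAF of ~13%:
print(f"{population_attributable_fraction(0.15, 2.0):.1%}")  # -> 13.0%
```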

“Late-life depression (LLD) is a heterogeneous disease, with multiple risk factors, etiologies, and clinical features. It has been recognized for many years that there is a significant relationship between the presence of depression and cerebrovascular disease in older adults [1, 2]. This subtype of LLD was eventually termed “vascular depression.” […] There have been a multitude of studies associating white matter abnormalities with depression in older adults using MRI technology to visualize lesions, or what appear as hyperintensities in the white matter on T2-weighted scans. A systematic review concluded that white matter hyperintensities (WMH) are more common and severe among older adults with depression compared to their non-depressed peers [9]. […] WMHs are associated with older age [13] and cerebrovascular risk factors, including diabetes, heart disease, and hypertension [14–17]. White matter severity and extent of WMH volume has been related to the severity of depression in late life [18, 19]. For example, among 639 older, community-dwelling adults, white matter lesion (WML) severity was found to predict depressive episodes and symptoms over a 3-year period [19]. […] Another way of investigating white matter integrity is with diffusion tensor imaging (DTI), which measures the diffusion of water in tissues and allows for indirect evidence of the microstructure of white matter, most commonly represented as fractional anisotropy (FA) and mean diffusivity (MD). DTI may be more sensitive to white matter pathology than is quantification of WMH […] A number of studies have found lower FA in widespread regions among individuals with LLD relative to controls [34, 36, 37]. […] lower FA has been associated with poorer performance on measures of cognitive functioning among patients with LLD [35, 38–40] and with measures of cerebrovascular risk severity. […] It is important to recognize that FA reflects the organization of fiber tracts, including fiber density, axonal diameter, or myelination in white matter. Thus, lower FA can result from multiple pathophysiological sources [42, 43]. […] Together, the aforementioned studies provide support for the vascular depression hypothesis. They demonstrate that white matter integrity is reduced in patients with LLD relative to controls, is somewhat specific to regions important for cognitive and emotional functioning, and is associated with cognitive functioning and depression severity. […] There is now a wealth of evidence to support the association between vascular pathology and depression in older age. While the etiology of depression in older age is multifactorial, from the epidemiological, neuroimaging, behavioral, and genetic evidence available, we can conclude that vascular depression represents one important subtype of LLD. The mechanisms underlying the relationship between vascular pathology and depression are likely multifactorial, and may include disrupted connections between key neural regions, reduced perfusion of blood to key brain regions integral to affective and cognitive processing, and inflammatory processes.”

“Cognitive changes associated with depression have been the focus of research for decades. Results have been inconsistent, likely as a result of methodological differences in how depression is diagnosed and cognitive functioning measured, as well as the effects of potential subtypes and the severity of depression […], though deficits in executive functioning, learning and memory, and attention have been associated with depression in most studies [75]. In older adults, additional confounding factors include the potential presence of primary degenerative disorders, such as Alzheimer’s disease, which can pose a challenge to differential diagnosis in its early stages. […] LLD with cognitive dysfunction has been shown to result in greater disability than depressive symptoms alone [6], and MCI [mild cognitive impairment, US] with co-occurring LLD has been shown to double the risk of developing Alzheimer’s disease (AD) compared to MCI alone [86]. The conversion from MCI to AD also appears to occur earlier in patients with co-occurring depressive symptoms, as demonstrated by Modrego & Ferrandez [86] in their prospective cohort study of 114 outpatients diagnosed with amnestic MCI. […] Given accruing evidence for abnormal functioning of a number of cortical and subcortical networks in geriatric depression, of particular interest is whether these abnormalities are a reflection of the actively depressed state, or whether they may persist following successful resolution of symptoms. To date, studies have investigated this question through either longitudinal investigation of adults with geriatric depression, or comparison of depressed elders who are actively depressed versus those who have achieved symptom remission. Of encouragement, successful treatment has been reliably associated with normalization of some aspects of disrupted network functioning. For example, successful antidepressant treatment is associated with reduction of the elevated cerebral glucose metabolism observed during depressed states (e.g., [71–74]), with greater symptom reduction associated with greater metabolic change […] Taken together, these studies suggest that although a subset of the functional abnormalities observed during the LLD state may resolve with successful treatment, other abnormalities persist and may be tied to damage to the structural connectivity in important affective and cognitive networks. […] studies suggest a chronic decrement in cognitive functioning associated with LLD that is not adequately addressed through improvement of depressive symptoms alone.”

A review of the literature on evidence-based treatments for LLD found that about 50 % of patients improved on antidepressants, but that the number needed to treat (NNT) was quite high (NNT = 8, [139]) and placebo effects were significant [140]. Additionally, no difference was demonstrated in the effectiveness of one antidepressant drug class over another […], and in one-third of patients, depression was resistant to monotherapy [140]. The addition of medications or switching within or between drug classes appears to result in improved treatment response for these patients [140, 141]. A meta-analysis of patient-level variables demonstrated that duration of depressive symptoms and baseline depression severity significantly predicts response to antidepressant treatment in LLD, with chronically depressed older patients with moderate-to-severe symptoms at baseline experiencing more improvement in symptoms than mildly and acutely depressed patients [142]. Pharmacological treatment response appears to range from incomplete to poor in LLD with co-occurring cognitive impairment.”
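
As a reminder of what the NNT encodes: it is the reciprocal of the absolute difference in response rates between treatment and control. A sketch (the placebo response rate below is back-calculated for illustration, not reported in the quote):

```python
def number_needed_to_treat(response_rate_treated, response_rate_control):
    """NNT = 1 / absolute risk reduction."""
    return 1.0 / (response_rate_treated - response_rate_control)

# ~50% response on antidepressants with NNT = 8 is consistent with a
# placebo response of about 37.5%:
print(number_needed_to_treat(0.50, 0.375))  # -> 8.0
```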

“[C]ompared to other formulations of prevention, such as primary, secondary, or tertiary — in which interventions are targeted at the level of disease/stage of disease — the IOM conceptual framework involves interventions that are targeted at the level of risk in the population [2]. […] [S]elective prevention studies have an important “numbers” advantage — similar to that of indicated prevention trials: the relatively high incidence of depression among persons with key risk markers enables investigator to test interventions with strong statistical power, even with somewhat modest sample sizes. This fact was illustrated by Schoevers and colleagues [3], in which the authors were able to account for nearly 50 % of total risk of late-life depression with consideration of only a handful of factors. Indeed, research, largely generated by groups in the Netherlands and the USA, has identified that selective prevention may be one of the most efficient approaches to late-life depression prevention, as they have estimated that targeting persons at high risk for depression — based on risk markers such as medical comorbidity, low social support, or physical/functional disability — can yield theoretical numbers needed to treat (NNTs) of approximately 5–7 in primary care settings [4–7]. […] compared to the findings from selective prevention trials targeting older persons with general health/medical problems, […] trials targeting older persons based on sociodemographic risk factors have been more mixed and did not reveal as consistent a pattern of benefits for selective prevention of depression.”

“Few of the studies in the existing literature that involve interventions to prevent depression and/or reduce depressive symptoms in older populations have included economic evaluations [13]. The identification of cost-effective interventions to provide to groups at high risk for depression is an important public health goal, as such treatments may avert or reduce a significant amount of the disease burden. […] A study by Katon and colleagues [8] showed that elderly patients with either subsyndromal or major depression had significantly higher medical costs during the previous 6 months than those without depression; total healthcare costs were $1,045 to $1,700 greater, and total outpatient/ambulatory costs ranged from being $763 to $979 more, on average. Depressed patients had greater usage of health resources in every category of care examined, including those that are not mental health-related, such as emergency department visits. No difference in excess costs was found between patients with a DSM-IV depressive disorder and those with depressive symptoms only, however, as mean total costs were 51 % higher in the subthreshold depression group (95 % CI = 1.39–1.66) and 49 % higher in the MDD/dysthymia group (95 % CI = 1.28–1.72) than in the nondepressed group [8]. In a similar study, the usage of various types of health services by primary care patients in the Netherlands was assessed, and average costs were determined to be 1,403 more in depressed individuals versus control patients [21]. Study investigators once again observed that patients with depression had greater utilization of both non-mental and mental healthcare services than controls.”

“In order for routine depression screening in the elderly to be cost-effective […] appropriate follow-up measures must be taken with those who screen positive, including a diagnostic interview and/or referral to a mental health professional [this – the necessity/requirement of proper follow-up following screens in order for screening to be cost-effective – is incidentally a standard result in screening contexts, see also Juth & Munthe’s book – US] [23, 25]. For example, subsequent steps may include initiation of psychotherapy or antidepressant treatment. Thus, one reason that the USPSTF does not recommend screening for depression in settings where proper mental health resources do not exist is that the evidence suggests that outcomes are unlikely to improve without effective follow-up care […]  as per the USPSTF suggestion, Medicare will only cover the screening when the appropriate supports for proper diagnosis and treatment are available […] In order to determine which interventions to prevent and treat depression should be provided to those who screen positive for depressive symptoms and to high-risk populations in general, cost-effectiveness analyses must be completed for a variety of different treatments and preventive measures. […] questions remain regarding whether annual versus other intervals of screening are most cost-effective. With respect to preventive interventions, the evidence to date suggests that these are cost-effective in settings where those at the highest risk are targeted.”

February 19, 2018 Posted by | Books, Cardiology, Diabetes, Health Economics, Neurology, Pharmacology, Psychiatry, Psychology | Leave a comment

Systems Biology (III)

Some observations from chapter 4 below:

“The need to maintain a steady state ensuring homeostasis is an essential concern in nature, while the negative feedback loop is the fundamental way to ensure that this goal is met. The regulatory system determines the interdependences between individual cells and the organism, subordinating the former to the latter. In trying to maintain homeostasis, the organism may temporarily upset the steady state conditions of its component cells, forcing them to perform work for the benefit of the organism. […] On a cellular level signals are usually transmitted via changes in concentrations of reaction substrates and products. This simple mechanism is made possible due to the limited volume of each cell. Such signaling plays a key role in maintaining homeostasis and ensuring cellular activity. On the level of the organism signal transmission is performed by hormones and the nervous system. […] Most intracellular signal pathways work by altering the concentrations of selected substances inside the cell. Signals are registered by forming reversible complexes consisting of a ligand (reaction product) and an allosteric receptor complex. When coupled to the ligand, the receptor inhibits the activity of its corresponding effector, which in turn shuts down the production of the controlled substance, ensuring the steady state of the system. Signals coming from outside the cell are usually treated as commands (covalent modifications), forcing the cell to adjust its internal processes […] Such commands can arrive in the form of hormones, produced by the organism to coordinate specialized cell functions in support of general homeostasis (in the organism). These signals act upon cell receptors and are usually amplified before they reach their final destination (the effector).”

“Each concentration-mediated signal must first be registered by a detector. […] Intracellular detectors are typically based on allosteric proteins. Allosteric proteins exhibit a special property: they have two stable structural conformations and can shift from one form to the other as a result of changes in ligand concentrations. […] The concentration of a product (or substrate) which triggers structural realignment in the allosteric protein (such as a regulatory enzyme) depends on the genetically-determined affinity of the active site to its ligand. Low affinity results in high target concentration of the controlled substance while high affinity translates into lower concentration […]. In other words, high concentration of the product is necessary to trigger a low-affinity receptor (and vice versa). Most intracellular regulatory mechanisms rely on noncovalent interactions. Covalent bonding is usually associated with extracellular signals, generated by the organism and capable of overriding the cell’s own regulatory mechanisms by modifying the sensitivity of receptors […]. Noncovalent interactions may be compared to requests while covalent signals are treated as commands. Signals which do not originate in the receptor’s own feedback loop but modify its affinity are known as steering signals […] Hormones which act upon cells are, by their nature, steering signals […] Noncovalent interactions — dependent on substance concentrations — impose spatial restrictions on regulatory mechanisms. Any increase in cell volume requires synthesis of additional products in order to maintain stable concentrations. The volume of a spherical cell is given as V = (4/3)πr³, where r indicates cell radius. Clearly, even a slight increase in r translates into a significant increase in cell volume, diluting any products dispersed in the cytoplasm. This implies that cells cannot expand without incurring great energy costs. It should also be noted that cell expansion reduces the efficiency of intracellular regulatory mechanisms because signals and substrates need to be transported over longer distances. Thus, cells are universally small, regardless of whether they make up a mouse or an elephant.”
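
A quick illustration of the scaling point above (my own sketch in Python, not something from the book): because volume grows with the cube of the radius, even a modest increase in r sharply dilutes a fixed number of product molecules.

```python
import math

def cell_volume(r):
    """Volume of a spherical cell of radius r: V = (4/3) * pi * r^3."""
    return (4.0 / 3.0) * math.pi * r ** 3

# A ~26% increase in radius roughly doubles the volume, halving the
# concentration of a fixed number of product molecules; doubling the
# radius dilutes them eightfold. (Radii in arbitrary illustrative units.)
for r in (10.0, 12.6, 20.0):
    scale = cell_volume(r) / cell_volume(10.0)
    print(f"r = {r:4.1f} -> volume x{scale:5.2f}, concentration x{1 / scale:5.3f}")
```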

“An effector is an element of a regulatory loop which counteracts changes in the regulated quantity […] Synthesis and degradation of biological compounds often involves numerous enzymes acting in sequence. The product of one enzyme is a substrate for another enzyme. With the exception of the initial enzyme, each step of this cascade is controlled by the availability of the supplied substrate […] The effector consists of a chain of enzymes, each of which depends on the activity of the initial regulatory enzyme […] as well as on the activity of its immediate predecessor which supplies it with substrates. The function of all enzymes in the effector chain is indirectly dependent on the initial enzyme […]. This coupling between the receptor and the first link in the effector chain is a universal phenomenon. It can therefore be said that the initial enzyme in the effector chain is, in fact, a regulatory enzyme. […] Most cell functions depend on enzymatic activity. […] It seems that a set of enzymes associated with a specific process which involves a negative feedback loop is the most typical form of an intracellular regulatory effector. Such effectors can be controlled through activation or inhibition of their associated enzymes.”
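
The receptor–effector arrangement described in the two quotes above is easy to caricature in code. The following is purely my own toy model (illustrative parameters, nothing from the book): the product inhibits its own synthesis via an allosteric receptor, degradation closes the loop, and the genetically determined affinity — represented here by the dissociation constant K — sets the steady-state concentration, with high affinity (low K) yielding a low set point, just as the authors state.

```python
def steady_state_product(K, v_max=10.0, k_deg=0.5, steps=2000, dt=0.01):
    """Toy negative feedback loop: synthesis is progressively inhibited
    as the product accumulates; K is the receptor's dissociation
    constant (low K = high affinity)."""
    p = 0.0
    for _ in range(steps):
        synthesis = v_max * K / (K + p)    # receptor-inhibited regulatory enzyme
        p += (synthesis - k_deg * p) * dt  # degradation closes the loop
    return p

for K in (1.0, 5.0, 20.0):
    print(f"K = {K:4.1f} -> steady-state product = {steady_state_product(K):5.2f}")
```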

“The organism is a self-contained unit represented by automatic regulatory loops which ensure homeostasis. […] Effector functions are conducted by cells which are usually grouped and organized into tissues and organs. Signal transmission occurs by way of body fluids, hormones or nerve connections. Cells can be treated as automatic and potentially autonomous elements of regulatory loops, however their specific action is dependent on the commands issued by the organism. This coercive property of organic signals is an integral requirement of coordination, allowing the organism to maintain internal homeostasis. […] Activities of the organism are themselves regulated by their own negative feedback loops. Such regulation differs however from the mechanisms observed in individual cells due to its place in the overall hierarchy and differences in signal properties, including in particular:
• Significantly longer travel distances (compared to intracellular signals);
• The need to maintain hierarchical superiority of the organism;
• The relative autonomy of effector cells. […]
The relatively long distance travelled by the organism’s signals and their dilution (compared to intracellular ones) call for amplification. As a consequence, any errors or random distortions in the original signal may be drastically exacerbated. A solution to this problem comes in the form of encoding, which provides the signal with sufficient specificity while enabling it to be selectively amplified. […] a loudspeaker can […] assist in acoustic communication, but due to the lack of signal encoding it cannot compete with radios in terms of communication distance. The same reasoning applies to organism-originated signals, which is why information regarding blood glucose levels is not conveyed directly by glucose but instead by adrenalin, glucagon or insulin. Information encoding is handled by receptors and hormone-producing cells. Target cells are capable of decoding such signals, thus completing the regulatory loop […] Hormonal signals may be effectively amplified because the hormone itself does not directly participate in the reaction it controls — rather, it serves as an information carrier. […] strong amplification invariably requires encoding in order to render the signal sufficiently specific and unambiguous. […] Unlike organisms, cells usually do not require amplification in their internal regulatory loops — even the somewhat rare instances of intracellular amplification only increase signal levels by a small amount. Without the aid of an amplifier, messengers coming from the organism level would need to be highly concentrated at their source, which would result in decreased efficiency […] Most signals originating at the organism level travel with body fluids; however, if a signal has to reach its destination very rapidly (for instance in muscle control), it is sent via the nervous system”.

“Two types of amplifiers are observed in biological systems:
1. cascade amplifier,
2. positive feedback loop. […]
A cascade amplifier is usually a collection of enzymes which perform their action by activation in strict sequence. This mechanism resembles multistage (sequential) synthesis or degradation processes, however instead of exchanging reaction products, amplifier enzymes communicate by sharing activators or by directly activating one another. Cascade amplifiers are usually contained within cells. They often consist of kinases. […] Amplification effects occurring at each stage of the cascade contribute to its final result. […] While the kinase amplification factor is estimated to be on the order of 10³, the phosphorylase cascade results in 10¹⁰-fold amplification. It is a stunning value, though it should also be noted that the hormones involved in this cascade produce particularly powerful effects. […] A positive feedback loop is somewhat analogous to a negative feedback loop, however in this case the input and output signals work in the same direction — the receptor upregulates the process instead of inhibiting it. Such upregulation persists until the available resources are exhausted.
Positive feedback loops can only work in the presence of a control mechanism which prevents them from spiraling out of control. They cannot be considered self-contained and only play a supportive role in regulation. […] In biological systems positive feedback loops are sometimes encountered in extracellular regulatory processes where there is a need to activate slowly-migrating components and greatly amplify their action in a short amount of time. Examples include blood coagulation and complement factor activation […] Positive feedback loops are often coupled to negative loop-based control mechanisms. Such interplay of loops may impart the signal with desirable properties, for instance by transforming a flat signal into a sharp spike required to overcome the activation threshold for the next stage in a signalling cascade. An example is the ejection of calcium ions from the endoplasmic reticulum in the phospholipase C cascade, itself subject to a negative feedback loop.”
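
The arithmetic of cascade amplification is simply multiplicative — the overall gain is the product of the per-stage gains. A trivial sketch (the individual stage gains below are invented for illustration; only the 10¹⁰ order of magnitude comes from the book):

```python
import math

# Hypothetical per-stage gains for a four-step kinase cascade; the
# overall amplification is their product, here on the order of 10^10
# as in the phosphorylase example quoted above.
stage_gains = [1e2, 1e3, 1e1, 1e4]
total = math.prod(stage_gains)
print(f"overall amplification ~ 10^{math.log10(total):.0f}")
```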

“Strong signal amplification carries an important drawback: it tends to “overshoot” its target activity level, causing wild fluctuations in the process it controls. […] Nature has evolved several means of signal attenuation. The most typical mechanism superimposes two regulatory loops which affect the same parameter but act in opposite directions. An example is the stabilization of blood glucose levels by two antagonistic hormones: glucagon and insulin. Similar strategies are exploited in body temperature control and many other biological processes. […] The coercive properties of signals coming from the organism carry risks associated with the possibility of overloading cells. The regulatory loop of an autonomous cell must therefore include an “off switch”, controlled by the cell. An autonomous cell may protect itself against excessive involvement in processes triggered by external signals (which usually incur significant energy expenses). […] The action of such mechanisms is usually timer-based, meaning that they inactivate signals following a set amount of time. […] The ability to interrupt signals protects cells from exhaustion. Uncontrolled hormone-induced activity may have detrimental effects upon the organism as a whole. This is observed e.g. in the case of the Vibrio cholerae toxin, which causes prolonged activation of intestinal epithelial cells by locking the G protein in its active state (resulting in severe diarrhea which can dehydrate the organism).”

“Biological systems in which information transfer is affected by high entropy of the information source and ambiguity of the signal itself must include discriminatory mechanisms. These mechanisms usually work by eliminating weak signals (which are less specific and therefore introduce ambiguities). They create additional obstacles (thresholds) which the signals must overcome. A good example is the mechanism which eliminates the ability of weak, random antigens to activate lymphatic cells. It works by inhibiting blastic transformation of lymphocytes until a so-called receptor cap has accumulated on the surface of the cell […]. Only under such conditions can the activation signal ultimately reach the cell nucleus […] and initiate gene transcription. […] weak, reversible nonspecific interactions do not permit sufficient aggregation to take place. This phenomenon can be described as a form of discrimination against weak signals. […] Discrimination may also be linked to effector activity. […] Cell division is counterbalanced by programmed cell death. The most typical example of this process is apoptosis […] Each cell is prepared to undergo controlled death if required by the organism, however apoptosis is subject to tight control. Cells protect themselves against accidental triggering of the process via IAP proteins. Only strong proapoptotic signals may overcome this threshold and initiate cellular suicide”.

“Simply knowing the sequences, structures or even functions of individual proteins does not provide sufficient insight into the biological machinery of living organisms. The complexity of individual cells and entire organisms calls for functional classification of proteins. This task can be accomplished with a proteome — a theoretical construct where individual elements (proteins) are grouped in a way which acknowledges their mutual interactions and interdependencies, characterizing the information pathways in a complex organism.
Most ongoing proteome construction projects focus on individual proteins as the basic building blocks […] [We would instead argue in favour of a model in which] [t]he basic unit of the proteome is one negative feedback loop (rather than a single protein) […]
Due to the relatively large number of proteins (between 25 and 40 thousand in the human organism), presenting them all on a single graph, with vertex lengths corresponding to the relative duration of interactions, would be unfeasible. This is why proteomes are often subdivided into functional subgroups such as the metabolome (proteins involved in metabolic processes), the interactome (complex-forming proteins), the kinome (proteins which belong to the kinase family), etc.”

February 18, 2018 Posted by | Biology, Books, Chemistry, Genetics, Medicine | Leave a comment

Prevention of Late-Life Depression (I)

“Late-life depression is a common and highly disabling condition and is also associated with higher health care utilization and overall costs. The presence of depression may complicate the course and treatment of comorbid major medical conditions that are also highly prevalent among older adults — including diabetes, hypertension, and heart disease. Furthermore, a considerable body of evidence has demonstrated that, for older persons, residual symptoms and functional impairment due to depression are common — even when appropriate depression therapies are being used. Finally, the worldwide phenomenon of a rapidly expanding older adult population means that unprecedented numbers of seniors — and the providers who care for them — will be facing the challenge of late-life depression. For these reasons, effective prevention of late-life depression will be a critical strategy to lower overall burden and cost from this disorder. […] This textbook will illustrate the imperative for preventing late-life depression, introduce a broad range of approaches and key elements involved in achieving effective prevention, and provide detailed examples of applications of late-life depression prevention strategies”.

I gave the book two stars on goodreads. There are 11 chapters in the book, written by 22 different contributors/authors, so of course there’s a lot of variation in the quality of the material included; the two star rating was an overall assessment of the quality of the material, and the last two chapters – but in particular chapter 10 – did a really good job convincing me that the book did not deserve a 3rd star (if you decide to read the book, I advise you to skip chapter 10). In general I think many of the authors are way too focused on statistical significance and much too hesitant to report actual effect sizes, which are much more interesting. Gender is mentioned repeatedly throughout the coverage as an important variable, to the extent that people who do not read the book carefully might think this is one of the most important variables at play; but when you look at actual effect sizes, you get reported ORs of ~1.4 for this variable, compared to e.g. ORs in the ~8–9 range for the bereavement variable (see below). You can quibble about population attributable fraction and so on here, but if the effect size is that small it’s unlikely to be all that useful in terms of directing prevention efforts/resource allocation (especially considering that women make up the majority of the total population in these older age groups anyway, as they have higher life expectancy than their male counterparts).

Anyway, below I’ve added some quotes and observations from the first few chapters of the book.

“Meta-analyses of more than 30 randomized trials conducted in high-income countries show that the incidence of new depressive and anxiety disorders can be reduced by 25–50 % over 1–2 years, compared to usual care, through the use of learning-based psychotherapies (such as interpersonal psychotherapy, cognitive behavioral therapy, and problem solving therapy) […] The case for depression prevention is compelling and represents the key rationale for this volume: (1) Major depression is both prevalent and disabling, typically running a relapsing or chronic course. […] (2) Major depression is often comorbid with other chronic conditions like diabetes, amplifying the disability associated with these conditions and worsening family caregiver burden. (3) Depression is associated with worse physical health outcomes, partly mediated through poor treatment adherence, and it is associated with excess mortality after myocardial infarction, stroke, and cancer. It is also the major risk factor for suicide across the life span and particularly in old age. (4) Available treatments are only partially effective in reducing symptom burden, sustaining remission, and averting years lived with disability.”

“[M]any people suffering from depression do not receive any care and approximately a third of those receiving care do not respond to current treatments. The risk of recurrence is high, also in older persons: half of those who have experienced a major depression will experience one or even more recurrences [4]. […] Depression increases the risk of death: among people suffering from depression the risk of dying is 1.65 times higher than among people without depression [7], with a dose-response relation between severity and duration of depression and the resulting excess mortality [8]. In adults, the average length of a depressive episode is 8 months but among 20 % of people the depression lasts longer than 2 years [9]. […] It has been estimated that in Australia […] 60 % of people with an affective disorder receive treatment, and using guidelines and standards only 34 % receive effective treatment [14]. This translates into preventing 15 % of Years Lived with Disability [15], a measure of disease burden [14] and stresses the need for prevention [16]. Primary health care providers frequently do not recognize depression, in particular among the elderly. Older people may present their depressive symptoms differently from younger adults, with more emphasis on physical complaints [17, 18]. Adequate diagnosis of late-life depression can also be hampered by comorbid conditions such as Parkinson’s disease and dementia that may have similar symptoms, or by the fact that elderly people as well as care workers may assume that “feeling down” is part of becoming older [17, 18]. […] Many people suffering from depression do not seek professional help or are not identified as depressed [21]. Almost 14 % of elderly people living in community-type settings suffer from a severe depression requiring clinical attention [22] and more than 50 % of those have a chronic course [4, 23]. Smit et al. reported an incidence of 6.1 % of chronic or recurrent depression among a sample of 2,200 elderly people (ages 55–85) [21].”

“Prevention differs from intervention and treatment as it is aimed at general population groups who vary in risk level for mental health problems such as late-life depression. The Institute of Medicine (IOM) has introduced a prevention framework, which provides a useful model for comprehending the different objectives of the interventions [29]. The overall goal of prevention programs is reducing risk factors and enhancing protective factors.
The IOM framework distinguishes three types of prevention interventions: (1) universal preventive interventions, (2) selective preventive interventions, and (3) indicated preventive interventions. Universal preventive interventions are targeted at the general audience, regardless of their risk status or the presence of symptoms. Selective preventive interventions serve those sub-populations who have a significantly higher than average risk of a disorder, either imminently or over a lifetime. Indicated preventive interventions target identified individuals with minimal but detectable signs or symptoms suggesting a disorder. This type of prevention consists of early recognition of and early intervention in disease to prevent deterioration [30]. For each of the three types of interventions, the goal is to reduce the number of new cases. The goal of treatment, on the other hand, is to reduce prevalence or the total number of cases. By reducing incidence you also reduce prevalence [5]. […] prevention research differs from treatment research in various ways. One of the most important differences is the fact that participants in treatment studies already meet the criteria for the illness being studied, such as depression. The intervention is targeted at achieving improvement or remission of the specific condition more quickly than if no intervention had taken place. In prevention research, the participants do not meet the specific criteria for the illness being studied and the overall goal of the intervention is to ensure that clinical illness develops at a lower rate than in a comparison group [5].”
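
As an aside, the remark that reducing incidence also reduces prevalence follows from the standard steady-state approximation prevalence ≈ incidence × average illness duration. The gloss and the code below are mine, not the chapter’s; the 8-month average episode length and the ~6 % incidence figure are borrowed from the quotes above purely for illustration:

```python
def steady_state_prevalence(annual_incidence, mean_duration_years):
    """Standard epidemiological approximation, valid when prevalence is
    low: prevalence ~ incidence x average duration of illness."""
    return annual_incidence * mean_duration_years

# With an 8-month average episode length and ~6% annual incidence,
# halving incidence halves steady-state prevalence.
before = steady_state_prevalence(0.06, 8 / 12)
after = steady_state_prevalence(0.03, 8 / 12)
print(f"prevalence: {before:.1%} -> {after:.1%}")  # 4.0% -> 2.0%
```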

“A couple of risk factors [for depression] occur more frequently among the elderly than among young adults. The loss of a loved one or the loss of a social role (e.g., employment), decrease of social support and network, and the increasing chance of isolation occur more frequently among the elderly. Many elderly also suffer from physical diseases: 64 % of elderly aged 65–74 have a chronic disease [36] […]. It is important to note that depression often co-occurs with other disorders such as physical illness and other mental health problems (comorbidity). Losing a spouse can have significant mental health effects. Almost half of all widows and widowers meet the criteria for depression according to the DSM-IV during the first year after the loss [37]. Depression after loss of a loved one is normal in times of mourning. However, when depressive symptoms persist over a longer period of time, it is possible that a depression is developing. Zisook and Shuchter found that a year after the loss of a spouse 16 % of widows and widowers met the criteria for depression compared to 4 % of those who did not lose their spouse [38]. […] People with a chronic physical disease are also at a higher risk of developing a depression. An estimated 12–36 % of those with a chronic physical illness also suffer from clinical depression [40]. […] around 25 % of cancer patients suffer from depression [40]. […] Depression is relatively common among elderly residing in hospitals and retirement- and nursing homes. An estimated 6–11 % of residents have a depressive illness and around 30 % have depressive symptoms [41]. […] Loneliness is common among the elderly. Among those of 60 years or older, 43 % reported being lonely in a study conducted by Perissinotto et al. […] Loneliness is often associated with physical and mental complaints; apart from depression, it also increases the chance of developing dementia and excess mortality [43].”

“From the public health perspective it is important to know what the potential health benefits would be if the harmful effect of certain risk factors could be removed. What health benefits would arise from this, and at what effort and cost? To measure this the population attributable fraction (PAF) can be used. The PAF is expressed as a percentage and indicates the decrease in incidence (the number of new cases) that would result if the harmful effects of the targeted risk factors were fully taken away. For public health it would be more effective to design an intervention targeted at a risk factor with a high PAF than a low PAF. […] An intervention needs to be efficacious in order to be implemented; this means that it has to show a statistically significant difference relative to placebo or another treatment. Secondly, it needs to be effective; it needs to prove its benefits also in real life (“everyday care”) circumstances. Thirdly, it needs to be efficient. The measure to address this is the Number Needed to Be Treated (NNT). The NNT expresses how many people need to be treated to prevent the onset of one new case with the disorder; the lower the number, the more efficient the intervention [45]. To summarize, an indicated preventative intervention would ideally be targeted at a relatively small group of people with a high absolute chance of developing the disease, and a risk profile that is responsible for a high PAF. Furthermore, there needs to be an intervention that is both effective and efficient. […] a more detailed and specific description of the target group results in a higher absolute risk, a lower NNT, and also a lower PAF. This is helpful in determining the costs and benefits of interventions aiming at more specific or broader subgroups in the population. […] Unfortunately very large samples are required to demonstrate reductions in universal or selective interventions [46]. […] If the incidence rate is higher in the target population, which is usually the case in selective and even more so in indicated prevention, the number of participants needed to prove an effect is much smaller [5]. This shows that, even though universal interventions may be effective, their effect is harder to prove than that of indicated prevention. […] Indicated and selective prevention appear to be the most successful approaches to preventing depression to date; however, more research needs to be conducted in larger samples to determine which prevention method is really most effective.”
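
To make the PAF and NNT machinery concrete, here’s a minimal sketch of mine using the standard Levin formula for the PAF and the usual NNT definition (the input numbers are invented for illustration, not taken from the chapter). It also shows the tension described above: a common-but-weak risk factor can carry a PAF comparable to that of a rare-but-strong one.

```python
def paf(p_exposed, relative_risk):
    """Levin's population attributable fraction: the share of new cases
    that would disappear if the risk factor's effect were removed."""
    excess = p_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

def nnt(risk_control, risk_intervention):
    """Number needed to treat to prevent one new case."""
    return 1.0 / (risk_control - risk_intervention)

# A common risk factor with a modest RR vs. a rare one with a large RR:
print(f"common/weak : PAF = {paf(0.40, 1.5):.0%}")  # 40% exposed, RR 1.5 -> ~17%
print(f"rare/strong : PAF = {paf(0.02, 9.0):.0%}")  # 2% exposed, RR 9 -> ~14%
# An intervention cutting one-year incidence from 20% to 5% in a
# high-risk group would have an NNT of about 7:
print(f"NNT = {nnt(0.20, 0.05):.1f}")
```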

“Groffen et al. [6] recently conducted an investigation among a sample of 4,809 participants from the Reykjavik Study (aged 66–93 years). Similar to the findings presented by Vink and colleagues [3], education level was related to depression risk: participants with lower education levels were more likely to report depressed mood in late-life than those with a college education (odds ratio [OR] = 1.87, 95 % confidence interval [CI] = 1.35–2.58). […] Results from a meta-analysis by Lorant and colleagues [8] showed that lower SES individuals had greater odds of developing depression than those in the highest SES group (OR = 1.24, p= 0.004); however, the studies involved in this review did not focus on older populations. […] Cole and Dendukuri [10] performed a meta-analysis of studies involving middle-aged and older adult community residents, and determined that female gender was a risk factor for depression in this population (Pooled OR = 1.4, 95 % CI = 1.2–1.8), but not old age. Blazer and colleagues [11] found a significant positive association between older age and depressive symptoms in a sample consisting of community-dwelling older adults; however, when potential confounders such as physical disability, cognitive impairment, and gender were included in the analysis, the relationship between chronological age and depressive symptoms was reversed (p< 0.01). A study by Schoevers and colleagues [14] had similar results […] these findings suggest that the higher incidence of depression observed among the oldest-old may be explained by other relevant factors. By contrast, the association of female gender with increased risk of late-life depression has been observed to be a highly consistent finding.”

“In an examination of marital bereavement, Turvey et al. [16] analyzed data among 5,449 participants aged 70 years […] recently bereaved participants had nearly nine times the odds of developing syndromal depression as married participants (OR = 8.8, 95 % CI = 5.1–14.9, p<0.0001), and they also had significantly higher risk of depressive symptoms 2 years after the spousal loss. […] Caregiving burden is well-recognized as a predisposing factor for depression among older adults [18]. Many older persons are coping with physically and emotionally challenging caregiving roles (e.g., caring for a spouse/partner with a serious illness or with cognitive or physical decline). Additionally, many caregivers experience elements of grief, as they mourn the loss of relationship with or the decline of valued attributes of their care recipients. […] Concepts of social isolation have also been examined with regard to late-life depression risk. For example, among 892 participants aged 65 years […], Gureje et al. [13] found that women with a poor social network and rural residential status were more likely to develop major depressive disorder […] Harlow and colleagues [21] assessed the association between social network and depressive symptoms in a study involving both married and recently widowed women between the ages of 65 and 75 years; they found that the number of friends at baseline had an inverse association with CES-D (Center for Epidemiologic Studies Depression Scale) score after 1 month (p< 0.05) and 12 months (p= 0.06) of follow-up. In a study that explicitly addressed the concept of loneliness, Jaremka et al. [22] conducted a study relating this factor to late-life depression; importantly, loneliness has been validated as a distinct construct, distinguishable among older adults from depression. Among 229 participants (mean age = 70 years) in a cohort of older adults caring for a spouse with dementia, loneliness (as measured by the NYU scale) significantly predicted incident depression (p<0.001). Finally, social support has been identified as important to late-life depression risk. For example, Cui and colleagues [23] found that low perceived social support significantly predicted worsening depression status over a 2-year period among 392 primary care patients aged 65 years and above.”

“Saunders and colleagues [26] reported […] findings with alcohol drinking behavior as the predictor. Among 701 community-dwelling adults aged 65 years and above, the authors found a significant association between prior heavy alcohol consumption and late-life depression among men: compared to those who were not heavy drinkers, men with a history of heavy drinking had a nearly fourfold higher odds of being diagnosed with depression (OR = 3.7, 95 % CI = 1.3–10.4, p< 0.05). […] Almeida et al. found that obese men were more likely than non-obese (body mass index [BMI] < 30) men to develop depression (HR = 1.31, 95 % CI = 1.05–1.64). Consistent with these results, presence of the metabolic syndrome was also found to increase risk of incident depression (HR = 2.37, 95 % CI = 1.60–3.51). Finally, leisure-time activities are also important to study with regard to late-life depression risk, as these too are readily modifiable behaviors. For example, Magnil et al. [30] examined such activities among a sample of 302 primary care patients aged 60 years. The authors observed that those who lacked leisure activities had an increased risk of developing depressive symptoms over the 2-year study period (OR = 12, 95 % CI = 1.1–136, p= 0.041). […] an important future direction in addressing social and behavioral risk factors in late-life depression is to make more progress in trials that aim to alter those risk factors that are actually modifiable.”

February 17, 2018 Posted by | Books, Epidemiology, Health Economics, Medicine, Psychiatry, Psychology, Statistics | Leave a comment

Peripheral Neuropathy (II)

Chapter 3 included a great new (…new to me, that is…) chemical formula which I can’t not share here: (R)-(+)-[2,3-dihydro-5-methyl-3-(4-morpholinylmethyl)pyrrolo[1,2,3-de]-1,4-benzoxazin-6-yl]-1-naphthalenylmethanone mesylate. It’s a cannabinoid receptor agonist, the properties of which are briefly discussed in the book’s chapter 3.

Anyway, some more observations from the book below:

“Injuries affecting either the peripheral or the central nervous system (PNS, CNS) lead to neuropathic pain characterized by spontaneous pain and distortion or exaggeration of pain sensation. Peripheral nerve pathologies are considered generally easier to treat compared to those affecting the CNS; however, peripheral neuropathies still remain a challenge to therapeutic treatment. […] Although first thought to be a disease of a purely neuronal nature, several pre-clinical studies indicate that the mechanisms at the basis of the development and maintenance of neuropathic pain involve substantial contributions from the nonneuronal cells of both the PNS and CNS [22]. After peripheral nerve injury, microglia in normal conditions (usually defined as “resting” microglia) in the spinal dorsal horn proliferate and change their phenotype to an “activated” state through a series of cellular and molecular changes. Microglia shift their phenotype to the hypertrophic “activated” form following altered expression of several molecules including cell surface receptors, intracellular signalling molecules and diffusible factors. The activation process consists of distinct cellular functions aimed at repairing damaged neural cells and eliminating debris from the damaged area [23]. Damaged cells release chemo-attractant molecules that both increase the motility (i.e. chemokinesis) and stimulate the migration (i.e. chemotaxis) of microglia, the combination of which recruits the microglia much closer to the damaged cells […] Once microglia become activated, they can exert both proinflammatory and anti-inflammatory/neuroprotective functions depending on the combination of the stimulation of several receptors and the expression of specific genes [31]. Thus, the activation of microglia following a peripheral injury can be considered as an adaptation to tissue stress and malfunction [32] that contributes to the development and subsequent maintenance of chronic pain [33, 34]. […] The signals responsible for neuron-microglia and/or astrocyte communication are being extensively investigated since they may represent new targets for chronic pain management.”

“In the past two decades a notable increase in the incidence of [upper extremity compression neuropathies] has occurred. […] it is mandatory to achieve a prompt diagnosis because they can produce important motor and sensory deficiencies that need to be treated before the development of complications, since, despite the capacity for regeneration bestowed on the peripheral nervous system, functions lost as a result of denervation are never fully restored. […] There are many different situations that may be a direct cause of nerve compression. Anatomically, nerves can be compressed when traversing fibro-osseous tunnels, passing between muscle layers, through traction as they cross joints or buckling during certain movements of the wrist and elbow. Other causes include trauma, direct pressure and space-occupying lesions at any level in the upper extremity. There are other situations that are not a direct cause of nerve compression, but may increase the risk and may predispose the nerve to be compressed, especially when the soft tissues are swollen, as in synovitis, pregnancy, hypothyroidism, diabetes or alcoholism [1]. […] When nerve fibers undergo compression, the response depends on the force applied at the site and its duration. Acute, brief compression results in a focal conduction block as a result of local ischemia, which is reversible if the compression is transient. On the other hand, if the focal compression is prolonged, ischemic changes appear, followed by endoneurial edema and secondary perineurial thickening. These histological alterations will aggravate the changes in the microneural circulation and will increase the sensitivity of the neuron sheath to ischemia. If the compression continues, we will find focal demyelination, which typically results in a greater involvement of motor than sensory nerve fibers. […] As the duration of compression increases beyond several hours, more diffuse demyelination will appear […] This process begins at the distal end of compression or injury, a process termed Wallerian degeneration. These neural changes may not appear in a uniform fashion throughout the whole neural sheath, depending on the distribution of the compressive forces, causing mixed demyelinating and axonal injury resulting from a combination of mechanical distortion of the nerve, ischemic injury, and impaired axonal flow [2].”

“Electrophysiologic testing is part of the evaluation [of compression neuropathies], but it never substitutes for a complete history and a thorough physical examination. These tests can detect physiologic abnormalities in the course of motor and sensory axons. There are two main electrophysiologic tests: needle electromyography and nerve conduction […] The electromyography detects voluntary or spontaneously generated electrical activity. This activity is recorded through needle insertion, at rest and during muscular activity, to assess duration, amplitude, configuration and recruitment after injury. […] Nerve conduction studies assess both sensory and motor nerves. This study consists of applying a voltage stimulator to the skin over different points of the nerve in order to record the muscular action potential, analyzing the amplitude, duration, area, latency and conduction velocity. The amplitude indicates the number of available nerve fibers.”

“There are three well-described entrapment syndromes involving the median nerve or its branches, namely pronator teres syndrome, anterior interosseous syndrome and carpal tunnel syndrome, according to the level of entrapment. Each one of these syndromes presents with different clinical signs and symptoms and electrophysiologic results, and requires different techniques for its release. […] [In pronator teres syndrome] [t]he onset is insidious and is suggested when the early sensory disturbances are greater on the thumb and index finger, mainly tingling, numbness and dysaesthesia in the median nerve distribution. Patients will also complain of increased pain in the proximal forearm and greater hand numbness with sustained power gripping or rotation […] Surgical decompression is the definitive treatment. […] [Anterior interosseous syndrome] presents principally as weakness of the index finger and thumb, and the patient may complain of diffuse pain in the proximal forearm, which may be exacerbated during exercise and diminished with rest. The vast majority of patients begin with pain in the upper arm, elbow and forearm, often preceding the motor symptoms. […] During the physical exam, the patient will be unable to bend the tip of the thumb and the tip of the index finger. The typical symptom is the inability to form an “O” with the thumb and index finger. […] If the onset was spontaneous and there is no evident lesion on MRI, supportive care and corticosteroid injections with observation for 4 to 6 weeks is usually accepted management. The degree of recovery is unpredictable.”

“[Carpal tunnel syndrome] is the most frequently encountered compression neuropathy in the upper limb. It is a mechanical compression of the median nerve within the fixed space of the rigid carpal tunnel. The incidence in the United States has been estimated at 1 to 3 cases per 1,000 subjects per year, with a prevalence of 50 cases per 1,000 subjects [10]. It is more common in women than in men (2:1), perhaps because the carpal tunnel itself may be smaller in women than in men. The dominant hand is usually affected first and produces the most severe pain. It usually occurs in adults […] Abnormalities on electrophysiologic testing, in association with specific symptoms and signs, are considered the criterion standard for carpal tunnel syndrome diagnosis. Electrophysiologic testing also can provide an accurate assessment of how severe the damage to the nerve is, thereby directing management and providing objective criteria for the determination of prognosis. Carpal tunnel syndrome is usually divided into mild, moderate and severe. In general, patients with mild carpal tunnel syndrome have sensory abnormalities alone on electrophysiologic testing, and patients with sensory plus motor abnormalities have moderate carpal tunnel syndrome. However, any evidence of axonal loss is classified as severe carpal tunnel syndrome. […] No imaging studies are considered routine in the diagnosis of carpal tunnel syndrome. […] nonoperative treatment is based on splintage of the wrist in a neutral position for three weeks and steroid injections. This therapy has variable results, with a success rate of up to 76% during one year, but with a recurrence rate as high as 94%. Non-operative treatment is indicated in patients with intermittent symptoms, initial stages and during pregnancy [17]. The only definitive treatment for carpal tunnel syndrome is surgical expansion of the carpal tunnel by transection of the transverse carpal ligament.”

“Postural control can be defined as the control of the body’s position in space for the purposes of balance and orientation. Balance is the ability to maintain or return the body’s centre of gravity within the limits of stability that are determined by the base of support. Spatial orientation defines our natural ability to maintain our body orientation in relation to the surrounding environment, in static and dynamic conditions. The representation of the body’s static and dynamic geometry may be largely based on muscle proprioceptive inputs that continuously inform the central nervous system about the position of each part of the body in relation to the others. Posture is built up by the sum of several basic mechanisms. […] Postural balance is dependent upon integration of signals from the somatosensory, visual and vestibular systems, to generate motor responses, with cognitive demands that vary according to the task, the age of the individuals and their ability to balance. Descending postural commands are multivariate in nature, and the motion at each joint is affected uniquely by input from multiple sensors.
The proprioceptive system provides information on joint angles, changes in joint angles, joint position and muscle length and tension; while the tactile system is associated mainly with sensations of touch, pressure and vibration. Visual influence on postural control results from a complex synergy that receives multimodal inputs. Vestibular inputs tonically activate the anti-gravity leg muscles and, during dynamic tasks, vestibular information contributes to head stabilization to enable successful gaze control, providing a stable reference frame from which to generate postural responses. In order to assess instability or walking difficulty, it is essential to identify the affected movements and circumstances in which they occur (i.e. uneven surfaces, environmental light, activity) as well as any other associated clinical manifestation that could be related to balance, postural control, motor control, muscular force, movement limitations or sensory deficiency. The clinical evaluation should include neurological examination; special care should be taken to identify visual and vestibular disorders, and to assess static and dynamic postural control and gait.”

“Polyneuropathies modify the amount and the quality of the sensory information that is necessary for motor control, with increased instability during both upright stance and gait. Patients with peripheral neuropathy may have decreased stability while standing and when subjected to dynamic balance conditions. […] Balance and gait difficulties are the most frequently cited causes of falling […] Patients with polyneuropathy who have ankle weakness are more likely to experience multiple and injurious falls than are those without specific muscle weakness. […] During upright stance, compared to healthy subjects, recordings of the centre of pressure in patients with diabetic neuropathy have shown larger sway [95-96, 102], as well as increased oscillation […] Compared to healthy subjects, diabetic patients may have poorer balance during standing in diminished light compared to full light and no light conditions [105] […] compared to patients with diabetes but no peripheral neuropathy, patients with diabetic peripheral neuropathy are more likely to report an injury during walking or standing, which may be more frequent when walking on irregular surfaces [110]. Epidemiological surveys have established that a reduction of leg proprioception is a risk factor for falls in the elderly [111-112]. Symptoms and signs of peripheral neuropathy are frequently found during physical examination of older subjects. These clinical manifestations may be related to diabetes mellitus, alcoholism, nutritional deficiencies, autoimmune diseases, among other causes. In this group of patients, loss of plantar sensation may be an important contributor to the dynamic balance deficits and increased risk of falls [34, 109]. […] Apart from sensorimotor compromise, fear of falling may relate to restriction and avoidance of activities, which results in loss of strength especially in the lower extremities, and may also be predictive of future falls [117-119].”

“In patients with various forms of peripheral neuropathy, the use of a cane, ankle orthoses or touching a wall [has been shown to improve] spatial and temporal measures of gait regularity while walking under challenging conditions. Additional hand contact with external objects may reduce postural instability caused by a deficiency of one or more senses. […] Contact of the index finger with a stationary surface can greatly attenuate postural instability during upright stance, even when the level of force applied is far below that necessary to provide mechanical support [42]. […] haptic information about postural sway derived from contact with other parts of the body can also increase stability […] Studies evaluating preventive and treatment strategies through excercise [sic – US] that could improve balance in patients with polyneuropathy are scarce. However, the evidence supports that physical activity interventions that increase activity probably do not increase the risk of falling in patients with diabetic peripheral neuropathy, and in this group of patients, specific training may improve gait speed, balance, muscle strength and joint mobility.”

“Postherpetic neuralgia (PHN) is a form of refractory chronic neuralgia that […] currently lacks any effective prophylaxis. […] PHN has a variety of symptoms and significantly affects patient quality of life [3-12]. Various studies have statistically analyzed predictive factors for PHN [13-23], but neither obvious pathogenesis nor established treatment has been clarified or established. We designed and conducted a study on the premise that statistical identification of significant predictors for PHN would contribute to the establishment of an evidence-based medicine approach to the optimal treatment of PHN. […] Previous studies have shown that older age, female sex, presence of a prodrome, greater rash severity, and greater acute pain severity are predictors of increased PHN [14-18, 25]. Some other potential predictors (ophthalmic localization, presence of anxiety and depression, presence of allodynia, and serological/virological factors) have also been studied [14, 18]. […] The participants were 73 patients with herpes zoster who had been treated at the pain clinic of our hospital between January 2008 and June 2010. […] Multivariate ordered logistic regression analysis was performed to identify predictive factors for PHN. […] advanced age and deep pain at first visit were identified as predictive factors for PHN. DM [diabetes mellitus – US] and pain reduced by bathing should also be considered as potential predictors of PHN [24].”

February 14, 2018 Posted by | Books, Diabetes, Infectious disease, Medicine, Neurology | Leave a comment

Peripheral Neuropathy (I)

The objective of this book is to update health care professionals on recent advances in the pathogenesis, diagnosis and treatment of peripheral neuropathy. This work was written by a group of clinicians and scientists with large expertise in the field.

The book is not the first book about this topic I’ve read, so a lot of the stuff included was of course review – however it’s quite a decent text, and I decided to blog it in at least some detail anyway. It’s somewhat technical and it’s probably not a very good introduction to this topic if you know next to nothing about neurology – in that case I’m certain Said’s book (see the ‘not’-link above) is a better option.

I have added some observations from the first couple of chapters below. As InTech publications like these explicitly encourage people to share the ideas and observations included in these books, I shall probably cover the book in more detail than I otherwise would have.

“Within the developing world, infectious diseases [2-4] and trauma [5] are the most common sources of neuropathic pain syndromes. The developed world, in contrast, suffers more frequently from diabetic polyneuropathy (DPN) [6, 7], post herpetic neuralgia (PHN) from herpes zoster infections [8], and chemotherapy-induced peripheral neuropathy (CIPN) [9, 10]. There is relatively little epidemiological data regarding the prevalence of neuropathic pain within the general population, but a few estimates suggest it is around 7-8% [11, 12]. Despite the widespread occurrence of neuropathic pain, treatment options are limited and often ineffective […] Neuropathic pain can present as on-going or spontaneous discomfort that occurs in the absence of any observable stimulus or as a painful hypersensitivity to temperature and touch. […] people with chronic pain have increased incidence of anxiety and depression and reduced scores in quantitative measures of health related quality of life [15]. Despite significant progress in chronic and neuropathic pain research, which has led to the discovery of several efficacious treatments in rodent models, pain management in humans remains ineffective and insufficient [16]. The lack of translational efficiency may be due to inadequate animal models that do not faithfully recapitulate human disease or to biological differences between rodents and humans […] In an attempt to increase the efficacy of medical treatment for neuropathic pain, clinicians and researchers have been moving away from an etiology-based classification towards one that is mechanism-based. It is current practice to diagnose a person who presents with neuropathic pain according to the underlying etiology and lesion topography [17]. However, this does not translate to effective patient care, as these classification criteria do not suggest efficacious treatment. A more apt diagnosis might include a description of symptoms and the underlying pathophysiology associated with those symptoms.”

“Neuropathic pain has been defined […] as "pain arising as the direct consequence of a lesion or disease affecting the somatosensory system" [18]. This is distinct from nociceptive pain – which signals tissue damage through an intact nervous system – in underlying pathophysiology, severity, and associated psychological comorbidities [13]. Individuals who suffer from neuropathic pain syndromes report pain of higher intensity and duration than individuals with non-neuropathic chronic pain and have significantly increased incidence of depression, anxiety, and sleep disorders [13, 19]. […] individuals with seemingly identical diseases who both develop neuropathic pain may experience distinct abnormal sensory phenotypes. This may include a loss of sensory perception in some modalities and increased activity in others. Often a reduction in the perception of vibration and light touch is coupled with positive sensory symptoms such as paresthesia, dysesthesia, and pain [20]. Pain may manifest as either spontaneous, with a burning or shock-like quality, or as a hypersensitivity to mechanical or thermal stimuli [21]. This hypersensitivity takes two forms: allodynia, pain that is evoked from a normally non-painful stimulus, and hyperalgesia, an exaggerated pain response from a moderately painful stimulus. […] Noxious stimuli are perceived by small diameter peripheral neurons whose free nerve endings are distributed throughout the body. These neurons are distinct from, although anatomically proximal to, the low threshold mechanoreceptors responsible for the perception of vibration and light touch.”

“In addition to hypersensitivity, individuals with neuropathic pain frequently experience ongoing spontaneous pain as a major source of discomfort and distress. […] In healthy individuals, a quiescent neuron will only generate an action potential when presented with a stimulus of sufficient magnitude to cause membrane depolarization. Following nerve injury, however, significant changes in ion channel expression, distribution, and kinetics lead to disruption of the homeostatic electric potential of the membrane resulting in oscillations and burst firing. This manifests as spontaneous pain that has a shooting or burning quality […] There is reasonable evidence to suggest that individual ion channels contribute to specific neuropathic pain symptoms […] [this observation] provides an intriguing therapeutic possibility: unambiguous pharmacologic ion channel blockers to relieve individual sensory symptoms with minimal unintended effects allowing pain relief without global numbness. […] Central sensitization leads to painful hypersensitivity […] Functional and structural changes of dorsal horn circuitry lead to pain hypersensitivity that is maintained independent of peripheral sensitization [38]. This central sensitization provides a mechanistic explanation for the sensory abnormalities that occur in both acute and chronic pain states, such as the expansion of hypersensitivity beyond the innervation territory of a lesion site, repeated stimulation of a constant magnitude leading to an increasing pain response, and pain outlasting a peripheral stimulus [39-41]. In healthy individuals, acute pain triggers central sensitization, but homeostatic sensitivity returns following clearance of the initial insult. In some individuals who develop neuropathic pain, genotype and environmental factors contribute to maintenance of central sensitization leading to spontaneous pain, hyperalgesia, and allodynia. […] Similarly, facilitation also results in a lowered activation threshold in second order neurons”.

“Chronic pain conditions are associated with vast functional and structural changes of the brain, when compared to healthy controls, but it is currently unclear which comes first: does chronic pain cause distortions of brain circuitry and anatomy or do cerebral abnormalities trigger and/or maintain the perception of chronic pain? […] Brain abnormalities in chronic pain states include modification of brain activity patterns, localized decreases in gray matter volume, and circuitry rerouting [53]. […] Chronic pain conditions are associated with localized reduction in gray matter volume, and the topography of gray matter volume reduction is dictated, at least in part, by the particular pathology. […] These changes appear to represent a form of plasticity as they are reversible when pain is effectively managed [63, 67, 68].”

“By definition, neuropathic pain indicates direct pathology of the nervous system while nociceptive pain is an indication of real or potential tissue damage. Due to the distinction in pathophysiology, conventional treatments prescribed for nociceptive pain are not very effective in treating neuropathic pain and vice versa [78]. Therefore the first step towards meaningful pain relief is an accurate diagnosis. […] Treating neuropathic pain requires a multifaceted approach that aims to eliminate the underlying etiology, when possible, and manage the associated discomforts and emotional distress. Although in some cases it is possible to directly treat the cause of neuropathic pain, for example surgery to alleviate a constricted nerve, it is more likely that the primary cause is untreatable, as is the case with singular traumatic events such as stroke and spinal cord injury and diseases like diabetes. When this is the case, symptom management and pain reduction become the primary focus. Unfortunately, in most cases complete elimination of pain is not a feasible endpoint; a pain reduction of 30% is considered to be efficacious [21]. Additionally, many pharmacological treatments require careful titration and tapering to prevent adverse effects and toxicity. This process may take several weeks to months, and ultimately the drug may be ineffective, necessitating another trial with a different medication. It is therefore necessary that both doctor and patient begin treatment with realistic expectations and goals.”

“First-line medications for the treatment of neuropathic pain are those that have proven efficacy in randomized clinical trials (RCTs) and are consistent with pooled clinical observations [81]. These include antidepressants, calcium channel ligands, and topical lidocaine [15]. Tricyclic antidepressants (TCAs) have demonstrated efficacy in treating neuropathic pain with positive results in RCTs for central post-stroke pain, PHN, painful diabetic and non-diabetic polyneuropathy, and post-mastectomy pain syndrome [82]. However, they do not seem to be effective in treating painful HIV-neuropathy or CIPN [82]. Duloxetine and venlafaxine, two selective serotonin norepinephrine reuptake inhibitors (SSNRIs), have been found to be effective in DPN (duloxetine) and in both DPN and painful polyneuropathies (venlafaxine) [81]. […] Gabapentin and pregabalin have also demonstrated efficacy in several neuropathic pain conditions including DPN and PHN […] Topical lidocaine (5% patch or gel) has significantly reduced allodynia associated with PHN and other neuropathic pain syndromes in several RCTs [81, 82]. With no reported systemic adverse effects and mild skin irritation as the only concern, lidocaine is an appropriate choice for treating localized peripheral neuropathic pain. In the event that first-line medications, alone or in combination, are not effective at achieving adequate pain relief, second-line medications may be considered. These include opioid analgesics and tramadol, pharmaceuticals which have proven efficacy in RCTs but are associated with significant adverse effects that warrant cautious prescription [15]. Although opioid analgesics are effective pain relievers in several types of neuropathic pain [81, 82, 84], they are associated with misuse or abuse, hypogonadism, constipation, nausea, and immunological changes […] Careful consideration should be given when prescribing opiates to patients who have a personal or family history of drug or alcohol abuse […] Deep brain stimulation, a neurosurgical technique by which an implanted electrode delivers controlled electrical impulses to targeted brain regions, has demonstrated some efficacy in treating chronic pain but is not routinely employed due to a high risk-to-benefit ratio [91]. […] A major challenge in treating neuropathic pain is the heterogeneity of disease pathogenesis within an individual etiological classification. Patients with seemingly identical diseases may experience completely different neuropathic pain phenotypes […] One of the biggest barriers to successful management of neuropathic pain has been the lack of understanding of the underlying pathophysiology that produces a pain phenotype. To that end, significant progress has been made in basic science research.”

“In diabetes mellitus, nerves and their supporting cells are subjected to prolonged hyperglycemia and metabolic disturbances, and this culminates in reversible/irreversible nervous system dysfunction and damage, namely diabetic peripheral neuropathy (DPN). Due to the varying compositions and extents of neurological involvement, it is difficult to obtain accurate and thorough prevalence estimates of DPN, rendering this microvascular complication vastly underdiagnosed and undertreated [1-4]. According to the American Diabetes Association, DPN occurs in 60-70% of diabetic individuals [5] and represents the leading cause of peripheral neuropathy among all cases [6, 7].”

A quick remark: This number seems really high to me. I won’t rule out that it’s accurate if you go with highly sensitive measures of neuropathy, but the number of patients who will experience significant clinical sequelae as a result of DPN is in my opinion likely to be significantly lower than that. On a peripherally related note, it should also be kept in mind that although diabetes-related neurological complications may display some clustering in patient groups – which will necessarily decrease the magnitude of the problem – no single test will ever completely rule out neurological complications in a diabetic; a patient with a negative Semmes-Weinstein monofilament test may still have autonomic neuropathy. So assessing the full disease burden in the context of diabetes-related neurological complications cannot be done using only a single instrument, and the full disease burden is likely to be higher than individual estimates encountered in the literature (unless a full neurological workup was done, which is unlikely to be the case). They do go into more detail about subgroups, clinical significance, etc. below, but I thought this observation was important to add early on in this part of the coverage.

“Because diverse anatomic distributions and fiber types may be differentially affected in patients with diabetes, the disease manifestations, courses and pathologies of clinical and subclinical DPN are rather heterogeneous and encompass a broad spectrum […] Current consensus divides diabetes-associated somatic neuropathic syndromes into the focal/multifocal and diffuse/generalized neuropathies [6, 14]. The first category comprises a group of asymmetrical, acute-in-onset and self-limited single lesion(s) of nerve injury or impairment largely resulting from the increased vulnerability of diabetic nerves to mechanical insults (Carpal Tunnel Syndrome) […]. Such mononeuropathies occur idiopathically and only become a clinical problem in association with aging in 5-10% of those affected. Therefore, focal neuropathies are not extensively covered in this chapter [16]. The rest of the patients frequently develop diffuse neuropathies characterized by symmetrical distribution, insidious onset and chronic progression. In particular, a distal symmetrical sensorimotor polyneuropathy accounts for 90% of all DPN diagnoses in type 1 and type 2 diabetics and affects all types of peripheral sensory and motor fibers in a temporally non-uniform manner [6, 17].
Symptoms begin with prickling, tingling, numbness, paresthesia, dysesthesia and various qualities of pain associated with small sensory fibers at the very distal end (toes) of lower extremities [1, 18]. Presence of the above symptoms together with abnormal nociceptive response of epidermal C and A-δ fibers to pain/temperature (as revealed by clinical examination) constitutes the diagnosis of small fiber sensory neuropathy, which produces both painful and insensate phenotypes [19]. Painful diabetic neuropathy is a prominent, distressing and chronic experience in at least 10-30% of DPN populations [20, 21]. Its occurrence does not necessarily correlate with impairment in electrophysiological or quantitative sensory testing (QST). […] Large myelinated sensory fibers that innervate the dermis, such as Aβ, also become involved later on, leading to impaired proprioception, vibration and tactile detection, and mechanical hypoalgesia [19]. Following this “stocking-glove”, length-dependent and dying-back progression, neurodegeneration gradually proceeds to proximal muscle sensory and motor nerves. Its presence manifests in neurological testing as reduced nerve impulse conduction, diminished ankle tendon reflex, unsteadiness and muscle weakness [1, 24].
The combined absence of protective sensation and motor coordination predisposes the neuropathic foot to impaired wound healing and gangrenous ulceration — often followed by limb amputation in severe and/or advanced cases […]. Although symptomatic motor deficits only appear in later stages of DPN [25], motor denervation and distal atrophy can increase the rate of fractures by causing repetitive minor trauma or falls [24, 28]. Other unusual but highly disabling late sequelae of DPN include limb ischemia and joint deformity [6]; the latter also being termed Charcot’s neuroarthropathy or Charcot’s joints [1]. In addition to significant morbidities, several separate cohort studies provided evidence that DPN [29], diabetic foot ulcers [30] and increased toe vibration perception threshold (VPT) [31] are all independent risk factors for mortality.”

“Unfortunately, current therapy for DPN is far from effective and at best only delays the onset and/or progression of the disease via tight glucose control […] Even with near normoglycemic control, a substantial proportion of patients still suffer the debilitating neurotoxic consequences of diabetes [34]. On the other hand, some with poor glucose control are spared from clinically evident signs and symptoms of neuropathy for a long time after diagnosis [37-39]. Thus, other etiological factors independent of hyperglycemia are likely to be involved in the development of DPN. Data from a number of prospective, observational studies suggested that older age, longer diabetes duration, genetic polymorphism, presence of cardiovascular disease markers, malnutrition, presence of other microvascular complications, alcohol and tobacco consumption, and higher constitutional indexes (e.g. weight and height) interact with diabetes and make for strong predictors of neurological decline [13, 32, 40-42]. Targeting some of these modifiable risk factors in addition to glycemia may improve the management of DPN. […] enormous efforts have been devoted to understanding and intervening with the molecular and biochemical processes linking the metabolic disturbances to sensorimotor deficits by studying diabetic animal models. In return, nearly 2,200 articles were published in PubMed Central and at least 100 clinical trials were reported evaluating the efficacy of a number of pharmacological agents; the majority of them are designed to inhibit specific pathogenic mechanisms identified by these experimental approaches. Candidate agents have included aldose reductase inhibitors, AGE inhibitors, γ-linolenic acid, α-lipoic acid, vasodilators, nerve growth factor, protein kinase Cβ inhibitors, and vascular endothelial growth factor. Notwithstanding a fruitful [harvest] of knowledge and promising results in animals, none has translated into definitive clinical success […] Based on the records published by the National Institute of Neurological Disorders and Stroke (NINDS), a main source of DPN research [funding], about 16,488 projects were funded at the expense of over $8 billion for the fiscal years of 2008 through 2012. Of these projects, an estimated 72,200 animals were used annually to understand basic physiology and disease pathology as well as to evaluate potential drugs [255]. As discussed above, however, the usefulness of these pharmaceutical agents developed through such a pipeline in preventing or reducing neuronal damage has been equivocal and usually halted at human trials due to toxicity, lack of efficacy or both […]. Clearly, the pharmacological translation from our decades of experimental modeling to clinical practice with regard to DPN has thus far not even [been] close to satisfactory.”

“Whereas a majority of the drugs investigated during preclinical testing achieved the experimentally desired endpoints without revealing significant toxicity, more than half that entered clinical evaluation for treating DPN were withdrawn as a consequence of moderate to severe adverse events even at a much lower dose. Generally, using other species as surrogates for the human population inherently encumbers the accurate prediction of toxic reactions for several reasons […] First of all, it is easy to dismiss drug-induced non-specific effects in animals – especially for laboratory rodents, which do not share the same size, anatomy and physical activity with humans. […] Second, some physiological and behavioral phenotypes observable in humans are impossible for animals to express. In this aspect, photosensitive skin rash and pain serve as two good examples of non-translatable side effects. Rodent skin differs from that of humans in that it has a thinner and hairier epidermis and distinct DNA repair abilities [260]. Therefore, most rodent strains used in diabetes modeling provide poor estimates for the probability of cutaneous hypersensitivity reactions to pharmacological treatments […] Another predicament is assessing pain in rodents. The reason for this is simple: these animals cannot tell us when, where or even whether they are experiencing pain […]. Since there is not any specific type of behavior to which painful reaction can be unequivocally associated, this often leads to underestimation of painful side effects during preclinical drug screening […] The third problem is that animals and humans have different pharmacokinetic and toxicological responses.”

“Genetic or chemical-induced diabetic rats or mice have been a major tool for preclinical pharmacological evaluation of potential DPN treatments. Yet, they do not faithfully reproduce many neuropathological manifestations in human diabetics. The difficulty begins with the fact that it is not possible to obtain in rodents a qualitative and quantitative expression of the clinical symptoms that are frequently presented in neuropathic diabetic patients, including spontaneous pain of different characteristics (e.g. prickling, tingling, burning, squeezing), paresthesia and numbness. As symptomatic changes constitute an important parameter of therapeutic outcome, this may well underlie the failure of some aforementioned drugs in clinical trials despite their good performance in experimental tests […] Development of nerve dysfunction in diabetic rodents also does not follow the common natural history of human DPN. […] Besides the lack of anatomical resemblance, the changes in disease severity are often missing in these models. […] importantly, foot ulcers that occur as a late complication in 15% of all individuals with diabetes [14] do not spontaneously develop in hyperglycemic rodents. Superimposed injury by experimental procedure in the foot pads of diabetic rats or mice may lend certain insight into the impaired wound healing in diabetes [278] but is not reflective of the chronic, accumulating pathological changes in diabetic feet of human counterparts. Another salient feature of human DPN that has not been described in animals is the predominant sensory and autonomic nerve damage versus minimal involvement of motor fibers [279]. This should elicit particular caution as the selective susceptibility is critical to our true understanding of the etiopathogenesis underlying distal sensorimotor polyneuropathy in diabetes. In addition to the lack of specificity, most animal models studied only cover a narrow spectrum of clinical DPN and have not successfully duplicated syndromes including proximal motor neuropathy and focal lesions [279].
Morphologically, fiber atrophy and axonal loss exist in STZ-rats and other diabetic rodents but are much milder compared to the marked degeneration and loss of myelinated and unmyelinated nerves readily observed in human specimens [280]. Of significant note, rodents are notoriously resistant to developing some of the histological hallmarks seen in diabetic patients, such as segmental and paranodal demyelination […] the simultaneous presence of degenerating and regenerating fibers that is characteristic of early DPN has not been clearly demonstrated in these animals [44]. Since such dynamic nerve degeneration/regeneration signifies an active state of nerve repair and is most likely to be amenable to therapeutic intervention, absence of this property makes rodent models a poor tool in both deciphering disease pathogenesis and designing treatment approaches […] With particular respect to neuroanatomy, a peripheral axon in humans can reach as long as one meter [296] whereas the maximal length of the axons innervating the hind limb is five centimeters in mice and twelve centimeters in rats. This short length makes it impossible to study in rodents the prominent length dependency and dying-back feature of peripheral nerve dysfunction that characterizes human DPN. […] For decades the cytoarchitecture of human islets was assumed to be just like that in rodents, with a clear anatomical subdivision of β-cells and other cell types. By using confocal microscopy and multi-fluorescent labeling, it was finally uncovered that human islets have not only a substantially lower percentage of β-cell population, but also a mixed — rather than compartmentalized — organization of the different cell types [297]. This cellular arrangement was demonstrated to directly alter the functional performance of human islets as opposed to rodent islets. Although it is not known whether such profound disparities in cell composition and association also exist in the PNS, it might as well be anticipated considering the many sophisticated sensory and motor activities that are unique to humans. Considerable species differences also manifest at the molecular level. […] At least 80% of human genes have a counterpart in the mouse and rat genome. However, temporal and spatial expression of these genes can vary remarkably between humans and rodents, in terms of both extent and isoform specificity.”

“Ultimately, a fundamental problem associated with resorting to rodents in DPN research is to study a human disorder that takes decades to develop and progress in organisms with a maximum lifespan of 2-3 years. […] It is […] fair to say that a full clinical spectrum of the maturity-onset DPN likely requires a length of time exceeding the longevity of rodents to present and diabetic rodent models at best only help illustrate the very early aspects of the entire disease syndrome. Since none of the early pathogenetic pathways revealed in diabetic rodents will contribute to DPN in a quantitatively and temporally uniform fashion throughout the prolonged natural history of this disease, it is not surprising that a handful of inhibitors developed against these processes have not benefited patients with relatively long-standing neuropathy. As a matter of fact, any agents targeting single biochemical insults would be too little too late to treat a chronic neurological disorder with established nerve damage and pathogenetic heterogeneity […] It is important to point out that the present review does not argue against the ability of animal models to shed light on basic molecular, cellular and physiological processes that are shared among species. Undoubtedly, animal models of diabetes have provided abundant insights into the disease biology of DPN. Nevertheless, the lack of any meaningful advance in identifying a promising pharmacological target necessitates a reexamination of the validity of current DPN models, as well as a plausible alternative methodology for scientific approaches and disease intervention. […] we conclude that the fundamental species differences have led to misinterpretation of rodent data and overall failure of pharmacological investment. As more is being learned, it is becoming the prevailing view that DPN is a chronic, heterogeneous disease unlikely to benefit from targeting specific and early pathogenetic components revealed by animal studies.”

February 13, 2018 Posted by | Books, Diabetes, Genetics, Medicine, Neurology, Pharmacology | Leave a comment

Complexity

Complexity theory is a topic I’ve previously been exposed to through various channels; examples include Institute for Advanced Study comp sci lectures, notes included in a few computer science-related books like Louridas and Dasgupta, and probably also e.g. some of the systems analysis/-science books I’ve read – Konieczny et al.’s text, which I recently finished reading, is another example of a book which peripherally covers content also covered in this book. Holland’s book pretty much doesn’t cover computational complexity theory at all, but some knowledge of computer science will probably still be useful, as e.g. concepts from graph theory are touched upon/applied in the coverage; I am also aware that I derived some benefit while reading this book from having previously spent time on signalling models in microeconomics, as there were conceptual similarities between those models and some of the stuff Holland includes. I’m not really sure if you need to know ‘anything’ to read the book and get something out of it, but although Holland doesn’t use much mathematical formalism, some of the ‘hidden’ formalism lurking in the background will probably not be easy to understand if you e.g. haven’t seen a mathematical equation since the 9th grade, and people who e.g. have seen hierarchical models before will definitely have a greater appreciation of some of the material covered than people who have not. Obviously I’ve read a lot of stuff over time that made the book easier for me to read and understand than it otherwise would have been, but how easy would the book have been for me to read if I hadn’t read those other things? It’s really difficult for me to say. I found the book hard to judge/rate/evaluate, so I decided against rating it on goodreads.

Below I have added some quotes from the book.

“[C]omplex systems exhibit a distinctive property called emergence, roughly described by the common phrase ‘the action of the whole is more than the sum of the actions of the parts’. In addition to complex systems, there is a subfield of computer science, called computational complexity, which concerns itself with the difficulty of solving different kinds of problems. […] The object of the computational complexity subfield is to assign levels of difficulty — levels of complexity — to different collections of problems. There are intriguing conjectures about these levels of complexity, but an understanding of the theoretical framework requires a substantial background in theoretical computer science — enough to fill an entire book in this series. For this reason, and because computational complexity does not touch upon emergence, I will confine this book to systems and the ways in which they exhibit emergence. […] emergent behaviour is an essential requirement for calling a system ‘complex’. […] Hierarchical organization is […] closely tied to emergence. Each level of a hierarchy typically is governed by its own set of laws. For example, the laws of the periodic table govern the combination of hydrogen and oxygen to form H2O molecules, while the laws of fluid flow (such as the Navier-Stokes equations) govern the behaviour of water. The laws of a new level must not violate the laws of earlier levels — that is, the laws at lower levels constrain the laws at higher levels. […] Restated for complex systems: emergent properties at any level must be consistent with interactions specified at the lower level(s). […] Much of the motivation for treating a system as complex is to get at questions that would otherwise remain inaccessible. Often the first steps in acquiring a deeper understanding are through comparisons of similar systems. By treating hierarchical organization as sine qua non for complexity we focus on the interactions of emergent properties at various levels. The combination of ‘top–down’ effects (as when the daily market average affects actions of the buyers and sellers in an equities market) and ‘bottom–up’ effects (the interactions of the buyers and sellers determine the market average) is a pervasive feature of complex systems. The present exposition, then, centres on complex systems where emergence, and the reduction(s) involved, offer a key to new kinds of understanding.”

“As the field of complexity studies has developed, it has split into two subfields that examine two different kinds of emergence: the study of complex physical systems (CPS) and the study of complex adaptive systems (CAS): The study of complex physical systems focuses on geometric (often lattice-like) arrays of elements, in which interactions typically depend only on effects propagated from nearest neighbours. […] the study of CPS has a distinctive set of tools and questions centring on elements that have fixed properties – atoms, the squares of the cellular automaton, and the like. […] The tools used for studying CPS come, with rare exceptions, from a well-developed part of mathematics, the theory of partial differential equations […] CAS studies, in contrast to CPS studies, concern themselves with elements that are not fixed. The elements, usually called agents, learn or adapt in response to interactions with other agents. […] It is unusual for CAS agents to converge, even momentarily, to a single ‘optimal’ strategy, or to an equilibrium. As the agents adapt to each other, new agents with new strategies usually emerge. Then each new agent offers opportunities for still further interactions, increasing the overall complexity. […] The complex feedback loops that form make it difficult to analyse, or even describe, CAS. […] Analysis of complex systems almost always turns on finding recurrent patterns in the system’s ever-changing configurations. […] perpetual novelty, produced with a limited number of rules or laws, is a characteristic of most complex systems: DNA consists of strings of the same four nucleotides, yet no two humans are exactly alike; the theorems of Euclidean geometry are based on just five axioms, yet new theorems are still being derived after two millennia; and so it is for the other complex systems.”
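
To make the CPS description a bit more concrete, here is a small sketch of my own (not from the book): an elementary cellular automaton, i.e. fixed elements on a lattice whose interactions depend only on nearest neighbours. With rule 110, a handful of local rules generates perpetually novel global patterns, which is exactly the "perpetual novelty from a limited number of rules" point made above.

```python
# Elementary cellular automaton: each cell is updated from its own
# state and its two nearest neighbours, using one of 256 possible
# rules. Rule 110 is a classic example of simple local rules
# producing endlessly novel global patterns.

def step(cells, rule=110):
    out = []
    for i in range(len(cells)):
        left = cells[i - 1]                        # wraps around at i = 0
        centre = cells[i]
        right = cells[(i + 1) % len(cells)]
        idx = (left << 2) | (centre << 1) | right  # neighbourhood as 0..7
        out.append((rule >> idx) & 1)              # look up the rule bit
    return out

cells = [0] * 63 + [1] + [0] * 63                  # one live cell, centred
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```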

“In a typical physical system the whole is (at least approximately) the sum of the parts, making the use of PDEs straightforward for a mathematician, but in a typical generated system the parts are put together in an interconnected, non-additive way. It is possible to write a concise set of partial differential equations to describe the basic elements of a computer, say an interconnected set of binary counters, but the existing theory of PDEs does little to increase our understanding of the circuits so-described. The formal grammar approach, in contrast, has already considerably increased our understanding of computer languages and programs. One of the major tasks of this book is to use a formal grammar to convert common features of complex systems into ‘stylized facts’ that can be examined carefully within the grammar.”
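
As an illustration of the generated-system idea, here is a toy rewrite grammar (an L-system) in Python; this is my own sketch with invented rules, not anything from the book. The parts are put together by rule application rather than by addition, which is the contrast with PDEs drawn above.

```python
# Toy rewrite grammar (an L-system): structures are generated by
# applying local rewrite rules to parts, not by summing quantities.
# The rules below are invented for illustration.

rules = {"A": "AB", "B": "A"}

def rewrite(s, steps):
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

for n in range(6):
    print(rewrite("A", n))   # A, AB, ABA, ABAAB, ... (Fibonacci lengths)
```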

“Many CPS problems (e.g. the flow of electrons in superconductive materials) […] involve flows — flows that are nicely described by networks. Networks provide a detailed snapshot of CPS and complex adaptive systems (CAS) interactions at any given point in their development, but there are few studies of the evolution of networks […]. The distinction between the fast dynamic of flows (change of state) and the slow dynamic of adaptation (change of the network of interactions) often distinguishes CPS studies from CAS studies. […] all well-studied CAS exhibit lever points, points where a small directed action causes large predictable changes in aggregate behaviour, as when a vaccine produces long-term changes in an immune system. At present, lever points are almost always located by trial and error. However, by extracting mechanisms common to different lever points, a relevant CAS theory would provide a principled way of locating and testing lever points. […] activities that are easy to observe in one complex system often suggest ‘where to look’ in other complex systems where the activities are difficult to observe.”

“Observation shows that agents acting in a niche continually undergo ‘improvements’, without ever completely outcompeting other agents in the community. These improvements may come about in either of two ways: (i) an agent may become more of a generalist, processing resources from a wider variety of sources, or (ii) it may become more specialized, becoming more efficient than its competitors at exploiting a particular source of a vital resource. Both changes allow for still more interactions and still greater diversity. […] All CAS that have been examined closely exhibit trends toward increasing numbers of specialists.”

“Emergence is tightly tied to the formation of boundaries. These boundaries can arise from symmetry breaking, […] or they can arise by assembly of component building blocks […]. For CAS, the agent-defining boundaries determine the interactions between agents. […] Adaptation, and the emergence of new kinds of agents, then arises from changes in the relevant boundaries. Typically, a boundary only looks to a small segment of a signal, a tag, to determine whether or not the signal can pass through the boundary. […] an agent can be modelled by a set of conditional IF/THEN rules that represent both the effects of boundaries and internal signal-processing. Because tags are short, a given signal may carry multiple tags, and the rules that process signals can require the presence of more than one tag for the processing to proceed. Agents are parallel processors in the sense that all rules that are satisfied simultaneously in the agent are executed simultaneously. As a result, the interior of an agent will usually be filled with multiple signals […]. The central role of tags in routing signals through this complex interior puts emphasis on the mechanisms for tag modification as a means of adaptation. Recombination of extant conditions and signals […] turns tags into building blocks for specifying new routes. Parallel processing then makes it possible to test new routes so formed without seriously disrupting extant useful routes. Sophisticated agents have another means of adaptation: anticipation (‘lookahead’). If an agent has a set of rules that simulates part of its world, then it can run this internal model to examine the outcomes of different action sequences before those actions are executed.”
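
A rough sketch of my own of what such tag-based, parallel rule-firing might look like; the tags and rules are invented for illustration and are not Holland’s own notation.

```python
# Invented condition/action rules keyed on short tags. A rule fires
# when some current signal carries every tag in its condition; all
# satisfied rules fire in the same step (parallel processing).
rules = [
    {"requires": {"food"},          "emits": "approach"},
    {"requires": {"food", "toxin"}, "emits": "avoid"},
    {"requires": {"avoid"},         "emits": "retreat"},
]

def step(signals):
    emitted = set()
    for rule in rules:
        if any(rule["requires"] <= sig for sig in signals):
            emitted.add(rule["emits"])
    return [frozenset({tag}) for tag in emitted]

signals = [frozenset({"food", "toxin"})]   # one signal carrying two tags
for _ in range(3):
    print(sorted(sorted(sig) for sig in signals))
    signals = step(signals)
```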

“The flow of signals within and between agents can be represented by a directed network, where nodes represent rules, and there is a connection from node x to node y if rule x sends a signal satisfying a condition of rule y. Then, the flow of signals over this network spells out the performance of the agent at a point in time. […] The networks associated with CAS are typically highly tangled, with many loops providing feedback and recirculation […]. An agent adapts by changing its signal-processing rules, with corresponding changes in the structure of the associated network. […] Most machine-learning models, including ‘artificial neural networks’ and ‘Bayesian networks’, lack feedback cycles — they are often called ‘feedforward networks’ (in contrast to networks with substantial feedback). In the terms used in Chapter 4, such networks have no ‘recirculation’ and hence have no autonomous subsystems. Networks with substantial numbers of cycles are difficult to analyse, but a large number of cycles is the essential requirement for the autonomous internal models that make lookahead and planning possible. […] The complexities introduced by loops have so far resisted most attempts at analysis. […] The difficulties of analysing the behaviour of networks with many interior loops has, both historically and currently, encouraged the study of networks without loops called trees. Trees occur naturally in the study of games. […] because trees are easier to analyse, most artificial neural networks constructed for pattern recognition are trees. […] Evolutionary game theory makes use of the tree structure of games to study the ways in which agents can modify their strategies as they interact with other agents playing the same game. […] However, evolutionary game theory does not concern itself with the evolution of the game’s laws.”
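
Since a lot rides on the distinction between feedforward networks and networks with cycles, a small illustration may be useful. The sketch below (my own; the edges are invented) uses a standard depth-first search to test whether a directed rule-network contains a feedback cycle.

```python
# Depth-first search for feedback cycles in a directed rule-network.
# A "grey" node is on the current search path; reaching a grey node
# again means we have found a back edge, i.e. recirculation.

def has_cycle(graph):
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {node: WHITE for node in graph}

    def visit(node):
        colour[node] = GREY
        for succ in graph[node]:
            if colour[succ] == GREY:
                return True                        # back edge: a loop
            if colour[succ] == WHITE and visit(succ):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in graph)

feedforward = {"x": ["y"], "y": ["z"], "z": []}       # tree-like, no loops
recirculating = {"x": ["y"], "y": ["z"], "z": ["x"]}  # contains a cycle
print(has_cycle(feedforward))    # False
print(has_cycle(recirculating))  # True
```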

“It has been observed that innovation in CAS is mostly a matter of combining well-known components in new ways. […] Recombination abets the formation of new cascades. […] By extracting general mechanisms that modify CAS, such as recombination, we go from examination of particular instances to a unified study of characteristic CAS properties. The mechanisms of interest act mainly on extant substructures, using them as building blocks for more complex substructures […]. Because signals and boundaries are a pervasive feature of CAS, their modification has a central role in this adaptive process.”
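
A minimal illustration of recombination (my own sketch; in a real CAS the components being recombined would be rules, tags, or subnetworks rather than characters in a string):

```python
import random

def recombine(a, b):
    # Single-point crossover: a prefix of one parent spliced onto a
    # suffix of the other; old building blocks, new combination.
    point = random.randrange(1, min(len(a), len(b)))
    return a[:point] + b[point:]

random.seed(0)
for _ in range(3):
    print(recombine("ABCDEFG", "abcdefg"))
```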

February 12, 2018 Posted by | Books, Computer science, Mathematics | Leave a comment

Systems Biology (II)

Some observations from the book’s chapter 3 below:

“Without regulation biological processes would become progressively more and more chaotic. In living cells the primary source of information is genetic material. Studying the role of information in biology involves signaling (i.e. spatial and temporal transfer of information) and storage (preservation of information). Regarding the role of the genome we can distinguish three specific aspects of biological processes: steady-state genetics, which ensure cell-level and body homeostasis; genetics of development, which controls cell differentiation and genesis of the organism; and evolutionary genetics, which drives speciation. […] The ever growing demand for information, coupled with limited storage capacities, has resulted in a number of strategies for minimizing the quantity of the encoded information that must be preserved by living cells. In addition to combinatorial approaches based on noncontiguous genes structure, self-organization plays an important role in cellular machinery. Nonspecific interactions with the environment give rise to coherent structures despite the lack of any overt information store. These mechanisms, honed by evolution and ubiquitous in living organisms, reduce the need to directly encode large quantities of data by adopting a systemic approach to information management.”

“Information is commonly understood as a transferable description of an event or object. Information transfer can be either spatial (communication, messaging or signaling) or temporal (implying storage). […] The larger the set of choices, the lower the likelihood [of] making the correct choice by accident and — correspondingly — the more information is needed to choose correctly. We can therefore state that an increase in the cardinality of a set (the number of its elements) corresponds to an increase in selection indeterminacy. This indeterminacy can be understood as a measure of “a priori ignorance”. […] Entropy determines the uncertainty inherent in a given system and therefore represents the relative difficulty of making the correct choice. For a set of possible events it reaches its maximum value if the relative probabilities of each event are equal. Any information input reduces entropy — we can therefore say that changes in entropy are a quantitative measure of information. […] Physical entropy is highest in a state of equilibrium, i.e. lack of spontaneity (ΔG = 0), which effectively terminates the given reaction. Regulatory processes which counteract the tendency of physical systems to reach equilibrium must therefore oppose increases in entropy. It can be said that a steady inflow of information is a prerequisite of continued function in any organism. As selections are typically made at the entry point of a regulatory process, the concept of entropy may also be applied to information sources. This approach is useful in explaining the structure of regulatory systems which must be “designed” in a specific way, reducing uncertainty and enabling accurate, error-free decisions.”
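
As an aside, the entropy notion used here is easy to make concrete with a few lines of Python (my own sketch, not from the book): it is maximal for a uniform distribution over the possible events and falls as information narrows the choice, and the drop quantifies the information gained.

```python
from math import log2

def entropy(probs):
    # Shannon entropy in bits; max() merely guards against "-0.0"
    return max(0.0, -sum(p * log2(p) for p in probs if p > 0))

print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits: four equal options
print(entropy([0.7, 0.1, 0.1, 0.1]))      # ~1.36 bits: less uncertainty
print(entropy([1.0]))                     # 0.0 bits: nothing left to learn
```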

“The fire ant exudes a pheromone which enables it to mark sources of food and trace its own path back to the colony. In this way, the ant conveys pathing information to other ants. The intensity of the chemical signal is proportional to the abundance of the source. Other ants can sense the pheromone from a distance of several (up to a dozen) centimeters and thus locate the source themselves. […] As can be expected, an increase in the entropy of the information source (i.e. the measure of ignorance) results in further development of regulatory systems — in this case, receptors capable of receiving signals and processing them to enable accurate decisions. Over time, the evolution of regulatory mechanisms increases their performance and precision. The purpose of various structures involved in such mechanisms can be explained on the grounds of information theory. The primary goal is to select the correct input signal, preserve its content and avoid or eliminate any errors.”

“Genetic information stored in nucleotide sequences can be expressed and transmitted in two ways:
a. via replication (in cell division);
b. via transcription and translation (also called gene expression […])
Both processes act as effectors and can be triggered by certain biological signals transferred on request.
Gene expression can be defined as a sequence of events which lead to the synthesis of proteins or their products required for a particular function. In cell division, the goal of this process is to generate a copy of the entire genetic code (S phase), whereas in gene expression only selected fragments of DNA (those involved in the requested function) are transcribed and translated. […] Transcription calls for exposing a section of the cell’s genetic code and although its product (RNA) is short-lived, it can be recreated on demand, just like a carbon copy of a printed text. On the other hand, replication affects the entire genetic material contained in the cell and must conform to stringent precision requirements, particularly as the size of the genome increases.”

“The magnitude of effort involved in replication of genetic code can be visualized by comparing the DNA chain to a zipper […]. Assuming that the zipper consists of three pairs of interlocking teeth per centimeter (300 per meter) and that the human genome is made up of 3 billion […] base pairs, the total length of our uncoiled DNA in “zipper form” would be equal to […] 10,000 km […] If we were to unfasten the zipper at a rate of 1 m per second, the entire unzipping process would take approximately 3 months […]. This comparison should impress upon the reader the length of the DNA chain and the precision with which individual nucleotides must be picked to ensure that the resulting code is an exact copy of the source. It should also be noted that for each base pair the polymerase enzyme needs to select an appropriate matching nucleotide from among four types of nucleotides present in the solution, and attach it to the chain (clearly, no such problem occurs in zippers). The reliability of an average enzyme is on the order of 10^-3–10^-4, meaning that one error occurs for every 1,000–10,000 interactions between the enzyme and its substrate. Given this figure, replication of 3*10^9 base pairs would introduce approximately 3 million errors (mutations) per genome, resulting in a highly inaccurate copy. Since the observed reliability of replication is far higher, we may assume that some corrective mechanisms are involved. In reality, the remarkable precision of genetic replication is ensured by DNA repair processes, and in particular by the corrective properties of polymerase itself.”
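
A quick sanity check of the zipper arithmetic, using the figures quoted above (my own addition):

```python
base_pairs = 3e9              # human genome, as quoted
teeth_per_metre = 300         # 3 pairs of zipper teeth per cm

length_m = base_pairs / teeth_per_metre
print(length_m / 1000, "km")              # -> 10000.0 km of "zipper"

print(length_m / 1.0 / 86400, "days")     # unzipping at 1 m/s: ~116 days

error_rate = 1e-3             # enzyme reliability of 10^-3, as quoted
print(base_pairs * error_rate, "errors")  # -> ~3 million errors per copy
```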

“Many mutations are caused by the inherent chemical instability of nucleic acids: for example, cytosine may spontaneously convert to uracil. In the human genome such an event occurs approximately 100 times per day; however uracil is not normally encountered in DNA and its presence alerts defensive mechanisms which correct the error. Another type of mutation is spontaneous depurination, which also triggers its own, dedicated error correction procedure. Cells employ a large number of corrective mechanisms […] DNA repair mechanisms may be treated as an “immune system” which protects the genome from loss or corruption of genetic information. The unavoidable mutations which sometimes occur despite the presence of error-correction mechanisms can be masked due to doubled presentation (alleles) of genetic information. Thus, most mutations are recessive and not expressed in the phenotype. As the length of the DNA chain increases, mutations become more probable. It should be noted that the number of nucleotides in DNA is greater than the relative number of amino acids participating in polypeptide chains. This is due to the fact that each amino acid is encoded by exactly three nucleotides — a general principle which applies to all living organisms. […] Fidelity is, of course, fundamentally important in DNA replication as any harmful mutations introduced in its course are automatically passed on to all successive generations of cells. In contrast, transcription and translation processes can be more error-prone as their end products are relatively short-lived. Of note is the fact that faulty transcripts appear in relatively low quantities and usually do not affect cell functions, since regulatory processes ensure continued synthesis of the required substances until a suitable level of activity is reached. Nevertheless, it seems that reliable transcription of genetic material is sufficiently significant for cells to have developed appropriate proofreading mechanisms, similar to those which assist replication. […] the entire information pathway — starting with DNA and ending with active proteins — is protected against errors. We can conclude that fallibility is an inherent property of genetic information channels, and that in order to perform their intended function, these channels require error correction mechanisms.”

“The discrete nature of genetic material is an important property which distinguishes prokaryotes from eukaryotes. […] The ability to select individual nucleotide fragments and construct sequences from predetermined “building blocks” results in high adaptability to environmental stimuli and is a fundamental aspect of evolution. The discontinuous nature of genes is evidenced by the presence of fragments which do not convey structural information (introns), as opposed to structure-encoding fragments (exons). The initial transcript (pre-mRNA) contains introns as well as exons. In order to provide a template for protein synthesis, it must undergo further processing (also known as splicing): introns must be cleaved and exon fragments attached to one another. […] Recognition of intron-exon boundaries is usually very precise, while the reattachment of adjacent exons is subject to some variability. Under certain conditions, alternative splicing may occur, where the ordering of the final product does not reflect the order in which exon sequences appear in the source chain. This greatly increases the number of potential mRNA combinations and thus the variety of resulting proteins. […] While access to energy sources is not a major problem, sources of information are usually far more difficult to manage — hence the universal tendency to limit the scope of direct (genetic) information storage. Reducing the length of genetic code enables efficient packing and enhances the efficiency of operations while at the same time decreasing the likelihood of errors. […] The number of genes identified in the human genome is lower than the number of distinct proteins by a factor of 4; a difference which can be attributed to alternative splicing. […] This mechanism increases the variety of protein structures without affecting core information storage, i.e. DNA sequences. […] Primitive organisms often possess nearly as many genes as humans, despite the essential differences between both groups. Interspecies diversity is primarily due to the properties of regulatory sequences.”
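
To make the combinatorial point concrete, here is a small sketch of my own. Even under the simplifying assumption that splicing only selects which exons to retain, in their original order (the chapter notes that reordering also occurs, which expands the space further), a handful of exons already yields a large number of distinct transcripts.

```python
from itertools import combinations

exons = ["E1", "E2", "E3", "E4", "E5"]    # invented exon names
transcripts = [
    "-".join(chosen)
    for k in range(1, len(exons) + 1)
    for chosen in combinations(exons, k)  # in-order exon selections
]
print(len(transcripts))     # 31 distinct transcripts from only 5 exons
print(transcripts[:5])
```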

“The discontinuous nature of genes is evolutionarily advantageous but comes at the expense of having to maintain a nucleus where such splicing processes can be safely conducted, in addition to efficient transport channels allowing transcripts to penetrate the nuclear membrane. While it is believed that at early stages of evolution RNA was the primary repository of genetic information, its present function can best be described as an information carrier. Since unguided proteins cannot ensure sufficient specificity of interaction with nucleic acids, protein-RNA complexes are often used in cases where specific fragments of genetic information need to be read. […] The use of RNA in protein complexes is common across all domains of the living world as it bridges the gap between discrete and continuous storage of genetic information.”

“Epigenetic differentiation mechanisms are particularly important in embryonic development. […] Unlike the function of mature organisms, embryonic programming refers to structures which do not yet exist but which need to be created through cell proliferation and differentiation. […] Differentiation of cells results in phenotypic changes. This phenomenon is the primary difference between development genetics and steady-state genetics. Functional differences are not, however, associated with genomic changes: instead they are mediated by the transcriptome where certain genes are preferentially selected for transcription while others are suppressed. […] In a mature, specialized cell only a small portion of the transcribable genome is actually expressed. The remainder of the cell’s genetic material is said to be silenced. Gene silencing is a permanent condition. Under normal circumstances mature cells never alter their function, although such changes may be forced in a laboratory setting […] Cells which make up the embryo at a very early stage of development are pluripotent, meaning that their purpose can be freely determined and that all of their genetic information can potentially be expressed (under certain conditions). […] At each stage of the development process the scope of pluripotency is reduced until, ultimately, the cell becomes monopotent. Monopotency implies that the final function of the cell has already been determined, although the cell itself may still be immature. […] functional dissimilarities between specialized cells are not associated with genetic mutations but rather with selective silencing of genes. […] Most genes which determine biological functions have a biallelic representation (i.e. a representation consisting of two alleles). The remainder (approximately 10% of genes) is inherited from one specific parent, as a result of partial or complete silencing of their sister alleles (called paternal or maternal imprinting) which occurs during gametogenesis. The suppression of a single copy of the X chromosome is a special case of this phenomenon.”

“Evolutionary genetics is subject to two somewhat contradictory criteria. On the one hand, there is clear pressure on accurate and consistent preservation of biological functions and structures, while on the other hand it is also important to permit gradual but persistent changes. […] the observable progression of adaptive traits which emerge as a result of evolution suggests a mechanism which promotes constructive changes over destructive ones. Mutational diversity cannot be considered truly random if it is limited to certain structures or functions. […] Approximately 50% of the human genome consists of mobile segments, capable of migrating to various positions in the genome. These segments are called transposons and retrotransposons […] The mobility of genome fragments not only promotes mutations (by increasing the variability of DNA) but also affects the stability and packing of chromatin strands wherever such mobile sections are reintegrated with the genome. Under normal circumstances the activity of mobile sections is tempered by epigenetic mechanisms […]; however in certain situations gene mobility may be upregulated. In particular, it seems that in “prehistoric” (remote evolutionary) times such events occurred at a much faster pace, accelerating the rate of genetic changes and promoting rapid evolution. Cells can actively promote mutations by way of the so-called AID process (activation-induced cytidine deamination). It is an enzymatic mechanism which converts cytosine into uracil, thereby triggering repair mechanisms and increasing the likelihood of mutations […] The existence of AID proves that cells themselves may trigger evolutionary changes and that the role of mutations in the emergence of new biological structures is not strictly passive.”

“Regulatory mechanisms which receive signals characterized by high degrees of uncertainty must be able to make informed choices to reduce the overall entropy of the system they control. This property is usually associated with development of information channels. Special structures ought to be exposed within information channels connecting systems of different character, as for example linking transcription to translation or enabling transduction of signals through the cellular membrane. Examples of structures which convey highly entropic information are receptor systems associated with blood coagulation and immune responses. The regulatory mechanism which triggers an immune response relies on relatively simple effectors (complement factor enzymes, phagocytes and killer cells) coupled to a highly evolved receptor system, represented by specific antibodies and an organized set of cells. Compared to such advanced receptors, the structures which register the concentration of a given product (e.g. glucose in blood) are rather primitive. Advanced receptors enable the immune system to recognize and verify information characterized by high degrees of uncertainty. […] In sequential processes it is usually the initial stage which poses the most problems and requires the most information to complete successfully. It should come as no surprise that the most advanced control loops are those associated with initial stages of biological pathways.”

February 10, 2018 Posted by | Biology, Books, Chemistry, Evolutionary biology, Genetics, Immunology, Medicine | Leave a comment

Endocrinology (part 4 – reproductive endocrinology)

Some observations from chapter 4 of the book below.

“*♂. The whole process of spermatogenesis takes approximately 74 days, followed by another 12-21 days for sperm transport through the epididymis. This means that events which may affect spermatogenesis may not be apparent for up to three months, and successful induction of spermatogenesis treatment may take 2 years. *♀. From primordial follicle to primary follicle, it takes about 180 days (a continuous process). It is then another 60 days to form a preantral follicle which then proceeds to ovulation three menstrual cycles later. Only the last 2-3 weeks of this process is under gonadotrophin drive, during which time the follicle grows from 2 to 20mm.”

“Hirsutism (not a diagnosis in itself) is the presence of excess hair growth in ♀ as a result of androgen production and skin sensitivity to androgens. […] In ♀, testosterone is secreted primarily by the ovaries and adrenal glands, although a significant amount is produced by the peripheral conversion of androstenedione and DHEA. Ovarian androgen production is regulated by luteinizing hormone, whereas adrenal production is ACTH-dependent. The predominant androgens produced by the ovaries are testosterone and androstenedione, and the adrenal glands are the main source of DHEA. Circulating testosterone is mainly bound to sex hormone-binding globulin (SHBG), and it is the free testosterone which is biologically active. […] Slowly progressive hirsutism following puberty suggests a benign cause, whereas rapidly progressive hirsutism of recent onset requires further immediate investigation to rule out an androgen-secreting neoplasm. [My italics, US] […] Serum testosterone should be measured in all ♀ presenting with hirsutism. If this is <5nmol/L, then the risk of a sinister cause for her hirsutism is low.”

“Polycystic ovary syndrome (PCOS) *A heterogeneous clinical syndrome characterized by hyperandrogenism, mainly of ovarian origin, menstrual irregularity, and hyperinsulinaemia, in which other causes of androgen excess have been excluded […] *A distinction is made between polycystic ovary morphology on ultrasound (PCO, which also occurs in congenital adrenal hyperplasia, acromegaly, Cushing’s syndrome, and testosterone-secreting tumours) and PCOS – the syndrome. […] PCOS is the most common endocrinopathy in ♀ of reproductive age; >95% of ♀ presenting to outpatients with hirsutism have PCOS. *The estimated prevalence of PCOS ranges from 5 to 10% on clinical criteria. Polycystic ovaries on US alone are present in 20-25% of ♀ of reproductive age. […] family history of type 2 diabetes mellitus is […] more common in ♀ with PCOS. […] Approximately 70% of ♀ with PCOS are insulin-resistant, depending on the definition. […] Type 2 diabetes mellitus is 2-4 x more common in ♀ with PCOS. […] Hyperinsulinaemia is exacerbated by obesity but can also be present in lean ♀ with PCOS. […] Insulin […] inhibits SHBG synthesis by the liver, with a consequent rise in free androgen levels. […] Symptoms often begin around puberty, after weight gain, or after stopping the oral contraceptive pill […] Oligo-/amenorrhoea [is present in] 70% […] Hirsutism [is present in] 66% […] Obesity [is present in] 50% […] *Infertility (30%). PCOS accounts for 75% of cases of anovulatory infertility. The risk of spontaneous miscarriage is also thought to be higher than the general population, mainly because of obesity. […] The aims of investigations [of PCOS] are mainly to exclude serious underlying disorders and to screen for complications, as the diagnosis is primarily clinical […] Studies have uniformly shown that weight reduction in obese ♀ with PCOS will improve insulin sensitivity and significantly reduce hyperandrogenaemia. Obese ♀ are less likely to respond to antiandrogens and infertility treatment.”

“Androgen-secreting tumours [are] [r]are tumours of the ovary or adrenal gland which may be benign or malignant, which cause virilization in ♀ through androgen production. […] Virilization […] [i]ndicates severe hyperandrogenism, is associated with clitoromegaly, and is present in 98% of ♀ with androgen-producing tumours. Not usually a feature of PCOS. […] Androgen-secreting ovarian tumours[:] *75% develop before the age of 40 years. *Account for 0.4% of all ovarian tumours; 20% are malignant. *Tumours are 5-25cm in size. The larger they are, the more likely they are to be malignant. They are rarely bilateral. […] Androgen-secreting adrenal tumours[:] *50% develop before the age of 50 years. *Larger tumours […] are more likely to be malignant. *Usually with concomitant cortisol secretion as a variant of Cushing’s syndrome. […] Symptoms and signs of Cushing’s syndrome are present in many ♀ with adrenal tumours. […] Onset of symptoms. Usually recent onset of rapidly progressive symptoms. […] Malignant ovarian and adrenal androgen-secreting tumours are usually resistant to chemotherapy and radiotherapy. […] *Adrenal tumours. 20% 5-year survival. Most have metastatic disease at the time of surgery. *Ovarian tumours. 30% disease-free survival and 40% overall survival at 5 years. […] Benign tumours. *Prognosis excellent. *Hirsutism improves post-operatively, but clitoromegaly, male pattern balding, and deep voice may persist.”

“*Oligomenorrhoea is defined as the reduction in the frequency of menses to <9 periods a year. *1° amenorrhoea is the failure of menarche by the age of 16 years. Prevalence ~0.3%. *2° amenorrhoea refers to the cessation of menses for >6 months in ♀ who had previously menstruated. Prevalence ~3%. […] Although the list of causes is long […], the majority of cases of secondary amenorrhoea can be accounted for by four conditions: *Polycystic ovary syndrome. *Hypothalamic amenorrhoea. *Hyperprolactinaemia. *Ovarian failure. […] PCOS is the only common endocrine cause of amenorrhoea with normal oestrogenization – all other causes are oestrogen-deficient. Women with PCOS, therefore, are at risk of endometrial hyperplasia, and all others are at risk of osteoporosis. […] Anosmia may indicate Kallmann’s syndrome. […] In routine practice, a common differential diagnosis is between a mild version of PCOS and hypothalamic amenorrhoea. The distinction between these conditions may require repeated testing, as a single snapshot may not discriminate. The reason to be precise is that PCOS is oestrogen-replete and will, therefore, respond to clomiphene citrate (an antioestrogen) for fertility. HA will be oestrogen-deficient and will need HRT and ovulation induction with pulsatile GnRH or hMG [human Menopausal Gonadotropins – US]. […] 75% of ♀ who develop 2° amenorrhoea report hot flushes, night sweats, mood changes, fatigue, or dyspareunia; symptoms may precede the onset of menstrual disturbances.”

“POI [Premature Ovarian Insufficiency] is a disorder characterized by amenorrhoea, oestrogen deficiency, and elevated gonadotrophins, developing in ♀ <40 years, as a result of loss of ovarian follicular function. […] *Incidence – 0.1% of ♀ <30 years and 1% of those <40 years. *Accounts for 10% of all cases of 2° amenorrhoea. […] POI is the result of accelerated depletion of ovarian germ cells. […] POI is usually permanent and progressive, although a remitting course is also experienced and cannot be fully predicted, so all women must know that pregnancy is possible, even though fertility treatments are not effective (often a difficult paradox to describe). Spontaneous pregnancy has been reported in 5%. […] 80% of [women with Turner’s syndrome] have POI. […] All ♀ presenting with hypergonadotrophic amenorrhoea below age 40 should be karyotyped.”

“The menopause is the permanent cessation of menstruation as a result of ovarian failure and is a retrospective diagnosis made after 12 months of amenorrhoea. The average age at the time of the menopause is ~50 years, although smokers reach the menopause ~2 years earlier. […] Cycles gradually become increasingly anovulatory and variable in length (often shorter) from about 4 years prior to the menopause. Oligomenorrhoea often precedes permanent amenorrhoea. In 10% of ♀, menses cease abruptly, with no preceding transitional period. […] During the perimenopausal period, there is an accelerated loss of bone mineral density (BMD), rendering post-menopausal ♀ more susceptible to osteoporotic fractures. […] Post-menopausal ♀ are 2-3 x more likely to develop IHD [ischaemic heart disease] than premenopausal ♀, even after age adjustments. The menopause is associated with an increase in risk factors for atherosclerosis, including a less favourable lipid profile, insulin sensitivity, and an ↑ thrombotic tendency. […] ♀ are 2-3 x more likely to develop Alzheimer’s disease than ♂. It is suggested that oestrogen deficiency may play a role in the development of dementia. […] The aim of treatment of perimenopausal ♀ is to alleviate menopausal symptoms and optimize quality of life. The majority of women with mild symptoms require no HRT. […] There is an ↑ risk of breast cancer in HRT users which is related to the duration of use. The risk increases by 35%, following 5 years of use (over the age of 50), and falls to never-used risk 5 years after discontinuing HRT. For ♀ aged 50 not using HRT, about 45 in every 1,000 will have cancer diagnosed over the following 20 years. This number increases to 47/1,000 ♀ using HRT for 5 years, 51/1,000 using HRT for 10 years, and 57/1,000 after 15 years of use. The risk is highest in ♀ on combined HRT compared with oestradiol alone. […] Oral HRT increases the risk [of venous thromboembolism] approximately 3-fold, resulting in an extra two cases/10,000 women-years. This risk is markedly ↑ in ♀ who already have risk factors for DVT, including previous DVT, cardiovascular disease, and within 90 days of hospitalization. […] Data from >30 observational studies suggest that HRT may reduce the risk of developing CVD [cardiovascular disease] by up to 50%. However, randomized placebo-controlled trials […] have failed to show that HRT protects against IHD. Currently, HRT should not be prescribed to prevent cardiovascular disease.”
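
The breast-cancer figures quoted above translate directly into excess cases per 1,000 users. A minimal sketch, using only the numbers given in the quote:

```python
baseline_per_1000 = 45               # never-users: cancers diagnosed over 20 years
hrt_cases = {5: 47, 10: 51, 15: 57}  # years of HRT use -> cases per 1,000

for years, cases in hrt_cases.items():
    excess = cases - baseline_per_1000
    print(f"{years:>2} years of HRT: {cases}/1,000 (+{excess} excess per 1,000)")
# Excess cases: +2, +6, +12 for 5, 10, and 15 years of use respectively;
# a duration-dependent excess, consistent with the relative-risk increase
# described in the quote.
```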

“Any chronic illness may affect testicular function, in particular chronic renal failure, liver cirrhosis, and haemochromatosis. […] 25% of ♂ who develop mumps after puberty have associated orchitis, and 25-50% of these will develop 1° testicular failure. […] Alcohol excess will also cause 1° testicular failure. […] Cytotoxic drugs, particularly alkylating agents, are gonadotoxic. Infertility occurs in 50% of patients following chemotherapy, and a significant number of ♂ require androgen replacement therapy because of low testosterone levels. […] Testosterone has direct anabolic effects on skeletal muscle and has been shown to increase muscle mass and strength when given to hypogonadal men. Lean body mass is also increased, with a reduction in fat mass. […] Hypogonadism is a risk factor for osteoporosis. Testosterone inhibits bone resorption, thereby reducing bone turnover. Its administration to hypogonadal ♂ has been shown to improve bone mineral density and reduce the risk of developing osteoporosis. […] *Androgens stimulate prostatic growth, and testosterone replacement therapy may therefore induce symptoms of bladder outflow obstruction in ♂ with prostatic hypertrophy. *It is unlikely that testosterone increases the risk of developing prostate cancer, but it may promote the growth of an existing cancer. […] Testosterone replacement therapy may cause a fall in both LDL and HDL cholesterol levels, the significance of which remains unclear. The effect of androgen replacement therapy on the risk of developing coronary artery disease is unknown.”

“Erectile dysfunction [is] [t]he consistent inability to achieve or maintain an erect penis sufficient for satisfactory sexual intercourse. Affects approximately 10% of ♂ and >50% of ♂ >70 years. […] Erectile dysfunction may […] occur as a result of several mechanisms: *Neurological damage. *Arterial insufficiency. *Venous incompetence. *Androgen deficiency. *Penile abnormalities. […] *Abrupt onset of erectile dysfunction which is intermittent is often psychogenic in origin. *Progressive and persistent dysfunction indicates an organic cause. […] Absence of morning erections suggests an organic cause of erectile dysfunction.”

“*Infertility, defined as failure of pregnancy after 1 year of unprotected regular (2 x week) sexual intercourse, affects ~10% of all couples. *Couples who fail to conceive after 1 year of regular unprotected sexual intercourse should be investigated. […] Causes[:] *♀ factors (e.g. PCOS, tubal damage) 35%. *♂ factors (idiopathic gonadal failure in 60%) 25%. *Combined factors 25%. *Unexplained infertility 15%. […] [♀] Fertility declines rapidly after the age of 36 years. […] Each episode of acute PID causes infertility in 10-15% of cases. *C. trachomatis is responsible for half the cases of PID in developed countries. […] Unexplained infertility [is] [i]nfertility despite normal sexual intercourse occurring at least twice weekly, normal semen analysis, documentation of ovulation in several cycles, and normal patent tubes (by laparoscopy). […] 30-50% will become pregnant within 3 years of expectant management. If not pregnant by then, chances that spontaneous pregnancy will occur are greatly reduced, and ART should be considered. In ♀ >34 years of age, expectant management is not an option, and up to six cycles of IUI or IVF should be considered.”

February 9, 2018 Posted by | Books, Cancer/oncology, Cardiology, Diabetes, Genetics, Medicine, Pharmacology | Leave a comment

Systems Biology (I)

This book is really dense and is somewhat tough for me to blog. One significant problem is that “The authors assume that the reader is already familiar with the material covered in a classic biochemistry course.” I know enough biochem to follow most of the stuff in this book, and I was definitely quite happy to have recently read John Finney’s book on the biochemical properties of water and Christopher Hall’s introduction to materials science, as both of those books’ coverage turned out to be highly relevant (these are far from the only relevant books I’ve read semi-recently – Atkins’ introduction to thermodynamics is another book that springs to mind) – but even so, what do you leave out when writing a post like this? I decided to leave out a lot. Posts covering books like this one are hard to write because they can so easily blow up in your face: you have to include so many details for the material included in the post to even start to make sense to people who didn’t read the original text. And if you leave out all the details, what’s really left? It’s difficult.

Anyway, some observations from the first chapters of the book below.

“[T]he biological world consists of self-managing and self-organizing systems which owe their existence to a steady supply of energy and information. Thermodynamics introduces a distinction between open and closed systems. Reversible processes occurring in closed systems (i.e. independent of their environment) automatically gravitate toward a state of equilibrium which is reached once the velocity of a given reaction in both directions becomes equal. When this balance is achieved, we can say that the reaction has effectively ceased. In a living cell, a similar condition occurs upon death. Life relies on certain spontaneous processes acting to unbalance the equilibrium. Such processes can only take place when substrates and products of reactions are traded with the environment, i.e. they are only possible in open systems. In turn, achieving a stable level of activity in an open system calls for regulatory mechanisms. When the reaction consumes or produces resources that are exchanged with the outside world at an uneven rate, the stability criterion can only be satisfied via a negative feedback loop […] cells and living organisms are thermodynamically open systems […] all structures which play a role in balanced biological activity may be treated as components of a feedback loop. This observation enables us to link and integrate seemingly unrelated biological processes. […] the biological structures most directly involved in the functions and mechanisms of life can be divided into receptors, effectors, information conduits and elements subject to regulation (reaction products and action results). Exchanging these elements with the environment requires an inflow of energy. Thus, living cells are — by their nature — open systems, requiring an energy source […] A thermodynamically open system lacking equilibrium due to a steady inflow of energy in the presence of automatic regulation is […] a good theoretical model of a living organism. […] Pursuing growth and adapting to changing environmental conditions calls for specialization which comes at the expense of reduced universality. A specialized cell is no longer self-sufficient. As a consequence, a need for higher forms of intercellular organization emerges. The structure which provides cells with suitable protection and ensures continued homeostasis is called an organism.”
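
The feedback framing above lends itself to a toy simulation. Below is a minimal sketch (all rate constants invented for illustration) of an open system in which a product is continuously exported to the environment while its synthesis is down-regulated by the product itself; the system settles at a steady, non-equilibrium state rather than at the dead equilibrium of a closed system.

```python
def simulate(k_max=10.0, K=1.0, k_out=2.0, dt=0.01, steps=2000):
    """Euler integration of dP/dt = k_max / (1 + P/K) - k_out * P."""
    P = 0.0
    for _ in range(steps):
        production = k_max / (1.0 + P / K)  # synthesis, inhibited by its own product
        export = k_out * P                  # continuous exchange with the environment
        P += (production - export) * dt
    return P

# The negative feedback loop holds P at a stable level (production == export);
# removing the export term would let the "closed" system run down to equilibrium.
print(f"steady-state concentration ~ {simulate():.3f}")
```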

“In biology, structure and function are tightly interwoven. This phenomenon is closely associated with the principles of evolution. Evolutionary development has produced structures which enable organisms to develop and maintain their architecture, perform actions and store the resources needed to survive. For this reason we introduce a distinction between support structures (which are akin to construction materials), function-related structures (fulfilling the role of tools and machines), and storage structures (needed to store important substances, achieving a compromise between tight packing and ease of access). […] Biology makes extensive use of small-molecule structures and polymers. The physical properties of polymer chains make them a key building block in biological structures. There are several reasons as to why polymers are indispensable in nature […] Sequestration of resources is subject to two seemingly contradictory criteria: 1. Maximize storage density; 2. Perform sequestration in such a way as to allow easy access to resources. […] In most biological systems, storage applies to energy and information. Other types of resources are only occasionally stored […]. Energy is stored primarily in the form of saccharides and lipids. Saccharides are derivatives of glucose, rendered insoluble (and thus easy to store) via polymerization. Their polymerized forms, stabilized with α-glycosidic bonds, include glycogen (in animals) and starch (in plant life). […] It should be noted that the somewhat loose packing of polysaccharides […] makes them unsuitable for storing large amounts of energy. In a typical human organism only ca. 600 kcal of energy is stored in the form of glycogen, while (under normal conditions) more than 100,000 kcal exists as lipids. Lipid deposits usually assume the form of triglycerides (triacylglycerols). Their properties can be traced to the similarities between fatty acids and hydrocarbons. Storage efficiency (i.e. the amount of energy stored per unit of mass) is twice that of polysaccharides, while access remains adequate owing to the relatively large surface area and high volume of lipids in the organism.”
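
As a back-of-the-envelope check on the storage figures above, one can combine them with the standard nutritional energy densities (roughly 4 kcal/g for carbohydrate and 9 kcal/g for fat; these are my numbers, not the book's):

```python
glycogen_kcal = 600      # typical glycogen store, from the text
lipid_kcal = 100_000     # typical lipid store, from the text

glycogen_mass_g = glycogen_kcal / 4  # carbohydrate: ~4 kcal/g (assumed)
lipid_mass_g = lipid_kcal / 9        # fat: ~9 kcal/g (assumed)

print(f"glycogen: ~{glycogen_mass_g:.0f} g; lipids: ~{lipid_mass_g / 1000:.1f} kg")
print(f"energy stored per gram, lipid vs carbohydrate: {9 / 4:.2f}x")
# ~150 g of glycogen vs ~11 kg of triglycerides, and roughly the factor-of-two
# difference in storage efficiency mentioned in the quote.
```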

“Most living organisms store information in the form of tightly-packed DNA strands. […] It should be noted that only a small percentage of DNA (a few %) conveys biologically relevant information. The purpose of the remaining ballast is to enable suitable packing and exposure of these important fragments. If all of DNA were to consist of useful code, it would be nearly impossible to devise a packing strategy guaranteeing access to all of the stored information.”

“The seemingly endless diversity of biological functions frustrates all but the most persistent attempts at classification. For the purpose of this handbook we assume that each function can be associated either with a single cell or with a living organism. In both cases, biological functions are strictly subordinate to automatic regulation, based — in a stable state — on negative feedback loops, and in processes associated with change (for instance in embryonic development) — on automatic execution of predetermined biological programs. Individual components of a cell cannot perform regulatory functions on their own […]. Thus, each element involved in the biological activity of a cell or organism must necessarily participate in a regulatory loop based on processing information.”

“Proteins are among the most basic active biological structures. Most of the well-known proteins studied thus far perform effector functions: this group includes enzymes, transport proteins, certain immune system components (complement factors) and myofibrils. Their purpose is to maintain biological systems in a steady state. Our knowledge of receptor structures is somewhat poorer […] Simple structures, including individual enzymes and components of multienzyme systems, can be treated as “tools” available to the cell, while advanced systems, consisting of many mechanically-linked tools, resemble machines. […] Machinelike mechanisms are readily encountered in living cells. A classic example is fatty acid synthesis, performed by dedicated machines called synthases. […] Multiunit structures acting as machines can be encountered wherever complex biochemical processes need to be performed in an efficient manner. […] If the purpose of a machine is to generate motion then a thermally powered machine can accurately be called a motor. This type of action is observed e.g. in myocytes, where transmission involves reordering of protein structures using the energy generated by hydrolysis of high-energy bonds.”

“In biology, function is generally understood as specific physiochemical action, almost universally mediated by proteins. Most such actions are reversible which means that a single protein molecule may perform its function many times. […] Since spontaneous noncovalent surface interactions are very infrequent, the shape and structure of active sites — with high concentrations of hydrophobic residues — makes them the preferred area of interaction between functional proteins and their ligands. They alone provide the appropriate conditions for the formation of hydrogen bonds; moreover, their structure may determine the specific nature of interaction. The functional bond between a protein and a ligand is usually noncovalent and therefore reversible.”

“In general terms, we can state that enzymes accelerate reactions by lowering activation energies for processes which would otherwise occur very slowly or not at all. […] The activity of enzymes goes beyond synthesizing a specific protein-ligand complex (as in the case of antibodies or receptors) and involves an independent catalytic attack on a selected bond within the ligand, precipitating its conversion into the final product. The relative independence of both processes (binding of the ligand in the active site and catalysis) is evidenced by the phenomenon of noncompetitive inhibition […] Kinetic studies of enzymes have provided valuable insight into the properties of enzymatic inhibitors — an important field of study in medicine and drug research. Some inhibitors, particularly competitive ones (i.e. inhibitors which outcompete substrates for access to the enzyme), are now commonly used as drugs. […] Physical and chemical processes may only occur spontaneously if they generate energy, or non-spontaneously if they consume it. However, all processes occurring in a cell must have a spontaneous character because only these processes may be catalyzed by enzymes. Enzymes merely accelerate reactions; they do not provide energy. […] The change in enthalpy associated with a chemical process may be calculated as a net difference in the sum of molecular binding energies prior to and following the reaction. Entropy is a measure of the likelihood that a physical system will enter a given state. Since chaotic distribution of elements is considered the most probable, physical systems exhibit a general tendency to gravitate towards chaos. Any form of ordering is thermodynamically disadvantageous.”
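
The kinetics alluded to here are conventionally summarized by the Michaelis–Menten equation; for a competitive inhibitor, which outcompetes substrate for the active site, the apparent Km is scaled by (1 + [I]/Ki) while Vmax is unchanged. A minimal sketch with invented parameter values:

```python
def mm_rate(S, Vmax=1.0, Km=0.5, I=0.0, Ki=0.2):
    """Michaelis-Menten rate, optionally with a competitive inhibitor.

    A competitive inhibitor raises the apparent Km by (1 + I/Ki) but leaves
    Vmax intact: at high substrate concentrations it is simply outcompeted.
    """
    Km_app = Km * (1.0 + I / Ki)
    return Vmax * S / (Km_app + S)

for S in (0.1, 0.5, 2.0, 50.0):
    print(f"[S]={S:5.1f}  v={mm_rate(S):.3f}  v_inhibited={mm_rate(S, I=0.4):.3f}")
# Note how the inhibited rate converges on the uninhibited one as [S] grows;
# this is the kinetic signature that distinguishes competitive from
# noncompetitive inhibition.
```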

“The chemical reactions which power biological processes are characterized by varying degrees of efficiency. In general, they tend to be on the lower end of the efficiency spectrum, compared to energy sources which drive matter transformation processes in our universe. In search for a common criterion to describe the efficiency of various energy sources, we can refer to the net loss of mass associated with a release of energy, according to Einstein’s formula:

E = mc²

The ΔM/M coefficient (relative loss of mass, given e.g. in %) allows us to compare the efficiency of energy sources. The most efficient processes are those involved in the gravitational collapse of stars. Their efficiency may reach 40 %, which means that 40 % of the stationary mass of the system is converted into energy. In comparison, nuclear reactions have an approximate efficiency of 0.8 %. The efficiency of chemical energy sources available to biological systems is incomparably lower and amounts to approximately 10⁻⁷ % […]. Among chemical reactions, the most potent sources of energy are found in oxidation processes, commonly exploited by biological systems. Oxidation tends to result in the largest net release of energy per unit of mass, although the efficiency of specific types of oxidation varies. […] given unrestricted access to atmospheric oxygen and to hydrogen atoms derived from hydrocarbons — the combustion of hydrogen (i.e. the synthesis of water; H2 + 1/2O2 = H2O) has become a principal source of energy in nature, next to photosynthesis, which exploits the energy of solar radiation. […] The basic process associated with the release of hydrogen and its subsequent oxidation (called the Krebs cycle) is carried out by processes which transfer electrons onto oxygen atoms […]. Oxidation occurs in stages, enabling optimal use of the released energy. An important byproduct of water synthesis is the universal energy carrier known as ATP (synthesized separately). As water synthesis is a highly spontaneous process, it can be exploited to cover the energy debt incurred by endergonic synthesis of ATP, as long as both processes are thermodynamically coupled, enabling spontaneous catalysis of anhydride bonds in ATP. Water synthesis is a universal source of energy in heterotrophic systems. In contrast, autotrophic organisms rely on the energy of light which is exploited in the process of photosynthesis. Both processes yield ATP […] Preparing nutrients (hydrogen carriers) for participation in water synthesis follows different paths for sugars, lipids and proteins. This is perhaps obvious given their relative structural differences; however, in all cases the final form, which acts as a substrate for dehydrogenases, is acetyl-CoA”.
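
These efficiency figures are easy to sanity-check. Taking the heat of combustion of hydrogen to be roughly 142 MJ/kg (my figure; the book only states the result) and applying E = mc²:

```python
c = 2.998e8          # speed of light, m/s
E_per_kg = 1.42e8    # heat of combustion of hydrogen, J/kg (assumed value)

delta_m_over_m = E_per_kg / c**2  # relative mass loss of a chemical fuel
print(f"chemical:      dM/M ~ {delta_m_over_m * 100:.1e} %")  # ~1.6e-07 %
print("nuclear:       dM/M ~ 0.8 %")   # figure from the text
print("gravitational: dM/M ~ 40 %")    # figure from the text
# The ~1e-7 % result for chemical fuels matches the value quoted above.
```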

“Photosynthesis is a process which — from the point of view of electron transfer — can be treated as a counterpart of the respiratory chain. In heterotrophic organisms, mitochondria transport electrons from hydrogenated compounds (sugars, lipids, proteins) onto oxygen molecules, synthesizing water in the process, whereas in the course of photosynthesis electrons released by breaking down water molecules are used as a means of reducing oxidised carbon compounds […]. In heterotrophic organisms the respiratory chain has a spontaneous quality (owing to its oxidative properties); however any reverse process requires energy to occur. In the case of photosynthesis this energy is provided by sunlight […] Hydrogen combustion and photosynthesis are the basic sources of energy in the living world. […] For an energy source to become useful, non-spontaneous reactions must be coupled to its operation, resulting in a thermodynamically unified system. Such coupling can be achieved by creating a coherent framework in which the spontaneous and non-spontaneous processes are linked, either physically or chemically, using a bridging component which affects them both. If the properties of both reactions are different, the bridging component must also enable suitable adaptation and mediation. […] Direct exploitation of the energy released via the hydrolysis of ATP is usually possible by introducing an active binding carrier mediating the energy transfer. […] Carriers are considered active as long as their concentration ensures a sufficient release of energy to synthesize a new chemical bond by way of a non-spontaneous process. Active carriers are relatively short-lived […] Any active carrier which performs its function outside of the active site must be sufficiently stable to avoid breaking up prior to participating in the synthesis reaction. Such mobile carriers are usually produced when the required synthesis consists of several stages or cannot be conducted in the active site of the enzyme for steric reasons. Contrary to ATP, active energy carriers are usually reaction-specific. […] Mobile energy carriers are usually formed as a result of hydrolysis of two high-energy ATP bonds. In many cases this is the minimum amount of energy required to power a reaction which synthesizes a single chemical bond. […] Expelling a mobile or unstable reaction component in order to increase the spontaneity of active energy carrier synthesis is a process which occurs in many biological mechanisms […] The action of active energy carriers may be compared to a snowball rolling down a hill. The descending snowball gains sufficient energy to traverse another, smaller mound, adjacent to its starting point. In our case, the smaller hill represents the final synthesis reaction […] Understanding the role of active carriers is essential for the study of metabolic processes.”

“A second category of processes, directly dependent on energy sources, involves structural reconfiguration of proteins, which can be further differentiated into low and high-energy reconfiguration. Low-energy reconfiguration occurs in proteins which form weak, easily reversible bonds with ligands. In such cases, structural changes are powered by the energy released in the creation of the complex. […] Important low-energy reconfiguration processes may occur in proteins which consist of subunits. Structural changes resulting from relative motion of subunits typically do not involve significant expenditures of energy. Of particular note are the so-called allosteric proteins […] whose rearrangement is driven by a weak and reversible bond between the protein and an oxygen molecule. Allosteric proteins are genetically conditioned to possess two stable structural configurations, easily swapped as a result of binding or releasing ligands. Thus, they tend to have two comparable energy minima (separated by a low threshold), each of which may be treated as a global minimum corresponding to the native form of the protein. Given such properties, even a weakly interacting ligand may trigger significant structural reconfiguration. This phenomenon is of critical importance to a variety of regulatory proteins. In many cases, however, the second potential minimum in which the protein may achieve relative stability is separated from the global minimum by a high threshold requiring a significant expenditure of energy to overcome. […] Contrary to low-energy reconfigurations, the relative difference in ligand concentrations is insufficient to cover the cost of a difficult structural change. Such processes are therefore coupled to highly exergonic reactions such as ATP hydrolysis. […]  The link between a biological process and an energy source does not have to be immediate. Indirect coupling occurs when the process is driven by relative changes in the concentration of reaction components. […] In general, high-energy reconfigurations exploit direct coupling mechanisms while indirect coupling is more typical of low-energy processes”.

“Muscle action requires a major expenditure of energy. There is a nonlinear dependence between the degree of physical exertion and the corresponding energy requirements. […] Training may improve the power and endurance of muscle tissue. Muscle fibers subjected to regular exertion may improve their glycogen storage capacity, ATP production rate, oxidative metabolism and the use of fatty acids as fuel.”

February 4, 2018 Posted by | Biology, Books, Chemistry, Genetics, Pharmacology, Physics | Leave a comment

Lakes (II)

(I have had some computer issues over the last couple of weeks, which explains my brief blogging hiatus; they should be resolved by now, and as I’m already starting to fall quite a bit behind in terms of my intended coverage of the books I’ve read this year, I hope to get rid of some of the backlog in the days to come.)

I have added some more observations from the second half of the book, as well as some related links, below.

“[R]ecycling of old plant material is especially important in lakes, and one way to appreciate its significance is to measure the concentration of CO2, an end product of decomposition, in the surface waters. This value is often above, sometimes well above, the value to be expected from equilibration of this gas with the overlying air, meaning that many lakes are net producers of CO2 and that they emit this greenhouse gas to the atmosphere. How can that be? […] Lakes are not sealed microcosms that function as stand-alone entities; on the contrary, they are embedded in a landscape and are intimately coupled to their terrestrial surroundings. Organic materials are produced within the lake by the phytoplankton, photosynthetic cells that are suspended in the water and that fix CO2, release oxygen (O2), and produce biomass at the base of the aquatic food web. Photosynthesis also takes place by attached algae (the periphyton) and submerged water plants (aquatic macrophytes) that occur at the edge of the lake where enough sunlight reaches the bottom to allow their growth. But additionally, lakes are the downstream recipients of terrestrial runoff from their catchments […]. These continuous inputs include not only water, but also subsidies of plant and soil organic carbon that are washed into the lake via streams, rivers, groundwater, and overland flows. […] The organic carbon entering lakes from the catchment is referred to as ‘allochthonous’, meaning coming from the outside, and it tends to be relatively old […] In contrast, much younger organic carbon is available […] as a result of recent photosynthesis by the phytoplankton and littoral communities; this carbon is called ‘autochthonous’, meaning that it is produced within the lake.”

“It used to be thought that most of the dissolved organic matter (DOM) entering lakes, especially the coloured fraction, was unreactive and that it would transit the lake to ultimately leave unchanged at the outflow. However, many experiments and field observations have shown that this coloured material can be partially broken down by sunlight. These photochemical reactions result in the production of CO2, and also the degradation of some of the organic polymers into smaller organic molecules; these in turn are used by bacteria and decomposed to CO2. […] Most of the bacterial species in lakes are decomposers that convert organic matter into mineral end products […] This sunlight-driven chemistry begins in the rivers, and continues in the surface waters of the lake. Additional chemical and microbial reactions in the soil also break down organic materials and release CO2 into the runoff and ground waters, further contributing to the high concentrations in lake water and its emission to the atmosphere. In algal-rich ‘eutrophic’ lakes there may be sufficient photosynthesis to cause the drawdown of CO2 to concentrations below equilibrium with the air, resulting in the reverse flux of this gas, from the atmosphere into the surface waters.”

“There is a precarious balance in lakes between oxygen gains and losses, despite the seemingly limitless quantities in the overlying atmosphere. This balance can sometimes tip to deficits that send a lake into oxygen bankruptcy, with the O2 mostly or even completely consumed. Waters that have O2 concentrations below 2mg/L are referred to as ‘hypoxic’, and will be avoided by most fish species, while waters in which there is a complete absence of oxygen are called ‘anoxic’ and are mostly the domain for specialized, hardy microbes. […] In many temperate lakes, mixing in spring and again in autumn are the critical periods of re-oxygenation from the overlying atmosphere. In summer, however, the thermocline greatly slows down that oxygen transfer from air to deep water, and in cooler climates, winter ice-cover acts as another barrier to oxygenation. In both of these seasons, the oxygen absorbed into the water during earlier periods of mixing may be rapidly consumed, leading to anoxic conditions. Part of the reason that lakes are continuously on the brink of anoxia is that only limited quantities of oxygen can be stored in water because of its low solubility. The concentration of oxygen in the air is 209 millilitres per litre […], but cold water in equilibrium with the atmosphere contains only 9ml/L […]. This scarcity of oxygen worsens with increasing temperature (from 4°C to 30°C the solubility of oxygen falls by 43 per cent), and it is compounded by faster rates of bacterial decomposition in warmer waters and thus a higher respiratory demand for oxygen.”

“Lake microbiomes play multiple roles in food webs as producers, parasites, and consumers, and as steps into the animal food chain […]. These diverse communities of microbes additionally hold centre stage in the vital recycling of elements within the lake ecosystem […]. These biogeochemical processes are not simply of academic interest; they totally alter the nutritional value, mobility, and even toxicity of elements. For example, sulfate is the most oxidized and also most abundant form of sulfur in natural waters, and it is the ion taken up by phytoplankton and aquatic plants to meet their biochemical needs for this element. These photosynthetic organisms reduce the sulfate to organic sulfur compounds, and once they die and decompose, bacteria convert these compounds to the rotten-egg smelling gas, H2S, which is toxic to most aquatic life. In anoxic waters and sediments, this effect is amplified by bacterial sulfate reducers that directly convert sulfate to H2S. Fortunately another group of bacteria, sulfur oxidizers, can use H2S as a chemical energy source, and in oxygenated waters they convert this reduced sulfur back to its benign, oxidized, sulfate form. […] [The] acid neutralizing capacity (or ‘alkalinity’) varies greatly among lakes. Many lakes in Europe, North America, and Asia have been dangerously shifted towards a low pH because they lacked sufficient carbonate to buffer the continuous input of acid rain that resulted from industrial pollution of the atmosphere. The acid conditions have negative effects on aquatic animals, including by causing a shift in aluminium to its more soluble and toxic form Al³⁺. Fortunately, these industrial emissions have been regulated and reduced in most of the developed world, although there are still legacy effects of acid rain that have resulted in a long-term depletion of carbonates and associated calcium in certain watersheds.”

“Rotifers, cladocerans, and copepods are all planktonic, that is, their distribution is strongly affected by currents and mixing processes in the lake. However, they are also swimmers, and can regulate their depth in the water. For the smallest, such as rotifers and copepods, this swimming ability is limited, but the larger zooplankton are able to swim over an impressive depth range during the twenty-four-hour ‘diel’ (i.e. light–dark) cycle. […] the cladocerans in Lake Geneva reside in the thermocline region and deep epilimnion during the day, and swim upwards by about 10m during the night, while cyclopoid copepods swim up by 60m, returning to the deep, dark, cold waters of the profundal zone during the day. Even greater distances up and down the water column are achieved by larger animals. The opossum shrimp, Mysis (up to 25mm in length) lives on the bottom of lakes during the day and in Lake Tahoe it swims hundreds of metres up into the surface waters, although not on moonlit nights. In Lake Baikal, one of the main zooplankton species is the endemic amphipod, Macrohectopus branickii, which grows up to 38mm in size. It can form dense swarms at 100–200m depth during the day, but the populations then disperse and rise to the upper waters during the night. These nocturnal migrations connect the pelagic surface waters with the profundal zone in lake ecosystems, and are thought to be an adaptation towards avoiding visual predators, especially pelagic fish, during the day, while accessing food in the surface waters under the cover of nightfall. […] Although certain fish species remain within specific zones of the lake, there are others that swim among zones and access multiple habitats. […] This type of fish migration means that the different parts of the lake ecosystem are ecologically connected. For many fish species, moving between habitats extends all the way to the ocean. Anadromous fish migrate out of the lake and swim to the sea each year, and although this movement comes at considerable energetic cost, it has the advantage of access to rich marine food sources, while allowing the young to be raised in the freshwater environment with less exposure to predators. […] With the converse migration pattern, catadromous fish live in freshwater and spawn in the sea.”

“Invasive species that are the most successful and do the most damage once they enter a lake have a number of features in common: fast growth rates, broad tolerances, the capacity to thrive under high population densities, and an ability to disperse and colonize that is enhanced by human activities. Zebra mussels (Dreissena polymorpha) get top marks in each of these categories, and they have proven to be a troublesome invader in many parts of the world. […] A single Zebra mussel can produce up to one million eggs over the course of a spawning season, and these hatch into readily dispersed larvae (‘veligers’), that are free-swimming for up to a month. The adults can achieve densities up to hundreds of thousands per square metre, and their prolific growth within water pipes has been a serious problem for the cooling systems of nuclear and thermal power stations, and for the intake pipes of drinking water plants. A single Zebra mussel can filter a litre a day, and they have the capacity to completely strip the water of bacteria and protists. In Lake Erie, the water clarity doubled and diatoms declined by 80–90 per cent soon after the invasion of Zebra mussels, with a concomitant decline in zooplankton, and potential impacts on planktivorous fish. The invasion of this species can shift a lake from dominance of the pelagic to the benthic food web, but at the expense of native unionid clams on the bottom that can become smothered in Zebra mussels. Their efficient filtering capacity may also cause a regime shift in primary producers, from turbid waters with high concentrations of phytoplankton to a clearer lake ecosystem state in which benthic water plants dominate.”
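
The filtering figures quoted above imply startling clearance rates. A quick calculation, taking a hypothetical bed of 100,000 mussels per square metre (within the 'hundreds of thousands' range mentioned in the quote):

```python
mussels_per_m2 = 100_000         # hypothetical density, within the quoted range
litres_per_mussel_per_day = 1.0  # filtering rate quoted in the text

litres_per_m2_per_day = mussels_per_m2 * litres_per_mussel_per_day
water_column_m = litres_per_m2_per_day / 1000  # 1 m^3 (= 1000 L) over 1 m^2

print(f"{litres_per_m2_per_day:,.0f} L/m2/day, i.e. a {water_column_m:.0f} m "
      "water column filtered every day")
# A dense bed can strip the entire overlying water column of most lakes daily,
# which is why invasions increase water clarity so dramatically.
```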

“One of the many distinguishing features of H2O is its unusually high dielectric constant, meaning that it is a strongly polar solvent with positive and negative charges that can stabilize ions brought into solution. This dielectric property results from the asymmetrical electron cloud over the molecule […] and it gives liquid water the ability to leach minerals from rocks and soils as it passes through the ground, and to maintain these salts in solution, even at high concentrations. Collectively, these dissolved minerals produce the salinity of the water […] Sea water is around 35ppt, and its salinity is mainly due to the positively charged ions sodium (Na⁺), potassium (K⁺), magnesium (Mg²⁺), and calcium (Ca²⁺), and the negatively charged ions chloride (Cl⁻), sulfate (SO₄²⁻), and carbonate (CO₃²⁻). These solutes, collectively called the ‘major ions’, conduct electricity, and therefore a simple way to track salinity is to measure the electrical conductance of the water between two electrodes set a known distance apart. Lake and ocean scientists now routinely take profiles of salinity and temperature with a CTD: a submersible instrument that records conductance, temperature, and depth many times per second as it is lowered on a rope or wire down the water column. Conductance is measured in siemens (or microsiemens (µS), given the low salt concentrations in freshwater lakes), and adjusted to a standard temperature of 25°C to give specific conductivity in µS/cm. All freshwater lakes contain dissolved minerals, with specific conductivities in the range 50–500µS/cm, while salt water lakes have values that can exceed sea water (about 50,000µS/cm), and are the habitats for extreme microbes”.

“The World Register of Dams currently lists 58,519 ‘large dams’, defined as those with a dam wall of 15m or higher; these collectively store 16,120km3 of water, equivalent to 213 years of flow of Niagara Falls on the USA–Canada border. […] Around a hundred large dam projects are in advanced planning or construction in Africa […]. More than 300 dams are planned or under construction in the Amazon Basin of South America […]. Reservoirs have a number of distinguishing features relative to natural lakes. First, the shape (‘morphometry’) of their basins is rarely circular or oval, but instead is often dendritic, with a tree-like main stem and branches ramifying out into the submerged river valleys. Second, reservoirs typically have a high catchment area to lake area ratio, again reflecting their riverine origins. For natural lakes, this ratio is relatively low […] These proportionately large catchments mean that reservoirs have short water residence times, and water quality is much better than might be the case in the absence of this rapid flushing. Nonetheless, noxious algal blooms can develop and accumulate in isolated bays and side-arms, and downstream next to the dam itself. Reservoirs typically experience water level fluctuations that are much larger and more rapid than in natural lakes, and this limits the development of littoral plants and animals. Another distinguishing feature of reservoirs is that they often show a longitudinal gradient of conditions. Upstream, the river section contains water that is flowing, turbulent, and well mixed; this then passes through a transition zone into the lake section up to the dam, which is often the deepest part of the lake and may be stratified and clearer due to decantation of land-derived particles. In some reservoirs, the water outflow is situated near the base of the dam within the hypolimnion, and this reduces the extent of oxygen depletion and nutrient build-up, while also providing cool water for fish and other animal communities below the dam. There is increasing attention being given to careful regulation of the timing and magnitude of dam outflows to maintain these downstream ecosystems. […] The downstream effects of dams continue out into the sea, with the retention of sediments and nutrients in the reservoir leaving less available for export to marine food webs. This reduction can also lead to changes in shorelines, with a retreat of the coastal delta and intrusion of seawater because natural erosion processes can no longer be offset by resupply of sediments from upstream.”
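
The Niagara comparison is easy to verify, given an average flow over the Falls of roughly 2,400 m³/s (my figure; the book states only the result):

```python
storage_km3 = 16_120      # total storage of the world's 'large dams', from the text
niagara_m3_per_s = 2_400  # approximate mean flow of Niagara Falls (assumed)

seconds_per_year = 365.25 * 24 * 3600
niagara_km3_per_year = niagara_m3_per_s * seconds_per_year / 1e9  # m^3 -> km^3

print(f"Niagara flow: ~{niagara_km3_per_year:.0f} km3/yr")
print(f"storage / flow: ~{storage_km3 / niagara_km3_per_year:.0f} years")
# ~76 km3/yr and ~213 years, matching the figures quoted above.
```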

“One of the most serious threats facing lakes throughout the world is the proliferation of algae and water plants caused by eutrophication, the overfertilization of waters with nutrients from human activities. […] Nutrient enrichment occurs both from ‘point sources’ of effluent discharged via pipes into the receiving waters, and ‘nonpoint sources’ such as the runoff from roads and parking areas, agricultural lands, septic tank drainage fields, and terrain cleared of its nutrient- and water-absorbing vegetation. By the 1970s, even many of the world’s larger lakes had begun to show worrying signs of deterioration from these sources of increasing enrichment. […] A sharp drop in water clarity is often among the first signs of eutrophication, although in forested areas this effect may be masked for many years by the greater absorption of light by the coloured organic materials that are dissolved within the lake water. A drop in oxygen levels in the bottom waters during stratification is another telltale indicator of eutrophication, with the eventual fall to oxygen-free (anoxic) conditions in these lower strata of the lake. However, the most striking impact with greatest effect on ecosystem services is the production of harmful algal blooms (HABs), specifically by cyanobacteria. In eutrophic, temperate latitude waters, four genera of bloom-forming cyanobacteria are the usual offenders […]. These may occur alone or in combination, and although each has its own idiosyncratic size, shape, and lifestyle, they have a number of impressive biological features in common. First and foremost, their cells are typically full of hydrophobic protein cases that exclude water and trap gases. These honeycombs of gas-filled chambers, called ‘gas vesicles’, reduce the density of the cells, allowing them to float up to the surface where there is light available for growth. Put a drop of water from an algal bloom under a microscope and it will be immediately apparent that the individual cells are extremely small, and that the bloom itself is composed of billions of cells per litre of lake water.”

“During the day, the [algal] cells capture sunlight and produce sugars by photosynthesis; this increases their density, eventually to the point where they are heavier than the surrounding water and sink to more nutrient-rich conditions at depth in the water column or at the sediment surface. These sugars are depleted by cellular respiration, and this loss of ballast eventually results in cells becoming less dense than water and floating again towards the surface. This alternation of sinking and floating can result in large fluctuations in surface blooms over the twenty-four-hour cycle. The accumulation of bloom-forming cyanobacteria at the surface gives rise to surface scums that then can be blown into bays and washed up onto beaches. These dense populations of colonies in the water column, and especially at the surface, can shade out bottom-dwelling water plants, as well as greatly reduce the amount of light for other phytoplankton species. The resultant ‘cyanobacterial dominance’ and loss of algal species diversity has negative implications for the aquatic food web […] This negative impact on the food web may be compounded by the final collapse of the bloom and its decomposition, resulting in a major drawdown of oxygen. […] Bloom-forming cyanobacteria are especially troublesome for the management of drinking water supplies. First, there is the overproduction of biomass, which results in a massive load of algal particles that can exceed the filtration capacity of a water treatment plant […]. Second, there is an impact on the taste of the water. […] The third and most serious impact of cyanobacteria is that some of their secondary compounds are highly toxic. […] phosphorus is the key nutrient limiting bloom development, and efforts to preserve and rehabilitate freshwaters should pay specific attention to controlling the input of phosphorus via point and nonpoint discharges to lakes.”

Ultramicrobacteria.
The viral shunt in marine foodwebs.
Proteobacteria. Alphaproteobacteria. Betaproteobacteria. Gammaproteobacteria.
Mixotroph.
Carbon cycle. Nitrogen cycle. Ammonification. Anammox. Comammox.
Methanotroph.
Phosphorus cycle.
Littoral zone. Limnetic zone. Profundal zone. Benthic zone. Benthos.
Phytoplankton. Diatom. Picoeukaryote. Flagellates. Cyanobacteria.
Trophic state (-index).
Amphipoda. Rotifer. Cladocera. Copepod. Daphnia.
Redfield ratio.
δ15N.
Thermistor.
Extremophile. Halophile. Psychrophile. Acidophile.
Caspian Sea. Endorheic basin. Mono Lake.
Alpine lake.
Meromictic lake.
Subglacial lake. Lake Vostok.
Thermus aquaticus. Taq polymerase.
Lake Monoun.
Microcystin. Anatoxin-a.

February 2, 2018 Posted by | Biology, Books, Botany, Chemistry, Ecology, Engineering, Zoology | Leave a comment

Books 2018

This is a list of books I’ve read this year. As usual ‘f’ = fiction, ‘m’ = miscellaneous, ‘nf’ = non-fiction; the numbers in parentheses indicate my goodreads ratings of the books (from 1-5).

I’ll try to keep updating the post throughout the year.

i. Complexity: A Very Short Introduction (nf. Oxford University Press). Blog coverage here.

ii. Rivers: A Very Short Introduction (1, nf. Oxford University Press). Short goodreads review here. Blog coverage here and here.

iii. Something for the Pain: Compassion and Burnout in the ER (2, m. W. W. Norton & Company/Paul Austin).

iv. Mountains: A Very Short Introduction (1, nf. Oxford University Press). Short goodreads review here.

v. Water: A Very Short Introduction (4, nf. Oxford University Press). Goodreads review here.

vi. Assassin’s Quest (3, f). Robin Hobb. Goodreads review here.

vii. Oxford Handbook of Endocrinology and Diabetes (3rd edition) (5, nf. Oxford University Press). Goodreads review here. Blog coverage here, here, here, and here. I added this book to my list of favourite books on goodreads. Some of the specific chapters included are ‘book-equivalents’; this book is very long and takes a lot of work.

viii. Desolation Island (3, f). Patrick O’Brian.

ix. The Fortune of War (4, f). Patrick O’Brian.

x. Lakes: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here and here.

xi. The Surgeon’s Mate (4, f). Patrick O’Brian. Short goodreads review here.

xii. Domestication of Plants in the Old World: The Origin and Spread of Domesticated Plants in South-West Asia, Europe, and the Mediterranean Basin (5, nf. Oxford University Press). Goodreads review here. I added this book to my list of favourite books on goodreads.

xiii. The Ionian Mission (4, f). Patrick O’Brian.

xiv. Systems Biology: Functional Strategies of Living Organisms (4, nf. Springer). Blog coverage here and here.

xv. Treason’s Harbour (4, f). Patrick O’Brian.

xvi. Peripheral Neuropathy – A New Insight into the Mechanism, Evaluation and Management of a Complex Disorder (3, nf. InTech). Blog coverage here and here.

xvii. The portable door (5, f). Tom Holt. Goodreads review here.

xviii. Prevention of Late-Life Depression: Current Clinical Challenges and Priorities (2, nf. Humana Press). Blog coverage here.

xix. In your dreams (4, f). Tom Holt.

February 2, 2018 Posted by | Books, Personal | Leave a comment

Lakes (I)

“The aim of this book is to provide a condensed overview of scientific knowledge about lakes, their functioning as ecosystems that we are part of and depend upon, and their responses to environmental change. […] Each chapter briefly introduces concepts about the physical, chemical, and biological nature of lakes, with emphasis on how these aspects are connected, the relationships with human needs and impacts, and the implications of our changing global environment.”

I’m currently reading this book and I really like it so far. I have added some observations from the first half of the book and some coverage-related links below.

“High-resolution satellites can readily detect lakes above 0.002 square kilometres (km2) in area; that’s equivalent to a circular waterbody some 50m across. Using this criterion, researchers estimate from satellite images that the world contains 117 million lakes, with a total surface area amounting to 5 million km2. […] continuous accumulation of materials on the lake floor, both from inflows and from the production of organic matter within the lake, means that lakes are ephemeral features of the landscape, and from the moment of their creation onwards, they begin to fill in and gradually disappear. The world’s deepest and most ancient freshwater ecosystem, Lake Baikal in Russia (Siberia), is a compelling example: it has a maximum depth of 1,642m, but its waters overlie a much deeper basin that over the twenty-five million years of its geological history has become filled with some 7,000m of sediments. Lakes are created in a great variety of ways: tectonic basins formed by movements in the Earth’s crust, the scouring and residual ice effects of glaciers, as well as fluvial, volcanic, riverine, meteorite impacts, and many other processes, including human construction of ponds and reservoirs. Tectonic basins may result from a single fault […] or from a series of intersecting fault lines. […] The oldest and deepest lakes in the world are generally of tectonic origin, and their persistence through time has allowed the evolution of endemic plants and animals; that is, species that are found only at those sites.”

“In terms of total numbers, most of the world’s lakes […] owe their origins to glaciers that during the last ice age gouged out basins in the rock and deepened river valleys. […] As the glaciers retreated, their terminal moraines (accumulations of gravel and sediments) created dams in the landscape, raising water levels or producing new lakes. […] During glacial retreat in many areas of the world, large blocks of glacial ice broke off and were left behind in the moraines. These subsequently melted out to produce basins that filled with water, called ‘kettle’ or ‘pothole’ lakes. Such waterbodies are well known across the plains of North America and Eurasia. […] The most violent of lake births are the result of volcanoes. The craters left behind after a volcanic eruption can fill with water to form small, often circular-shaped and acidic lakes. […] Much larger lakes are formed by the collapse of a magma chamber after eruption to produce caldera lakes. […] Craters formed by meteorite impacts also provide basins for lakes, and have proved to be of great scientific as well as human interest. […] There was a time when limnologists paid little attention to small lakes and ponds, but this has changed with the realization that although such waterbodies are modest in size, they are extremely abundant throughout the world and make up a large total surface area. Furthermore, these smaller waterbodies often have high rates of chemical activity such as greenhouse gas production and nutrient cycling, and they are major habitats for diverse plants and animals”.

“For Forel, the science of lakes could be subdivided into different disciplines and subjects, all of which continue to occupy the attention of freshwater scientists today […]. First, the physical environment of a lake includes its geological origins and setting, the water balance and exchange of heat with the atmosphere, as well as the penetration of light, the changes in temperature with depth, and the waves, currents, and mixing processes that collectively determine the movement of water. Second, the chemical environment is important because lake waters contain a great variety of dissolved materials (‘solutes’) and particles that play essential roles in the functioning of the ecosystem. Third, the biological features of a lake include not only the individual species of plants, microbes, and animals, but also their organization into food webs, and the distribution and functioning of these communities across the bottom of the lake and in the overlying water.”

“In the simplest hydrological terms, lakes can be thought of as tanks of water in the landscape that are continuously topped up by their inflowing rivers, while spilling excess water via their outflow […]. Based on this model, we can pose the interesting question: how long does the average water molecule stay in the lake before leaving at the outflow? This value is referred to as the water residence time, and it can be simply calculated as the total volume of the lake divided by the water discharge at the outlet. This lake parameter is also referred to as the ‘flushing time’ (or ‘flushing rate’, if expressed as a proportion of the lake volume discharged per unit of time) because it provides an estimate of how fast mineral salts and pollutants can be flushed out of the lake basin. In general, lakes with a short flushing time are more resilient to the impacts of human activities in their catchments […] Each lake has its own particular combination of catchment size, volume, and climate, and this translates into a water residence time that varies enormously among lakes [from perhaps a month to more than a thousand years, US] […] A more accurate approach towards calculating the water residence time is to consider the question: if the lake were to be pumped dry, how long would it take to fill it up again? For most lakes, this will give a similar value to the outflow calculation, but for lakes where evaporation is a major part of the water balance, the residence time will be much shorter.”
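
The flushing-time arithmetic is simple enough to sketch; the lake dimensions below are invented for illustration:

```python
def residence_time_years(volume_km3, outflow_m3_per_s):
    """Water residence time = lake volume / discharge at the outlet."""
    volume_m3 = volume_km3 * 1e9
    seconds_per_year = 365.25 * 24 * 3600
    return volume_m3 / (outflow_m3_per_s * seconds_per_year)

# A hypothetical mid-sized lake: 10 km^3 of water drained by a 50 m^3/s outlet.
t = residence_time_years(10, 50)
print(f"residence time ~ {t:.1f} years")          # ~6.3 years
print(f"flushing rate ~ {1 / t:.2f} volumes/yr")  # the reciprocal
```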

“Each year, mineral and organic particles are deposited by wind on the lake surface and are washed in from the catchment, while organic matter is produced within the lake by aquatic plants and plankton. There is a continuous rain of this material downwards, ultimately accumulating as an annual layer of sediment on the lake floor. These lake sediments are storehouses of information about past changes in the surrounding catchment, and they provide a long-term memory of how the limnology of a lake has responded to those changes. The analysis of these natural archives is called ‘palaeolimnology’ (or ‘palaeoceanography’ for marine studies), and this branch of the aquatic sciences has yielded enormous insights into how lakes change through time, including the onset, effects, and abatement of pollution; changes in vegetation both within and outside the lake; and alterations in regional and global climate.”

“Sampling for palaeolimnological analysis is typically undertaken in the deepest waters to provide a more integrated and complete picture of the lake basin history. This is also usually the part of the lake where sediment accumulation has been greatest, and where the disrupting activities of bottom-dwelling animals (‘bioturbation’ of the sediments) may be reduced or absent. […] Some of the most informative microfossils to be found in lake sediments are diatoms, an algal group that has cell walls (‘frustules’) made of silica glass that resist decomposition. Each lake typically contains dozens to hundreds of different diatom species, each with its own characteristic set of environmental preferences […]. A widely adopted approach is to sample many lakes and establish a statistical relationship or ‘transfer function’ between diatom species composition (often by analysis of surface sediments) and a lake water variable such as temperature, pH, phosphorus, or dissolved organic carbon. This quantitative species–environment relationship can then be applied to the fossilized diatom species assemblage in each stratum of a sediment core from a lake in the same region, and in this way the physical and chemical fluctuations that the lake has experienced in the past can be reconstructed or ‘hindcast’ year-by-year. Other fossil indicators of past environmental change include algal pigments, DNA of algae and bacteria including toxic bloom species, and the remains of aquatic animals such as ostracods, cladocerans, and larval insects.”
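
At its simplest, a transfer function of the kind described is a regression fitted across a training set of lakes and then applied to fossil assemblages downcore. The sketch below uses fabricated numbers, with ordinary least squares standing in for the more elaborate statistical methods palaeolimnologists actually use:

```python
import numpy as np

# Training set: relative abundance of a hypothetical acid-tolerant diatom taxon
# in the surface sediments of six lakes, paired with each lake's measured pH.
abundance = np.array([0.05, 0.15, 0.30, 0.50, 0.70, 0.85])
lake_ph   = np.array([7.8,  7.3,  6.9,  6.3,  5.8,  5.4])

# Fit the species-environment relationship (here: a simple linear model).
slope, intercept = np.polyfit(abundance, lake_ph, 1)

# 'Hindcast' pH from the taxon's abundance in dated strata of a sediment core.
core_abundance = np.array([0.10, 0.25, 0.60, 0.80])    # oldest -> youngest
print(np.round(slope * core_abundance + intercept, 2))  # reconstructed pH history
```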

“In lake and ocean studies, the penetration of sunlight into the water can be […] precisely measured with an underwater light meter (submersible radiometer), and such measurements always show that the decline with depth follows a sharp curve rather than a straight line […]. This is because the fate of sunlight streaming downwards in water is dictated by the probability of the photons being absorbed or deflected out of the light path; for example, a 50 per cent probability of photons being lost from the light beam by these processes per metre depth in a lake would result in sunlight values dropping from 100 per cent at the surface to 50 per cent at 1m, 25 per cent at 2m, 12.5 per cent at 3m, and so on. The resulting exponential curve means that for all but the clearest of lakes, there is only enough solar energy for plants, including photosynthetic cells in the plankton (phytoplankton), in the upper part of the water column. […] The depth limit for underwater photosynthesis or primary production is known as the ‘compensation depth‘. This is the depth at which carbon fixed by photosynthesis exactly balances the carbon lost by cellular respiration, so the overall production of new biomass (net primary production) is zero. This depth often corresponds to an underwater light level of 1 per cent of the sunlight just beneath the water surface […] The production of biomass by photosynthesis takes place at all depths above this level, and this zone is referred to as the ‘photic’ zone. […] biological processes in [the] ‘aphotic zone’ are mostly limited to feeding and decomposition. A Secchi disk measurement can be used as a rough guide to the extent of the photic zone: in general, the 1 per cent light level is about twice the Secchi depth.”
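
The exponential decline described here is usually written I(z) = I₀·e^(−Kd·z), with Kd the attenuation coefficient; the 50-per-cent-per-metre example in the quote corresponds to Kd = ln 2 ≈ 0.69 per metre. A minimal sketch:

```python
import math

kd = math.log(2)  # attenuation coefficient for 50% light loss per metre

for z in range(7):
    pct = 100 * math.exp(-kd * z)  # percentage of surface light at depth z
    print(f"{z} m: {pct:5.1f} %")  # 100, 50, 25, 12.5, ... as in the text

# Photic zone: depth at which light falls to 1% of the surface value.
photic_depth = math.log(100) / kd
print(f"photic zone ~ {photic_depth:.1f} m")
# With photic depth ~2x the Secchi depth, this lake's Secchi reading
# would be about 3.3 m.
```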

“[W]ater colour is now used in […] many powerful ways to track changes in water quality and other properties of lakes, rivers, estuaries, and the ocean. […] Lakes have different colours, hues, and brightness levels as a result of the materials that are dissolved and suspended within them. The purest of lakes are deep blue because the water molecules themselves absorb light in the green and, to a greater extent, red end of the spectrum; they scatter the remaining blue photons in all directions, mostly downwards but also back towards our eyes. […] Algae in the water typically cause it to be green and turbid because their suspended cells and colonies contain chlorophyll and other light-capturing molecules that absorb strongly in the blue and red wavebands, but not green. However, there are some notable exceptions. Noxious algal blooms dominated by cyanobacteria are blue-green (cyan) in colour, caused by their blue-coloured protein phycocyanin, in addition to chlorophyll.”

“[A]t the largest dimension, at the scale of the entire lake, there has to be a net flow from the inflowing rivers to the outflow, and […] from this landscape perspective, lakes might be thought of as enlarged rivers. Of course, this riverine flow is constantly disrupted by wind-induced movements of the water. When the wind blows across the surface, it drags the surface water with it to generate a downwind flow, and this has to be balanced by a return movement of water at depth. […] In large lakes, the rotation of the Earth has plenty of time to exert its weak effect as the water moves from one side of the lake to the other. As a result, the surface water no longer flows in a straight line, but rather is directed into two or more circular patterns or gyres that can move nearshore water masses rapidly into the centre of the lake and vice-versa. Gyres can therefore be of great consequence […] Unrelated to the Coriolis Effect, the interaction between wind-induced currents and the shoreline can also cause water to flow in circular, individual gyres, even in smaller lakes. […] At a much smaller scale, the blowing of wind across a lake can give rise to downward spiral motions in the water, called ‘Langmuir cells’. […] These circulation features are commonly observed in lakes, where the spirals progressing in the general direction of the wind concentrate foam (on days of white-cap waves) or glossy, oily materials (on less windy days) into regularly spaced lines that are parallel to the direction of the wind. […] Density currents must also be included in this brief discussion of water movement […] Cold river water entering a warm lake will be denser than its surroundings and therefore sinks to the bottom, where it may continue to flow for considerable distances. […] Density currents contribute greatly to inshore-offshore exchanges of water, with potential effects on primary productivity, deep-water oxygenation, and the dispersion of pollutants.”

Links:

Limnology.
Drainage basin.
Lake Geneva. Lake Malawi. Lake Tanganyika. Lake Victoria. Lake Biwa. Lake Titicaca.
English Lake District.
Proglacial lake. Lake Agassiz. Lake Ojibway.
Lake Taupo.
Manicouagan Reservoir.
Subglacial lake.
Thermokarst (-lake).
Bathymetry. Bathymetric chart. Hypsographic curve.
Várzea forest.
Lake Chad.
Colored dissolved organic matter.
H2O Temperature-density relationship. Thermocline. Epilimnion. Hypolimnion. Monomictic lake. Dimictic lake. Lake stratification.
Capillary wave. Gravity wave. Seiche. Kelvin wave. Poincaré wave.
Benthic boundary layer.
Kelvin–Helmholtz instability.

January 22, 2018 Posted by | Biology, Books, Botany, Chemistry, Geology, Paleontology, Physics

Magnus Carlsen playing bullet on Lichess

This guy’s ‘pretty good’. Here’s an unrelated video of Svidler and Carlsen analyzing their game at Wijk aan Zee:

(There should be more videos like this! This stuff’s awesome!)

The first round of the Pro Chess League was played earlier this week. This is a pretty good time to be alive if you like chess.

January 20, 2018 Posted by | Chess

Words

The great majority of the words included below I encountered while reading Gene Wolfe’s The Shadow of the Torturer. The rest I encountered while reading The Oxford Handbook of Endocrinology and Diabetes as well as various ‘A Short Introduction to…’-books.

Coloboma. Paresis. Exstrophy. Transhumance. Platybasia. Introitus. Ichthyology. Atresia. Nival. Dormer. Tussock. Mullion. Tholus. Delectation. Carnelian. Camisa. Soubrette. Cacogenic. Anacrisis. Sedge.

Barbican. Gallipot. Stele. Badelaire. Chalcedony. Helve. Armiger. Caracara. Saros. Blazon. Presentment. Refectory. Citrine. Eidolon. Obverse. Glaive. Inutile. Hypostase. Leman. Pursuivant.

Cabochon. Palfrenier. Limpid. Burse. Thurible. Anacreontic. Pardine. Nigrescent. Chrism. Pageantry. Capybara. Tinsel. Rebec. Shewbread. Excruciation. Cataphract. Sateen. Dhow. Rheostat. Caique.

Baldric. Paterissa. Bartizan. Peltast. Dray. Lochage. Miter. Discommode. Lambrequin. Dross. Proscenium. Jelab. Cymar/simar. Vicuna. Monomachy. Champian. Dulcimer. Lamia. Nidorous. Mensal.

January 19, 2018 Posted by | Books, Language

Endocrinology (part 3 – adrenal glands)

Some observations from chapter 3 below.

“The normal adrenal gland weighs 4-5g. The cortex represents 90% of the normal gland and surrounds the medulla. […] Glucocorticoid (cortisol […]) production occurs from the zona fasciculata, and adrenal androgens arise from the zona reticularis. Both of these are under the control of ACTH [see also my previous post about the book – US], which regulates both steroid synthesis and also adrenocortical growth. […] Mineralocorticoid (aldosterone […]) synthesis occurs in zona glomerulosa, predominantly under the control of the renin-angiotensin system […], although ACTH also contributes to its regulation. […] The adrenal gland […] also produces sex steroids in the form of dehydroepiandrostenedione (DHEA) and androstenedione. The synthetic pathway is under the control of ACTH. Urinary steroid profiling provides quantitative information on the biosynthetic and catabolic pathways. […] CT is the most widely used modality for imaging the adrenal glands. […] MRI can also reliably detect adrenal masses >5-10mm in diameter and, in some circumstances, provides additional information to CT […] PET can be useful in locating tumours and metastases. […] Adrenal vein sampling (AVS) […] can be useful to lateralize an adenoma or to differentiate an adenoma from bilateral hyperplasia. […] AVS is of particular value in lateralizing small aldosterone-producing adenomas that cannot easily be visualized on CT or MRI. […] The procedure should only be undertaken in patients in whom surgery is feasible and desired […] [and] should be carried out in specialist centres only; centres with <20 procedures per year have been shown to have poor success rates”.

“The majority of cases of mineralocorticoid excess are due to excess aldosterone production, […] typically associated with hypertension and hypokalaemia. *Primary hyperaldosteronism is a disorder of autonomous aldosterone hypersecretion with suppressed renin levels. *Secondary hyperaldosteronism occurs when aldosterone hypersecretion occurs 2° [secondary, US] to elevated circulating renin levels. This is typical of heart failure, cirrhosis, or nephrotic syndrome but can also be due to renal artery stenosis and, occasionally, a very rare renin-producing tumour (reninoma). […] Primary hyperaldosteronism is present in around 10% of hypertensive patients. It is the most prevalent form of secondary hypertension. […] Aldosterone causes renal sodium retention and potassium loss. This results in expansion of body sodium content, leading to suppression of renal renin synthesis. The direct action of aldosterone on the distal nephron causes sodium retention and loss of hydrogen and potassium ions, resulting in a hypokalaemic alkalosis, although serum potassium […] may be normal in up to 50% of cases. Aldosterone has pathophysiological effects on a range of other tissues, causing cardiac fibrosis, vascular endothelial dysfunction, and nephrosclerosis. […] hypertension […] is often resistant to conventional therapy. […] Hypokalaemia is usually asymptomatic. […] Occasionally, the clinical syndrome of hyperaldosteronism is not associated with excess aldosterone. […] These conditions are rare.”

“Bilateral adrenal hyperplasia [makes up] 60% [of cases of primary hyperaldosteronism]. […] Conn’s syndrome (aldosterone-producing adrenal adenoma) [makes up] 35%. […] The pathophysiology of bilateral adrenal hyperplasia is not understood, and it is possible that it represents an extreme end of the spectrum of low renin essential hypertension. […] Aldosterone-producing carcinoma[s] [are] [r]are and usually associated with excessive secretion of other corticosteroids (cortisol, androgen, oestrogen). […] Indications [for screening include:] *Patients resistant to conventional antihypertensive medication (i.e. not controlled on three agents). *Hypertension associated with hypokalaemia […] *Hypertension developing before age of 40 years. […] Confirmation of autonomous aldosterone production is made by demonstrating failure to suppress aldosterone in face of sodium/volume loading. […] A number of tests have been described that are said to differentiate between the various subtypes of 1° [primary, US] aldosteronism […]. However, none of these are sufficiently specific to influence management decisions”.

“Laparoscopic adrenalectomy is the treatment of choice for aldosterone-secreting adenomas […] and laparoscopic adrenalectomy […] has become the procedure of choice for removal of most adrenal tumours. *Hypertension is cured in about 70%. *If it persists […], it is more amenable to medical treatment. *Overall, 50% become normotensive in 1 month and 70% within 1 year. […] Medical therapy remains an option for patients with bilateral disease and those with a solitary adrenal adenoma who are unlikely to be cured by surgery, who are unfit for operation, or who express a preference for medical management. *The mineralocorticoid receptor antagonist spironolactone […] has been used successfully for many years to treat hypertension and hypokalaemia associated with bilateral adrenal hyperplasia […] Side effects are common – particularly gynaecomastia and impotence in ♂, menstrual irregularities in ♀, and GI effects. […] Eplerenone […] is a mineralocorticoid receptor antagonist without antiandrogen effects and hence greater selectivity and fewer side effects than spironolactone. *Alternative drugs include the potassium-sparing diuretics amiloride and triamterene.”

“Cushing’s syndrome results from chronic excess cortisol [see also my second post in this series] […] The causes may be classified as ACTH-dependent and ACTH-independent. […] ACTH-independent Cushing’s syndrome […] is due to adrenal tumours (benign and malignant), and is responsible for 10-15% of cases of Cushing’s syndrome. […] Benign adrenocortical adenomas (ACA) are usually encapsulated and <4cm in diameter. They are usually associated with pure glucocorticoid excess. *Adrenocortical carcinomas (ACC) are usually >6cm in diameter, […] and are not infrequently associated with local invasion and metastases at the time of diagnosis. Adrenal carcinomas are characteristically associated with the excess secretion of several hormones; most frequently found is the combination of cortisol and androgen (precursors) […] ACTH-dependent Cushing’s results in bilateral adrenal hyperplasia, thus one has to firmly differentiate between ACTH-dependent and independent causes of Cushing’s before assuming bilateral adrenal hyperplasia as the primary cause of disease. […] It is important to note that, in patients with adrenal carcinoma, there may also be features related to excessive androgen production in ♀ and also a relatively more rapid time course of development of the syndrome. […] Patients with ACTH-independent Cushing’s syndrome do not suppress cortisol […] on high-dose dexamethasone testing and fail to show a rise in cortisol and ACTH following administration of CRH. […] ACTH-independent causes are adrenal in origin, and the mainstay of further investigation is adrenal imaging by CT”.

“Adrenal adenomas, which are successfully treated with surgery, have a good prognosis, and recurrence is unlikely. […] Bilateral adrenalectomy [in the context of bilateral adrenal hyperplasia] is curative. Lifelong glucocorticoid and mineralocorticoid treatment is [however] required. […] The prognosis for adrenal carcinoma is very poor despite surgery. Reports suggest a 5-year survival of 22% and median survival time of 14 months […] Treatment of adrenocortical carcinoma (ACC) should be carried out in a specialist centre, with expert surgeons, oncologists, and endocrinologists with extensive experience in treating ACC. This improves survival.”

“Adrenal insufficiency [AI, US] is defined by the lack of cortisol, i.e. glucocorticoid deficiency, and may be due to destruction of the adrenal cortex (1°, Addison’s disease and congenital adrenal hyperplasia (CAH) […] or due to disordered pituitary and hypothalamic function (2°). […] *Permanent adrenal insufficiency is found in 5 in 10,000 population. *The most frequent cause is hypothalamic-pituitary damage, which is the cause of AI in 60% of affected patients. *The remaining 40% of cases are due to primary failure of the adrenal to synthesize cortisol, with almost equal prevalence of Addison’s disease (mostly of autoimmune origin, prevalence 0.9-1.4 in 10,000) and congenital adrenal hyperplasia (0.7-1.0 in 10,000). *2° adrenal insufficiency due to suppression of pituitary-hypothalamic function by exogenously administered, supraphysiological glucocorticoid doses for treatment of, for example, COPD or rheumatoid arthritis, is much more common (50-200 in 10,000 population). However, adrenal function in these patients can recover”.

“[In primary AI] [a]drenal gland destruction or dysfunction occurs due to a disease process which usually involves all three zones of the adrenal cortex, resulting in inadequate glucocorticoid, mineralocorticoid, and adrenal androgen precursor secretion. The manifestations of insufficiency do not usually appear until at least 90% of the gland has been destroyed and are usually gradual in onset […] Acute adrenal insufficiency may occur in the context of acute septicaemia […] Mineralocorticoid deficiency leads to reduced sodium retention, hyponatraemia, and hypotension […] Androgen deficiency presents in ♀ with reduced axillary and pubic hair and reduced libido. (Testicular production of androgens is more important in ♂). [In secondary AI] [i]nadequate ACTH results in deficient cortisol production (and ↓ androgens in ♀). […] Mineralocorticoid secretion remains normal […] The onset is usually gradual, with partial ACTH deficiency resulting in reduced response to stress. […] Lack of stimulation of skin MC1R due to ACTH deficiency results in pale skin appearance. […] [In 1° adrenal insufficiency] hyponatraemia is present in 90% and hyperkalaemia in 65%. […] Undetectable serum cortisol is diagnostic […], but the basal cortisol is often in the normal range. A cortisol >550nmol/L precludes the diagnosis. At times of acute stress, an inappropriately low cortisol is very suggestive of the diagnosis.”

“Autoimmune adrenalitis[:] Clinical features[:] *Anorexia and weight loss (>90%). *Tiredness. *Weakness – generalized, no particular muscle groups. […] Dizziness and postural hypotension. *GI symptoms – nausea and vomiting, abdominal pain, diarrhoea. *Arthralgia and myalgia. […] *Mediated by humoral and cell-mediated immune mechanisms. Autoimmune insufficiency associated with polyglandular autoimmune syndrome is more common in ♀ (70%). *Adrenal cortex antibodies are present in the majority of patients at diagnosis, and […] they are still found in approximately 70% of patients 10 years later. Up to 20% of patients/year with [positive] antibodies develop adrenal insufficiency. […] *Antiadrenal antibodies are found in <2% of patients with other autoimmune endocrine disease (Hashimoto’s thyroiditis, diabetes mellitus, autoimmune hypothyroidism, hypoparathyroidism, pernicious anaemia). […] antibodies to other endocrine glands are commonly found in patients with autoimmune adrenal insufficiency […] However, the presence of antibodies does not predict subsequent manifestation of organ-specific autoimmunity. […] Patients with type 1 diabetes mellitus and autoimmune thyroid disease only rarely develop autoimmune adrenal insufficiency. Approximately 60% of patients with Addison’s disease have other autoimmune or endocrine disorders. […] The adrenals are small and atrophic in chronic autoimmune adrenalitis.”

“Autoimmune polyglandular syndrome (APS) type 1[:] *Also known as autoimmune polyendocrinopathy, candidiasis, and ectodermal dystrophy (APECED). […] [C]hildhood onset. *Chronic mucocutaneous candidiasis. *Hypoparathyroidism (90%), 1° adrenal insufficiency (60%). *1° gonadal failure (41%) – usually after Addison’s diagnosis. *1° hypothyroidism. *Rarely hypopituitarism, diabetes insipidus, type 1 diabetes mellitus. […] APS type 2[:] *Adult onset. *Adrenal insufficiency (100%). 1° autoimmune thyroid disease (70%) […] Type 1 diabetes mellitus (5-20%) – often before Addison’s diagnosis. *1° gonadal failure in affected women (5-20%). […] Schmidt’s syndrome: *Addison’s disease, and *Autoimmune hypothyroidism. *Carpenter syndrome: *Addison’s disease, and *Autoimmune hypothyroidism, and/or *Type 1 diabetes mellitus.”

“An adrenal incidentaloma is an adrenal mass that is discovered incidentally upon imaging […] carried out for reasons other than a suspected adrenal pathology. […] *Autopsy studies suggest a prevalence of adrenal masses of 1-6% in the general population. *Imaging studies suggest that adrenal masses are present in 2-3% of the general population. Incidence increases with ageing, and 8-10% of 70-year olds harbour an adrenal mass. […] It is important to determine whether the incidentally discovered adrenal mass is: *Malignant. *Functioning and associated with excess hormonal secretion.”

January 17, 2018 Posted by | Books, Cancer/oncology, Diabetes, Epidemiology, Immunology, Medicine, Nephrology, Pharmacology

Rivers (II)

Some more observations from the book and related links below.

“By almost every measure, the Amazon is the greatest of all the large rivers. Encompassing more than 7 million square kilometres, its drainage basin is the largest in the world and makes up 5% of the global land surface. The river accounts for nearly one-fifth of all the river water discharged into the oceans. The flow is so great that water from the Amazon can still be identified 125 miles out in the Atlantic […] The Amazon has some 1,100 tributaries, and 7 of these are more than 1,600 kilometres long. […] In the lowlands, most Amazonian rivers have extensive floodplains studded with thousands of shallow lakes. Up to one-quarter of the entire Amazon Basin is periodically flooded, and these lakes become progressively connected with each other as the water level rises.”

“To hydrologists, the term ‘flood’ refers to a river’s annual peak discharge period, whether the water inundates the surrounding landscape or not. In more common parlance, however, a flood is synonymous with the river overflowing its banks […] Rivers flood in the natural course of events. This often occurs on the floodplain, as the name implies, but flooding can affect almost all of the length of the river. Extreme weather, particularly heavy or protracted rainfall, is the most frequent cause of flooding. The melting of snow and ice is another common cause. […] River floods are one of the most common natural hazards affecting human society, frequently causing social disruption, material damage, and loss of life. […] Most floods have a seasonal element in their occurrence […] It is a general rule that the magnitude of a flood is inversely related to its frequency […] Many of the less predictable causes of flooding occur after a valley has been blocked by a natural dam as a result of a landslide, glacier, or lava flow. Natural dams may cause upstream flooding as the blocked river forms a lake and downstream flooding as a result of failure of the dam.”
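
Hydrologists commonly make the inverse magnitude-frequency rule concrete via ‘return periods’. A minimal Python sketch using the standard Weibull plotting-position formula T = (n + 1)/m, with invented annual peak discharges, might look like this:

```python
# Ranking a (hypothetical) series of annual peak discharges and assigning
# each an average recurrence interval: the largest flood on record gets the
# longest return period, illustrating that big floods are rare ones.
annual_peaks_m3s = [420, 310, 550, 280, 390, 700, 330, 460, 250, 510]

n = len(annual_peaks_m3s)
ranked = sorted(annual_peaks_m3s, reverse=True)
for m, q in enumerate(ranked, start=1):
    return_period = (n + 1) / m  # Weibull plotting position, in years
    print(f"peak {q:>4} m^3/s  ->  return period ~ {return_period:.1f} yr")
```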

“The Tigris-Euphrates, Nile, and Indus are all large, exotic river systems, but in other respects they are quite different. The Nile has a relatively gentle gradient in Egypt and a channel that has experienced only small changes over the last few thousand years, by meander cut-off and a minor shift eastwards. The river usually flooded in a regular and predictable way. The stability and long continuity of the Egyptian civilization may be a reflection of its river’s relative stability. The steeper channel of the Indus, by contrast, has experienced major avulsions over great distances on the lower Indus Plain and some very large floods caused by the failure of glacier ice dams in the Himalayan mountains. Likely explanations for the abandonment of many Harappan cities […] take account of damage caused by major floods and/or the disruption caused by channel avulsion leading to a loss of water supply. Channel avulsion was also a problem for the Sumerian civilization on the alluvial plain called Mesopotamia […] known for the rise and fall of its numerous city states. Most of these cities were situated along the Euphrates River, probably because it was more easily controlled for irrigation purposes than the Tigris, which flowed faster and carried much more water. However, the Euphrates was an anastomosing river with multiple channels that diverge and rejoin. Over time, individual branch channels ceased to flow as others formed, and settlements located on these channels inevitably declined and were abandoned as their water supply ran dry, while others expanded as their channels carried greater amounts of water.”

“During the colonization of the Americas in the mid-18th century and the imperial expansion into Africa and Asia in the late 19th century, rivers were commonly used as boundaries because they were the first, and frequently the only, features mapped by European explorers. The diplomats in Europe who negotiated the allocation of colonial territories claimed by rival powers knew little of the places they were carving up. Often, their limited knowledge was based solely on maps that showed few details, rivers being the only distinct physical features marked. Today, many international river boundaries remain as legacies of those historical decisions based on poor geographical knowledge because states have been reluctant to alter their territorial boundaries from original delimitation agreements. […] no less than three-quarters of the world’s international boundaries follow rivers for at least part of their course. […] approximately 60% of the world’s fresh water is drawn from rivers shared by more than one country.”

“The sediments carried in rivers, laid down over many years, represent a record of the changes that have occurred in the drainage basin through the ages. Analysis of these sediments is one way in which physical geographers can interpret the historical development of landscapes. They can study the physical and chemical characteristics of the sediments themselves and/or the biological remains they contain, such as pollen or spores. […] The simple rate at which material is deposited by a river can be a good reflection of how conditions have changed in the drainage basin. […] Pollen from surrounding plants is often found in abundance in fluvial sediments, and the analysis of pollen can yield a great deal of information about past conditions in an area. […] Very long sediment cores taken from lakes and swamps enable us to reconstruct changes in vegetation over very long time periods, in some cases over a million years […] Because climate is a strong determinant of vegetation, pollen analysis has also proved to be an important method for tracing changes in past climates.”

“The energy in flowing and falling water has been harnessed to perform work by turning water-wheels for more than 2,000 years. The moving water turns a large wheel and a shaft connected to the wheel axle transmits the power from the water through a system of gears and cogs to work machinery, such as a millstone to grind corn. […] The early medieval watermill was able to do the work of between 30 and 60 people, and by the end of the 10th century in Europe, waterwheels were commonly used in a wide range of industries, including powering forge hammers, oil and silk mills, sugar-cane crushers, ore-crushing mills, breaking up bark in tanning mills, pounding leather, and grinding stones. Nonetheless, most were still used for grinding grains for preparation into various types of food and drink. The Domesday Book, a survey prepared in England in AD 1086, lists 6,082 watermills, although this is probably a conservative estimate because many mills were not recorded in the far north of the country. By 1300, this number had risen to exceed 10,000. […] Medieval watermills typically powered their wheels by using a dam or weir to concentrate the falling water and pond a reserve supply. These modifications to rivers became increasingly common all over Europe, and by the end of the Middle Ages, in the mid-15th century, watermills were in use on a huge number of rivers and streams. The importance of water power continued into the Industrial Revolution […]. The early textile factories were built to produce cloth using machines driven by waterwheels, so they were often called mills. […] [Today,] about one-third of all countries rely on hydropower for more than half their electricity. Globally, hydropower provides about 20% of the world’s total electricity supply.”

“Deliberate manipulation of river channels through engineering works, including dam construction, diversion, channelization, and culverting, […] has a long history. […] In Europe today, almost 80% of the total discharge of the continent’s major rivers is affected by measures designed to regulate flow, whether for drinking water supply, hydroelectric power generation, flood control, or any other reason. The proportion in individual countries is higher still. About 90% of rivers in the UK are regulated as a result of these activities, while in the Netherlands this percentage is close to 100. By contrast, some of the largest rivers on other continents, including the Amazon and the Congo, are hardly manipulated at all. […] Direct and intentional modifications to rivers are complemented by the impacts of land use and land use changes which frequently result in the alteration of rivers as an unintended side effect. Deforestation, afforestation, land drainage, agriculture, and the use of fire have all had significant impacts, with perhaps the most extreme effects produced by construction activity and urbanization. […] The major methods employed in river regulation are the construction of large dams […], the building of run-of-river impoundments such as weirs and locks, and by channelization, a term that covers a range of river engineering works including widening, deepening, straightening, and the stabilization of banks. […] Many aspects of a dynamic river channel and its associated ecosystems are mutually adjusting, so a human activity in a landscape that affects the supply of water or sediment is likely to set off a complex cascade of other alterations.”

“The methods of storage (in reservoirs) and distribution (by canal) have not changed fundamentally since the earliest river irrigation schemes, with the exception of some contemporary projects’ use of pumps to distribute water over greater distances. Nevertheless, many irrigation canals still harness the force of gravity. Half the world’s large dams (defined as being 15 metres or higher) were built exclusively or primarily for irrigation, and about one-third of the world’s irrigated cropland relies on reservoir water. In several countries, including such populous nations as India and China, more than 50% of arable land is irrigated by river water supplied from dams. […] Sadly, many irrigation schemes are not well managed and a number of environmental problems are frequently experienced as a result, both on-site and off-site. In many large networks of irrigation canals, less than half of the water diverted from a river or reservoir actually benefits crops. A lot of water seeps away through unlined canals or evaporates before reaching the fields. Some also runs off the fields or infiltrates through the soil, unused by plants, because farmers apply too much water or at the wrong time. Much of this water seeps back into nearby streams or joins underground aquifers, so can be used again, but the quality of water may deteriorate if it picks up salts, fertilizers, or pesticides. Excessive applications of irrigation water often result in rising water tables beneath fields, causing salinization and waterlogging. These processes reduce crop yields on irrigation schemes all over the world.”

“[Deforestation can contribute] to the degradation of aquatic habitats in numerous ways. The loss of trees along river banks can result in changes in the species found in the river because fewer trees mean a decline in plant matter and insects falling from them, items eaten by some fish. Fewer trees on river banks also results in less shade. More sunlight reaching the river results in warmer water and the enhanced growth of algae. A change in species can occur as fish that feed on falling food are edged out by those able to feed on algae. Deforestation also typically results in more runoff and more soil erosion. This sediment may cover spawning grounds, leading to lower reproduction rates. […] Grazing and trampling by livestock reduces vegetation cover and causes the compaction of soil, which reduces its infiltration capacity. As rainwater passes over or through the soil in areas of intensive agriculture, it picks up residues from pesticides and fertilizers and transports them to rivers. In this way, agriculture has become a leading source of river pollution in certain parts of the world. Concentrations of nitrates and phosphates, derived from fertilizers, have risen notably in many rivers in Europe and North America since the 1950s and have led to a range of […] problems encompassed under the term ‘eutrophication’ – the raising of biological productivity caused by nutrient enrichment. […] In slow-moving rivers […] the growth of algae reduces light penetration and depletes the oxygen in the water, sometimes causing fish kills.”

“One of the most profound ways in which people alter rivers is by damming them. Obstructing a river and controlling its flow in this way brings about a raft of changes. A dam traps sediments and nutrients, alters the river’s temperature and chemistry, and affects the processes of erosion and deposition by which the river sculpts the landscape. Dams create more uniform flow in rivers, usually by reducing peak flows and increasing minimum flows. Since the natural variation in flow is important for river ecosystems and their biodiversity, when dams even out flows the result is commonly fewer fish of fewer species. […] the past 50 years or so has seen a marked escalation in the rate and scale of construction of dams all over the world […]. At the beginning of the 21st century, there were about 800,000 dams worldwide […] In some large river systems, the capacity of dams is sufficient to hold more than the entire annual discharge of the river. […] Globally, the world’s major reservoirs are thought to control about 15% of the runoff from the land. The volume of water trapped worldwide in reservoirs of all sizes is no less than five times the total global annual river flow […] Downstream of a reservoir, the hydrological regime of a river is modified. Discharge, velocity, water quality, and thermal characteristics are all affected, leading to changes in the channel and its landscape, plants, and animals, both on the river itself and in deltas, estuaries, and offshore. By slowing the flow of river water, a dam acts as a trap for sediment and hence reduces loads in the river downstream. As a result, the flow downstream of the dam is highly erosive. A relative lack of silt arriving at a river’s delta can result in more coastal erosion and the intrusion of seawater that brings salt into delta ecosystems. […] The dam-barrier effect on migratory fish and their access to spawning grounds has been recognized in Europe since medieval times.”

“One of the most important effects cities have on rivers is the way in which urbanization affects flood runoff. Large areas of cities are typically impermeable, being covered by concrete, stone, tarmac, and bitumen. This tends to increase the amount of runoff produced in urban areas, an effect exacerbated by networks of storm drains and sewers. This water carries relatively little sediment (again, because soil surfaces have been covered by impermeable materials), so when it reaches a river channel it typically causes erosion and widening. Larger and more frequent floods are another outcome of the increase in runoff generated by urban areas. […] It […] seems very likely that efforts to manage the flood hazard on the Mississippi have contributed to an increased risk of damage from tropical storms on the Gulf of Mexico coast. The levées built along the river have contributed to the loss of coastal wetlands, starving them of sediment and fresh water, thereby reducing their dampening effect on storm surge levels. This probably enhanced the damage from Hurricane Katrina which struck the city of New Orleans in 2005.”
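
A back-of-the-envelope way to see the effect of impermeable surfaces on flood runoff is the ‘rational method’ taught in introductory hydrology, Q = C·i·A, where C is a runoff coefficient that rises as a catchment is paved over. The coefficients and storm values below are illustrative assumptions, not figures from the book:

```python
# Peak runoff under the rational method for the same storm falling on the
# same catchment with different land covers. Runoff coefficients (C) and the
# design storm are hypothetical round numbers for illustration.
def peak_runoff_m3s(c, intensity_mm_per_hr, area_km2):
    """Peak discharge Q = C * i * A, converted to cubic metres per second."""
    i_m_per_s = intensity_mm_per_hr / 1000 / 3600  # mm/hr -> m/s
    area_m2 = area_km2 * 1e6
    return c * i_m_per_s * area_m2

storm = 20.0  # mm/hr design storm
area = 5.0    # km^2 catchment

for label, c in [("forested", 0.15), ("suburban", 0.40), ("dense urban", 0.85)]:
    print(f"{label:>12}: Q ~ {peak_runoff_m3s(c, storm, area):.1f} m^3/s")
```

On these made-up numbers, paving the catchment raises the peak discharge from roughly 4 to roughly 24 cubic metres per second for the same storm, which is the mechanism the quoted passage describes.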

Links:

Onyx River.
Yangtze. Yangtze floods.
Missoula floods.
Murray River.
Ganges.
Thalweg.
Southeastern Anatolia Project.
Water conflict.
Hydropower.
Fulling mill.
Maritime transport.
Danube.
Lock (water navigation).
Hydrometry.
Yellow River.
Aswan High Dam. Warragamba Dam. Three Gorges Dam.
Onchocerciasis.
River restoration.

January 16, 2018 Posted by | Biology, Books, Ecology, Engineering, Geography, Geology, History