Econstudentlog

Molecular biology (I?)

“For a brief while I was considering giving this book five stars, but it didn’t quite get there. However this is a great publication, considering the format. These authors in my opinion managed to get quite close to what I’d consider to be ‘the ideal level of coverage’ for books of this nature.”

The above was what I wrote in my short goodreads review of the book. In this post I’ve added some quotes from the first chapters of the book and some links to topics covered.

Quotes:

“Once the base-pairing double helical structure of DNA was understood it became apparent that by holding and preserving the genetic code DNA is the source of heredity. The heritable material must also be capable of faithful duplication every time a cell divides. The DNA molecule is ideal for this. […] The effort then concentrated on how the instructions held by the DNA were translated into the choice of the twenty different amino acids that make up proteins. […] George Gamov [yes, that George Gamov! – US] made the suggestion that information held in the four bases of DNA (A, T, C, G) must be read as triplets, called codons. Each codon, made up of three nucleotides, codes for one amino acid or a ‘start’ or ‘stop’ signal. This information, which determines an organism’s biochemical makeup, is known as the genetic code. An encryption based on three nucleotides means that there are sixty-four possible three-letter combinations. But there are only twenty amino acids that are universal. […] some amino acids can be coded for by more than one codon.”
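
The codon arithmetic above is easy to check directly. Below is a minimal Python sketch; the short codon-to-amino-acid mapping is my own hand-picked excerpt of the standard genetic code, included only to illustrate the redundancy, and is not taken from the book.

```python
from itertools import product

bases = "UACG"  # RNA alphabet; the DNA version would use T instead of U

# Every codon is an ordered triplet of bases, so there are 4**3 = 64 of them.
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]
print(len(codons))  # 64

# 64 codons have to encode only 20 amino acids plus 'start'/'stop' signals, so the
# code is necessarily degenerate: some amino acids are specified by several codons.
# A tiny excerpt of the standard genetic code illustrates the redundancy:
excerpt = {
    "UUU": "Phe", "UUC": "Phe",                              # two codons, one amino acid
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",  # four codons, one amino acid
    "AUG": "Met",                                            # also serves as the 'start' signal
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",             # the three 'stop' signals
}
amino_acids = {aa for aa in excerpt.values() if aa != "STOP"}
print(len(excerpt), "codons in this excerpt code for only", len(amino_acids), "amino acids")
```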

“The mechanism of gene expression whereby DNA transfers its information into proteins was determined in the early 1960s by Sydney Brenner, Francois Jacob, and Matthew Meselson. […] Francis Crick proposed in 1958 that information flowed in one direction only: from DNA to RNA to protein. This was called the ‘Central Dogma‘ and describes how DNA is transcribed into RNA, which then acts as a messenger carrying the information to be translated into proteins. Thus the flow of information goes from DNA to RNA to proteins and information can never be transferred back from protein to nucleic acid. DNA can be copied into more DNA (replication) or into RNA (transcription) but only the information in mRNA [messenger RNA] can be translated into protein”.

“The genome is the entire DNA contained within the forty-six chromosomes located in the nucleus of each human somatic (body) cell. […] The complete human genome is composed of over 3 billion bases and contains approximately 20,000 genes that code for proteins. This is much lower than earlier estimates of 80,000 to 140,000 and astonished the scientific community when revealed through human genome sequencing. Equally surprising was the finding that genomes of much simpler organisms sequenced at the same time contained a higher number of protein-coding genes than humans. […] It is now clear that the size of the genome does not correspond with the number of protein-coding genes, and these do not determine the complexity of an organism. Protein-coding genes can be viewed as ‘transcription units’. These are made up of sequences called exons that code for amino acids, and are separated by non-coding sequences called introns. Associated with these are additional sequences termed promoters and enhancers that control the expression of that gene.”

“Some sections of the human genome code for RNA molecules that do not have the capacity to produce proteins. […] it is now becoming apparent that many play a role in controlling gene expression. Despite the importance of proteins, less than 1.5 per cent of the genome is made up of exon sequences. A recent estimate is that about 80 per cent of the genome is transcribed or involved in regulatory functions with the rest mainly composed of repetitive sequences. […] Satellite DNA […] is a short sequence repeated many thousands of times in tandem […] A second type of repetitive DNA is the telomere sequence. […] Their role is to prevent chromosomes from shortening during DNA replication […] Repetitive sequences can also be found distributed or interspersed throughout the genome. These repeats have the ability to move around the genome and are referred to as mobile or transposable DNA. […] Such movements can be harmful sometimes as gene sequences can be disrupted causing disease. […] The vast majority of transposable sequences are no longer able to move around and are considered to be ‘silent’. However, these movements have contributed, over evolutionary time, to the organization and evolution of the genome, by creating new or modified genes leading to the production of proteins with novel functions.”

“A very important property of DNA is that it can make an accurate copy of itself. This is necessary since cells die during the normal wear and tear of tissues and need to be replenished. […] DNA replication is a highly accurate process with an error occurring every 10,000 to 1 million bases in human DNA. This low frequency is because the DNA polymerases carry a proofreading function. If an incorrect nucleotide is incorporated during DNA synthesis, the polymerase detects the error and excises the incorrect base. Following excision, the polymerase reinserts the correct base and replication continues. Any errors that are not corrected through proofreading are repaired by an alternative mismatch repair mechanism. In some instances, proofreading and repair mechanisms fail to correct errors. These become permanent mutations after the next cell division cycle as they are no longer recognized as errors and are therefore propagated each time the DNA replicates.”

“DNA sequencing identifies the precise linear order of the nucleotide bases A, C, G, T, in a DNA fragment. It is possible to sequence individual genes, segments of a genome, or whole genomes. Sequencing information is fundamental in helping us understand how our genome is structured and how it functions. […] The Human Genome Project, which used Sanger sequencing, took ten years to sequence and cost 3 billion US dollars. Using high-throughput sequencing, the entire human genome can now be sequenced in a few days at a cost of 3,000 US dollars. These costs are continuing to fall, making it more feasible to sequence whole genomes. The human genome sequence published in 2003 was built from DNA pooled from a number of donors to generate a ‘reference’ or composite genome. However, the genome of each individual is unique and so in 2005 the Personal Genome Project was launched in the USA aiming to sequence and analyse the genomes of 100,000 volunteers across the world. Soon after, similar projects followed in Canada and Korea and, in 2013, in the UK. […] To store and analyze the huge amounts of data, computational systems have developed in parallel. This branch of biology, called bioinformatics, has become an extremely important collaborative research area for molecular biologists drawing on the expertise of computer scientists, mathematicians, and statisticians.”

“[T]he structure of RNA differs from DNA in three fundamental ways. First, the sugar is a ribose, whereas in DNA it is a deoxyribose. Secondly, in RNA the nucleotide bases are A, G, C, and U (uracil) instead of A, G, C, and T. […] Thirdly, RNA is a single-stranded molecule unlike double-stranded DNA. It is not helical in shape but can fold to form a hairpin or stem-loop structure by base-pairing between complementary regions within the same RNA molecule. These two-dimensional secondary structures can further fold to form complex three-dimensional, tertiary structures. An RNA molecule is able to interact not only with itself, but also with other RNAs, with DNA, and with proteins. These interactions, and the variety of conformations that RNAs can adopt, enable them to carry out a wide range of functions. […] RNAs can influence many normal cellular and disease processes by regulating gene expression. RNA interference […] is one of the main ways in which gene expression is regulated.”

“Translation of the mRNA to a protein takes place in the cell cytoplasm on ribosomes. Ribosomes are cellular structures made up primarily of rRNA and proteins. At the ribosomes, the mRNA is decoded to produce a specific protein according to the rules defined by the genetic code. The correct amino acids are brought to the mRNA at the ribosomes by molecules called transfer RNAs (tRNAs). […] At the start of translation, a tRNA binds to the mRNA at the start codon AUG. This is followed by the binding of a second tRNA matching the adjacent mRNA codon. The two neighbouring amino acids linked to the tRNAs are joined together by a chemical bond called the peptide bond. Once the peptide bond forms, the first tRNA detaches leaving its amino acid behind. The ribosome then moves one codon along the mRNA and a third tRNA binds. In this way, tRNAs sequentially bind to the mRNA as the ribosome moves from codon to codon. Each time a tRNA molecule binds, the linked amino acid is transferred to the growing amino acid chain. Thus the mRNA sequence is translated into a chain of amino acids connected by peptide bonds to produce a polypeptide chain. Translation is terminated when the ribosome encounters a stop codon […]. After translation, the chain is folded and very often modified by the addition of sugar or other molecules to produce fully functional proteins.”
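
The process described here (find the start codon, read the message three bases at a time, stop at a stop codon) maps naturally onto a small loop. The sketch below is only a toy illustration of that reading scheme, not anything from the book, and the codon table is again a tiny hand-picked excerpt of the standard genetic code, so it only handles the example mRNA given.

```python
# Toy 'ribosome': read an mRNA codon by codon and build the amino acid chain.
CODON_TABLE = {  # hand-picked excerpt of the standard genetic code, enough for the example
    "AUG": "Met",  # start codon (methionine)
    "GCU": "Ala", "UUU": "Phe", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Translate an mRNA (assumed to contain an AUG start codon) into amino acids."""
    start = mrna.find("AUG")                    # initiation: locate the start codon
    peptide = []
    for i in range(start, len(mrna) - 2, 3):    # the ribosome moves one codon at a time
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":                # termination: release the chain
            break
        peptide.append(amino_acid)              # elongation: add one amino acid
    return peptide

# A 5' leader, then AUG-GCU-UUU-AAA, then a stop codon:
print(translate("GGCAUGGCUUUUAAAUGA"))          # ['Met', 'Ala', 'Phe', 'Lys']
```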

“The naturally occurring RNAi pathway is now extensively exploited in the laboratory to study the function of genes. It is possible to design synthetic siRNA molecules with a sequence complementary to the gene under study. These double-stranded RNA molecules are then introduced into the cell by special techniques to temporarily knock down the expression of that gene. By studying the phenotypic effects of this severe reduction of gene expression, the function of that gene can be identified. Synthetic siRNA molecules also have the potential to be used to treat diseases. If a disease is caused or enhanced by a particular gene product, then siRNAs can be designed against that gene to silence its expression. This prevents the protein which drives the disease from being produced. […] One of the major challenges to the use of RNAi as therapy is directing siRNA to the specific cells in which gene silencing is required. If released directly into the bloodstream, enzymes in the bloodstream degrade siRNAs. […] Other problems are that siRNAs can stimulate the body’s immune response and can produce off-target effects by silencing RNA molecules other than those against which they were specifically designed. […] considerable attention is currently focused on designing carrier molecules that can transport siRNA through the bloodstream to the diseased cell.”
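
At its very simplest, designing an siRNA ‘with a sequence complementary to the gene under study’ amounts to taking the reverse complement of a stretch of the target mRNA. The sketch below shows only that base-pairing step and deliberately ignores everything that matters in real siRNA design (length and composition rules, strand asymmetry, off-target screening); the target sequence is made up.

```python
# Watson-Crick pairing rules for RNA: A pairs with U, G pairs with C.
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def sirna_guide(target_mrna: str) -> str:
    """Return the strand complementary (and antiparallel) to a stretch of target mRNA."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(target_mrna))

target = "AGCUUCAGGAUCCGAUUGA"   # made-up 19-nucleotide stretch of a hypothetical target mRNA
print(sirna_guide(target))
# Each base of the guide pairs with the corresponding base of the target, which is
# what allows the silencing machinery to recognize (and degrade) that specific mRNA.
```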

“Both Northern blotting and RT-PCR enable the expression of one or a few genes to be measured simultaneously. In contrast, the technique of microarrays allows gene expression to be measured across the full genome of an organism in a single step. This massive scale genome analysis technique is very useful when comparing gene expression profiles between two samples. […] This can identify gene subsets that are under- or over-expressed in one sample relative to the second sample to which it is compared.”
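
The under-/over-expression comparison described here is usually summarized per gene as a (log) fold change between the two samples. A minimal sketch with made-up expression values and an arbitrary two-fold cut-off, just to show the idea:

```python
import math

# Made-up expression intensities for five genes in two samples.
sample_a = {"GENE1": 120.0, "GENE2": 35.0, "GENE3": 500.0, "GENE4": 80.0, "GENE5": 10.0}
sample_b = {"GENE1": 115.0, "GENE2": 140.0, "GENE3": 480.0, "GENE4": 20.0, "GENE5": 11.0}

for gene in sample_a:
    log2_fc = math.log2(sample_b[gene] / sample_a[gene])  # >0: up in B, <0: down in B
    if abs(log2_fc) >= 1:  # arbitrary cut-off: at least a two-fold change either way
        direction = "over-expressed" if log2_fc > 0 else "under-expressed"
        print(f"{gene}: {direction} in sample B (log2 fold change {log2_fc:+.2f})")
```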

Links:

Molecular biology.
Charles Darwin. Alfred Wallace. Gregor Mendel. Wilhelm Johannsen. Heinrich Waldeyer. Theodor Boveri. Walter Sutton. Friedrich Miescher. Phoebus Levene. Oswald Avery. Colin MacLeod. Maclyn McCarty. James Watson. Francis Crick. Rosalind Franklin. Andrew Fire. Craig Mello.
Gene. Genotype. Phenotype. Chromosome. Nucleotide. DNA. RNA. Protein.
Chargaff’s rules.
Photo 51.
Human Genome Project.
Long interspersed nuclear elements (LINEs). Short interspersed nuclear elements (SINEs).
Histone. Nucleosome.
Chromatin. Euchromatin. Heterochromatin.
Mitochondrial DNA.
DNA replication. Helicase. Origin of replication. DNA polymerase. Okazaki fragments. Leading strand and lagging strand. DNA ligase. Semiconservative replication.
Mutation. Point mutation. Indel. Frameshift mutation.
Genetic polymorphism. Single-nucleotide polymorphism (SNP).
Genome-wide association study (GWAS).
Molecular cloning. Restriction endonuclease. Multiple cloning site (MCS). Bacterial artificial chromosome.
Gel electrophoresis. Southern blot. Polymerase chain reaction (PCR). Reverse transcriptase PCR (RT-PCR). Quantitative PCR (qPCR).
GenBank. European Molecular Biology Laboratory (EMBL). Encyclopedia of DNA Elements (ENCODE).
RNA polymerase II. TATA box. Transcription factor IID. Stop codon.
Protein biosynthesis.
snRNA (small nuclear RNA).
Untranslated region (UTR sequences).
Transfer RNA.
Micro RNA (miRNA).
Dicer (enzyme).
RISC (RNA-induced silencing complex).
Argonaute.
Lipid-Based Nanoparticles for siRNA Delivery in Cancer Therapy.
Long non-coding RNA.
Ribozyme/catalytic RNA.
RNA-sequencing (RNA-seq).

May 5, 2018 | Biology, Books, Chemistry, Genetics, Medicine

A few diabetes papers of interest

i. Economic Costs of Diabetes in the U.S. in 2017.

“This study updates previous estimates of the economic burden of diagnosed diabetes and quantifies the increased health resource use and lost productivity associated with diabetes in 2017. […] The total estimated cost of diagnosed diabetes in 2017 is $327 billion, including $237 billion in direct medical costs and $90 billion in reduced productivity. For the cost categories analyzed, care for people with diagnosed diabetes accounts for 1 in 4 health care dollars in the U.S., and more than half of that expenditure is directly attributable to diabetes. People with diagnosed diabetes incur average medical expenditures of ∼$16,750 per year, of which ∼$9,600 is attributed to diabetes. People with diagnosed diabetes, on average, have medical expenditures ∼2.3 times higher than what expenditures would be in the absence of diabetes. Indirect costs include increased absenteeism ($3.3 billion) and reduced productivity while at work ($26.9 billion) for the employed population, reduced productivity for those not in the labor force ($2.3 billion), inability to work because of disease-related disability ($37.5 billion), and lost productivity due to 277,000 premature deaths attributed to diabetes ($19.9 billion). […] After adjusting for inflation, economic costs of diabetes increased by 26% from 2012 to 2017 due to the increased prevalence of diabetes and the increased cost per person with diabetes. The growth in diabetes prevalence and medical costs is primarily among the population aged 65 years and older, contributing to a growing economic cost to the Medicare program.”
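
As a quick sanity check of the quoted figures, the indirect-cost components do sum to roughly the stated $90 billion, and direct plus indirect costs give the $327 billion total (a small Python sketch using only the numbers from the abstract above):

```python
# Figures (in billions of US dollars) taken from the abstract quoted above.
indirect = {
    "absenteeism": 3.3,
    "reduced productivity while at work (presenteeism)": 26.9,
    "reduced productivity, not in labor force": 2.3,
    "disease-related disability": 37.5,
    "premature mortality": 19.9,
}
direct_medical = 237.0

indirect_total = sum(indirect.values())
print(f"indirect total: ${indirect_total:.1f} billion")                   # ~89.9, i.e. the ~$90 billion quoted
print(f"overall total:  ${direct_medical + indirect_total:.1f} billion")  # ~327
```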

The paper includes a lot of details about how they went about estimating these things, but I decided against including these details here – read the full paper if you’re interested. I did however want to add some additional details, so here goes:

“Absenteeism is defined as the number of work days missed due to poor health among employed individuals, and prior research finds that people with diabetes have higher rates of absenteeism than the population without diabetes. Estimates from the literature range from no statistically significant diabetes effect on absenteeism to studies reporting 1–6 extra missed work days (and odds ratios of more absences ranging from 1.5 to 3.3) (12–14). Analyzing 2014–2016 NHIS data and using a negative binomial regression to control for overdispersion in self-reported missed work days, we estimate that people with diabetes have statistically higher missed work days—ranging from 1.0 to 4.2 additional days missed per year by demographic group, or 1.7 days on average — after controlling for age-group, sex, race/ethnicity, diagnosed hypertension status (yes/no), and body weight status (normal, overweight, obese, unknown). […] Presenteeism is defined as reduced productivity while at work among employed individuals and is generally measured through worker responses to surveys. Multiple recent studies report that individuals with diabetes display higher rates of presenteeism than their peers without diabetes (12,15–17). […] We model productivity loss associated with diabetes-attributed presenteeism using the estimate (6.6%) from the 2012 study—which is toward the lower end of the 1.8–38% range reported in the literature. […] Reduced performance at work […] accounted for 30% of the indirect cost of diabetes.”

It is of note that even with a somewhat conservative estimate of presenteeism, this cost component is an order of magnitude larger than the absenteeism variable. It is worth keeping in mind that this ratio is likely to be different elsewhere; due to the way the American health care system is structured/financed – health insurance is to a significant degree linked to employment – you’d expect the estimated ratio to be different from what you might observe in countries like the UK or Denmark. Some more related numbers from the paper:

“Inability to work associated with diabetes is estimated using a conservative approach that focuses on unemployment related to long-term disability. Logistic regression with 2014–2016 NHIS data suggests that people aged 18–65 years with diabetes are significantly less likely to be in the workforce than people without diabetes. […] we use a conservative approach (which likely underestimates the cost associated with inability to work) to estimate the economic burden associated with reduced labor force participation. […] Study results suggest that people with diabetes have a 3.1 percentage point higher rate of being out of the workforce and receiving disability payments compared with their peers without diabetes. The diabetes effect increases with age and varies by demographic — ranging from 2.1 percentage points for non-Hispanic white males aged 60–64 years to 10.6 percentage points for non-Hispanic black females aged 55–59 years.”

“In 2017, an estimated 24.7 million people in the U.S. are diagnosed with diabetes, representing ∼7.6% of the total population (and 9.7% of the adult population). The estimated national cost of diabetes in 2017 is $327 billion, of which $237 billion (73%) represents direct health care expenditures attributed to diabetes and $90 billion (27%) represents lost productivity from work-related absenteeism, reduced productivity at work and at home, unemployment from chronic disability, and premature mortality. Particularly noteworthy is that excess costs associated with medications constitute 43% of the total direct medical burden. This includes nearly $15 billion for insulin, $15.9 billion for other antidiabetes agents, and $71.2 billion in excess use of other prescription medications attributed to higher disease prevalence associated with diabetes. […] A large portion of medical costs associated with diabetes is for comorbidities.”

Insulin is ~$15 billion/year, out of a total estimated cost of $327 billion. This is less than 5% of the total cost. Take note also of the ~$70 billion attributed to excess use of other prescription medications. I know I’ve said this before, but it bears repeating: Most of diabetes-related costs are not related to insulin.

“…of the projected 162 million hospital inpatient days in the U.S. in 2017, an estimated 40.3 million days (24.8%) are incurred by people with diabetes [who make up ~7.6% of the population – see above], of which 22.6 million days are attributed to diabetes. About one-fourth of all nursing/residential facility days are incurred by people with diabetes. About half of all physician office visits, emergency department visits, hospital outpatient visits, and medication prescriptions (excluding insulin and other antidiabetes agents) incurred by people with diabetes are attributed to their diabetes. […] The largest contributors to the cost of diabetes are higher use of prescription medications beyond antihyperglycemic medications ($71.2 billion), higher use of hospital inpatient services ($69.7 billion), medications and supplies to directly treat diabetes ($34.6 billion), and more office visits to physicians and other health providers ($30.0 billion). Approximately 61% of all health care expenditures attributed to diabetes are for health resources used by the population aged ≥65 years […] we estimate the average annual excess expenditures for the population aged <65 years and ≥65 years, respectively, at $6,675 and $13,239. Health care expenditures attributed to diabetes generally increase with age […] The population with diabetes is older and sicker than the population without diabetes, and consequently annual medical expenditures are much higher (on average) than for people without diabetes”.

“Of the estimated 24.7 million people with diagnosed diabetes, analysis of NHIS data suggests that ∼8.1 million are in the workforce. If people with diabetes participated in the labor force at rates similar to their peers without diabetes, there would be ∼2 million additional people aged 18–64 years in the workforce.”

“Comparing the 2017 estimates with those produced for 2012, the overall cost of diabetes appears to have increased by ∼25% after adjusting for inflation, reflecting an 11% increase in national prevalence of diagnosed diabetes and a 13% increase in the average annual diabetes-attributed cost per person with diabetes.”
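
The ~25% inflation-adjusted growth decomposes (multiplicatively) into the 11% prevalence increase and the 13% increase in cost per person, which is easy to verify:

```python
prevalence_growth = 0.11       # increase in the number of people with diagnosed diabetes
cost_per_person_growth = 0.13  # increase in average diabetes-attributed cost per person

# Total cost = prevalence x cost per person, so the growth rates combine multiplicatively.
total_growth = (1 + prevalence_growth) * (1 + cost_per_person_growth) - 1
print(f"{total_growth:.1%}")   # ~25.4%, matching the ~25% increase quoted above
```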

ii. Current Challenges and Opportunities in the Prevention and Management of Diabetic Foot Ulcers.

“Diabetic foot ulcers remain a major health care problem. They are common, result in considerable suffering, frequently recur, and are associated with high mortality, as well as considerable health care costs. While national and international guidance exists, the evidence base for much of routine clinical care is thin. It follows that many aspects of the structure and delivery of care are susceptible to the beliefs and opinion of individuals. It is probable that this contributes to the geographic variation in outcome that has been documented in a number of countries. This article considers these issues in depth and emphasizes the urgent need to improve the design and conduct of clinical trials in this field, as well as to undertake systematic comparison of the results of routine care in different health economies. There is strong suggestive evidence to indicate that appropriate changes in the relevant care pathways can result in a prompt improvement in clinical outcomes.”

“Despite considerable advances made over the last 25 years, diabetic foot ulcers (DFUs) continue to present a very considerable health care burden — one that is widely unappreciated. DFUs are common, the median time to healing without surgery is of the order of 12 weeks, and they are associated with a high risk of limb loss through amputation (14). The 5-year survival following presentation with a new DFU is of the order of only 50–60% and hence worse than that of many common cancers (4,5). While there is evidence that mortality is improving with more widespread use of cardiovascular risk reduction (6), the most recent data — derived from a Veterans Health Administration population—reported that 1-, 2-, and 5-year survival was only 81, 69, and 29%, respectively, and the association between mortality and DFU was stronger than that of any macrovascular disease (7). […] There is […] wide variation in clinical outcome within the same country (13–15), suggesting that some people are being managed considerably less well than others.”

“Data on community-wide ulcer incidence are very limited. Overall incidences of 5.8 and 6.0% have been reported in selected populations of people with diabetes in the U.S. (2,12,20) while incidences of 2.1 and 2.2% have been reported from less selected populations in Europe—either in all people with diabetes (21) or in those with type 2 disease alone (22). It is not known whether the incidence is changing […] Although a number of risk factors associated with the development of ulceration are well recognized (23), there is no consensus on which dominate, and there are currently no reports of any studies that might justify the adoption of any specific strategy for population selection in primary prevention.”

“The incidence of major amputation is used as a surrogate measure of the failure of DFUs to heal. Its main value lies in the relative ease of data capture, but its value is limited because it is essentially a treatment and not a true measure of disease outcome. In no other major disease (including malignancies, cardiovascular disease, or cerebrovascular disease) is the number of treatments used as a measure of outcome. But despite this and other limitations of major amputation as an outcome measure (36), there is evidence that the overall incidence of major amputation is falling in some countries with nationwide databases (37,38). Perhaps the most convincing data come from the U.K., where the unadjusted incidence has fallen dramatically from about 3.0–3.5 per 1,000 people with diabetes per year in the mid-1990s to 1.0 or less per 1,000 per year in both England and Scotland (14,39).”

“New ulceration after healing is high, with ∼40% of people having a new ulcer (whether at the same site or another) within 12 months (10). This is a critical aspect of diabetic foot disease—emphasizing that when an ulcer heals, foot disease must be regarded not as cured, but in remission (10). In this respect, diabetic foot disease is directly analogous to malignancy. It follows that the person whose foot disease is in remission should receive the same structured follow-up as a person who is in remission following treatment for cancer. Of all areas concerned with the management of DFUs, this long-term need for specialist surveillance is arguably the one that should command the greatest attention.”

“There is currently little evidence to justify the adoption of very many of the products and procedures currently promoted for use in clinical practice. Guidelines are required to encourage clinicians to adopt only those treatments that have been shown to be effective in robust studies and principally in RCTs. The design and conduct of such RCTs needs improved governance because many are of low standard and do not always provide the evidence that is claimed.”

Incidence numbers like the ones included above will not always give you the full picture when there are a lot of overlapping data points in the sample (due to recurrence), but sometimes that’s all you have. However, in the type 1 context we also do have some additional numbers that make it easier to appreciate the scale of the problem in that context. Here are a few additional data from a related publication I blogged some time ago (do keep in mind that estimates are likely to be lower in community samples of type 2 diabetics, even if perhaps nobody actually knows precisely how much lower):

“The rate of nontraumatic amputation in T1DM is high, occurring at 0.4–7.2% per year (28). By 65 years of age, the cumulative probability of lower-extremity amputation in a Swedish administrative database was 11% for women with T1DM and 20.7% for men (10). In this Swedish population, the rate of lower-extremity amputation among those with T1DM was nearly 86-fold that of the general population.” (link)

Do keep in mind that people don’t stop getting ulcers once they reach retirement age (the 11%/20.7% is not lifetime risk, it’s a biased lower bound).

iii. Excess Mortality in Patients With Type 1 Diabetes Without Albuminuria — Separating the Contribution of Early and Late Risks.

“The current study investigated whether the risk of mortality in patients with type 1 diabetes without any signs of albuminuria is different than in the general population and matched control subjects without diabetes.”

“Despite significant improvements in management, type 1 diabetes remains associated with an increase in mortality relative to the age- and sex-matched general population (1,2). Acute complications of diabetes may initially account for this increased risk (3,4). However, with increasing duration of disease, the leading contributor to excess mortality is its vascular complications including diabetic kidney disease (DKD) and cardiovascular disease (CVD). Consequently, patients who subsequently remain free of complications may have little or no increased risk of mortality (1,2,5).”

“Mortality was evaluated in a population-based cohort of 10,737 children (aged 0–14 years) with newly diagnosed type 1 diabetes in Finland who were listed on the National Public Health Institute diabetes register, Central Drug Register, and Hospital Discharge Register in 1980–2005 […] We excluded patients with type 2 diabetes and diabetes occurring secondary to other conditions, such as steroid use, Down syndrome, and congenital malformations of the pancreas. […] FinnDiane participants who died were more likely to be male, older, have a longer duration of diabetes, and later age of diabetes onset […]. Notably, none of the conventional variables associated with complications (e.g., HbA1c, hypertension, smoking, lipid levels, or AER) were associated with all-cause mortality in this cohort of patients without albuminuria. […] The most frequent cause of death in the FinnDiane cohort was IHD [ischaemic heart disease, US] […], largely driven by events in patients with long-standing diabetes and/or previously established CVD […]. The mortality rate ratio for IHD was 4.34 (95% CI 2.49–7.57, P < 0.0001). There remained a number of deaths due to acute complications of diabetes, including ketoacidosis and hypoglycemia. This was most significant in patients with a shorter duration of diabetes but still apparent in those with long-standing diabetes[…]. Notably, deaths due to “risk-taking behavior” were lower in adults with type 1 diabetes compared with matched individuals without diabetes: mortality rate ratio was 0.42 (95% CI 0.22–0.79, P = 0.006) […] This was largely driven by the 80% reduction (95% CI 0.06–0.66) in deaths due to alcohol and drugs in males with type 1 diabetes (Table 3). No reduction was observed in female patients (rate ratio 0.90 [95% CI 0.18–4.44]), although the absolute event rate was already more than seven times lower in Finnish women than in men.”

“The chief determinant of excess mortality in patients with type 1 diabetes is its complications. In the first 10 years of type 1 diabetes, the acute complications of diabetes dominate and result in excess mortality — more than twice that observed in the age- and sex-matched general population. This early excess explains why registry studies following patients with type 1 diabetes from diagnosis have consistently reported reduced life expectancy, even in patients free of chronic complications of diabetes (6–8). By contrast, studies of chronic complications, like FinnDiane and the Pittsburgh Epidemiology of Diabetes Complications Study (1,2), have followed participants with, usually, >10 years of type 1 diabetes at baseline. In these patients, the presence or absence of chronic complications of diabetes is critical for survival. In particular, the presence and severity of albuminuria (as a marker of vascular burden) is strongly associated with mortality outcomes in type 1 diabetes (1). […] the FinnDiane normoalbuminuric patients showed increased all-cause mortality compared with the control subjects without diabetes in contrast to when the comparison was made with the Finnish general population, as in our previous publication (1). Two crucial causes behind the excess mortality were acute diabetes complications and IHD. […] Comparisons with the general population, rather than matched control subjects, may overestimate expected mortality, diluting the SMR estimate”.

“Despite major improvements in the delivery of diabetes care and other technological advances, acute complications remain a major cause of death both in children and in adults with type 1 diabetes. Indeed, the proportion of deaths due to acute events has not changed significantly over the last 30 years. […] Even in patients with long-standing diabetes (>20 years), the risk of death due to hypoglycemia or ketoacidosis remains a constant companion. […] If it were possible to eliminate all deaths from acute events, the observed mortality rate would have been no different from the general population in the early cohort. […] In long-term diabetes, avoiding chronic complications may be associated with mortality rates comparable with those of the general population; although death from IHD remains increased, this is offset by reduced risk-taking behavior, especially in men.”

“It is well-known that CVD is strongly associated with DKD (15). However, in the current study, mortality from IHD remained higher in adults with type 1 diabetes without albuminuria compared with matched control subjects in both men and women. This is concordant with other recent studies also reporting increased mortality from CVD in patients with type 1 diabetes in the absence of DKD (7,8) and reinforces the need for aggressive cardiovascular risk reduction even in patients without signs of microvascular disease. However, it is important to note that the risk of death from CVD, though significant, is still at least 10-fold lower than observed in patients with albuminuria (1). Alcohol- and drug-related deaths were substantially lower in patients with type 1 diabetes compared with the age-, sex-, and region-matched control subjects. […] This may reflect a selection bias […] Nonparticipation in health studies is associated with poorer health, stress, and lower socioeconomic status (17,18), which are in turn associated with increased risk of premature mortality. It can be speculated that with inclusion of patients with risk-taking behavior, the mortality rate in patients with diabetes would be even higher and, consequently, the SMR would also be significantly higher compared with the general population. Selection of patients who despite long-standing diabetes remained free of albuminuria may also have included individuals more accepting of general health messages and less prone to depression and nihilism arising from treatment failure.”

I think the selection bias problem is likely to be quite significant, as these results don’t really match what I’ve seen in the past. For example, a recent Norwegian study on young type 1 diabetics found high mortality in their sample, to a significant degree due to alcohol-related causes and suicide: “A relatively high proportion of deaths were related to alcohol. […] Death was related to alcohol in 15% of cases. SMR for alcohol-related death was 6.8 (95% CI 4.5–10.3), for cardiovascular death was 7.3 (5.4–10.0), and for violent death was 3.6 (2.3–5.3).” That doesn’t sound very similar to the study above, and that study’s also from Scandinavia. In this study, in which they used data from diabetic organ donors, they found that a large proportion of the diabetics included in the study used illegal drugs: “we observed a high rate of illicit substance abuse: 32% of donors reported or tested positive for illegal substances (excluding marijuana), and multidrug use was common.”

Do keep in mind that one of the main reasons why ‘alcohol-related’ deaths are higher in diabetes is likely to be that ‘drinking while diabetic’ is a lot more risky than is ‘drinking while not diabetic’. On a related note, diabetics may not appreciate the level of risk they’re actually exposed to while drinking, due to community norms etc., so there might be a disconnect between risk preferences and observed behaviour (i.e., a diabetic might be risk averse but still engage in risky behaviours because he doesn’t know how risky those behaviours in which he’s engaging actually are).

Although the illicit drugs study indicates that diabetics at least in some samples are not averse to engaging in risky behaviours, a note of caution is probably warranted in the alcohol context: High mortality from alcohol-mediated acute complications needn’t be an indication that diabetics drink more than non-diabetics; that’s a separate question, you might see numbers like these even if they in general drink less. And a young type 1 diabetic who suffers a cardiac arrhythmia secondary to long-standing nocturnal hypoglycemia and subsequently is found ‘dead in bed’ after a bout of drinking is conceptually very different from a 50-year old alcoholic dying from a variceal bleed or acute pancreatitis. Parenthetically, if it is true that illicit drugs use is common in type 1 diabetics one reason might be that they are aware of the risks associated with alcohol (which is particularly nasty in terms of the metabolic/glycemic consequences in diabetes, compared to some other drugs) and thus they deliberately make a decision to substitute this drug with other drugs less likely to cause acute complications like severe hypoglycemic episodes or DKA (depending on the setting and the specifics, alcohol might be a contributor to both of these complications). If so, classical ‘risk behaviours’ may not always be ‘risk behaviours’ in diabetes. You need to be careful, this stuff’s complicated.

iv. Are All Patients With Type 1 Diabetes Destined for Dialysis if They Live Long Enough? Probably Not.

“Over the past three decades there have been numerous innovations, supported by large outcome trials that have resulted in improved blood glucose and blood pressure control, ultimately reducing cardiovascular (CV) risk and progression to nephropathy in type 1 diabetes (T1D) (1,2). The epidemiological data also support the concept that 25–30% of people with T1D will progress to end-stage renal disease (ESRD). Thus, not everyone develops progressive nephropathy that ultimately requires dialysis or transplantation. This is a result of numerous factors […] Data from two recent studies reported in this issue of Diabetes Care examine the long-term incidence of chronic kidney disease (CKD) in T1D. Costacou and Orchard (7) examined a cohort of 932 people evaluated for 50-year cumulative kidney complication risk in the Pittsburgh Epidemiology of Diabetes Complications study. They used both albuminuria levels and ESRD/transplant data for assessment. By 30 years’ duration of diabetes, ESRD affected 14.5% and by 40 years it affected 26.5% of the group with onset of T1D between 1965 and 1980. For those who developed diabetes between 1950 and 1964, the proportions developing ESRD were substantially higher at 34.6% at 30 years, 48.5% at 40 years, and 61.3% at 50 years. The authors called attention to the fact that ESRD decreased by 45% after 40 years’ duration between these two cohorts, emphasizing the beneficial roles of improved glycemic control and blood pressure control. It should also be noted that at 40 years even in the later cohort (those diagnosed between 1965 and 1980), 57.3% developed >300 mg/day albuminuria (7).”

Numbers like these may seem like ancient history (data from the 60s and 70s), but it’s important to keep in mind that many type 1 diabetics are diagnosed in early childhood, and that they don’t ‘get better’ later on – if they’re still alive, they’re still diabetic. …And very likely macroalbuminuric, at least if they’re from Pittsburgh. I was diagnosed in ’87.

“Gagnum et al. (8), using data from a Norwegian registry, also examined the incidence of CKD development over a 42-year follow-up period in people with childhood-onset (<15 years of age) T1D (8). The data from the Norwegian registry noted that the cumulative incidence of ESRD was 0.7% after 20 years and 5.3% after 40 years of T1D. Moreover, the authors noted the risk of developing ESRD was lower in women than in men and did not identify any difference in risk of ESRD between those diagnosed with diabetes in 1973–1982 and those diagnosed in 1989–2012. They concluded that there is a very low incidence of ESRD among patients with childhood-onset T1D diabetes in Norway, with a lower risk in women than men and among those diagnosed at a younger age. […] Analyses of population-based studies, similar to the Pittsburgh and Norway studies, showed that after 30 years of T1D the cumulative incidences of ESRD were only 10% for those diagnosed with T1D in 1961–1984 and 3% for those diagnosed in 1985–1999 in Japan (11), 3.3% for those diagnosed with T1D in 1977–2007 in Sweden (12), and 7.8% for those diagnosed with T1D in 1965–1999 in Finland (13) (Table 1).”

Do note that ESRD (end stage renal disease) is not the same thing as DKD (diabetic kidney disease), and that e.g. many of the Norwegians who did not develop ESRD nevertheless likely have kidney complications from their diabetes. That 5.3% is not the number of diabetics in that cohort who developed diabetes-related kidney complications, it’s the proportion of them who did and as a result of this needed a new kidney or dialysis in order not to die very soon. Do also keep in mind that both microalbuminuria and macroalbuminuria will substantially increase the risk of cardiovascular disease and cardiac death. I recall a study where they looked at the various endpoints and found that more diabetics with microalbuminuria eventually died of cardiovascular disease than did ever develop kidney failure – cardiac risk goes up a lot long before end-stage renal disease. ESRD estimates don’t account for the full risk profile, and even if you look at mortality risk, the number accounts for perhaps less than half of the total risk attributable to DKD. One thing the ESRD diagnosis does have going for it is that it’s a much more reliable variable indicative of significant pathology than is e.g. microalbuminuria (see e.g. this paper). The paper is short and not at all detailed, but they do briefly discuss/mention these issues:

“…there is a substantive difference between the numbers of people with stage 3 CKD (estimated glomerular filtration rate [eGFR] 30–59 mL/min/1.73 m2) versus those with stages 4 and 5 CKD (eGFR <30 mL/min/1.73 m2): 6.7% of the National Health and Nutrition Examination Survey (NHANES) population compared with 0.1–0.3%, respectively (14). This is primarily because of competing risks, such as death from CV disease that occurs in stage 3 CKD; hence, only the survivors are progressing into stages 4 and 5 CKD. Overall, these studies are very encouraging. Since the 1980s, risk of ESRD has been greatly reduced, while risk of CKD progression persists but at a slower rate. This reduced ESRD rate and slowed CKD progression is largely due to improvements in glycemic and blood pressure control and probably also to the institution of RAAS blockers in more advanced CKD. These data portend even better future outcomes if treatment guidance is followed. […] many medications are effective in blood pressure control, but RAAS blockade should always be a part of any regimen when very high albuminuria is present.”
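
To make the eGFR cut-offs in the quote concrete, here is a small helper that classifies a measurement into CKD stages. The stage 3 and stage 4/5 boundaries are the ones given in the quote; the boundaries for the earlier stages are the standard KDIGO ones and are my addition, not something taken from the paper.

```python
def ckd_stage(egfr: float) -> str:
    """Classify kidney function by eGFR (mL/min/1.73 m2), using standard stage boundaries."""
    if egfr >= 90:
        return "stage 1 (normal eGFR; CKD only if other markers of kidney damage are present)"
    if egfr >= 60:
        return "stage 2 (mildly decreased)"
    if egfr >= 30:
        return "stage 3 (eGFR 30-59, the ~6.7% NHANES group mentioned in the quote)"
    if egfr >= 15:
        return "stage 4 (severely decreased)"
    return "stage 5 (kidney failure / the ESRD range)"

for value in (95, 72, 44, 22, 9):
    print(value, "->", ckd_stage(value))
```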

v. New Understanding of β-Cell Heterogeneity and In Situ Islet Function.

“Insulin-secreting β-cells are heterogeneous in their regulation of hormone release. While long known, recent technological advances and new markers have allowed the identification of novel subpopulations, improving our understanding of the molecular basis for heterogeneity. This includes specific subpopulations with distinct functional characteristics, developmental programs, abilities to proliferate in response to metabolic or developmental cues, and resistance to immune-mediated damage. Importantly, these subpopulations change in disease or aging, including in human disease. […] We will discuss recent findings revealing functional β-cell subpopulations in the intact islet, the underlying basis for these identified subpopulations, and how these subpopulations may influence in situ islet function.”

I won’t cover this one in much detail, but this part was interesting:

“Gap junction (GJ) channels electrically couple β-cells within mouse and human islets (25), serving two main functions. First, GJ channels coordinate oscillatory dynamics in electrical activity and Ca2+ under elevated glucose or GLP-1, allowing pulsatile insulin secretion (26,27). Second, GJ channels lower spontaneous elevations in Ca2+ under low glucose levels (28). GJ coupling is also heterogeneous within the islet (29), leading to some β-cells being highly coupled and others showing negligible coupling. Several studies have examined how electrically heterogeneous cells interact via GJ channels […] This series of experiments indicate a “bistability” in islet function, where a threshold number of poorly responsive β-cells is sufficient to totally suppress islet function. Notably, when islets lacking GJ channels are treated with low levels of the KATP activator diazoxide or the GCK inhibitor mannoheptulose, a subpopulation of cells are silenced, presumably corresponding to the less functional population (30). Only diazoxide/mannoheptulose concentrations capable of silencing >40% of these cells will fully suppress Ca2+ elevations in normal islets. […] this indicates that a threshold number of poorly responsive cells can inhibit the whole islet. Thus, if there exists a threshold number of functionally competent β-cells (∼60–85%), then the islet will show coordinated elevations in Ca2+ and insulin secretion.

Below this threshold number, the islet will lack Ca2+ elevation and insulin secretion (Fig. 2). The precise threshold depends on the characteristics of the excitable and inexcitable populations: small numbers of inexcitable cells will increase the number of functionally competent cells required for islet activity, whereas small numbers of highly excitable cells will do the opposite. However, if GJ coupling is lowered, then inexcitable cells will exert a reduced suppression, also decreasing the threshold required. […] Paracrine communication between β-cells and other endocrine cells is also important for regulating insulin secretion. […] Little is known how these paracrine and juxtacrine mechanisms impact heterogeneous cells.”
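
The ‘bistability’ argument (a threshold fraction of poorly responsive cells can silence an otherwise active, coupled islet) can be caricatured with a very crude mean-field toy model. This is purely my own illustrative sketch, not the model used in the studies discussed, and the 0.7 threshold is an arbitrary value picked from inside the ~60–85% range mentioned above.

```python
import random

def islet_active(n_cells: int, frac_excitable: float, threshold: float = 0.7) -> bool:
    """Toy, strongly coupled islet: active only if enough of its cells are excitable.

    With strong gap-junction coupling the cells behave as a single electrical unit,
    so in this caricature the islet fires only when the fraction of excitable cells
    exceeds an (arbitrary) threshold.
    """
    cells = [random.random() < frac_excitable for _ in range(n_cells)]
    return sum(cells) / n_cells >= threshold

random.seed(0)
for frac in (0.5, 0.6, 0.7, 0.8, 0.9):
    active_runs = sum(islet_active(1000, frac) for _ in range(200))
    print(f"excitable fraction {frac:.1f}: islet active in {active_runs}/200 simulated islets")
```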

vi. Closing in on the Mechanisms of Pulsatile Insulin Secretion.

“Insulin secretion from pancreatic islet β-cells occurs in a pulsatile fashion, with a typical period of ∼5 min. The basis of this pulsatility in mouse islets has been investigated for more than four decades, and the various theories have been described as either qualitative or mathematical models. In many cases the models differ in their mechanisms for rhythmogenesis, as well as other less important details. In this Perspective, we describe two main classes of models: those in which oscillations in the intracellular Ca2+ concentration drive oscillations in metabolism, and those in which intrinsic metabolic oscillations drive oscillations in Ca2+ concentration and electrical activity. We then discuss nine canonical experimental findings that provide key insights into the mechanism of islet oscillations and list the models that can account for each finding. Finally, we describe a new model that integrates features from multiple earlier models and is thus called the Integrated Oscillator Model. In this model, intracellular Ca2+ acts on the glycolytic pathway in the generation of oscillations, and it is thus a hybrid of the two main classes of models. It alone among models proposed to date can explain all nine key experimental findings, and it serves as a good starting point for future studies of pulsatile insulin secretion from human islets.”

This one covers material closely related to the study above, so if you find one of these papers interesting you might want to check out the other one as well. The paper is quite technical, but if you were wondering why people are interested in this kind of stuff, one reason is that there’s good evidence at this point that insulin pulsatility is disturbed in type 2 diabetics, and so it’d be nice to know why that is, so that new drugs can be developed to correct this.

April 25, 2018 | Biology, Cardiology, Diabetes, Epidemiology, Health Economics, Medicine, Nephrology, Pharmacology, Studies

Networks

I actually think this was a really nice book, considering the format – I gave it four stars on goodreads. One of the things I noticed people didn’t like about it in the reviews is that it ‘jumps’ a bit in terms of topic coverage; it covers a wide variety of applications and analytical settings. I mostly don’t consider this a weakness of the book – even if occasionally it does get a bit excessive – and I can definitely understand the authors’ choice of approach; it’s sort of hard to illustrate the potential the analytical techniques described within this book have if you’re not allowed to talk about all the areas in which they have been – or could be gainfully – applied. A related point is that many people who read the book might be familiar with the application of these tools in specific contexts but have perhaps not thought about the fact that similar methods are applied in many other areas (and they might all be a bit annoyed that the authors don’t talk more about computer science applications, or foodweb analyses, or infectious disease applications, or perhaps sociometry…). Most of the book is about graph-theory-related stuff, but a very decent amount of the coverage deals with applications, in a broad sense of the word at least, not theory. The discussion of theoretical constructs in the book always felt to me driven to a large degree by their usefulness in specific contexts.

I have covered related topics before here on the blog, also quite recently – e.g. there’s at least some overlap between this book and Holland’s book about complexity theory in the same series (I incidentally think these books probably go well together) – and as I found the book slightly difficult to blog as it was, I decided against covering it in as much detail as I sometimes do when covering these texts – this means that I decided to leave out the links I usually include in posts like these.

Below are some quotes from the book.

“The network approach focuses all the attention on the global structure of the interactions within a system. The detailed properties of each element on its own are simply ignored. Consequently, systems as different as a computer network, an ecosystem, or a social group are all described by the same tool: a graph, that is, a bare architecture of nodes bounded by connections. […] Representing widely different systems with the same tool can only be done by a high level of abstraction. What is lost in the specific description of the details is gained in the form of universality – that is, thinking about very different systems as if they were different realizations of the same theoretical structure. […] This line of reasoning provides many insights. […] The network approach also sheds light on another important feature: the fact that certain systems that grow without external control are still capable of spontaneously developing an internal order. […] Network models are able to describe in a clear and natural way how self-organization arises in many systems. […] In the study of complex, emergent, and self-organized systems (the modern science of complexity), networks are becoming increasingly important as a universal mathematical framework, especially when massive amounts of data are involved. […] networks are crucial instruments to sort out and organize these data, connecting individuals, products, news, etc. to each other. […] While the network approach eliminates many of the individual features of the phenomenon considered, it still maintains some of its specific features. Namely, it does not alter the size of the system — i.e. the number of its elements — or the pattern of interaction — i.e. the specific set of connections between elements. Such a simplified model is nevertheless enough to capture the properties of the system. […] The network approach [lies] somewhere between the description by individual elements and the description by big groups, bridging the two of them. In a certain sense, networks try to explain how a set of isolated elements are transformed, through a pattern of interactions, into groups and communities.”

“[T]he random graph model is very important because it quantifies the properties of a totally random network. Random graphs can be used as a benchmark, or null case, for any real network. This means that a random graph can be used in comparison to a real-world network, to understand how much chance has shaped the latter, and to what extent other criteria have played a role. The simplest recipe for building a random graph is the following. We take all the possible pair of vertices. For each pair, we toss a coin: if the result is heads, we draw a link; otherwise we pass to the next pair, until all the pairs are finished (this means drawing the link with a probability p = ½, but we may use whatever value of p). […] Nowadays [the random graph model] is a benchmark of comparison for all networks, since any deviations from this model suggests the presence of some kind of structure, order, regularity, and non-randomness in many real-world networks.”
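
The coin-tossing recipe above translates almost word for word into code; a minimal pure-Python sketch (no graph library needed):

```python
import random
from itertools import combinations

def random_graph(n_nodes: int, p: float = 0.5) -> list[tuple[int, int]]:
    """Erdos-Renyi recipe: one biased coin toss per possible pair of vertices."""
    edges = []
    for u, v in combinations(range(n_nodes), 2):  # all possible pairs of vertices
        if random.random() < p:                   # 'heads': draw the link
            edges.append((u, v))
    return edges

random.seed(42)
g = random_graph(10, p=0.5)
print(len(g), "edges out of", 10 * 9 // 2, "possible pairs")  # roughly half of the 45 pairs
```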

“…in networks, topology is more important than metrics. […] In the network representation, the connections between the elements of a system are much more important than their specific positions in space and their relative distances. The focus on topology is one of the biggest strengths of the network approach, useful whenever topology is more relevant than metrics. […] In social networks, the relevance of topology means that social structure matters. […] Sociology has classified a broad range of possible links between individuals […]. The tendency to have several kinds of relationships in social networks is called multiplexity. But this phenomenon appears in many other networks: for example, two species can be connected by different strategies of predation, two computers by different cables or wireless connections, etc. We can modify a basic graph to take into account this multiplexity, e.g. by attaching specific tags to edges. […] Graph theory [also] allows us to encode in edges more complicated relationships, as when connections are not reciprocal. […] If a direction is attached to the edges, the resulting structure is a directed graph […] In these networks we have both in-degree and out-degree, measuring the number of inbound and outbound links of a node, respectively. […] in most cases, relations display a broad variation or intensity [i.e. they are not binary/dichotomous]. […] Weighted networks may arise, for example, as a result of different frequencies of interactions between individuals or entities.”
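
Directions, weights, and multiplexity are all easy to encode as edge attributes. A small sketch using the networkx library (my choice of tool, not one the book prescribes), with made-up example relations:

```python
import networkx as nx

# A directed, weighted graph: who eats whom, with an interaction strength on each edge.
g = nx.DiGraph()
g.add_edge("wolf", "sheep", weight=0.9, relation="predation")
g.add_edge("sheep", "grass", weight=0.6, relation="grazing")

# In-degree and out-degree count the inbound and outbound links of each node.
for node in g.nodes:
    print(node, "in:", g.in_degree(node), "out:", g.out_degree(node))

# Multiplexity (several kinds of relationship between the same pair of nodes) can be
# represented with a multigraph, which allows parallel edges carrying different tags.
m = nx.MultiDiGraph()
m.add_edge("alice", "bob", relation="friendship")
m.add_edge("alice", "bob", relation="colleague")
print(m.number_of_edges("alice", "bob"))  # 2 parallel, differently tagged edges
```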

“An organism is […] the outcome of several layered networks and not only the deterministic result of the simple sequence of genes. Genomics has been joined by epigenomics, transcriptomics, proteomics, metabolomics, etc., the disciplines that study these layers, in what is commonly called the omics revolution. Networks are at the heart of this revolution. […] The brain is full of networks where various web-like structures provide the integration between specialized areas. In the cerebellum, neurons form modules that are repeated again and again: the interaction between modules is restricted to neighbours, similarly to what happens in a lattice. In other areas of the brain, we find random connections, with a more or less equal probability of connecting local, intermediate, or distant neurons. Finally, the neocortex — the region involved in many of the higher functions of mammals — combines local structures with more random, long-range connections. […] typically, food chains are not isolated, but interwoven in intricate patterns, where a species belongs to several chains at the same time. For example, a specialized species may predate on only one prey […]. If the prey becomes extinct, the population of the specialized species collapses, giving rise to a set of co-extinctions. An even more complicated case is where an omnivore species predates a certain herbivore, and both eat a certain plant. A decrease in the omnivore’s population does not imply that the plant thrives, because the herbivore would benefit from the decrease and consume even more plants. As more species are taken into account, the population dynamics can become more and more complicated. This is why a more appropriate description than ‘foodchains’ for ecosystems is the term foodwebs […]. These are networks in which nodes are species and links represent relations of predation. Links are usually directed (big fishes eat smaller ones, not the other way round). These networks provide the interchange of food, energy, and matter between species, and thus constitute the circulatory system of the biosphere.”

“In the cell, some groups of chemicals interact only with each other and with nothing else. In ecosystems, certain groups of species establish small foodwebs, without any connection to external species. In social systems, certain human groups may be totally separated from others. However, such disconnected groups, or components, are a strikingly small minority. In all networks, almost all the elements of the systems take part in one large connected structure, called a giant connected component. […] In general, the giant connected component includes not less than 90 to 95 per cent of the system in almost all networks. […] In a directed network, the existence of a path from one node to another does not guarantee that the journey can be made in the opposite direction. Wolves eat sheep, and sheep eat grass, but grass does not eat sheep, nor do sheep eat wolves. This restriction creates a complicated architecture within the giant connected component […] according to an estimate made in 1999, more than 90 per cent of the WWW is composed of pages connected to each other, if the direction of edges is ignored. However, if we take direction into account, the proportion of nodes mutually reachable is only 24 per cent, the giant strongly connected component. […] most networks are sparse, i.e. they tend to be quite frugal in connections. Take, for example, the airport network: the personal experience of every frequent traveller shows that direct flights are not that common, and intermediate stops are necessary to reach several destinations; thousands of airports are active, but each city is connected to less than 20 other cities, on average. The same happens in most networks. A measure of this is given by the mean number of connections of their nodes, that is, their average degree.”
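
The giant (strongly) connected component and the average degree can be computed directly. A small networkx sketch on a sparse random directed graph, purely illustrative (the numbers are nothing like the real WWW):

```python
import networkx as nx

# A sparse random directed graph standing in for a real network (illustrative only).
g = nx.gnp_random_graph(2000, 0.0008, seed=1, directed=True)

# Ignoring edge direction: how many nodes sit in the largest connected component?
weak = max(nx.weakly_connected_components(g), key=len)
# Respecting direction: how many nodes are mutually reachable?
strong = max(nx.strongly_connected_components(g), key=len)

n = g.number_of_nodes()
print(f"largest weakly connected component:   {len(weak) / n:.0%} of nodes")
print(f"largest strongly connected component: {len(strong) / n:.0%} of nodes")

# Sparsity: the average degree stays small even though the giant component is large.
print(f"average (in + out) degree: {sum(d for _, d in g.degree()) / n:.1f}")
```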

“[A] puzzling contradiction — a sparse network can still be very well connected — […] attracted the attention of the Hungarian mathematicians […] Paul Erdős and Alfréd Rényi. They tackled it by producing different realizations of their random graph. In each of them, they changed the density of edges. They started with a very low density: less than one edge per node. It is natural to expect that, as the density increases, more and more nodes will be connected to each other. But what Erdős and Rényi found instead was a quite abrupt transition: several disconnected components coalesced suddenly into a large one, encompassing almost all the nodes. The sudden change happened at one specific critical density: when the average number of links per node (i.e. the average degree) was greater than one, then the giant connected component suddenly appeared. This result implies that networks display a very special kind of economy, intrinsic to their disordered structure: a small number of edges, even randomly distributed between nodes, is enough to generate a large structure that absorbs almost all the elements. […] Social systems seem to be very tightly connected: in a large enough group of strangers, it is not unlikely to find pairs of people with quite short chains of relations connecting them. […] The small-world property consists of the fact that the average distance between any two nodes (measured as the shortest path that connects them) is very small. Given a node in a network […], few nodes are very close to it […] and few are far from it […]: the majority are at the average — and very short — distance. This holds for all networks: starting from one specific node, almost all the nodes are at very few steps from it; the number of nodes within a certain distance increases exponentially fast with the distance. Another way of explaining the same phenomenon […] is the following: even if we add many nodes to a network, the average distance will not increase much; one has to increase the size of a network by several orders of magnitude to notice that the paths to new nodes are (just a little) longer. The small-world property is crucial to many network phenomena. […] The small-world property is something intrinsic to networks. Even the completely random Erdős-Renyi graphs show this feature. By contrast, regular grids do not display it. If the Internet was a chessboard-like lattice, the average distance between two routers would be of the order of 1,000 jumps, and the Net would be much slower [the authors note elsewhere that “The Internet is composed of hundreds of thousands of routers, but just about ten ‘jumps’ are enough to bring an information packet from one of them to any other.”] […] The key ingredient that transforms a structure of connections into a small world is the presence of a little disorder. No real network is an ordered array of elements. On the contrary, there are always connections ‘out of place’. It is precisely thanks to these connections that networks are small worlds. […] Shortcuts are responsible for the small-world property in many […] situations.”
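
The transition described above is easy to reproduce numerically: generate Erdős–Rényi graphs at several densities and watch the largest component jump once the average degree passes one, then check how short the average path is in a somewhat denser graph. A minimal sketch with illustrative sizes, assuming networkx is available:

```python
# Minimal sketch (assumes networkx): the giant component appears once the
# average degree exceeds one, and average distances stay small (small world).
import networkx as nx

n = 5_000
for avg_degree in (0.5, 0.9, 1.1, 2.0, 5.0):
    G = nx.erdos_renyi_graph(n, avg_degree / n, seed=42)
    giant = max(nx.connected_components(G), key=len)
    print(f"average degree {avg_degree}: giant component holds {len(giant) / n:.1%} of nodes")

# Average shortest path in the giant component of a denser random graph:
G = nx.erdos_renyi_graph(2_000, 10 / 2_000, seed=42)
core = G.subgraph(max(nx.connected_components(G), key=len))
print("average distance:", round(nx.average_shortest_path_length(core), 2))
```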

“Body size, IQ, road speed, and other magnitudes have a characteristic scale: that is, an average value that in the large majority of cases is a rough predictor of the actual value that one will find. […] While height is a homogeneous magnitude, the number of social connection[s] is a heterogeneous one. […] A system with this feature is said to be scale-free or scale-invariant, in the sense that it does not have a characteristic scale. This can be rephrased by saying that the individual fluctuations with respect to the average are too large for us to make a correct prediction. […] In general, a network with heterogeneous connectivity has a set of clear hubs. When a graph is small, it is easy to find whether its connectivity is homogeneous or heterogeneous […]. In the first case, all the nodes have more or less the same connectivity, while in the latter it is easy to spot a few hubs. But when the network to be studied is very big […] things are not so easy. […] the distribution of the connectivity of the nodes of the […] network […] is the degree distribution of the graph. […] In homogeneous networks, the degree distribution is a bell curve […] while in heterogeneous networks, it is a power law […]. The power law implies that there are many more hubs (and much more connected) in heterogeneous networks than in homogeneous ones. Moreover, hubs are not isolated exceptions: there is a full hierarchy of nodes, each of them being a hub compared with the less connected ones.”
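
The homogeneous/heterogeneous distinction can be made concrete by comparing the degree distributions of two synthetic graphs of the same size and average degree: a random graph (bell-shaped distribution, no hubs) and a preferential-attachment graph (fat-tailed distribution, clear hubs). A minimal sketch assuming networkx; the graphs are stand-ins, not real data sets:

```python
# Minimal sketch (assumes networkx): homogeneous vs. heterogeneous degree
# distributions at the same average degree (roughly 6 links per node).
import networkx as nx

n = 10_000
graphs = {
    "homogeneous (random)": nx.erdos_renyi_graph(n, 6 / n, seed=0),
    "heterogeneous (scale-free-like)": nx.barabasi_albert_graph(n, 3, seed=0),
}
for name, G in graphs.items():
    degrees = sorted((d for _, d in G.degree()), reverse=True)
    print(f"{name}: mean degree = {sum(degrees) / n:.1f}, "
          f"top five degrees = {degrees[:5]}")   # hubs show up only in the second case
```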

“Looking at the degree distribution is the best way to check if a network is heterogeneous or not: if the distribution is fat tailed, then the network will have hubs and heterogeneity. A mathematically perfect power law is never found, because this would imply the existence of hubs with an infinite number of connections. […] Nonetheless, a strongly skewed, fat-tailed distribution is a clear signal of heterogeneity, even if it is never a perfect power law. […] While the small-world property is something intrinsic to networked structures, hubs are not present in all kinds of networks. For example, power grids usually have very few of them. […] hubs are not present in random networks. A consequence of this is that, while random networks are small worlds, heterogeneous ones are ultra-small worlds. That is, the distance between their vertices is relatively smaller than in their random counterparts. […] Heterogeneity is not equivalent to randomness. On the contrary, it can be the signature of a hidden order, not imposed by a top-down project, but generated by the elements of the system. The presence of this feature in widely different networks suggests that some common underlying mechanism may be at work in many of them. […] the Barabási–Albert model gives an important take-home message. A simple, local behaviour, iterated through many interactions, can give rise to complex structures. This arises without any overall blueprint”.
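
The take-home message about a simple local rule generating hubs can be illustrated with a few lines of plain Python in the spirit of preferential attachment: each new node links to existing nodes with probability proportional to their current degree. This is only a rough sketch with illustrative parameters, not the exact Barabási–Albert construction:

```python
# Rough sketch of preferential attachment: new nodes attach preferentially to
# well-connected nodes, and hubs emerge without any overall blueprint.
import random

random.seed(1)
targets = [0, 1]                 # one entry per edge end; sampling from it is degree-proportional
edges = [(0, 1)]
for new_node in range(2, 10_000):
    chosen = {random.choice(targets) for _ in range(3)}   # roughly three links per new node
    for old_node in chosen:
        edges.append((new_node, old_node))
        targets.extend([new_node, old_node])

degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
ranked = sorted(degree.values(), reverse=True)
print("five largest degrees:", ranked[:5])       # a handful of clear hubs
print("median degree:", ranked[len(ranked) // 2])
```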

“Homogamy, the tendency of like to marry like, is very strong […] Homogamy is a specific instance of homophily: this consists of a general trend of like to link to like, and is a powerful force in shaping social networks […] assortative mixing [is] a special form of homophily, in which nodes tend to connect with others that are similar to them in the number of connections. By contrast [when] high- and low-degree nodes are more connected to each other [it] is called disassortative mixing. Both cases display a form of correlation in the degrees of neighbouring nodes. When the degrees of neighbours are positively correlated, then the mixing is assortative; when negatively, it is disassortative. […] In random graphs, the neighbours of a given node are chosen completely at random: as a result, there is no clear correlation between the degrees of neighbouring nodes […]. On the contrary, correlations are present in most real-world networks. Although there is no general rule, most natural and technological networks tend to be disassortative, while social networks tend to be assortative. […] Degree assortativity and disassortativity are just an example of the broad range of possible correlations that bias how nodes tie to each other.”
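
Degree correlations of this kind can be summarized in a single number, the degree assortativity coefficient: positive for assortative mixing, negative for disassortative mixing, close to zero for a random graph. A minimal sketch on synthetic graphs, assuming networkx; real social or technological data would be needed to reproduce the patterns mentioned in the quote:

```python
# Minimal sketch (assumes networkx): degree assortativity of two synthetic graphs.
import networkx as nx

random_graph = nx.erdos_renyi_graph(2_000, 6 / 2_000, seed=3)
scale_free = nx.barabasi_albert_graph(2_000, 3, seed=3)

print("random graph:", round(nx.degree_assortativity_coefficient(random_graph), 3))
print("scale-free graph:", round(nx.degree_assortativity_coefficient(scale_free), 3))
```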

“[N]etworks (neither ordered lattices nor random graphs), can have both large clustering and small average distance at the same time. […] in almost all networks, the clustering of a node depends on the degree of that node. Often, the larger the degree, the smaller the clustering coefficient. Small-degree nodes tend to belong to well-interconnected local communities. Similarly, hubs connect with many nodes that are not directly interconnected. […] Central nodes usually act as bridges or bottlenecks […]. For this reason, centrality is an estimate of the load handled by a node of a network, assuming that most of the traffic passes through the shortest paths (this is not always the case, but it is a good approximation). For the same reason, damaging central nodes […] can impair radically the flow of a network. Depending on the process one wants to study, other definitions of centrality can be introduced. For example, closeness centrality computes the distance of a node to all others, and reach centrality factors in the portion of all nodes that can be reached in one step, two steps, three steps, and so on.”
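
Clustering and centrality are both standard graph measures, so a small synthetic small-world graph (a ring lattice with a few rewired shortcuts) is enough to show them side by side. A minimal sketch with illustrative parameters, assuming networkx:

```python
# Minimal sketch (assumes networkx): clustering plus two centrality measures on
# a Watts-Strogatz small-world graph (a lattice with a few random shortcuts).
import networkx as nx

G = nx.connected_watts_strogatz_graph(n=1_000, k=6, p=0.05, seed=7)
print("average clustering:", round(nx.average_clustering(G), 3))
print("average distance:", round(nx.average_shortest_path_length(G), 2))

betweenness = nx.betweenness_centrality(G)   # load carried if traffic follows shortest paths
closeness = nx.closeness_centrality(G)       # based on the distance of a node to all others
hub = max(betweenness, key=betweenness.get)
print("highest-betweenness node:", hub,
      "betweenness =", round(betweenness[hub], 4),
      "closeness =", round(closeness[hub], 3))
```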

“Domino effects are not uncommon in foodwebs. Networks in general provide the backdrop for large-scale, sudden, and surprising dynamics. […] most of the real-world networks show a double-edged kind of robustness. They are able to function normally even when a large fraction of the network is damaged, but suddenly certain small failures, or targeted attacks, bring them down completely. […] networks are very different from engineered systems. In an airplane, damaging one element is enough to stop the whole machine. In order to make it more resilient, we have to use strategies such as duplicating certain pieces of the plane: this makes it almost 100 per cent safe. In contrast, networks, which are mostly not blueprinted, display a natural resilience to a broad range of errors, but when certain elements fail, they collapse. […] A random graph of the size of most real-world networks is destroyed after the removal of half of the nodes. On the other hand, when the same procedure is performed on a heterogeneous network (either a map of a real network or a scale-free model of a similar size), the giant connected component resists even after removing more than 80 per cent of the nodes, and the distance within it is practically the same as at the beginning. The scene is different when researchers simulate a targeted attack […] In this situation the collapse happens much faster […]. However, now the most vulnerable is the second [the heterogeneous network]: while in the homogeneous network it is necessary to remove about one-fifth of its more connected nodes to destroy it, in the heterogeneous one this happens after removing the first few hubs. Highly connected nodes seem to play a crucial role, in both errors and attacks. […] hubs are mainly responsible for the overall cohesion of the graph, and removing a few of them is enough to destroy it.”
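
The error-versus-attack asymmetry can be reproduced by deleting nodes from a hub-rich synthetic network either at random or in order of decreasing degree and tracking the giant component. A minimal simulation sketch with illustrative sizes, assuming networkx; the 50 and 80 per cent figures in the quote refer to the book's own examples, not to this toy run:

```python
# Minimal sketch (assumes networkx): random failures vs. a targeted attack on
# hubs in a heterogeneous (hub-rich) network.
import random
import networkx as nx

N = 10_000
G = nx.barabasi_albert_graph(N, 2, seed=0)

def giant_fraction(H):
    """Share of the original N nodes held together in the largest component."""
    if H.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(H), key=len)) / N

removal_orders = {
    "random failures": random.Random(0).sample(list(G.nodes()), N),
    "targeted attack on hubs": sorted(G.nodes(), key=G.degree, reverse=True),
}
for scenario, order in removal_orders.items():
    for fraction in (0.05, 0.2, 0.5):
        damaged = G.copy()
        damaged.remove_nodes_from(order[: int(fraction * N)])
        print(f"{scenario}, {fraction:.0%} removed: giant component = {giant_fraction(damaged):.1%}")
```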

“Studies of errors and attacks have shown that hubs keep different parts of a network connected. This implies that they also act as bridges for spreading diseases. Their numerous ties put them in contact with both infected and healthy individuals: so hubs become easily infected, and they infect other nodes easily. […] The vulnerability of heterogeneous networks to epidemics is bad news, but understanding it can provide good ideas for containing diseases. […] if we can immunize just a fraction, it is not a good idea to choose people at random. Most of the time, choosing at random implies selecting individuals with a relatively low number of connections. Even if they block the disease from spreading in their surroundings, hubs will always be there to put it back into circulation. A much better strategy would be to target hubs. Immunizing hubs is like deleting them from the network, and the studies on targeted attacks show that eliminating a small fraction of hubs fragments the network: thus, the disease will be confined to a few isolated components. […] in the epidemic spread of sexually transmitted diseases the timing of the links is crucial. Establishing an unprotected link with a person before they establish an unprotected link with another person who is infected is not the same as doing so afterwards.”
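
The same logic can be turned into a crude immunization experiment: protect a small fraction of nodes either at random or by targeting the highest-degree nodes, then let a simple contagion run and compare how far it spreads. This is a toy susceptible-infected sketch with made-up transmission parameters, assuming networkx, and is not a realistic epidemiological model:

```python
# Toy sketch (assumes networkx): hub immunization vs. random immunization in a
# simple susceptible-infected spreading process on a hub-rich network.
import random
import networkx as nx

def outbreak_size(G, immune, start, p_transmit=0.2, steps=30):
    rng = random.Random(1)
    infected = {start} - immune
    for _ in range(steps):
        newly = {nb for node in infected for nb in G.neighbors(node)
                 if nb not in immune and nb not in infected and rng.random() < p_transmit}
        if not newly:
            break
        infected |= newly
    return len(infected)

G = nx.barabasi_albert_graph(5_000, 3, seed=2)
hubs = set(sorted(G.nodes(), key=G.degree, reverse=True)[:250])        # top 5% by degree
at_random = set(random.Random(2).sample(list(G.nodes()), 250))         # 5% chosen at random
start = next(n for n in G.nodes() if n not in hubs and n not in at_random)

print("no immunization:     ", outbreak_size(G, set(), start))
print("random immunization: ", outbreak_size(G, at_random, start))
print("hub immunization:    ", outbreak_size(G, hubs, start))
```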

April 3, 2018 Posted by | Biology, Books, Ecology, Engineering, Epidemiology, Genetics, Mathematics, Statistics |

Marine Biology (II)

Below some observations and links related to the second half of the book’s coverage:

“[C]oral reefs occupy a very small proportion of the planet’s surface – about 284,000 square kilometres – roughly equivalent to the size of Italy [yet they] are home to an incredible diversity of marine organisms – about a quarter of all marine species […]. Coral reef systems provide food for hundreds of millions of people, with about 10 per cent of all fish consumed globally caught on coral reefs. […] Reef-building corals thrive best at sea temperatures above about 23°C and few exist where sea temperatures fall below 18°C for significant periods of time. Thus coral reefs are absent at tropical latitudes where upwelling of cold seawater occurs, such as the west coasts of South America and Africa. […] they are generally restricted to areas of clear water less than about 50 metres deep. Reef-building corals are very intolerant of any freshening of seawater […] and so do not occur in areas exposed to intermittent influxes of freshwater, such as near the mouths of rivers, or in areas where there are high amounts of rainfall run-off. This is why coral reefs are absent along much of the tropical Atlantic coast of South America, which is exposed to freshwater discharge from the Amazon and Orinoco Rivers. Finally, reef-building corals flourish best in areas with moderate to high wave action, which keeps the seawater well aerated […]. Spectacular and productive coral reef systems have developed in those parts of the Global Ocean where this special combination of physical conditions converges […] Each colony consists of thousands of individual animals called polyps […] all reef-building corals have entered into an intimate relationship with plant cells. The tissues lining the inside of the tentacles and stomach cavity of the polyps are packed with photosynthetic cells called zooxanthellae, which are photosynthetic dinoflagellates […] Depending on the species, corals receive anything from about 50 per cent to 95 per cent of their food from their zooxanthellae. […] Healthy coral reefs are very productive marine systems. This is in stark contrast to the nutrient-poor and unproductive tropical waters adjacent to reefs. Coral reefs are, in general, roughly one hundred times more productive than the surrounding environment”.

“Overfishing constitutes a significant threat to coral reefs at this time. About an eighth of the world’s population – roughly 875 million people – live within 100 kilometres of a coral reef. Most of the people live in developing countries and island nations and depend greatly on fish obtained from coral reefs as a food source. […] Some of the fishing practices are very harmful. Once the large fish are removed from a coral reef, it becomes increasingly more difficult to make a living harvesting the more elusive and lower-value smaller fish that remain. Fishers thus resort to more destructive techniques such as dynamiting parts of the reef and scooping up the dead and stunned fish that float to the surface. People capturing fish for the tropical aquarium trade will often poison parts of the reef with sodium cyanide which paralyses the fish, making them easier to catch. An unfortunate side effect of this practice is that the poison kills corals. […] Coral reefs have only been seriously studied since the 1970s, which in most cases was well after human impacts had commenced. This makes it difficult to define what might actually constitute a ‘natural’ and healthy coral reef system, as would have existed prior to extensive human impacts.”

“Mangrove is a collective term applied to a diverse group of trees and shrubs that colonize protected muddy intertidal areas in tropical and subtropical regions, creating mangrove forests […] Mangroves are of great importance from a human perspective. The sheltered waters of a mangrove forest provide important nursery areas for juvenile fish, crabs, and shrimp. Many commercial fisheries depend on the existence of healthy mangrove forests, including blue crab, shrimp, spiny lobster, and mullet fisheries. Mangrove forests also stabilize the foreshore and protect the adjacent land from erosion, particularly from the effects of large storms and tsunamis. They also act as biological filters by removing excess nutrients and trapping sediment from land run-off before it enters the coastal environment, thereby protecting other habitats such as seagrass meadows and coral reefs. […] [However] mangrove forests are disappearing rapidly. In a twenty-year period between 1980 and 2000 the area of mangrove forest globally declined from around 20 million hectares to below 15 million hectares. In some specific regions the rate of mangrove loss is truly alarming. For example, Puerto Rico lost about 89 per cent of its mangrove forests between 1930 and 1985, while the southern part of India lost about 96 per cent of its mangroves between 1911 and 1989.”

“[A]bout 80 per cent of the entire volume of the Global Ocean, or roughly one billion cubic kilometres, consists of seawater with depths greater than 1,000 metres […] The deep ocean is a permanently dark environment devoid of sunlight, the last remnants of which cannot penetrate much beyond 200 metres in most parts of the Global Ocean, and no further than 800 metres or so in even the clearest oceanic waters. The only light present in the deep ocean is of biological origin […] Except in a few very isolated places, the deep ocean is a permanently cold environment, with sea temperatures ranging from about 2° to 4°C. […] Since there is no sunlight, there is no plant life, and thus no primary production of organic matter by photosynthesis. The base of the food chain in the deep ocean consists mostly of a ‘rain’ of small particles of organic material sinking down through the water column from the sunlit surface waters of the ocean. This reasonably constant rain of organic material is supplemented by the bodies of large fish and marine mammals that sink more rapidly to the bottom following death, and which provide sporadic feasts for deep-ocean bottom dwellers. […] Since food is a scarce commodity for deep-ocean fish, full advantage must be taken of every meal encountered. This has resulted in a number of interesting adaptations. Compared to fish in the shallow ocean, many deep-ocean fish have very large mouths capable of opening very wide, and often equipped with numerous long, sharp, inward-pointing teeth. […] These fish can capture and swallow whole prey larger than themselves so as not to pass up a rare meal simply because of its size. These fish also have greatly extensible stomachs to accommodate such meals.”

“In the pelagic environment of the deep ocean, animals must be able to keep themselves within an appropriate depth range without using up energy in their food-poor habitat. This is often achieved by reducing the overall density of the animal to that of seawater so that it is neutrally buoyant. Thus the tissues and bones of deep-sea fish are often rather soft and watery. […] There is evidence that deep-ocean organisms have developed biochemical adaptations to maintain the functionality of their cell membranes under pressure, including adjusting the kinds of lipid molecules present in membranes to retain membrane fluidity under high pressure. High pressures also affect protein molecules, often preventing them from folding up into the correct shapes for them to function as efficient metabolic enzymes. There is evidence that deep-ocean animals have evolved pressure-resistant variants of common enzymes that mitigate this problem. […] The pattern of species diversity of the deep-ocean benthos appears to differ from that of other marine communities, which are typically dominated by a small number of abundant and highly visible species which overshadow the presence of a large number of rarer and less obvious species which are also present. In the deep-ocean benthic community, in contrast, no one group of species tends to dominate, and the community consists of a high number of different species all occurring in low abundance. […] In general, species diversity increases with the size of a habitat – the larger the area of a habitat, the more species that have developed ways to successfully live in that habitat. Since the deep-ocean bottom is the largest single habitat on the planet, it follows that species diversity would be expected to be high.”

“Seamounts represent a special kind of biological hotspot in the deep ocean. […] In contrast to the surrounding flat, soft-bottomed abyssal plains, seamounts provide a complex rocky platform that supports an abundance of organisms that are distinct from the surrounding deep-ocean benthos. […] Seamounts support a great diversity of fish species […] This [has] triggered the creation of new deep-ocean fisheries focused on seamounts. […] [However these species are generally] very slow-growing and long-lived and mature at a late age, and thus have a low reproductive potential. […] Seamount fisheries have often been described as mining operations rather than sustainable fisheries. They typically collapse within a few years of the start of fishing and the trawlers then move on to other unexplored seamounts to maintain the fishery. The recovery of localized fisheries will inevitably be very slow, if achievable at all, because of the low reproductive potential of these deep-ocean fish species. […] Comparisons of ‘fished’ and ‘unfished’ seamounts have clearly shown the extent of habitat damage and loss of species diversity brought about by trawl fishing, with the dense coral habitats reduced to rubble over much of the area investigated. […] Unfortunately, most seamounts exist in areas beyond national jurisdiction, which makes it very difficult to regulate fishing activities on them, although some efforts are underway to establish international treaties to better manage and protect seamount ecosystems.”

“Hydrothermal vents are unstable and ephemeral features of the deep ocean. […] The lifespan of a typical vent is likely in the order of tens of years. Thus the rich communities surrounding vents have a very limited lifespan. Since many vent animals can live only near vents, and the distance between vent systems can be hundreds to thousands of kilometres, it is a puzzle as to how vent animals escape a dying vent and colonize other distant vents or newly created vents. […] Hydrothermal vents are [however] not the only source of chemical-laden fluids supporting unique chemosynthetic-based communities in the deep ocean. Hydrogen sulphide and methane also ooze from the ocean bottom at some locations at temperatures similar to the surrounding seawater. These so-called ‘cold seeps’ are often found along continental margins […] The communities associated with cold seeps are similar to hydrothermal vent communities […] Cold seeps appear to be more permanent sources of fluid compared to the ephemeral nature of hot water vents.”

“Seepage of crude oil into the marine environment occurs naturally from oil-containing geological formations below the seabed. It is estimated that around 600,000 tonnes of crude oil seeps into the marine environment each year, which represents almost half of all the crude oil entering the oceans. […] The human activities associated with exploring for and producing oil result in the release on average of an estimated 38,000 tonnes of crude oil into the oceans each year, which is about 6 per cent of the total anthropogenic input of oil into the oceans worldwide. Although small in comparison to natural seepage, crude oil pollution from this source can cause serious damage to coastal ecosystems because it is released near the coast and sometimes in very large, concentrated amounts. […] The transport of oil and oil products around the globe in tankers results in the release of about 150,000 tonnes of oil worldwide each year on average, or about 22 per cent of the total anthropogenic input. […] About 480,000 tonnes of oil make their way into the marine environment each year worldwide from leakage associated with the consumption of oil-derived products in cars and trucks, and to a lesser extent in boats. Oil lost from the operation of cars and trucks collects on paved urban areas from where it is washed off into streams and rivers, and from there into the oceans. Surprisingly, this represents the most significant source of human-derived oil pollution into the marine environment – about 72 per cent of the total. Because it is a very diffuse source of pollution, it is the most difficult to control.”

“Today it has been estimated that virtually all of the marine food resources in the Mediterranean sea have been reduced to less than 50 per cent of their original abundance […] The greatest impact has been on the larger predatory fish, which were the first to be targeted by fishers. […] It is estimated that, collectively, the European fish stocks of today are just one-tenth of their size in 1900. […] In 1950 the total global catch of marine seafood was just less than twenty million tonnes fresh weight. This increased steadily and rapidly until by the late 1980s more than eighty million tonnes were being taken each year […] Starting in the early 1990s, however, yields began to show signs of levelling off. […] By far the most heavily exploited marine fishery in the world is the Peruvian anchoveta (Engraulis ringens) fishery, which can account for 10 per cent or more of the global marine catch of seafood in any particular year. […] The anchoveta is a very oily fish, which makes it less desirable for direct consumption by humans. However, the high oil content makes it ideal for the production of fish meal and fish oil […] the demand for fish meal and fish oil is huge and about a third of the entire global catch of fish is converted into these products rather than consumed directly by humans. Feeding so much fish protein to livestock comes with a considerable loss of potential food energy (around 25 per cent) compared to if it was eaten directly by humans. This could be viewed as a potential waste of available energy for a rapidly growing human population […] around 90 per cent of the fish used to produce fish meal and oil is presently unpalatable to most people and thus unmarketable in large quantities as a human food”.

“On heavily fished areas of the continental shelves, the same parts of the sea floor can be repeatedly trawled many times per year. Such intensive bottom trawling causes great cumulative damage to seabed habitats. The trawls scrape and pulverize rich and complex bottom habitats built up over centuries by living organisms such as tube worms, cold-water corals, and oysters. These habitats are eventually reduced to uniform stretches of rubble and sand. For all intents and purposes these areas are permanently altered and become occupied by a much changed and much less rich community adapted to frequent disturbance.”

“The eighty million tonnes or so of marine seafood caught each year globally equates to about eleven kilograms of wild-caught marine seafood per person on the planet. […] What is perfectly clear […] on the basis of theory backed up by real data on marine fish catches, is that marine fisheries are now fully exploited and that there is little if any headroom for increasing the amount of wild-caught fish humans can extract from the oceans to feed a burgeoning human population. […] This conclusion is solidly supported by the increasingly precarious state of global marine fishery resources. The most recent information from the Food and Agriculture Organization of the United Nations (The State of World Fisheries and Aquaculture 2010) shows that over half (53 per cent) of all fish stocks are fully exploited – their current catches are at or close to their maximum sustainable levels of production and there is no scope for further expansion. Another 32 per cent are overexploited and in decline. Of the remaining 15 per cent of stocks, 12 per cent are considered moderately exploited and only 3 per cent underexploited. […] in the mid 1970s 40 per cent of all fish stocks were in [the moderately exploited or unexploited] category as opposed to around 15 per cent now. […] the real question is not so much whether we can get more fish from the sea but whether we can sustain the amount of fish we are harvesting at present”.

Links:

Scleractinia.
Atoll. Fringing reef. Barrier reef.
Corallivore.
Broadcast spawning.
Acanthaster planci.
Coral bleaching. Ocean acidification.
Avicennia germinans. Pneumatophores. Lenticel.
Photophore. Lanternfish. Anglerfish. Black swallower.
Deep scattering layer. Taylor column.
Hydrothermal vent. Black smokers and white smokers. Chemosynthesis. Siboglinidae.
Intertidal zone. Tides. Tidal range.
Barnacle. Mussel.
Clupeidae. Gadidae. Scombridae.

March 16, 2018 Posted by | Biology, Books, Chemistry, Ecology, Evolutionary biology, Geology |

Marine Biology (I)

This book was ‘okay’.

Some quotes and links related to the first half of the book below.

Quotes:

“The Global Ocean has come to be divided into five regional oceans – the Pacific, Atlantic, Indian, Arctic, and Southern Oceans […] These oceans are large, seawater-filled basins that share characteristic structural features […] The edge of each basin consists of a shallow, gently sloping extension of the adjacent continental land mass and is termed the continental shelf or continental margin. Continental shelves typically extend off-shore to depths of a couple of hundred metres and vary from several kilometres to hundreds of kilometres in width. […] At the outer edge of the continental shelf, the seafloor drops off abruptly and steeply to form the continental slope, which extends down to depths of 2–3 kilometres. The continental slope then flattens out and gives way to a vast expanse of flat, soft, ocean bottom — the abyssal plain — which extends over depths of about 3–5 kilometres and accounts for about 76 per cent of the Global Ocean floor. The abyssal plains are transected by extensive mid-ocean ridges — underwater mountain chains […]. Mid-ocean ridges form a continuous chain of mountains that extend linearly for 65,000 kilometres across the floor of the Global Ocean basins […]. In some places along the edges of the abyssal plains the ocean bottom is cut by narrow, oceanic trenches or canyons which plunge to extraordinary depths — 3–4 kilometres below the surrounding seafloor — and are thousands of kilometres long but only tens of kilometres wide. […] Seamounts are another distinctive and dramatic feature of ocean basins. Seamounts are typically extinct volcanoes that rise 1,000 or more metres above the surrounding ocean but do not reach the surface of the ocean. […] Seamounts generally occur in chains or clusters in association with mid-ocean ridges […] The Global Ocean contains an estimated 100,000 or so seamounts that rise more than 1,000 metres above the surrounding deep-ocean floor. […] on a planetary scale, the surface of the Global Ocean is moving in a series of enormous, roughly circular, wind-driven current systems, or gyres […] These gyres transport enormous volumes of water and heat energy from one part of an ocean basin to another”.

“We now know that the oceans are literally teeming with life. Viruses […] are astoundingly abundant – there are around ten million viruses per millilitre of seawater. Bacteria and other microorganisms occur at concentrations of around 1 million per millilitre”

“The water in the oceans is in the form of seawater, a dilute brew of dissolved ions, or salts […] Chloride and sodium ions are the predominant salts in seawater, along with smaller amounts of other ions such as sulphate, magnesium, calcium, and potassium […] The total amount of dissolved salts in seawater is termed its salinity. Seawater typically has a salinity of roughly 35 – equivalent to about 35 grams of salts in one kilogram of seawater. […] Most marine organisms are exposed to seawater that, compared to the temperature extremes characteristic of terrestrial environments, ranges within a reasonably moderate range. Surface waters in tropical parts of ocean basins are consistently warm throughout the year, ranging from about 20–27°C […]. On the other hand, surface seawater in polar parts of ocean basins can get as cold as −1.9°C. Sea temperatures typically decrease with depth, but not in a uniform fashion. A distinct zone of rapid temperature transition is often present that separates warm seawater at the surface from cooler deeper seawater. This zone is called the thermocline layer […]. In tropical ocean waters the thermocline layer is a strong, well-defined and permanent feature. It may start at around 100 metres and be a hundred or so metres thick. Sea temperatures above the thermocline can be a tropical 25°C or more, but only 6–7°C just below the thermocline. From there the temperature drops very gradually with increasing depth. Thermoclines in temperate ocean regions are a more seasonal phenomenon, becoming well established in the summer as the sun heats up the surface waters, and then breaking down in the autumn and winter. Thermoclines are generally absent in the polar regions of the Global Ocean. […] As a rule of thumb, in the clearest ocean waters some light will penetrate to depths of 150-200 metres, with red light being absorbed within the first few metres and green and blue light penetrating the deepest. At certain times of the year in temperate coastal seas light may penetrate only a few tens of metres […] In the oceans, pressure increases by an additional atmosphere every 10 metres […] Thus, an organism living at a depth of 100 metres on the continental shelf experiences a pressure ten times greater than an organism living at sea level; a creature living at 5 kilometres depth on an abyssal plain experiences pressures some 500 times greater than at the surface”.
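
The pressure rule of thumb in the quote is easy to spell out as a worked example (one extra atmosphere per ten metres of depth, on top of the single atmosphere at the surface):

```python
# Worked example of the rule of thumb quoted above: roughly one additional
# atmosphere of pressure for every 10 metres of depth.
for depth_m in (10, 100, 1_000, 5_000):
    added_atm = depth_m / 10
    print(f"{depth_m:>5} m depth: about {added_atm:,.0f} atm above surface pressure")
```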

“With very few exceptions, dissolved oxygen is reasonably abundant throughout all parts of the Global Ocean. However, the amount of oxygen in seawater is much less than in air — seawater at 20°C contains about 5.4 millilitres of oxygen per litre of seawater, whereas air at this temperature contains about 210 millilitres of oxygen per litre. The colder the seawater, the more oxygen it contains […]. Oxygen is not distributed evenly with depth in the oceans. Oxygen levels are typically high in a thin surface layer 10–20 metres deep. Here oxygen from the atmosphere can freely diffuse into the seawater […] Oxygen concentration then decreases rapidly with depth and reaches very low levels, sometimes close to zero, at depths of around 200–1,000 metres. This region is referred to as the oxygen minimum zone […] This zone is created by the low rates of replenishment of oxygen diffusing down from the surface layer of the ocean, combined with the high rates of depletion of oxygen by decaying particulate organic matter that sinks from the surface and accumulates at these depths. Beneath the oxygen minimum zone, oxygen content increases again with depth such that the deep oceans contain quite high levels of oxygen, though not generally as high as in the surface layer. […] In contrast to oxygen, carbon dioxide (CO₂) dissolves readily in seawater. Some of it is then converted into carbonic acid (H₂CO₃), bicarbonate ion (HCO₃⁻), and carbonate ion (CO₃²⁻), with all four compounds existing in equilibrium with one another […] The pH of seawater is inversely proportional to the amount of carbon dioxide dissolved in it. […] the warmer the seawater, the less carbon dioxide it can absorb. […] Seawater is naturally slightly alkaline, with a pH ranging from about 7.5 to 8.5, and marine organisms have become well adapted to life within this stable pH range. […] In the oceans, carbon is never a limiting factor to marine plant photosynthesis and growth, as it is for terrestrial plants.”

“Since the beginning of the industrial revolution, the average pH of the Global Ocean has dropped by about 0.1 pH unit, making it 30 per cent more acidic than in pre-industrial times. […] As a result, more and more parts of the oceans are falling below a pH of 7.5 for longer periods of time. This trend, termed ocean acidification, is having profound impacts on marine organisms and the overall functioning of the marine ecosystem. For example, many types of marine organisms such as corals, clams, oysters, sea urchins, and starfish manufacture external shells or internal skeletons containing calcium carbonate. When the pH of seawater drops below about 7.5, calcium carbonate starts to dissolve, and thus the shells and skeletons of these organisms begin to erode and weaken, with obvious impacts on the health of the animal. Also, these organisms produce their calcium carbonate structures by combining calcium dissolved in seawater with carbonate ion. As the pH decreases, more of the carbonate ions in seawater become bound up with the increasing numbers of hydrogen ions, making fewer carbonate ions available to the organisms for shell-forming purposes. It thus becomes more difficult for these organisms to secrete their calcium carbonate structures and grow.”
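
The ‘0.1 pH unit, about 30 per cent more acidic’ statement follows from the definition of pH as the negative base-10 logarithm of the hydrogen ion concentration; a one-line calculation shows the size of the effect (the exact figure comes out nearer 26 per cent, with roughly 30 per cent being the commonly quoted round number):

```python
# Worked example: a drop of 0.1 pH unit multiplies the hydrogen ion
# concentration by 10**0.1.
factor = 10 ** 0.1
print(f"[H+] increase: {factor:.3f}x, i.e. about {100 * (factor - 1):.0f}% more acidic")
```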

“Roughly half of the planet’s primary production — the synthesis of organic compounds by chlorophyll-bearing organisms using energy from the sun — is produced within the Global Ocean. On land the primary producers are large, obvious, and comparatively long-lived — the trees, shrubs, and grasses characteristic of the terrestrial landscape. The situation is quite different in the oceans where, for the most part, the primary producers are minute, short-lived microorganisms suspended in the sunlit surface layer of the oceans. These energy-fixing microorganisms — the oceans’ invisible forest — are responsible for almost all of the primary production in the oceans. […] A large amount, perhaps 30-50 per cent, of marine primary production is produced by bacterioplankton comprising tiny marine photosynthetic bacteria ranging from about 0.5 to 2 μm in size. […] light availability and the strength of vertical mixing are important factors limiting primary production in the oceans. Nutrient availability is the other main factor limiting the growth of primary producers. One important nutrient is nitrogen […] nitrogen is a key component of amino acids, which are the building blocks of proteins. […] Photosynthetic marine organisms also need phosphorus, which is a requirement for many important biological functions, including the synthesis of nucleic acids, a key component of DNA. Phosphorus in the oceans comes naturally from the erosion of rocks and soils on land, and is transported into the oceans by rivers, much of it in the form of dissolved phosphate (PO₄³⁻), which can be readily absorbed by marine photosynthetic organisms. […] Inorganic nitrogen and phosphorus compounds are abundant in deep-ocean waters. […] In practice, inorganic nitrogen and phosphorus compounds are not used up at exactly the same rate. Thus one will be depleted before the other and becomes the limiting nutrient at the time, preventing further photosynthesis and growth of marine primary producers until it is replenished. Nitrogen is often considered to be the rate-limiting nutrient in most oceanic environments, particularly in the open ocean. However, in coastal waters phosphorus is often the rate-limiting nutrient.”

“The overall pattern of primary production in the Global Ocean depends greatly on latitude […] In polar oceans primary production is a boom-and-bust affair driven by light availability. Here the oceans are well mixed throughout the year so nutrients are rarely limiting. However, during the polar winter there is no light, and thus no primary production is taking place. […] Although limited to a short seasonal pulse, the total amount of primary production can be quite high, especially in the polar Southern Ocean […] In tropical open oceans, primary production occurs at a low level throughout the year. Here light is never limiting but the permanent tropical thermocline prevents the mixing of deep, nutrient-rich seawater with the surface waters. […] open-ocean tropical waters are often referred to as ‘marine deserts’, with productivity […] comparable to a terrestrial desert. In temperate open-ocean regions, primary productivity is linked closely to seasonal events. […] Although occurring in a number of pulses, primary productivity in temperate oceans [is] similar to [that of] a temperate forest or grassland. […] Some of the most productive marine environments occur in coastal ocean above the continental shelves. This is the result of a phenomenon known as coastal upwelling which brings deep, cold, nutrient-rich seawater to the ocean surface, creating ideal conditions for primary productivity […], comparable to a terrestrial rainforest or cultivated farmland. These hotspots of marine productivity are created by wind acting in concert with the planet’s rotation. […] Coastal upwelling can occur when prevailing winds move in a direction roughly parallel to the edge of a continent so as to create offshore Ekman transport. Coastal upwelling is particularly prevalent along the west coasts of continents. […] Since coastal upwelling is dependent on favourable winds, it tends to be a seasonal or intermittent phenomenon and the strength of upwelling will depend on the strength of the winds. […] Important coastal upwelling zones around the world include the coasts of California, Oregon, northwest Africa, and western India in the northern hemisphere; and the coasts of Chile, Peru, and southwest Africa in the southern hemisphere. These regions are amongst the most productive marine ecosystems on the planet.”

“Considering the Global Ocean as a whole, it is estimated that total marine primary production is about 50 billion tonnes of carbon per year. In comparison, the total production of land plants, which can also be estimated using satellite data, is estimated at around 52 billion tonnes per year. […] Primary production in the oceans is spread out over a much larger surface area and so the average productivity per unit of surface area is much smaller than on land. […] the energy of primary production in the oceans flows to higher trophic levels through several different pathways of various lengths […]. Some energy is lost along each step of the pathway — on average the efficiency of energy transfer from one trophic level to the next is about 10 per cent. Hence, shorter pathways are more efficient. Via these pathways, energy ultimately gets transferred to large marine consumers such as large fish, marine mammals, marine turtles, and seabirds.”
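
The point about shorter pathways being more efficient is just compound arithmetic on the roughly 10 per cent transfer efficiency quoted above; a small worked example makes the scale of the losses explicit:

```python
# Worked example: with ~10% transfer efficiency per trophic step, the fraction
# of primary production reaching a top consumer shrinks tenfold per extra step.
efficiency = 0.10
for steps in (2, 3, 5):
    print(f"{steps} transfers: {efficiency ** steps:.4%} of primary production reaches the top")
```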

“…it has been estimated that in the 17th century, somewhere between fifty million and a hundred million green turtles inhabited the Caribbean Sea, but numbers are now down to about 300,000. Since their numbers are now so low, their impact on seagrass communities is currently small, but in the past, green turtles would have been extraordinarily abundant grazers of seagrasses. It appears that in the past, green turtles thinned out seagrass beds, thereby reducing direct competition among different species of seagrass and allowing several species of seagrass to coexist. Without green turtles in the system, seagrass beds are generally overgrown monocultures of one dominant species. […] Seagrasses are of considerable importance to human society. […] It is therefore of great concern that seagrass meadows are in serious decline globally. In 2003 it was estimated that 15 per cent of the planet’s existing seagrass beds had disappeared in the preceding ten years. Much of this is the result of increasing levels of coastal development and dredging of the seabed, activities which release excessive amounts of sediment into coastal waters which smother seagrasses. […] The number of marine dead zones in the Global Ocean has roughly doubled every decade since the 1960s”.

“Sea ice is habitable because, unlike solid freshwater ice, it is a very porous substance. As sea ice forms, tiny spaces between the ice crystals become filled with a highly saline brine solution resistant to freezing. Through this process a three-dimensional network of brine channels and spaces, ranging from microscopic to several centimetres in size, is created within the sea ice. These channels are physically connected to the seawater beneath the ice and become colonized by a great variety of marine organisms. A significant amount of the primary production in the Arctic Ocean, perhaps up to 50 per cent in those areas permanently covered by sea ice, takes place in the ice. […] Large numbers of zooplanktonic organisms […] swarm about on the under surface of the ice, grazing on the ice community at the ice-seawater interface, and sheltering in the brine channels. […] These under-ice organisms provide the link to higher trophic levels in the Arctic food web […] They are an important food source for fish such as Arctic cod and glacial cod that graze along the bottom of the ice. These fish are in turn fed on by squid, seals, and whales.”

“[T]he Antarctic marine system consists of a ring of ocean about 10° of latitude wide – roughly 1,000 km. […] The Arctic and Antarctic marine systems can be considered geographic opposites. In contrast to the largely landlocked Arctic Ocean, the Southern Ocean surrounds the Antarctic continental land mass and is in open contact with the Atlantic, Indian, and Pacific Oceans. Whereas the Arctic Ocean is strongly influenced by river inputs, the Antarctic continent has no rivers, and so hard-bottomed seabed is common in the Southern Ocean, and there is no low-saline surface layer, as in the Arctic Ocean. Also, in contrast to the Arctic Ocean with its shallow, broad continental shelves, the Antarctic continental shelf is very narrow and steep. […] Antarctic waters are extremely nutrient rich, fertilized by a permanent upwelling of seawater that has its origins at the other end of the planet. […] This continuous upwelling of cold, nutrient-rich seawater, in combination with the long Antarctic summer day length, creates ideal conditions for phytoplankton growth, which drives the productivity of the Antarctic marine system. As in the Arctic, a well-developed sea-ice community is present. Antarctic ice algae are even more abundant and productive than in the Arctic Ocean because the sea ice is thinner, and there is thus more available light for photosynthesis. […] Antarctica’s most important marine species [is] the Antarctic krill […] Krill are very adept at surviving many months under starvation conditions — in the laboratory they can endure more than 200 days without food. During the winter months they lower their metabolic rate, shrink in body size, and revert back to a juvenile state. When food once again becomes abundant in the spring, they grow rapidly […] As the sea ice breaks up they leave the ice and begin feeding directly on the huge blooms of free-living diatoms […]. With so much food available they grow and reproduce quickly, and start to swarm in large numbers, often at densities in excess of 10,000 individuals per cubic metre — dense enough to colour the seawater a reddish-brown. Krill swarms are patchy and vary greatly in size […] Because the Antarctic marine system covers a large area, krill numbers are enormous, estimated at about 600 billion animals on average, or 500 million tonnes of krill. This makes Antarctic krill one of the most abundant animal species on the planet […] Antarctic krill are the main food source for many of Antarctica’s large marine animals, and a key link in a very short and efficient food chain […]. Krill comprise the staple diet of icefish, squid, baleen whales, leopard seals, fur seals, crabeater seals, penguins, and seabirds, including albatross. Thus, a very simple and efficient three-step food chain is in operation — diatoms eaten by krill in turn eaten by a suite of large consumers — which supports the large numbers of large marine animals living in the Southern Ocean.”

Links:

Ocean gyre. North Atlantic Gyre. Thermohaline circulation. North Atlantic Deep Water. Antarctic bottom water.
Cyanobacteria. Diatom. Dinoflagellate. Coccolithophore.
Trophic level.
Nitrogen fixation.
High-nutrient, low-chlorophyll regions.
Light and dark bottle method of measuring primary productivity. Carbon-14 method for estimating primary productivity.
Ekman spiral.
Peruvian anchoveta.
El Niño. El Niño–Southern Oscillation.
Copepod.
Dissolved organic carbon. Particulate organic matter. Microbial loop.
Kelp forest. Macrocystis. Sea urchin. Urchin barren. Sea otter.
Seagrass.
Green sea turtle.
Manatee.
Demersal fish.
Eutrophication. Harmful algal bloom.
Comb jelly. Asterias amurensis.
Great Pacific garbage patch.
Eelpout. Sculpin.
Polynya.
Crabeater seal.
Adélie penguin.
Anchor ice mortality.

March 13, 2018 Posted by | Biology, Books, Botany, Chemistry, Ecology, Geology, Zoology |

Systems Biology (III)

Some observations from chapter 4 below:

“The need to maintain a steady state ensuring homeostasis is an essential concern in nature, while the negative feedback loop is the fundamental way to ensure that this goal is met. The regulatory system determines the interdependences between individual cells and the organism, subordinating the former to the latter. In trying to maintain homeostasis, the organism may temporarily upset the steady state conditions of its component cells, forcing them to perform work for the benefit of the organism. […] On a cellular level signals are usually transmitted via changes in concentrations of reaction substrates and products. This simple mechanism is made possible due to the limited volume of each cell. Such signaling plays a key role in maintaining homeostasis and ensuring cellular activity. On the level of the organism signal transmission is performed by hormones and the nervous system. […] Most intracellular signal pathways work by altering the concentrations of selected substances inside the cell. Signals are registered by forming reversible complexes consisting of a ligand (reaction product) and an allosteric receptor complex. When coupled to the ligand, the receptor inhibits the activity of its corresponding effector, which in turn shuts down the production of the controlled substance, ensuring the steady state of the system. Signals coming from outside the cell are usually treated as commands (covalent modifications), forcing the cell to adjust its internal processes […] Such commands can arrive in the form of hormones, produced by the organism to coordinate specialized cell functions in support of general homeostasis (in the organism). These signals act upon cell receptors and are usually amplified before they reach their final destination (the effector).”
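
The concentration-mediated negative feedback described here can be caricatured in a few lines of Python: production of a product is throttled as the product accumulates and occupies its allosteric receptor, while the product is simultaneously consumed. This is only a minimal discrete-time sketch with made-up rate constants, not a model from the book:

```python
# Minimal sketch of a concentration-mediated negative feedback loop:
# production falls as the product accumulates, so the level settles at a
# steady state that partly resists changes in demand. Parameters are made up.
def simulate(k_production=10.0, k_binding=1.0, k_consumption=0.2, steps=200):
    product = 0.0
    for _ in range(steps):
        production = k_production / (1.0 + k_binding * product)   # receptor occupancy throttles the effector
        product += production - k_consumption * product           # synthesis minus consumption
    return product

print("steady state:", round(simulate(), 2))
print("steady state with doubled consumption:", round(simulate(k_consumption=0.4), 2))
# Without the feedback term, doubling consumption would halve the steady state;
# with it, the drop is considerably smaller.
```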

“Each concentration-mediated signal must first be registered by a detector. […] Intracellular detectors are typically based on allosteric proteins. Allosteric proteins exhibit a special property: they have two stable structural conformations and can shift from one form to the other as a result of changes in ligand concentrations. […] The concentration of a product (or substrate) which triggers structural realignment in the allosteric protein (such as a regulatory enzyme) depends on the genetically-determined affinity of the active site to its ligand. Low affinity results in high target concentration of the controlled substance while high affinity translates into lower concentration […]. In other words, high concentration of the product is necessary to trigger a low-affinity receptor (and vice versa). Most intracellular regulatory mechanisms rely on noncovalent interactions. Covalent bonding is usually associated with extracellular signals, generated by the organism and capable of overriding the cell’s own regulatory mechanisms by modifying the sensitivity of receptors […]. Noncovalent interactions may be compared to requests while covalent signals are treated as commands. Signals which do not originate in the receptor’s own feedback loop but modify its affinity are known as steering signals […] Hormones which act upon cells are, by their nature, steering signals […] Noncovalent interactions — dependent on substance concentrations — impose spatial restrictions on regulatory mechanisms. Any increase in cell volume requires synthesis of additional products in order to maintain stable concentrations. The volume of a spherical cell is given as V = 4/3 π r³, where r indicates cell radius. Clearly, even a slight increase in r translates into a significant increase in cell volume, diluting any products dispersed in the cytoplasm. This implies that cells cannot expand without incurring great energy costs. It should also be noted that cell expansion reduces the efficiency of intracellular regulatory mechanisms because signals and substrates need to be transported over longer distances. Thus, cells are universally small, regardless of whether they make up a mouse or an elephant.”
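
The volume argument is a one-line computation: since V = 4/3 π r³, even a modest increase in radius multiplies the volume (and thus dilutes any product kept at a fixed amount) substantially:

```python
# Worked example of the volume scaling quoted above: V = (4/3) * pi * r**3.
from math import pi

def volume(r):
    return 4 / 3 * pi * r ** 3

for r in (1.0, 1.3, 2.0):    # radius in arbitrary units
    print(f"r = {r}: volume = {volume(r) / volume(1.0):.2f}x the original")
```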

“An effector is an element of a regulatory loop which counteracts changes in the regulated quantity […] Synthesis and degradation of biological compounds often involve numerous enzymes acting in sequence. The product of one enzyme is a substrate for another enzyme. With the exception of the initial enzyme, each step of this cascade is controlled by the availability of the supplied substrate […] The effector consists of a chain of enzymes, each of which depends on the activity of the initial regulatory enzyme […] as well as on the activity of its immediate predecessor which supplies it with substrates. The function of all enzymes in the effector chain is indirectly dependent on the initial enzyme […]. This coupling between the receptor and the first link in the effector chain is a universal phenomenon. It can therefore be said that the initial enzyme in the effector chain is, in fact, a regulatory enzyme. […] Most cell functions depend on enzymatic activity. […] It seems that a set of enzymes associated with a specific process which involves a negative feedback loop is the most typical form of an intracellular regulatory effector. Such effectors can be controlled through activation or inhibition of their associated enzymes.”

“The organism is a self-contained unit represented by automatic regulatory loops which ensure homeostasis. […] Effector functions are conducted by cells which are usually grouped and organized into tissues and organs. Signal transmission occurs by way of body fluids, hormones or nerve connections. Cells can be treated as automatic and potentially autonomous elements of regulatory loops, however their specific action is dependent on the commands issued by the organism. This coercive property of organic signals is an integral requirement of coordination, allowing the organism to maintain internal homeostasis. […] Activities of the organism are themselves regulated by their own negative feedback loops. Such regulation differs however from the mechanisms observed in individual cells due to its place in the overall hierarchy and differences in signal properties, including in particular:
• Significantly longer travel distances (compared to intracellular signals);
• The need to maintain hierarchical superiority of the organism;
• The relative autonomy of effector cells. […]
The relatively long distance travelled by the organism’s signals and their dilution (compared to intracellular ones) call for amplification. As a consequence, any errors or random distortions in the original signal may be drastically exacerbated. A solution to this problem comes in the form of encoding, which provides the signal with sufficient specificity while enabling it to be selectively amplified. […] a loudspeaker can […] assist in acoustic communication, but due to the lack of signal encoding it cannot compete with radios in terms of communication distance. The same reasoning applies to organism-originated signals, which is why information regarding blood glucose levels is not conveyed directly by glucose but instead by adrenalin, glucagon or insulin. Information encoding is handled by receptors and hormone-producing cells. Target cells are capable of decoding such signals, thus completing the regulatory loop […] Hormonal signals may be effectively amplified because the hormone itself does not directly participate in the reaction it controls — rather, it serves as an information carrier. […] strong amplification invariably requires encoding in order to render the signal sufficiently specific and unambiguous. […] Unlike organisms, cells usually do not require amplification in their internal regulatory loops — even the somewhat rare instances of intracellular amplification only increase signal levels by a small amount. Without the aid of an amplifier, messengers coming from the organism level would need to be highly concentrated at their source, which would result in decreased efficiency […] Most signals originating at the organism’s level travel with body fluids; however, if a signal has to reach its destination very rapidly (for instance in muscle control) it is sent via the nervous system”.

“Two types of amplifiers are observed in biological systems:
1. cascade amplifier,
2. positive feedback loop. […]
A cascade amplifier is usually a collection of enzymes which perform their action by activation in strict sequence. This mechanism resembles multistage (sequential) synthesis or degradation processes, however instead of exchanging reaction products, amplifier enzymes communicate by sharing activators or by directly activating one another. Cascade amplifiers are usually contained within cells. They often consist of kinases. […] Amplification effects occurring at each stage of the cascade contribute to its final result. […] While the kinase amplification factor is estimated to be on the order of 10³, the phosphorylase cascade results in 10¹⁰-fold amplification. It is a stunning value, though it should also be noted that the hormones involved in this cascade produce particularly powerful effects. […] A positive feedback loop is somewhat analogous to a negative feedback loop, however in this case the input and output signals work in the same direction — the receptor upregulates the process instead of inhibiting it. Such upregulation persists until the available resources are exhausted.
Positive feedback loops can only work in the presence of a control mechanism which prevents them from spiraling out of control. They cannot be considered self-contained and only play a supportive role in regulation. […] In biological systems positive feedback loops are sometimes encountered in extracellular regulatory processes where there is a need to activate slowly-migrating components and greatly amplify their action in a short amount of time. Examples include blood coagulation and complement factor activation […] Positive feedback loops are often coupled to negative loop-based control mechanisms. Such interplay of loops may impart the signal with desirable properties, for instance by transforming a flat signal into a sharp spike required to overcome the activation threshold for the next stage in a signalling cascade. An example is the ejection of calcium ions from the endoplasmic reticulum in the phospholipase C cascade, itself subject to a negative feedback loop.”
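
(The arithmetic of cascade amplification is simply multiplicative, which the toy Python sketch below spells out; the per-stage gains are hypothetical placeholders chosen only to reproduce the orders of magnitude mentioned in the quote, not values taken from the book.)

def cascade_gain(stage_gains):
    """Overall amplification of a cascade is the product of the per-stage gains."""
    total = 1.0
    for gain in stage_gains:
        total *= gain
    return total

stages = [1e3, 1e3, 1e4]  # hypothetical gains for three sequential enzymatic stages
print(f"overall amplification: {cascade_gain(stages):.0e}")  # -> 1e+10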

“Strong signal amplification carries an important drawback: it tends to “overshoot” its target activity level, causing wild fluctuations in the process it controls. […] Nature has evolved several means of signal attenuation. The most typical mechanism superimposes two regulatory loops which affect the same parameter but act in opposite directions. An example is the stabilization of blood glucose levels by two contradictory hormones: glucagon and insulin. Similar strategies are exploited in body temperature control and many other biological processes. […] The coercive properties of signals coming from the organism carry risks associated with the possibility of overloading cells. The regulatory loop of an autonomous cell must therefore include an “off switch”, controlled by the cell. An autonomous cell may protect itself against excessive involvement in processes triggered by external signals (which usually incur significant energy expenses). […] The action of such mechanisms is usually timer-based, meaning that they inactivate signals following a set amount of time. […] The ability to interrupt signals protects cells from exhaustion. Uncontrolled hormone-induced activity may have detrimental effects upon the organism as a whole. This is observed e.g. in the case of the Vibrio cholerae toxin which causes prolonged activation of intestinal epithelial cells by locking the G protein in its active state (resulting in severe diarrhea which can dehydrate the organism).”
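
(A minimal toy simulation, my own illustration rather than anything from the book, of the attenuation idea above: two opposing feedback signals, one insulin-like and one glucagon-like, act on the same variable and pull it back towards a set point. All rate constants are arbitrary.)

def simulate_glucose(level=8.0, set_point=5.0, k_down=0.3, k_up=0.3, steps=20):
    """Toy two-loop attenuation: opposing corrections pull the level back to the set point."""
    history = [level]
    for _ in range(steps):
        error = level - set_point
        if error > 0:
            level -= k_down * error  # "insulin-like" loop counteracts an excess
        else:
            level -= k_up * error    # "glucagon-like" loop counteracts a deficit
        history.append(level)
    return history

print([round(x, 2) for x in simulate_glucose()])  # values converge towards 5.0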

“Biological systems in which information transfer is affected by high entropy of the information source and ambiguity of the signal itself must include discriminatory mechanisms. These mechanisms usually work by eliminating weak signals (which are less specific and therefore introduce ambiguities). They create additional obstacles (thresholds) which the signals must overcome. A good example is the mechanism which eliminates the ability of weak, random antigens to activate lymphatic cells. It works by inhibiting blastic transformation of lymphocytes until a so-called receptor cap has accumulated on the surface of the cell […]. Only under such conditions can the activation signal ultimately reach the cell nucleus […] and initiate gene transcription. […] weak, reversible nonspecific interactions do not permit sufficient aggregation to take place. This phenomenon can be described as a form of discrimination against weak signals. […] Discrimination may also be linked to effector activity. […] Cell division is counterbalanced by programmed cell death. The most typical example of this process is apoptosis […] Each cell is prepared to undergo controlled death if required by the organism, however apoptosis is subject to tight control. Cells protect themselves against accidental triggering of the process via IAP proteins. Only strong proapoptotic signals may overcome this threshold and initiate cellular suicide”.

“Simply knowing the sequences, structures or even functions of individual proteins does not provide sufficient insight into the biological machinery of living organisms. The complexity of individual cells and entire organisms calls for functional classification of proteins. This task can be accomplished with a proteome — a theoretical construct where individual elements (proteins) are grouped in a way which acknowledges their mutual interactions and interdependencies, characterizing the information pathways in a complex organism.
Most ongoing proteome construction projects focus on individual proteins as the basic building blocks […] [We would instead argue in favour of a model in which] [t]he basic unit of the proteome is one negative feedback loop (rather than a single protein) […]
Due to the relatively large number of proteins (between 25 and 40 thousand in the human organism), presenting them all on a single graph, with vertex lengths corresponding to the relative duration of interactions, would be unfeasible. This is why proteomes are often subdivided into functional subgroups such as the metabolome (proteins involved in metabolic processes), the interactome (complex-forming proteins), the kinome (proteins which belong to the kinase family) etc.”
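
(Below is a purely hypothetical sketch, not the authors’ data, of what it means to take a negative feedback loop rather than a single protein as the basic unit of a proteome graph: each vertex bundles a receptor, its signals and its effector proteins, and edges could then represent components shared between loops.)

feedback_loop_units = {
    "glucose homeostasis": {"receptor": "pancreatic beta-cell glucose sensing",
                            "signals": ["insulin", "glucagon"],
                            "effectors": ["glycogen synthase", "glycogen phosphorylase"]},
    "oxygen transport":    {"receptor": "renal oxygen sensing",
                            "signals": ["erythropoietin"],
                            "effectors": ["haemoglobin synthesis machinery"]},
}

def loops_using(units, protein):
    """Return the loop units whose effector list mentions the given protein."""
    return [name for name, unit in units.items() if protein in unit["effectors"]]

print(loops_using(feedback_loop_units, "glycogen synthase"))  # ['glucose homeostasis']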

February 18, 2018 Posted by | Biology, Books, Chemistry, Genetics, Medicine | Leave a comment

Systems Biology (II)

Some observations from the book’s chapter 3 below:

“Without regulation biological processes would become progressively more and more chaotic. In living cells the primary source of information is genetic material. Studying the role of information in biology involves signaling (i.e. spatial and temporal transfer of information) and storage (preservation of information). Regarding the role of the genome we can distinguish three specific aspects of biological processes: steady-state genetics, which ensures cell-level and body homeostasis; genetics of development, which controls cell differentiation and genesis of the organism; and evolutionary genetics, which drives speciation. […] The ever growing demand for information, coupled with limited storage capacities, has resulted in a number of strategies for minimizing the quantity of the encoded information that must be preserved by living cells. In addition to combinatorial approaches based on noncontiguous gene structure, self-organization plays an important role in cellular machinery. Nonspecific interactions with the environment give rise to coherent structures despite the lack of any overt information store. These mechanisms, honed by evolution and ubiquitous in living organisms, reduce the need to directly encode large quantities of data by adopting a systemic approach to information management.”

“Information is commonly understood as a transferable description of an event or object. Information transfer can be either spatial (communication, messaging or signaling) or temporal (implying storage). […] The larger the set of choices, the lower the likelihood [of] making the correct choice by accident and — correspondingly — the more information is needed to choose correctly. We can therefore state that an increase in the cardinality of a set (the number of its elements) corresponds to an increase in selection indeterminacy. This indeterminacy can be understood as a measure of “a priori ignorance”. […] Entropy determines the uncertainty inherent in a given system and therefore represents the relative difficulty of making the correct choice. For a set of possible events it reaches its maximum value if the relative probabilities of each event are equal. Any information input reduces entropy — we can therefore say that changes in entropy are a quantitative measure of information. […] Physical entropy is highest in a state of equilibrium, i.e. lack of spontaneity (ΔG = 0), which effectively terminates the given reaction. Regulatory processes which counteract the tendency of physical systems to reach equilibrium must therefore oppose increases in entropy. It can be said that a steady inflow of information is a prerequisite of continued function in any organism. As selections are typically made at the entry point of a regulatory process, the concept of entropy may also be applied to information sources. This approach is useful in explaining the structure of regulatory systems which must be “designed” in a specific way, reducing uncertainty and enabling accurate, error-free decisions.
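
(The standard formula behind “entropy as a measure of a priori ignorance” is Shannon’s H = −Σ p·log2(p), which is maximal when all choices are equally likely; the snippet below is a textbook illustration of that point rather than anything taken from the book.)

from math import log2

def shannon_entropy(probabilities):
    """H = -sum(p * log2(p)); higher H means more a priori ignorance."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits: maximal uncertainty
print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))      # ~1.36 bits: partly informed
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0 bits: nothing left to learn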

The fire ant exudes a pheromone which enables it to mark sources of food and trace its own path back to the colony. In this way, the ant conveys pathing information to other ants. The intensity of the chemical signal is proportional to the abundance of the source. Other ants can sense the pheromone from a distance of several (up to a dozen) centimeters and thus locate the source themselves. […] As can be expected, an increase in the entropy of the information source (i.e. the measure of ignorance) results in further development of regulatory systems — in this case, receptors capable of receiving signals and processing them to enable accurate decisions. Over time, the evolution of regulatory mechanisms increases their performance and precision. The purpose of various structures involved in such mechanisms can be explained on the grounds of information theory. The primary goal is to select the correct input signal, preserve its content and avoid or eliminate any errors.”

“Genetic information stored in nucleotide sequences can be expressed and transmitted in two ways:
a. via replication (in cell division);
b. via transcription and translation (also called gene expression […])
Both processes act as effectors and can be triggered by certain biological signals transferred on request.
Gene expression can be defined as a sequence of events which lead to the synthesis of proteins or their products required for a particular function. In cell division, the goal of this process is to generate a copy of the entire genetic code (S phase), whereas in gene expression only selected fragments of DNA (those involved in the requested function) are transcribed and translated. […] Transcription calls for exposing a section of the cell’s genetic code and although its product (RNA) is short-lived, it can be recreated on demand, just like a carbon copy of a printed text. On the other hand, replication affects the entire genetic material contained in the cell and must conform to stringent precision requirements, particularly as the size of the genome increases.”

“The magnitude of effort involved in replication of genetic code can be visualized by comparing the DNA chain to a zipper […]. Assuming that the zipper consists of three pairs of interlocking teeth per centimeter (300 per meter) and that the human genome is made up of 3 billion […] base pairs, the total length of our uncoiled DNA in “zipper form” would be equal to […] 10,000 km […] If we were to unfasten the zipper at a rate of 1 m per second, the entire unzipping process would take approximately 3 months […]. This comparison should impress upon the reader the length of the DNA chain and the precision with which individual nucleotides must be picked to ensure that the resulting code is an exact copy of the source. It should also be noted that for each base pair the polymerase enzyme needs to select an appropriate matching nucleotide from among four types of nucleotides present in the solution, and attach it to the chain (clearly, no such problem occurs in zippers). The reliability of an average enzyme is on the order of 10⁻³–10⁻⁴, meaning that one error occurs for every 1,000–10,000 interactions between the enzyme and its substrate. Given this figure, replication of 3×10⁹ base pairs would introduce approximately 3 million errors (mutations) per genome, resulting in a highly inaccurate copy. Since the observed reliability of replication is far higher, we may assume that some corrective mechanisms are involved. In reality, the remarkable precision of genetic replication is ensured by DNA repair processes, and in particular by the corrective properties of polymerase itself.
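
(The zipper arithmetic spelled out; the input figures are the book’s own, the conversion into days is mine, and the straight division gives roughly 116 days, i.e. between three and four months.)

base_pairs  = 3e9    # human genome size used in the quote
teeth_per_m = 300    # three interlocking pairs per centimetre
speed_m_s   = 1.0    # unzipping speed assumed in the quote

length_m = base_pairs / teeth_per_m        # 1e7 m, i.e. 10,000 km
days     = length_m / speed_m_s / 86_400   # ~116 days at 1 m/s

error_rate = 1e-3                          # upper end of typical enzyme reliability
errors     = base_pairs * error_rate       # ~3 million errors without proofreading

print(f"{length_m / 1000:.0f} km, {days:.0f} days, {errors:.0e} expected errors")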

Many mutations are caused by the inherent chemical instability of nucleic acids: for example, cytosine may spontaneously convert to uracil. In the human genome such an event occurs approximately 100 times per day; however uracil is not normally encountered in DNA and its presence alerts defensive mechanisms which correct the error. Another type of mutation is spontaneous depurination, which also triggers its own, dedicated error correction procedure. Cells employ a large number of corrective mechanisms […] DNA repair mechanisms may be treated as an “immune system” which protects the genome from loss or corruption of genetic information. The unavoidable mutations which sometimes occur despite the presence of error-correction mechanisms can be masked due to doubled presentation (alleles) of genetic information. Thus, most mutations are recessive and not expressed in the phenotype. As the length of the DNA chain increases, mutations become more probable. It should be noted that the number of nucleotides in DNA is greater than the relative number of amino acids participating in polypeptide chains. This is due to the fact that each amino acid is encoded by exactly three nucleotides — a general principle which applies to all living organisms. […] Fidelity is, of course, fundamentally important in DNA replication as any harmful mutations introduced in its course are automatically passed on to all successive generations of cells. In contrast, transcription and translation processes can be more error-prone as their end products are relatively short-lived. Of note is the fact that faulty transcripts appear in relatively low quantities and usually do not affect cell functions, since regulatory processes ensure continued synthesis of the required substances until a suitable level of activity is reached. Nevertheless, it seems that reliable transcription of genetic material is sufficiently significant for cells to have developed appropriate proofreading mechanisms, similar to those which assist replication. […] the entire information pathway — starting with DNA and ending with active proteins — is protected against errors. We can conclude that fallibility is an inherent property of genetic information channels, and that in order to perform their intended function, these channels require error-correction mechanisms.”

The discrete nature of genetic material is an important property which distinguishes prokaryotes from eukaryotes. […] The ability to select individual nucleotide fragments and construct sequences from predetermined “building blocks” results in high adaptability to environmental stimuli and is a fundamental aspect of evolution. The discontinuous nature of genes is evidenced by the presence of fragments which do not convey structural information (introns), as opposed to structure-encoding fragments (exons). The initial transcript (pre-mRNA) contains introns as well as exons. In order to provide a template for protein synthesis, it must undergo further processing (also known as splicing): introns must be cleaved and exon fragments attached to one another. […] Recognition of intron-exon boundaries is usually very precise, while the reattachment of adjacent exons is subject to some variability. Under certain conditions, alternative splicing may occur, where the ordering of the final product does not reflect the order in which exon sequences appear in the source chain. This greatly increases the number of potential mRNA combinations and thus the variety of resulting proteins. […] While access to energy sources is not a major problem, sources of information are usually far more difficult to manage — hence the universal tendency to limit the scope of direct (genetic) information storage. Reducing the length of genetic code enables efficient packing and enhances the efficiency of operations while at the same time decreasing the likelihood of errors. […] The number of genes identified in the human genome is lower than the number of distinct proteins by a factor of 4; a difference which can be attributed to alternative splicing. […] This mechanism increases the variety of protein structures without affecting core information storage, i.e. DNA sequences. […] Primitive organisms often possess nearly as many genes as humans, despite the essential differences between both groups. Interspecies diversity is primarily due to the properties of regulatory sequences.”
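
(A toy illustration, not from the book, of why alternative splicing multiplies protein diversity without adding DNA: a gene with n optional “cassette” exons can in principle template up to 2^n distinct inclusion patterns, an upper bound that real splicing machinery only partially exploits.)

from itertools import combinations

def cassette_variants(optional_exons):
    """Enumerate every possible subset of optional exons a transcript could retain."""
    variants = []
    for r in range(len(optional_exons) + 1):
        variants.extend(combinations(optional_exons, r))
    return variants

exons = ["E2", "E3", "E5"]              # hypothetical exon names
print(len(cassette_variants(exons)))    # 8 = 2**3 possible inclusion patterns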

“The discontinuous nature of genes is evolutionarily advantageous but comes at the expense of having to maintain a nucleus where such splicing processes can be safely conducted, in addition to efficient transport channels allowing transcripts to penetrate the nuclear membrane. While it is believed that at early stages of evolution RNA was the primary repository of genetic information, its present function can best be described as an information carrier. Since unguided proteins cannot ensure sufficient specificity of interaction with nucleic acids, protein-RNA complexes are often used in cases where specific fragments of genetic information need to be read. […] The use of RNA in protein complexes is common across all domains of the living world as it bridges the gap between discrete and continuous storage of genetic information.”

Epigenetic differentiation mechanisms are particularly important in embryonic development. […] Unlike the function of mature organisms, embryonic programming refers to structures which do not yet exist but which need to be created through cell proliferation and differentiation. […] Differentiation of cells results in phenotypic changes. This phenomenon is the primary difference between development genetics and steady-state genetics. Functional differences are not, however, associated with genomic changes: instead they are mediated by the transcriptome where certain genes are preferentially selected for transcription while others are suppressed. […] In a mature, specialized cell only a small portion of the transcribable genome is actually expressed. The remainder of the cell’s genetic material is said to be silenced. Gene silencing is a permanent condition. Under normal circumstances mature cells never alter their function, although such changes may be forced in a laboratory setting […] Cells which make up the embryo at a very early stage of development are pluripotent, meaning that their purpose can be freely determined and that all of their genetic information can potentially be expressed (under certain conditions). […] At each stage of the development process the scope of pluripotency is reduced until, ultimately, the cell becomes monopotent. Monopotency implies that the final function of the cell has already been determined, although the cell itself may still be immature. […] functional dissimilarities between specialized cells are not associated with genetic mutations but rather with selective silencing of genes. […] Most genes which determine biological functions have a biallelic representation (i.e. a representation consisting of two alleles). The remainder (approximately 10 % of genes) is inherited from one specific parent, as a result of partial or complete silencing of their sister alleles (called paternal or maternal imprinting) which occurs during gametogenesis. The suppression of a single copy of the X chromosome is a special case of this phenomenon.”

“Evolutionary genetics is subject to two somewhat contradictory criteria. On the one hand, there is clear pressure on accurate and consistent preservation of biological functions and structures while on the other hand it is also important to permit gradual but persistent changes. […] the observable progression of adaptive traits which emerge as a result of evolution suggests a mechanism which promotes constructive changes over destructive ones. Mutational diversity cannot be considered truly random if it is limited to certain structures or functions. […] Approximately 50 % of the human genome consists of mobile segments, capable of migrating to various positions in the genome. These segments are called transposons and retrotransposons […] The mobility of genome fragments not only promotes mutations (by increasing the variability of DNA) but also affects the stability and packing of chromatin strands wherever such mobile sections are reintegrated with the genome. Under normal circumstances the activity of mobile sections is tempered by epigenetic mechanisms […]; however in certain situations gene mobility may be upregulated. In particular, it seems that in “prehistoric” (remote evolutionary) times such events occurred at a much faster pace, accelerating the rate of genetic changes and promoting rapid evolution. Cells can actively promote mutations by way of the so-called AID process (activation-induced cytidine deamination). It is an enzymatic mechanism which converts cytosine into uracil, thereby triggering repair mechanisms and increasing the likelihood of mutations […] The existence of AID proves that cells themselves may trigger evolutionary changes and that the role of mutations in the emergence of new biological structures is not strictly passive.”

“Regulatory mechanisms which receive signals characterized by high degrees of uncertainty must be able to make informed choices to reduce the overall entropy of the system they control. This property is usually associated with development of information channels. Special structures ought to be exposed within information channels connecting systems of a different character, for example those linking transcription to translation or enabling transduction of signals through the cellular membrane. Examples of structures which convey highly entropic information are receptor systems associated with blood coagulation and immune responses. The regulatory mechanism which triggers an immune response relies on relatively simple effectors (complement factor enzymes, phagocytes and killer cells) coupled to a highly evolved receptor system, represented by specific antibodies and an organized set of cells. Compared to such advanced receptors the structures which register the concentration of a given product (e.g. glucose in blood) are rather primitive. Advanced receptors enable the immune system to recognize and verify information characterized by high degrees of uncertainty. […] In sequential processes it is usually the initial stage which poses the most problems and requires the most information to complete successfully. It should come as no surprise that the most advanced control loops are those associated with initial stages of biological pathways.”

February 10, 2018 Posted by | Biology, Books, Chemistry, Evolutionary biology, Genetics, Immunology, Medicine | Leave a comment

Systems Biology (I)

This book is really dense and is somewhat tough for me to blog. One significant problem is that “The authors assume that the reader is already familiar with the material covered in a classic biochemistry course.” I know enough biochem to follow most of the stuff in this book, and I was definitely quite happy to have recently read John Finney’s book on the biochemical properties of water and Christopher Hall’s introduction to materials science, as both of those books’ coverage turned out to be highly relevant (these are far from the only relevant books I’ve read semi-recently – Atkins’ introduction to thermodynamics is another book that springs to mind) – but even so, what do you leave out when writing a post like this? I decided to leave out a lot. Posts covering books like this one are hard to write because it’s so easy for them to blow up in your face: you have to include so many details for the material included in the post to even start to make sense to people who didn’t read the original text. And if you leave out all the details, what’s really left? It’s difficult…

Anyway, some observations from the first chapters of the book below.

“[T]he biological world consists of self-managing and self-organizing systems which owe their existence to a steady supply of energy and information. Thermodynamics introduces a distinction between open and closed systems. Reversible processes occurring in closed systems (i.e. independent of their environment) automatically gravitate toward a state of equilibrium which is reached once the velocity of a given reaction in both directions becomes equal. When this balance is achieved, we can say that the reaction has effectively ceased. In a living cell, a similar condition occurs upon death. Life relies on certain spontaneous processes acting to unbalance the equilibrium. Such processes can only take place when substrates and products of reactions are traded with the environment, i.e. they are only possible in open systems. In turn, achieving a stable level of activity in an open system calls for regulatory mechanisms. When the reaction consumes or produces resources that are exchanged with the outside world at an uneven rate, the stability criterion can only be satisfied via a negative feedback loop […] cells and living organisms are thermodynamically open systems […] all structures which play a role in balanced biological activity may be treated as components of a feedback loop. This observation enables us to link and integrate seemingly unrelated biological processes. […] the biological structures most directly involved in the functions and mechanisms of life can be divided into receptors, effectors, information conduits and elements subject to regulation (reaction products and action results). Exchanging these elements with the environment requires an inflow of energy. Thus, living cells are — by their nature — open systems, requiring an energy source […] A thermodynamically open system lacking equilibrium due to a steady inflow of energy in the presence of automatic regulation is […] a good theoretical model of a living organism. […] Pursuing growth and adapting to changing environmental conditions calls for specialization which comes at the expense of reduced universality. A specialized cell is no longer self-sufficient. As a consequence, a need for higher forms of intercellular organization emerges. The structure which provides cells with suitable protection and ensures continued homeostasis is called an organism.”

“In biology, structure and function are tightly interwoven. This phenomenon is closely associated with the principles of evolution. Evolutionary development has produced structures which enable organisms to develop and maintain their architecture, perform actions and store the resources needed to survive. For this reason we introduce a distinction between support structures (which are akin to construction materials), function-related structures (fulfilling the role of tools and machines), and storage structures (needed to store important substances, achieving a compromise between tight packing and ease of access). […] Biology makes extensive use of small-molecule structures and polymers. The physical properties of polymer chains make them a key building block in biological structures. There are several reasons as to why polymers are indispensable in nature […] Sequestration of resources is subject to two seemingly contradictory criteria: 1. Maximize storage density; 2. Perform sequestration in such a way as to allow easy access to resources. […] In most biological systems, storage applies to energy and information. Other types of resources are only occasionally stored […]. Energy is stored primarily in the form of saccharides and lipids. Saccharides are derivatives of glucose, rendered insoluble (and thus easy to store) via polymerization. Their polymerized forms, stabilized with α-glycosidic bonds, include glycogen (in animals) and starch (in plant life). […] It should be noted that the somewhat loose packing of polysaccharides […] makes them unsuitable for storing large amounts of energy. In a typical human organism only ca. 600 kcal of energy is stored in the form of glycogen, while (under normal conditions) more than 100,000 kcal exists as lipids. Lipid deposits usually assume the form of triglycerides (triacylglycerols). Their properties can be traced to the similarities between fatty acids and hydrocarbons. Storage efficiency (i.e. the amount of energy stored per unit of mass) is twice that of polysaccharides, while access remains adequate owing to the relatively large surface area and high volume of lipids in the organism.”
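
(A back-of-the-envelope check of the storage figures quoted above, using the standard approximate energy densities of about 4 kcal/g for carbohydrate and 9 kcal/g for fat, which also accounts for the roughly twofold difference in storage efficiency mentioned in the quote.)

glycogen_kcal, fat_kcal = 600, 100_000          # figures quoted in the text
kcal_per_g = {"glycogen": 4, "fat": 9}          # standard approximate energy densities

glycogen_g = glycogen_kcal / kcal_per_g["glycogen"]   # ~150 g of stored glycogen
fat_kg     = fat_kcal / kcal_per_g["fat"] / 1000      # ~11 kg of stored lipid

print(f"~{glycogen_g:.0f} g of glycogen vs ~{fat_kg:.0f} kg of fat")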

“Most living organisms store information in the form of tightly-packed DNA strands. […] It should be noted that only a small percentage of DNA (a few per cent) conveys biologically relevant information. The purpose of the remaining ballast is to enable suitable packing and exposure of these important fragments. If all of DNA were to consist of useful code, it would be nearly impossible to devise a packing strategy guaranteeing access to all of the stored information.”

“The seemingly endless diversity of biological functions frustrates all but the most persistent attempts at classification. For the purpose of this handbook we assume that each function can be associated either with a single cell or with a living organism. In both cases, biological functions are strictly subordinate to automatic regulation, based — in a stable state — on negative feedback loops, and in processes associated with change (for instance in embryonic development) — on automatic execution of predetermined biological programs. Individual components of a cell cannot perform regulatory functions on their own […]. Thus, each element involved in the biological activity of a cell or organism must necessarily participate in a regulatory loop based on processing information.”

“Proteins are among the most basic active biological structures. Most of the well-known proteins studied thus far perform effector functions: this group includes enzymes, transport proteins, certain immune system components (complement factors) and myofibrils. Their purpose is to maintain biological systems in a steady state. Our knowledge of receptor structures is somewhat poorer […] Simple structures, including individual enzymes and components of multienzyme systems, can be treated as “tools” available to the cell, while advanced systems, consisting of many mechanically-linked tools, resemble machines. […] Machinelike mechanisms are readily encountered in living cells. A classic example is fatty acid synthesis, performed by dedicated machines called synthases. […] Multiunit structures acting as machines can be encountered wherever complex biochemical processes need to be performed in an efficient manner. […] If the purpose of a machine is to generate motion then a thermally powered machine can accurately be called a motor. This type of action is observed e.g. in myocytes, where transmission involves reordering of protein structures using the energy generated by hydrolysis of high-energy bonds.”

“In biology, function is generally understood as specific physiochemical action, almost universally mediated by proteins. Most such actions are reversible which means that a single protein molecule may perform its function many times. […] Since spontaneous noncovalent surface interactions are very infrequent, the shape and structure of active sites — with high concentrations of hydrophobic residues — makes them the preferred area of interaction between functional proteins and their ligands. They alone provide the appropriate conditions for the formation of hydrogen bonds; moreover, their structure may determine the specific nature of interaction. The functional bond between a protein and a ligand is usually noncovalent and therefore reversible.”

“In general terms, we can state that enzymes accelerate reactions by lowering activation energies for processes which would otherwise occur very slowly or not at all. […] The activity of enzymes goes beyond synthesizing a specific protein-ligand complex (as in the case of antibodies or receptors) and involves an independent catalytic attack on a selected bond within the ligand, precipitating its conversion into the final product. The relative independence of both processes (binding of the ligand in the active site and catalysis) is evidenced by the phenomenon of noncompetitive inhibition […] Kinetic studies of enzymes have provided valuable insight into the properties of enzymatic inhibitors — an important field of study in medicine and drug research. Some inhibitors, particularly competitive ones (i.e. inhibitors which outcompete substrates for access to the enzyme), are now commonly used as drugs. […] Physical and chemical processes may only occur spontaneously if they generate energy, or non-spontaneously if they consume it. However, all processes occurring in a cell must have a spontaneous character because only these processes may be catalyzed by enzymes. Enzymes merely accelerate reactions; they do not provide energy. […] The change in enthalpy associated with a chemical process may be calculated as a net difference in the sum of molecular binding energies prior to and following the reaction. Entropy is a measure of the likelihood that a physical system will enter a given state. Since chaotic distribution of elements is considered the most probable, physical systems exhibit a general tendency to gravitate towards chaos. Any form of ordering is thermodynamically disadvantageous.”
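
(For readers who want the one-line formalism behind the enthalpy/entropy remarks above, and this is the standard thermodynamic relation rather than a formula quoted from the book: the two quantities are combined in the Gibbs free energy change,

ΔG = ΔH − TΔS

and a process can proceed spontaneously at constant temperature and pressure only if ΔG < 0; ΔG = 0 corresponds to equilibrium, the state which regulation must keep a living cell away from.)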

“The chemical reactions which power biological processes are characterized by varying degrees of efficiency. In general, they tend to be on the lower end of the efficiency spectrum, compared to energy sources which drive matter transformation processes in our universe. In search for a common criterion to describe the efficiency of various energy sources, we can refer to the net loss of mass associated with a release of energy, according to Einstein’s formula:
E = mc²
The ΔM/M coefficient (relative loss of mass, given e.g. in %) allows us to compare the efficiency of energy sources. The most efficient processes are those involved in the gravitational collapse of stars. Their efficiency may reach 40 %, which means that 40 % of the stationary mass of the system is converted into energy. In comparison, nuclear reactions have an approximate efficiency of 0.8 %. The efficiency of chemical energy sources available to biological systems is incomparably lower and amounts to approximately 10⁻⁷ % […]. Among chemical reactions, the most potent sources of energy are found in oxidation processes, commonly exploited by biological systems. Oxidation tends to result in the largest net release of energy per unit of mass, although the efficiency of specific types of oxidation varies. […] given unrestricted access to atmospheric oxygen and to hydrogen atoms derived from hydrocarbons — the combustion of hydrogen (i.e. the synthesis of water; H2 + 1/2O2 = H2O) has become a principal source of energy in nature, next to photosynthesis, which exploits the energy of solar radiation. […] The basic process associated with the release of hydrogen and its subsequent oxidation (called the Krebs cycle) is carried out by processes which transfer electrons onto oxygen atoms […]. Oxidation occurs in stages, enabling optimal use of the released energy. An important byproduct of water synthesis is the universal energy carrier known as ATP (synthesized separately). As water synthesis is a highly spontaneous process, it can be exploited to cover the energy debt incurred by endergonic synthesis of ATP, as long as both processes are thermodynamically coupled, enabling spontaneous catalysis of anhydride bonds in ATP. Water synthesis is a universal source of energy in heterotrophic systems. In contrast, autotrophic organisms rely on the energy of light which is exploited in the process of photosynthesis. Both processes yield ATP […] Preparing nutrients (hydrogen carriers) for participation in water synthesis follows different paths for sugars, lipids and proteins. This is perhaps obvious given their relative structural differences; however, in all cases the final form, which acts as a substrate for dehydrogenases, is acetyl-CoA”.
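
(A rough check, mine rather than the book’s, that hydrogen combustion really does sit near the quoted 10⁻⁷ % mass-to-energy efficiency; the assumed input is the standard heating value of hydrogen, about 142 MJ per kg.)

E_per_kg = 1.42e8   # J released per kg of hydrogen combusted (assumed standard value)
c        = 3.0e8    # speed of light in m/s

delta_m_over_m = E_per_kg / c**2         # Einstein: E = (delta m) * c^2
print(f"{delta_m_over_m * 100:.1e} %")   # ~1.6e-07 %, matching the order quoted above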

“Photosynthesis is a process which — from the point of view of electron transfer — can be treated as a counterpart of the respiratory chain. In heterotrophic organisms, mitochondria transport electrons from hydrogenated compounds (sugars, lipids, proteins) onto oxygen molecules, synthesizing water in the process, whereas in the course of photosynthesis electrons released by breaking down water molecules are used as a means of reducing oxidized carbon compounds […]. In heterotrophic organisms the respiratory chain has a spontaneous quality (owing to its oxidative properties); however any reverse process requires energy to occur. In the case of photosynthesis this energy is provided by sunlight […] Hydrogen combustion and photosynthesis are the basic sources of energy in the living world. […] For an energy source to become useful, non-spontaneous reactions must be coupled to its operation, resulting in a thermodynamically unified system. Such coupling can be achieved by creating a coherent framework in which the spontaneous and non-spontaneous processes are linked, either physically or chemically, using a bridging component which affects them both. If the properties of both reactions are different, the bridging component must also enable suitable adaptation and mediation. […] Direct exploitation of the energy released via the hydrolysis of ATP is usually possible by introducing an active binding carrier mediating the energy transfer. […] Carriers are considered active as long as their concentration ensures a sufficient release of energy to synthesize a new chemical bond by way of a non-spontaneous process. Active carriers are relatively short-lived […] Any active carrier which performs its function outside of the active site must be sufficiently stable to avoid breaking up prior to participating in the synthesis reaction. Such mobile carriers are usually produced when the required synthesis consists of several stages or cannot be conducted in the active site of the enzyme for steric reasons. Contrary to ATP, active energy carriers are usually reaction-specific. […] Mobile energy carriers are usually formed as a result of hydrolysis of two high-energy ATP bonds. In many cases this is the minimum amount of energy required to power a reaction which synthesizes a single chemical bond. […] Expelling a mobile or unstable reaction component in order to increase the spontaneity of active energy carrier synthesis is a process which occurs in many biological mechanisms […] The action of active energy carriers may be compared to a ball rolling down a hill. The descending ball gains sufficient energy to traverse another, smaller mound, adjacent to its starting point. In our case, the smaller hill represents the final synthesis reaction […] Understanding the role of active carriers is essential for the study of metabolic processes.”
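
(As a rough quantitative anchor for the “two high-energy bonds” remark above, and this is a standard textbook figure rather than one quoted from the book: hydrolysis of one phosphoanhydride bond of ATP releases roughly 30 kJ/mol under standard conditions, so spending two such bonds makes on the order of 60 kJ/mol available, comfortably more than the cost of forming many single chemical bonds.)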

“A second category of processes, directly dependent on energy sources, involves structural reconfiguration of proteins, which can be further differentiated into low and high-energy reconfiguration. Low-energy reconfiguration occurs in proteins which form weak, easily reversible bonds with ligands. In such cases, structural changes are powered by the energy released in the creation of the complex. […] Important low-energy reconfiguration processes may occur in proteins which consist of subunits. Structural changes resulting from relative motion of subunits typically do not involve significant expenditures of energy. Of particular note are the so-called allosteric proteins […] whose rearrangement is driven by a weak and reversible bond between the protein and an oxygen molecule. Allosteric proteins are genetically conditioned to possess two stable structural configurations, easily swapped as a result of binding or releasing ligands. Thus, they tend to have two comparable energy minima (separated by a low threshold), each of which may be treated as a global minimum corresponding to the native form of the protein. Given such properties, even a weakly interacting ligand may trigger significant structural reconfiguration. This phenomenon is of critical importance to a variety of regulatory proteins. In many cases, however, the second potential minimum in which the protein may achieve relative stability is separated from the global minimum by a high threshold requiring a significant expenditure of energy to overcome. […] Contrary to low-energy reconfigurations, the relative difference in ligand concentrations is insufficient to cover the cost of a difficult structural change. Such processes are therefore coupled to highly exergonic reactions such as ATP hydrolysis. […]  The link between a biological process and an energy source does not have to be immediate. Indirect coupling occurs when the process is driven by relative changes in the concentration of reaction components. […] In general, high-energy reconfigurations exploit direct coupling mechanisms while indirect coupling is more typical of low-energy processes”.

“Muscle action requires a major expenditure of energy. There is a nonlinear dependence between the degree of physical exertion and the corresponding energy requirements. […] Training may improve the power and endurance of muscle tissue. Muscle fibers subjected to regular exertion may improve their glycogen storage capacity, ATP production rate, oxidative metabolism and the use of fatty acids as fuel.”

February 4, 2018 Posted by | Biology, Books, Chemistry, Genetics, Pharmacology, Physics | Leave a comment

Lakes (II)

(I have had some computer issues over the last couple of weeks, which was the explanation for my brief blogging hiatus, but they should be resolved by now and as I’m already starting to fall quite a bit behind in terms of my intended coverage of the books I’ve read this year I hope to get rid of some of the backlog in the days to come.)

I have added some more observations from the second half of the book, as well as some related links, below.

“[R]ecycling of old plant material is especially important in lakes, and one way to appreciate its significance is to measure the concentration of CO2, an end product of decomposition, in the surface waters. This value is often above, sometimes well above, the value to be expected from equilibration of this gas with the overlying air, meaning that many lakes are net producers of CO2 and that they emit this greenhouse gas to the atmosphere. How can that be? […] Lakes are not sealed microcosms that function as stand-alone entities; on the contrary, they are embedded in a landscape and are intimately coupled to their terrestrial surroundings. Organic materials are produced within the lake by the phytoplankton, photosynthetic cells that are suspended in the water and that fix CO2, release oxygen (O2), and produce biomass at the base of the aquatic food web. Photosynthesis also takes place by attached algae (the periphyton) and submerged water plants (aquatic macrophytes) that occur at the edge of the lake where enough sunlight reaches the bottom to allow their growth. But additionally, lakes are the downstream recipients of terrestrial runoff from their catchments […]. These continuous inputs include not only water, but also subsidies of plant and soil organic carbon that are washed into the lake via streams, rivers, groundwater, and overland flows. […] The organic carbon entering lakes from the catchment is referred to as ‘allochthonous’, meaning coming from the outside, and it tends to be relatively old […] In contrast, much younger organic carbon is available […] as a result of recent photosynthesis by the phytoplankton and littoral communities; this carbon is called ‘autochthonous’, meaning that it is produced within the lake.”

“It used to be thought that most of the dissolved organic matter (DOM) entering lakes, especially the coloured fraction, was unreactive and that it would transit the lake to ultimately leave unchanged at the outflow. However, many experiments and field observations have shown that this coloured material can be partially broken down by sunlight. These photochemical reactions result in the production of CO2, and also the degradation of some of the organic polymers into smaller organic molecules; these in turn are used by bacteria and decomposed to CO2. […] Most of the bacterial species in lakes are decomposers that convert organic matter into mineral end products […] This sunlight-driven chemistry begins in the rivers, and continues in the surface waters of the lake. Additional chemical and microbial reactions in the soil also break down organic materials and release CO2 into the runoff and ground waters, further contributing to the high concentrations in lake water and its emission to the atmosphere. In algal-rich ‘eutrophic’ lakes there may be sufficient photosynthesis to cause the drawdown of CO2 to concentrations below equilibrium with the air, resulting in the reverse flux of this gas, from the atmosphere into the surface waters.”

“There is a precarious balance in lakes between oxygen gains and losses, despite the seemingly limitless quantities in the overlying atmosphere. This balance can sometimes tip to deficits that send a lake into oxygen bankruptcy, with the O2 mostly or even completely consumed. Waters that have O2 concentrations below 2mg/L are referred to as ‘hypoxic’, and will be avoided by most fish species, while waters in which there is a complete absence of oxygen are called ‘anoxic’ and are mostly the domain for specialized, hardy microbes. […] In many temperate lakes, mixing in spring and again in autumn are the critical periods of re-oxygenation from the overlying atmosphere. In summer, however, the thermocline greatly slows down that oxygen transfer from air to deep water, and in cooler climates, winter ice-cover acts as another barrier to oxygenation. In both of these seasons, the oxygen absorbed into the water during earlier periods of mixing may be rapidly consumed, leading to anoxic conditions. Part of the reason that lakes are continuously on the brink of anoxia is that only limited quantities of oxygen can be stored in water because of its low solubility. The concentration of oxygen in the air is 209 millilitres per litre […], but cold water in equilibrium with the atmosphere contains only 9ml/L […]. This scarcity of oxygen worsens with increasing temperature (from 4°C to 30°C the solubility of oxygen falls by 43 per cent), and it is compounded by faster rates of bacterial decomposition in warmer waters and thus a higher respiratory demand for oxygen.”
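
(Converting the solubility figures above into the mg/L units used for the hypoxia threshold; the conversion is mine and assumes the ideal-gas density of O2, roughly 1.43 mg per mL of gas.)

o2_density_mg_per_ml = 32 / 22.4          # ~1.43 mg of O2 per mL of gas (ideal gas)

cold_sat_ml_per_l = 9.0                   # cold water in equilibrium with air
warm_sat_ml_per_l = 9.0 * (1 - 0.43)      # 43 per cent lower at 30 degrees C

print(f"cold: ~{cold_sat_ml_per_l * o2_density_mg_per_ml:.1f} mg/L")  # ~12.9 mg/L
print(f"warm: ~{warm_sat_ml_per_l * o2_density_mg_per_ml:.1f} mg/L")  # ~7.3 mg/L
# both are still above the 2 mg/L hypoxia threshold until respiration draws the O2 down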

“Lake microbiomes play multiple roles in food webs as producers, parasites, and consumers, and as steps into the animal food chain […]. These diverse communities of microbes additionally hold centre stage in the vital recycling of elements within the lake ecosystem […]. These biogeochemical processes are not simply of academic interest; they totally alter the nutritional value, mobility, and even toxicity of elements. For example, sulfate is the most oxidized and also most abundant form of sulfur in natural waters, and it is the ion taken up by phytoplankton and aquatic plants to meet their biochemical needs for this element. These photosynthetic organisms reduce the sulfate to organic sulfur compounds, and once they die and decompose, bacteria convert these compounds to the rotten-egg smelling gas, H2S, which is toxic to most aquatic life. In anoxic waters and sediments, this effect is amplified by bacterial sulfate reducers that directly convert sulfate to H2S. Fortunately another group of bacteria, sulfur oxidizers, can use H2S as a chemical energy source, and in oxygenated waters they convert this reduced sulfur back to its benign, oxidized, sulfate form. […] [The] acid neutralizing capacity (or ‘alkalinity’) varies greatly among lakes. Many lakes in Europe, North America, and Asia have been dangerously shifted towards a low pH because they lacked sufficient carbonate to buffer the continuous input of acid rain that resulted from industrial pollution of the atmosphere. The acid conditions have negative effects on aquatic animals, including by causing a shift in aluminium to its more soluble and toxic form Al3+. Fortunately, these industrial emissions have been regulated and reduced in most of the developed world, although there are still legacy effects of acid rain that have resulted in a long-term depletion of carbonates and associated calcium in certain watersheds.”

“Rotifers, cladocerans, and copepods are all planktonic, that is their distribution is strongly affected by currents and mixing processes in the lake. However, they are also swimmers, and can regulate their depth in the water. For the smallest such as rotifers and copepods, this swimming ability is limited, but the larger zooplankton are able to swim over an impressive depth range during the twenty-four-hour ‘diel’ (i.e. light–dark) cycle. […] the cladocerans in Lake Geneva reside in the thermocline region and deep epilimnion during the day, and swim upwards by about 10m during the night, while cyclopoid copepods swim up by 60m, returning to the deep, dark, cold waters of the profundal zone during the day. Even greater distances up and down the water column are achieved by larger animals. The opossum shrimp, Mysis (up to 25mm in length) lives on the bottom of lakes during the day and in Lake Tahoe it swims hundreds of metres up into the surface waters, although not on moon-lit nights. In Lake Baikal, one of the main zooplankton species is the endemic amphipod, Macrohectopus branickii, which grows up to 38mm in size. It can form dense swarms at 100–200m depth during the day, but the populations then disperse and rise to the upper waters during the night. These nocturnal migrations connect the pelagic surface waters with the profundal zone in lake ecosystems, and are thought to be an adaptation towards avoiding visual predators, especially pelagic fish, during the day, while accessing food in the surface waters under the cover of nightfall. […] Although certain fish species remain within specific zones of the lake, there are others that swim among zones and access multiple habitats. […] This type of fish migration means that the different parts of the lake ecosystem are ecologically connected. For many fish species, moving between habitats extends all the way to the ocean. Anadromous fish migrate out of the lake and swim to the sea each year, and although this movement comes at considerable energetic cost, it has the advantage of access to rich marine food sources, while allowing the young to be raised in the freshwater environment with less exposure to predators. […] With the converse migration pattern, catadromous fish live in freshwater and spawn in the sea.”

“Invasive species that are the most successful and do the most damage once they enter a lake have a number of features in common: fast growth rates, broad tolerances, the capacity to thrive under high population densities, and an ability to disperse and colonize that is enhanced by human activities. Zebra mussels (Dreissena polymorpha) get top marks in each of these categories, and they have proven to be a troublesome invader in many parts of the world. […] A single Zebra mussel can produce up to one million eggs over the course of a spawning season, and these hatch into readily dispersed larvae (‘veligers’), that are free-swimming for up to a month. The adults can achieve densities up to hundreds of thousands per square metre, and their prolific growth within water pipes has been a serious problem for the cooling systems of nuclear and thermal power stations, and for the intake pipes of drinking water plants. A single Zebra mussel can filter a litre a day, and they have the capacity to completely strip the water of bacteria and protists. In Lake Erie, the water clarity doubled and diatoms declined by 80–90 per cent soon after the invasion of Zebra mussels, with a concomitant decline in zooplankton, and potential impacts on planktivorous fish. The invasion of this species can shift a lake from dominance of the pelagic to the benthic food web, but at the expense of native unionid clams on the bottom that can become smothered in Zebra mussels. Their efficient filtering capacity may also cause a regime shift in primary producers, from turbid waters with high concentrations of phytoplankton to a clearer lake ecosystem state in which benthic water plants dominate.”
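
(Combining the filtering figures in the quote; the arithmetic is mine, with an illustrative density taken from within the “hundreds of thousands per square metre” range given by the book.)

mussels_per_m2    = 200_000   # illustrative density within "hundreds of thousands per m2"
litres_per_mussel = 1.0       # filtration per mussel per day, as quoted

litres_per_m2_per_day = mussels_per_m2 * litres_per_mussel
print(f"~{litres_per_m2_per_day / 1000:.0f} m3 of water filtered per m2 of lake bed per day")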

“One of the many distinguishing features of H2O is its unusually high dielectric constant, meaning that it is a strongly polar solvent with positive and negative charges that can stabilize ions brought into solution. This dielectric property results from the asymmetrical electron cloud over the molecule […] and it gives liquid water the ability to leach minerals from rocks and soils as it passes through the ground, and to maintain these salts in solution, even at high concentrations. Collectively, these dissolved minerals produce the salinity of the water […] Sea water is around 35ppt, and its salinity is mainly due to the positively charged ions sodium (Na+), potassium (K+), magnesium (Mg2+), and calcium (Ca2+), and the negatively charged ions chloride (Cl-), sulfate (SO42-), and carbonate (CO32-). These solutes, collectively called the ‘major ions’, conduct electrons, and therefore a simple way to track salinity is to measure the electrical conductance of the water between two electrodes set a known distance apart. Lake and ocean scientists now routinely take profiles of salinity and temperature with a CTD: a submersible instrument that records conductance, temperature, and depth many times per second as it is lowered on a rope or wire down the water column. Conductance is measured in Siemens (or microSiemens (µS), given the low salt concentrations in freshwater lakes), and adjusted to a standard temperature of 25°C to give specific conductivity in µS/cm. All freshwater lakes contain dissolved minerals, with specific conductivities in the range 50–500µS/cm, while salt water lakes have values that can exceed sea water (about 50,000µS/cm), and are the habitats for extreme microbes”.
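
(A sketch of the temperature adjustment mentioned above: raw conductance readings are normalised to 25°C to give specific conductivity. The linear correction with a coefficient of roughly 2 % per °C is a common approximation and my assumption here, not a figure from the book.)

def specific_conductivity_25c(conductance_us_cm, temp_c, alpha=0.02):
    """Normalise a raw conductance reading to the 25 degree C reference temperature."""
    return conductance_us_cm / (1 + alpha * (temp_c - 25.0))

print(round(specific_conductivity_25c(180.0, 10.0)))  # a cold freshwater reading, ~257 uS/cm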

“The World Register of Dams currently lists 58,519 ‘large dams’, defined as those with a dam wall of 15m or higher; these collectively store 16,120km3 of water, equivalent to 213 years of flow of Niagara Falls on the USA–Canada border. […] Around a hundred large dam projects are in advanced planning or construction in Africa […]. More than 300 dams are planned or under construction in the Amazon Basin of South America […]. Reservoirs have a number of distinguishing features relative to natural lakes. First, the shape (‘morphometry’) of their basins is rarely circular or oval, but instead is often dendritic, with a tree-like main stem and branches ramifying out into the submerged river valleys. Second, reservoirs typically have a high catchment area to lake area ratio, again reflecting their riverine origins. For natural lakes, this ratio is relatively low […] These proportionately large catchments mean that reservoirs have short water residence times, and water quality is much better than might be the case in the absence of this rapid flushing. Nonetheless, noxious algal blooms can develop and accumulate in isolated bays and side-arms, and downstream next to the dam itself. Reservoirs typically experience water level fluctuations that are much larger and more rapid than in natural lakes, and this limits the development of littoral plants and animals. Another distinguishing feature of reservoirs is that they often show a longitudinal gradient of conditions. Upstream, the river section contains water that is flowing, turbulent, and well mixed; this then passes through a transition zone into the lake section up to the dam, which is often the deepest part of the lake and may be stratified and clearer due to decantation of land-derived particles. In some reservoirs, the water outflow is situated near the base of the dam within the hypolimnion, and this reduces the extent of oxygen depletion and nutrient build-up, while also providing cool water for fish and other animal communities below the dam. There is increasing attention being given to careful regulation of the timing and magnitude of dam outflows to maintain these downstream ecosystems. […] The downstream effects of dams continue out into the sea, with the retention of sediments and nutrients in the reservoir leaving less available for export to marine food webs. This reduction can also lead to changes in shorelines, with a retreat of the coastal delta and intrusion of seawater because natural erosion processes can no longer be offset by resupply of sediments from upstream.”
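
(A consistency check, my own arithmetic, of the storage comparison in the quote: 16,120 km³ spread over 213 years implies an average flow of roughly 2,400 m³ per second, which is indeed the order of magnitude commonly cited for Niagara Falls.)

storage_km3 = 16_120
years       = 213

flow_km3_per_year = storage_km3 / years                        # ~75.7 km3 per year
flow_m3_per_s     = flow_km3_per_year * 1e9 / (365 * 86_400)   # ~2,400 m3 per second

print(f"implied average flow: ~{flow_m3_per_s:.0f} m3/s")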

“One of the most serious threats facing lakes throughout the world is the proliferation of algae and water plants caused by eutrophication, the overfertilization of waters with nutrients from human activities. […] Nutrient enrichment occurs both from ‘point sources’ of effluent discharged via pipes into the receiving waters, and ‘nonpoint sources’ such as the runoff from roads and parking areas, agricultural lands, septic tank drainage fields, and terrain cleared of its nutrient- and water-absorbing vegetation. By the 1970s, even many of the world’s larger lakes had begun to show worrying signs of deterioration from these sources of increasing enrichment. […] A sharp drop in water clarity is often among the first signs of eutrophication, although in forested areas this effect may be masked for many years by the greater absorption of light by the coloured organic materials that are dissolved within the lake water. A drop in oxygen levels in the bottom waters during stratification is another telltale indicator of eutrophication, with the eventual fall to oxygen-free (anoxic) conditions in these lower strata of the lake. However, the most striking impact with greatest effect on ecosystem services is the production of harmful algal blooms (HABs), specifically by cyanobacteria. In eutrophic, temperate latitude waters, four genera of bloom-forming cyanobacteria are the usual offenders […]. These may occur alone or in combination, and although each has its own idiosyncratic size, shape, and lifestyle, they have a number of impressive biological features in common. First and foremost, their cells are typically full of hydrophobic protein cases that exclude water and trap gases. These honeycombs of gas-filled chambers, called ‘gas vesicles’, reduce the density of the cells, allowing them to float up to the surface where there is light available for growth. Put a drop of water from an algal bloom under a microscope and it will be immediately apparent that the individual cells are extremely small, and that the bloom itself is composed of billions of cells per litre of lake water.”

“During the day, the [algal] cells capture sunlight and produce sugars by photosynthesis; this increases their density, eventually to the point where they are heavier than the surrounding water and sink to more nutrient-rich conditions at depth in the water column or at the sediment surface. These sugars are depleted by cellular respiration, and this loss of ballast eventually results in cells becoming less dense than water and floating again towards the surface. This alternation of sinking and floating can result in large fluctuations in surface blooms over the twenty-four-hour cycle. The accumulation of bloom-forming cyanobacteria at the surface gives rise to surface scums that then can be blown into bays and washed up onto beaches. These dense populations of colonies in the water column, and especially at the surface, can shade out bottom-dwelling water plants, as well as greatly reduce the amount of light for other phytoplankton species. The resultant ‘cyanobacterial dominance’ and loss of algal species diversity has negative implications for the aquatic food web […] This negative impact on the food web may be compounded by the final collapse of the bloom and its decomposition, resulting in a major drawdown of oxygen. […] Bloom-forming cyanobacteria are especially troublesome for the management of drinking water supplies. First, there is the overproduction of biomass, which results in a massive load of algal particles that can exceed the filtration capacity of a water treatment plant […]. Second, there is an impact on the taste of the water. […] The third and most serious impact of cyanobacteria is that some of their secondary compounds are highly toxic. […] phosphorus is the key nutrient limiting bloom development, and efforts to preserve and rehabilitate freshwaters should pay specific attention to controlling the input of phosphorus via point and nonpoint discharges to lakes.”

Ultramicrobacteria.
The viral shunt in marine foodwebs.
Proteobacteria. Alphaproteobacteria. Betaproteobacteria. Gammaproteobacteria.
Mixotroph.
Carbon cycle. Nitrogen cycle. Ammonification. Anammox. Comammox.
Methanotroph.
Phosphorus cycle.
Littoral zone. Limnetic zone. Profundal zone. Benthic zone. Benthos.
Phytoplankton. Diatom. Picoeukaryote. Flagellates. Cyanobacteria.
Trophic state (-index).
Amphipoda. Rotifer. Cladocera. Copepod. Daphnia.
Redfield ratio.
δ¹⁵N.
Thermistor.
Extremophile. Halophile. Psychrophile. Acidophile.
Caspian Sea. Endorheic basin. Mono Lake.
Alpine lake.
Meromictic lake.
Subglacial lake. Lake Vostok.
Thermus aquaticus. Taq polymerase.
Lake Monoun.
Microcystin. Anatoxin-a.

February 2, 2018 Posted by | Biology, Books, Botany, Chemistry, Ecology, Engineering, Zoology | Leave a comment

Lakes (I)

“The aim of this book is to provide a condensed overview of scientific knowledge about lakes, their functioning as ecosystems that we are part of and depend upon, and their responses to environmental change. […] Each chapter briefly introduces concepts about the physical, chemical, and biological nature of lakes, with emphasis on how these aspects are connected, the relationships with human needs and impacts, and the implications of our changing global environment.”

I’m currently reading this book and I really like it so far. I have added some observations from the first half of the book and some coverage-related links below.

“High resolution satellites can readily detect lakes above 0.002 kilometres square (km²) in area; that’s equivalent to a circular waterbody some 50m across. Using this criterion, researchers estimate from satellite images that the world contains 117 million lakes, with a total surface area amounting to 5 million km². […] continuous accumulation of materials on the lake floor, both from inflows and from the production of organic matter within the lake, means that lakes are ephemeral features of the landscape, and from the moment of their creation onwards, they begin to fill in and gradually disappear. The world’s deepest and most ancient freshwater ecosystem, Lake Baikal in Russia (Siberia), is a compelling example: it has a maximum depth of 1,642m, but its waters overlie a much deeper basin that over the twenty-five million years of its geological history has become filled with some 7,000m of sediments. Lakes are created in a great variety of ways: tectonic basins formed by movements in the Earth’s crust, the scouring and residual ice effects of glaciers, as well as fluvial, volcanic, riverine, meteorite impacts, and many other processes, including human construction of ponds and reservoirs. Tectonic basins may result from a single fault […] or from a series of intersecting fault lines. […] The oldest and deepest lakes in the world are generally of tectonic origin, and their persistence through time has allowed the evolution of endemic plants and animals; that is, species that are found only at those sites.”
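
A quick arithmetic check of the ‘some 50m across’ figure quoted above, using nothing more than the area of a circle:

```python
import math

area_m2 = 0.002 * 1e6          # 0.002 km² in square metres
diameter_m = 2 * math.sqrt(area_m2 / math.pi)
print(round(diameter_m, 1))    # ≈ 50.5 m, i.e. 'some 50m across'
```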

“In terms of total numbers, most of the world’s lakes […] owe their origins to glaciers that during the last ice age gouged out basins in the rock and deepened river valleys. […] As the glaciers retreated, their terminal moraines (accumulations of gravel and sediments) created dams in the landscape, raising water levels or producing new lakes. […] During glacial retreat in many areas of the world, large blocks of glacial ice broke off and were left behind in the moraines. These subsequently melted out to produce basins that filled with water, called ‘kettle’ or ‘pothole’ lakes. Such waterbodies are well known across the plains of North America and Eurasia. […] The most violent of lake births are the result of volcanoes. The craters left behind after a volcanic eruption can fill with water to form small, often circular-shaped and acidic lakes. […] Much larger lakes are formed by the collapse of a magma chamber after eruption to produce caldera lakes. […] Craters formed by meteorite impacts also provide basins for lakes, and have proved to be of great scientific as well as human interest. […] There was a time when limnologists paid little attention to small lakes and ponds, but this has changed with the realization that although such waterbodies are modest in size, they are extremely abundant throughout the world and make up a large total surface area. Furthermore, these smaller waterbodies often have high rates of chemical activity such as greenhouse gas production and nutrient cycling, and they are major habitats for diverse plants and animals”.

“For Forel, the science of lakes could be subdivided into different disciplines and subjects, all of which continue to occupy the attention of freshwater scientists today […]. First, the physical environment of a lake includes its geological origins and setting, the water balance and exchange of heat with the atmosphere, as well as the penetration of light, the changes in temperature with depth, and the waves, currents, and mixing processes that collectively determine the movement of water. Second, the chemical environment is important because lake waters contain a great variety of dissolved materials (‘solutes’) and particles that play essential roles in the functioning of the ecosystem. Third, the biological features of a lake include not only the individual species of plants, microbes, and animals, but also their organization into food webs, and the distribution and functioning of these communities across the bottom of the lake and in the overlying water.”

“In the simplest hydrological terms, lakes can be thought of as tanks of water in the landscape that are continuously topped up by their inflowing rivers, while spilling excess water via their outflow […]. Based on this model, we can pose the interesting question: how long does the average water molecule stay in the lake before leaving at the outflow? This value is referred to as the water residence time, and it can be simply calculated as the total volume of the lake divided by the water discharge at the outlet. This lake parameter is also referred to as the ‘flushing time’ (or ‘flushing rate’, if expressed as a proportion of the lake volume discharged per unit of time) because it provides an estimate of how fast mineral salts and pollutants can be flushed out of the lake basin. In general, lakes with a short flushing time are more resilient to the impacts of human activities in their catchments […] Each lake has its own particular combination of catchment size, volume, and climate, and this translates into a water residence time that varies enormously among lakes [from perhaps a month to more than a thousand years, US] […] A more accurate approach towards calculating the water residence time is to consider the question: if the lake were to be pumped dry, how long would it take to fill it up again? For most lakes, this will give a similar value to the outflow calculation, but for lakes where evaporation is a major part of the water balance, the residence time will be much shorter.”
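
The residence-time calculation described in the quote is easy to sketch; the lake volume and outflow figures below are made up purely for illustration:

```python
def residence_time_years(lake_volume_km3, outflow_discharge_m3_s):
    """Water residence time = lake volume / discharge at the outlet."""
    seconds_per_year = 365.25 * 24 * 3600
    volume_m3 = lake_volume_km3 * 1e9
    return volume_m3 / (outflow_discharge_m3_s * seconds_per_year)

# Hypothetical lake: 5 km³ of water, mean outflow of 40 m³/s
print(round(residence_time_years(5, 40), 1))      # ≈ 4.0 years
# The 'flushing rate' is just the reciprocal (fraction of the volume per year)
print(round(1 / residence_time_years(5, 40), 2))  # ≈ 0.25 per year
```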

“Each year, mineral and organic particles are deposited by wind on the lake surface and are washed in from the catchment, while organic matter is produced within the lake by aquatic plants and plankton. There is a continuous rain of this material downwards, ultimately accumulating as an annual layer of sediment on the lake floor. These lake sediments are storehouses of information about past changes in the surrounding catchment, and they provide a long-term memory of how the limnology of a lake has responded to those changes. The analysis of these natural archives is called ‘palaeolimnology’ (or ‘palaeoceanography’ for marine studies), and this branch of the aquatic sciences has yielded enormous insights into how lakes change through time, including the onset, effects, and abatement of pollution; changes in vegetation both within and outside the lake; and alterations in regional and global climate.”

“Sampling for palaeolimnological analysis is typically undertaken in the deepest waters to provide a more integrated and complete picture of the lake basin history. This is also usually the part of the lake where sediment accumulation has been greatest, and where the disrupting activities of bottom-dwelling animals (‘bioturbation’ of the sediments) may be reduced or absent. […] Some of the most informative microfossils to be found in lake sediments are diatoms, an algal group that has cell walls (‘frustules’) made of silica glass that resist decomposition. Each lake typically contains dozens to hundreds of different diatom species, each with its own characteristic set of environmental preferences […]. A widely adopted approach is to sample many lakes and establish a statistical relationship or ‘transfer function’ between diatom species composition (often by analysis of surface sediments) and a lake water variable such as temperature, pH, phosphorus, or dissolved organic carbon. This quantitative species–environment relationship can then be applied to the fossilized diatom species assemblage in each stratum of a sediment core from a lake in the same region, and in this way the physical and chemical fluctuations that the lake has experienced in the past can be reconstructed or ‘hindcast’ year-by-year. Other fossil indicators of past environmental change include algal pigments, DNA of algae and bacteria including toxic bloom species, and the remains of aquatic animals such as ostracods, cladocerans, and larval insects.”
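
A classic way of building such a transfer function is weighted averaging: each taxon gets an abundance-weighted ‘optimum’ from the calibration lakes, and a fossil assemblage is then scored against those optima. The toy Python sketch below uses made-up numbers and leaves out the deshrinking and cross-validation steps that real applications include:

```python
import numpy as np

# Calibration set: rows = lakes, columns = diatom taxa (relative abundances),
# plus a measured environmental variable (here lake-water pH) for each lake.
Y = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])       # made-up abundances
pH = np.array([5.5, 6.5, 7.5])        # made-up pH values

# Weighted-averaging regression: each taxon's pH 'optimum' is the
# abundance-weighted mean pH across the calibration lakes.
optima = (Y * pH[:, None]).sum(axis=0) / Y.sum(axis=0)

# Weighted-averaging calibration: reconstruct ('hindcast') pH for a fossil
# assemblage counted in one stratum of a sediment core.
fossil = np.array([0.5, 0.4, 0.1])
reconstructed_pH = (fossil * optima).sum() / fossil.sum()
print(optima.round(2), round(reconstructed_pH, 2))
```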

“In lake and ocean studies, the penetration of sunlight into the water can be […] precisely measured with an underwater light meter (submersible radiometer), and such measurements always show that the decline with depth follows a sharp curve rather than a straight line […]. This is because the fate of sunlight streaming downwards in water is dictated by the probability of the photons being absorbed or deflected out of the light path; for example, a 50 per cent probability of photons being lost from the light beam by these processes per metre depth in a lake would result in sunlight values dropping from 100 per cent at the surface to 50 per cent at 1m, 25 per cent at 2m, 12.5 per cent at 3m, and so on. The resulting exponential curve means that for all but the clearest of lakes, there is only enough solar energy for plants, including photosynthetic cells in the plankton (phytoplankton), in the upper part of the water column. […] The depth limit for underwater photosynthesis or primary production is known as the ‘compensation depth‘. This is the depth at which carbon fixed by photosynthesis exactly balances the carbon lost by cellular respiration, so the overall production of new biomass (net primary production) is zero. This depth often corresponds to an underwater light level of 1 per cent of the sunlight just beneath the water surface […] The production of biomass by photosynthesis takes place at all depths above this level, and this zone is referred to as the ‘photic’ zone. […] biological processes in [the] ‘aphotic zone’ are mostly limited to feeding and decomposition. A Secchi disk measurement can be used as a rough guide to the extent of the photic zone: in general, the 1 per cent light level is about twice the Secchi depth.”
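
The exponential decline described in the quote is straightforward to put into code. In the sketch below, the attenuation coefficient of 0.69 per metre is chosen simply to reproduce the book’s 50-per-cent-per-metre example:

```python
import math

def light_fraction(depth_m, k_per_m):
    """Fraction of surface light remaining at a given depth (exponential decay)."""
    return math.exp(-k_per_m * depth_m)

def compensation_depth(k_per_m, light_level=0.01):
    """Depth where light falls to the given fraction of surface light (default 1%)."""
    return -math.log(light_level) / k_per_m

k = 0.69   # attenuation coefficient giving ~50% loss per metre, as in the example
for z in (1, 2, 3):
    print(z, round(100 * light_fraction(z, k)))   # ~50%, 25%, 13% of surface light
print(round(compensation_depth(k), 1))            # ≈ 6.7 m photic zone
# The book's rule of thumb: the 1% light level is roughly twice the Secchi depth.
```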

“[W]ater colour is now used in […] many powerful ways to track changes in water quality and other properties of lakes, rivers, estuaries, and the ocean. […] Lakes have different colours, hues, and brightness levels as a result of the materials that are dissolved and suspended within them. The purest of lakes are deep blue because the water molecules themselves absorb light in the green and, to a greater extent, red end of the spectrum; they scatter the remaining blue photons in all directions, mostly downwards but also back towards our eyes. […] Algae in the water typically cause it to be green and turbid because their suspended cells and colonies contain chlorophyll and other light-capturing molecules that absorb strongly in the blue and red wavebands, but not green. However there are some notable exceptions. Noxious algal blooms dominated by cyanobacteria are blue-green (cyan) in colour caused by their blue-coloured protein phycocyanin, in addition to chlorophyll.”

“[A]t the largest dimension, at the scale of the entire lake, there has to be a net flow from the inflowing rivers to the outflow, and […] from this landscape perspective, lakes might be thought of as enlarged rivers. Of course, this riverine flow is constantly disrupted by wind-induced movements of the water. When the wind blows across the surface, it drags the surface water with it to generate a downwind flow, and this has to be balanced by a return movement of water at depth. […] In large lakes, the rotation of the Earth has plenty of time to exert its weak effect as the water moves from one side of the lake to the other. As a result, the surface water no longer flows in a straight line, but rather is directed into two or more circular patterns or gyres that can move nearshore water masses rapidly into the centre of the lake and vice-versa. Gyres can therefore be of great consequence […] Unrelated to the Coriolis Effect, the interaction between wind-induced currents and the shoreline can also cause water to flow in circular, individual gyres, even in smaller lakes. […] At a much smaller scale, the blowing of wind across a lake can give rise to downward spiral motions in the water, called ‘Langmuir cells’. […] These circulation features are commonly observed in lakes, where the spirals progressing in the general direction of the wind concentrate foam (on days of white-cap waves) or glossy, oily materials (on less windy days) into regularly spaced lines that are parallel to the direction of the wind. […] Density currents must also be included in this brief discussion of water movement […] Cold river water entering a warm lake will be denser than its surroundings and therefore sinks to the bottom, where it may continue to flow for considerable distances. […] Density currents contribute greatly to inshore-offshore exchanges of water, with potential effects on primary productivity, deep-water oxygenation, and the dispersion of pollutants.”

Links:

Limnology.
Drainage basin.
Lake Geneva. Lake Malawi. Lake Tanganyika. Lake Victoria. Lake Biwa. Lake Titicaca.
English Lake District.
Proglacial lake. Lake Agassiz. Lake Ojibway.
Lake Taupo.
Manicouagan Reservoir.
Subglacial lake.
Thermokarst (-lake).
Bathymetry. Bathymetric chart. Hypsographic curve.
Várzea forest.
Lake Chad.
Colored dissolved organic matter.
H2O Temperature-density relationship. Thermocline. Epilimnion. Hypolimnion. Monomictic lake. Dimictic lake. Lake stratification.
Capillary wave. Gravity wave. Seiche. Kelvin wave. Poincaré wave.
Benthic boundary layer.
Kelvin–Helmholtz instability.

January 22, 2018 Posted by | Biology, Books, Botany, Chemistry, Geology, Paleontology, Physics | Leave a comment

Rivers (II)

Some more observations from the book and related links below.

“By almost every measure, the Amazon is the greatest of all the large rivers. Encompassing more than 7 million square kilometres, its drainage basin is the largest in the world and makes up 5% of the global land surface. The river accounts for nearly one-fifth of all the river water discharged into the oceans. The flow is so great that water from the Amazon can still be identified 125 miles out in the Atlantic […] The Amazon has some 1,100 tributaries, and 7 of these are more than 1,600 kilometres long. […] In the lowlands, most Amazonian rivers have extensive floodplains studded with thousands of shallow lakes. Up to one-quarter of the entire Amazon Basin is periodically flooded, and these lakes become progressively connected with each other as the water level rises.”

“To hydrologists, the term ‘flood’ refers to a river’s annual peak discharge period, whether the water inundates the surrounding landscape or not. In more common parlance, however, a flood is synonymous with the river overflowing its banks […] Rivers flood in the natural course of events. This often occurs on the floodplain, as the name implies, but flooding can affect almost all of the length of the river. Extreme weather, particularly heavy or protracted rainfall, is the most frequent cause of flooding. The melting of snow and ice is another common cause. […] River floods are one of the most common natural hazards affecting human society, frequently causing social disruption, material damage, and loss of life. […] Most floods have a seasonal element in their occurrence […] It is a general rule that the magnitude of a flood is inversely related to its frequency […] Many of the less predictable causes of flooding occur after a valley has been blocked by a natural dam as a result of a landslide, glacier, or lava flow. Natural dams may cause upstream flooding as the blocked river forms a lake and downstream flooding as a result of failure of the dam.”
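
The inverse relation between flood magnitude and frequency is usually expressed through return periods estimated from a series of annual peak flows. A minimal sketch using the common Weibull plotting position, with made-up discharge values:

```python
# Return periods from a series of annual peak discharges (made-up values, m³/s),
# using the Weibull plotting position T = (n + 1) / rank.
peaks = [820, 640, 1540, 910, 700, 1220, 760, 980, 1100, 690]

n = len(peaks)
ranked = sorted(peaks, reverse=True)          # rank 1 = largest flood on record
for rank, q in enumerate(ranked, start=1):
    return_period = (n + 1) / rank
    print(f"{q:5d} m³/s  ~ {return_period:4.1f}-year flood")
```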

“The Tigris-Euphrates, Nile, and Indus are all large, exotic river systems, but in other respects they are quite different. The Nile has a relatively gentle gradient in Egypt and a channel that has experienced only small changes over the last few thousand years, by meander cut-off and a minor shift eastwards. The river usually flooded in a regular and predictable way. The stability and long continuity of the Egyptian civilization may be a reflection of its river’s relative stability. The steeper channel of the Indus, by contrast, has experienced major avulsions over great distances on the lower Indus Plain and some very large floods caused by the failure of glacier ice dams in the Himalayan mountains. Likely explanations for the abandonment of many Harappan cities […] take account of damage caused by major floods and/or the disruption caused by channel avulsion leading to a loss of water supply. Channel avulsion was also a problem for the Sumerian civilization on the alluvial plain called Mesopotamia […] known for the rise and fall of its numerous city states. Most of these cities were situated along the Euphrates River, probably because it was more easily controlled for irrigation purposes than the Tigris, which flowed faster and carried much more water. However, the Euphrates was an anastomosing river with multiple channels that diverge and rejoin. Over time, individual branch channels ceased to flow as others formed, and settlements located on these channels inevitably declined and were abandoned as their water supply ran dry, while others expanded as their channels carried greater amounts of water.”

“During the colonization of the Americas in the mid-18th century and the imperial expansion into Africa and Asia in the late 19th century, rivers were commonly used as boundaries because they were the first, and frequently the only, features mapped by European explorers. The diplomats in Europe who negotiated the allocation of colonial territories claimed by rival powers knew little of the places they were carving up. Often, their limited knowledge was based solely on maps that showed few details, rivers being the only distinct physical features marked. Today, many international river boundaries remain as legacies of those historical decisions based on poor geographical knowledge because states have been reluctant to alter their territorial boundaries from original delimitation agreements. […] no less than three-quarters of the world’s international boundaries follow rivers for at least part of their course. […] approximately 60% of the world’s fresh water is drawn from rivers shared by more than one country.”

“The sediments carried in rivers, laid down over many years, represent a record of the changes that have occurred in the drainage basin through the ages. Analysis of these sediments is one way in which physical geographers can interpret the historical development of landscapes. They can study the physical and chemical characteristics of the sediments themselves and/or the biological remains they contain, such as pollen or spores. […] The simple rate at which material is deposited by a river can be a good reflection of how conditions have changed in the drainage basin. […] Pollen from surrounding plants is often found in abundance in fluvial sediments, and the analysis of pollen can yield a great deal of information about past conditions in an area. […] Very long sediment cores taken from lakes and swamps enable us to reconstruct changes in vegetation over very long time periods, in some cases over a million years […] Because climate is a strong determinant of vegetation, pollen analysis has also proved to be an important method for tracing changes in past climates.”

“The energy in flowing and falling water has been harnessed to perform work by turning water-wheels for more than 2,000 years. The moving water turns a large wheel and a shaft connected to the wheel axle transmits the power from the water through a system of gears and cogs to work machinery, such as a millstone to grind corn. […] The early medieval watermill was able to do the work of between 30 and 60 people, and by the end of the 10th century in Europe, waterwheels were commonly used in a wide range of industries, including powering forge hammers, oil and silk mills, sugar-cane crushers, ore-crushing mills, breaking up bark in tanning mills, pounding leather, and grinding stones. Nonetheless, most were still used for grinding grains for preparation into various types of food and drink. The Domesday Book, a survey prepared in England in AD 1086, lists 6,082 watermills, although this is probably a conservative estimate because many mills were not recorded in the far north of the country. By 1300, this number had risen to exceed 10,000. [..] Medieval watermills typically powered their wheels by using a dam or weir to concentrate the falling water and pond a reserve supply. These modifications to rivers became increasingly common all over Europe, and by the end of the Middle Ages, in the mid-15th century, watermills were in use on a huge number of rivers and streams. The importance of water power continued into the Industrial Revolution […]. The early textile factories were built to produce cloth using machines driven by waterwheels, so they were often called mills. […] [Today,] about one-third of all countries rely on hydropower for more than half their electricity. Globally, hydropower provides about 20% of the world’s total electricity supply.”
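
The power available from falling water follows a standard formula, P = η·ρ·g·Q·H. A short sketch; the discharge, head, and efficiency values are illustrative assumptions, not figures from the book:

```python
def hydro_power_mw(discharge_m3_s, head_m, efficiency=0.9):
    """Hydroelectric power P = efficiency * rho * g * Q * H, returned in megawatts."""
    rho, g = 1000.0, 9.81          # water density (kg/m³), gravity (m/s²)
    return efficiency * rho * g * discharge_m3_s * head_m / 1e6

# A modest hypothetical plant: 100 m³/s falling through a 50 m head
print(round(hydro_power_mw(100, 50), 1))   # ≈ 44.1 MW
```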

“Deliberate manipulation of river channels through engineering works, including dam construction, diversion, channelization, and culverting, […] has a long history. […] In Europe today, almost 80% of the total discharge of the continent’s major rivers is affected by measures designed to regulate flow, whether for drinking water supply, hydroelectric power generation, flood control, or any other reason. The proportion in individual countries is higher still. About 90% of rivers in the UK are regulated as a result of these activities, while in the Netherlands this percentage is close to 100. By contrast, some of the largest rivers on other continents, including the Amazon and the Congo, are hardly manipulated at all. […] Direct and intentional modifications to rivers are complemented by the impacts of land use and land use changes which frequently result in the alteration of rivers as an unintended side effect. Deforestation, afforestation, land drainage, agriculture, and the use of fire have all had significant impacts, with perhaps the most extreme effects produced by construction activity and urbanization. […] The major methods employed in river regulation are the construction of large dams […], the building of run-of-river impoundments such as weirs and locks, and by channelization, a term that covers a range of river engineering works including widening, deepening, straightening, and the stabilization of banks. […] Many aspects of a dynamic river channel and its associated ecosystems are mutually adjusting, so a human activity in a landscape that affects the supply of water or sediment is likely to set off a complex cascade of other alterations.”

“The methods of storage (in reservoirs) and distribution (by canal) have not changed fundamentally since the earliest river irrigation schemes, with the exception of some contemporary projects’ use of pumps to distribute water over greater distances. Nevertheless, many irrigation canals still harness the force of gravity. Half the world’s large dams (defined as being 15 metres or higher) were built exclusively or primarily for irrigation, and about one-third of the world’s irrigated cropland relies on reservoir water. In several countries, including such populous nations as India and China, more than 50% of arable land is irrigated by river water supplied from dams. […] Sadly, many irrigation schemes are not well managed and a number of environmental problems are frequently experienced as a result, both on-site and off-site. In many large networks of irrigation canals, less than half of the water diverted from a river or reservoir actually benefits crops. A lot of water seeps away through unlined canals or evaporates before reaching the fields. Some also runs off the fields or infiltrates through the soil, unused by plants, because farmers apply too much water or at the wrong time. Much of this water seeps back into nearby streams or joins underground aquifers, so can be used again, but the quality of water may deteriorate if it picks up salts, fertilizers, or pesticides. Excessive applications of irrigation water often result in rising water tables beneath fields, causing salinization and waterlogging. These processes reduce crop yields on irrigation schemes all over the world.”

“[Deforestation can contribute] to the degradation of aquatic habitats in numerous ways. The loss of trees along river banks can result in changes in the species found in the river because fewer trees means a decline in plant matter and insects falling from them, items eaten by some fish. Fewer trees on river banks also results in less shade. More sunlight reaching the river results in warmer water and the enhanced growth of algae. A change in species can occur as fish that feed on falling food are edged out by those able to feed on algae. Deforestation also typically results in more runoff and more soil erosion. This sediment may cover spawning grounds, leading to lower reproduction rates. […] Grazing and trampling by livestock reduces vegetation cover and causes the compaction of soil, which reduces its infiltration capacity. As rainwater passes over or through the soil in areas of intensive agriculture, it picks up residues from pesticides and fertilizers and transports them to rivers. In this way, agriculture has become a leading source of river pollution in certain parts of the world. Concentrations of nitrates and phosphates, derived from fertilizers, have risen notably in many rivers in Europe and North America since the 1950s and have led to a range of […] problems encompassed under the term ‘eutrophication’ – the raising of biological productivity caused by nutrient enrichment. […] In slow-moving rivers […] the growth of algae reduces light penetration and depletes the oxygen in the water, sometimes causing fish kills.”

“One of the most profound ways in which people alter rivers is by damming them. Obstructing a river and controlling its flow in this way brings about a raft of changes. A dam traps sediments and nutrients, alters the river’s temperature and chemistry, and affects the processes of erosion and deposition by which the river sculpts the landscape. Dams create more uniform flow in rivers, usually by reducing peak flows and increasing minimum flows. Since the natural variation in flow is important for river ecosystems and their biodiversity, when dams even out flows the result is commonly fewer fish of fewer species. […] the past 50 years or so has seen a marked escalation in the rate and scale of construction of dams all over the world […]. At the beginning of the 21st century, there were about 800,000 dams worldwide […] In some large river systems, the capacity of dams is sufficient to hold more than the entire annual discharge of the river. […] Globally, the world’s major reservoirs are thought to control about 15% of the runoff from the land. The volume of water trapped worldwide in reservoirs of all sizes is no less than five times the total global annual river flow […] Downstream of a reservoir, the hydrological regime of a river is modified. Discharge, velocity, water quality, and thermal characteristics are all affected, leading to changes in the channel and its landscape, plants, and animals, both on the river itself and in deltas, estuaries, and offshore. By slowing the flow of river water, a dam acts as a trap for sediment and hence reduces loads in the river downstream. As a result, the flow downstream of the dam is highly erosive. A relative lack of silt arriving at a river’s delta can result in more coastal erosion and the intrusion of seawater that brings salt into delta ecosystems. […] The dam-barrier effect on migratory fish and their access to spawning grounds has been recognized in Europe since medieval times.”

“One of the most important effects cities have on rivers is the way in which urbanization affects flood runoff. Large areas of cities are typically impermeable, being covered by concrete, stone, tarmac, and bitumen. This tends to increase the amount of runoff produced in urban areas, an effect exacerbated by networks of storm drains and sewers. This water carries relatively little sediment (again, because soil surfaces have been covered by impermeable materials), so when it reaches a river channel it typically causes erosion and widening. Larger and more frequent floods are another outcome of the increase in runoff generated by urban areas. […] It […] seems very likely that efforts to manage the flood hazard on the Mississippi have contributed to an increased risk of damage from tropical storms on the Gulf of Mexico coast. The levées built along the river have contributed to the loss of coastal wetlands, starving them of sediment and fresh water, thereby reducing their dampening effect on storm surge levels. This probably enhanced the damage from Hurricane Katrina which struck the city of New Orleans in 2005.”

Links:

Onyx River.
Yangtze. Yangtze floods.
Missoula floods.
Murray River.
Ganges.
Thalweg.
Southeastern Anatolia Project.
Water conflict.
Hydropower.
Fulling mill.
Maritime transport.
Danube.
Lock (water navigation).
Hydrometry.
Yellow River.
Aswan High Dam. Warragamba Dam. Three Gorges Dam.
Onchocerciasis.
River restoration.

January 16, 2018 Posted by | Biology, Books, Ecology, Engineering, Geography, Geology, History | Leave a comment

Random stuff

I have almost stopped posting posts like these, which has resulted in the accumulation of a very large number of links and studies which I figured I might like to blog at some point. This post is mainly an attempt to deal with the backlog – I won’t cover the material in too much detail.

i. Do Bullies Have More Sex? The answer seems to be a qualified yes. A few quotes:

“Sexual behavior during adolescence is fairly widespread in Western cultures (Zimmer-Gembeck and Helfland 2008) with nearly two thirds of youth having had sexual intercourse by the age of 19 (Finer and Philbin 2013). […] Bullying behavior may aid in intrasexual competition and intersexual selection as a strategy when competing for mates. In line with this contention, bullying has been linked to having a higher number of dating and sexual partners (Dane et al. 2017; Volk et al. 2015). This may be one reason why adolescence coincides with a peak in antisocial or aggressive behaviors, such as bullying (Volk et al. 2006). However, not all adolescents benefit from bullying. Instead, bullying may only benefit adolescents with certain personality traits who are willing and able to leverage bullying as a strategy for engaging in sexual behavior with opposite-sex peers. Therefore, we used two independent cross-sectional samples of older and younger adolescents to determine which personality traits, if any, are associated with leveraging bullying into opportunities for sexual behavior.”

“…bullying by males signal the ability to provide good genes, material resources, and protect offspring (Buss and Shackelford 1997; Volk et al. 2012) because bullying others is a way of displaying attractive qualities such as strength and dominance (Gallup et al. 2007; Reijntjes et al. 2013). As a result, this makes bullies attractive sexual partners to opposite-sex peers while simultaneously suppressing the sexual success of same-sex rivals (Gallup et al. 2011; Koh and Wong 2015; Zimmer-Gembeck et al. 2001). Females may denigrate other females, targeting their appearance and sexual promiscuity (Leenaars et al. 2008; Vaillancourt 2013), which are two qualities relating to male mate preferences. Consequently, derogating these qualities lowers a rivals’ appeal as a mate and also intimidates or coerces rivals into withdrawing from intrasexual competition (Campbell 2013; Dane et al. 2017; Fisher and Cox 2009; Vaillancourt 2013). Thus, males may use direct forms of bullying (e.g., physical, verbal) to facilitate intersexual selection (i.e., appear attractive to females), while females may use relational bullying to facilitate intrasexual competition, by making rivals appear less attractive to males.”

The study relies on the use of self-report data, which I find very problematic – so I won’t go into the results here. I’m not quite clear on how those studies mentioned in the discussion ‘have found self-report data [to be] valid under conditions of confidentiality’ – and I remain skeptical. You’ll usually want data from independent observers (e.g. teacher or peer observations) when analyzing these kinds of things. Note in the context of the self-report data problem that if there’s a strong stigma associated with being bullied (there often is, or bullying wouldn’t work as well), asking people if they have been bullied is not much better than asking people if they’re bullying others.

ii. Some topical advice that some people might soon regret not having followed, from the wonderful Things I Learn From My Patients thread:

“If you are a teenage boy experimenting with fireworks, do not empty the gunpowder from a dozen fireworks and try to mix it in your mother’s blender. But if you do decide to do that, don’t hold the lid down with your other hand and stand right over it. This will result in the traumatic amputation of several fingers, burned and skinned forearms, glass shrapnel in your face, and a couple of badly scratched corneas as a start. You will spend months in rehab and never be able to use your left hand again.”

iii. I haven’t talked about the AlphaZero-Stockfish match, but I was of course aware of it and did read a bit about that stuff. Here’s a reddit thread where one of the Stockfish programmers answers questions about the match. A few quotes:

“Which of the two is stronger under ideal conditions is, to me, neither particularly interesting (they are so different that it’s kind of like comparing the maximum speeds of a fish and a bird) nor particularly important (since there is only one of them that you and I can download and run anyway). What is super interesting is that we have two such radically different ways to create a computer chess playing entity with superhuman abilities. […] I don’t think there is anything to learn from AlphaZero that is applicable to Stockfish. They are just too different, you can’t transfer ideas from one to the other.”

“Based on the 100 games played, AlphaZero seems to be about 100 Elo points stronger under the conditions they used. The current development version of Stockfish is something like 40 Elo points stronger than the version used in Google’s experiment. There is a version of Stockfish translated to hand-written x86-64 assembly language that’s about 15 Elo points stronger still. This adds up to roughly half the Elo difference between AlphaZero and Stockfish shown in Google’s experiment.”
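
For readers who don’t think in Elo terms, the rating gaps mentioned in the quote translate into expected scores via the standard Elo formula:

```python
def expected_score(elo_difference):
    """Expected score of the stronger player given an Elo rating gap."""
    return 1 / (1 + 10 ** (-elo_difference / 400))

for gap in (15, 40, 100):
    print(gap, round(expected_score(gap), 3))
# A 100-point gap corresponds to an expected score of roughly 0.64,
# i.e. about 64% of the available points over a long match.
```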

“It seems that Stockfish was playing with only 1 GB for transposition tables (the area of memory used to store data about the positions previously encountered in the search), which is way too little when running with 64 threads.” [I seem to recall a comp sci guy observing elsewhere that this was less than what was available to his smartphone version of Stockfish, but I didn’t bookmark that comment].

“The time control was a very artificial fixed 1 minute/move. That’s not how chess is traditionally played. Quite a lot of effort has gone into Stockfish’s time management. It’s pretty good at deciding when to move quickly, and when to spend a lot of time on a critical decision. In a fixed time per move game, it will often happen that the engine discovers that there is a problem with the move it wants to play just before the time is out. In a regular time control, it would then spend extra time analysing all alternative moves and trying to find a better one. When you force it to move after exactly one minute, it will play the move it already know is bad. There is no doubt that this will cause it to lose many games it would otherwise have drawn.”

iv. Thrombolytics for Acute Ischemic Stroke – no benefit found.

“Thrombolysis has been rigorously studied in >60,000 patients for acute thrombotic myocardial infarction, and is proven to reduce mortality. It is theorized that thrombolysis may similarly benefit ischemic stroke patients, though a much smaller number (8120) has been studied in relevant, large scale, high quality trials thus far. […] There are 12 such trials 1-12. Despite the temptation to pool these data the studies are clinically heterogeneous. […] Data from multiple trials must be clinically and statistically homogenous to be validly pooled.14 Large thrombolytic studies demonstrate wide variations in anatomic stroke regions, small- versus large-vessel occlusion, clinical severity, age, vital sign parameters, stroke scale scores, and times of administration. […] Examining each study individually is therefore, in our opinion, both more valid and more instructive. […] Two of twelve studies suggest a benefit […] In comparison, twice as many studies showed harm and these were stopped early. This early stoppage means that the number of subjects in studies demonstrating harm would have included over 2400 subjects based on originally intended enrollments. Pooled analyses are therefore missing these phantom data, which would have further eroded any aggregate benefits. In their absence, any pooled analysis is biased toward benefit. Despite this, there remain five times as many trials showing harm or no benefit (n=10) as those concluding benefit (n=2), and 6675 subjects in trials demonstrating no benefit compared to 1445 subjects in trials concluding benefit.”

“Thrombolytics for ischemic stroke may be harmful or beneficial. The answer remains elusive. We struggled therefore, debating between a ‘yellow’ or ‘red’ light for our recommendation. However, over 60,000 subjects in trials of thrombolytics for coronary thrombosis suggest a consistent beneficial effect across groups and subgroups, with no studies suggesting harm. This consistency was found despite a very small mortality benefit (2.5%), and a very narrow therapeutic window (1% major bleeding). In comparison, the variation in trial results of thrombolytics for stroke and the daunting but consistent adverse effect rate caused by ICH suggested to us that thrombolytics are dangerous unless further study exonerates their use.”

“There is a Cochrane review that pooled estimates of effect. 17 We do not endorse this choice because of clinical heterogeneity. However, we present the NNT’s from the pooled analysis for the reader’s benefit. The Cochrane review suggested a 6% reduction in disability […] with thrombolytics. This would mean that 17 were treated for every 1 avoiding an unfavorable outcome. The review also noted a 1% increase in mortality (1 in 100 patients die because of thrombolytics) and a 5% increase in nonfatal intracranial hemorrhage (1 in 20), for a total of 6% harmed (1 in 17 suffers death or brain hemorrhage).”
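
The numbers needed to treat (or harm) quoted from the Cochrane review are just reciprocals of the absolute risk differences; here is that arithmetic spelled out:

```python
def nnt(absolute_risk_difference):
    """Number needed to treat (or harm) = 1 / absolute risk difference."""
    return 1 / absolute_risk_difference

print(round(nnt(0.06)))   # benefit: 6% fewer with disability -> NNT ≈ 17
print(round(nnt(0.01)))   # harm: 1% extra mortality          -> NNH ≈ 100
print(round(nnt(0.05)))   # harm: 5% extra nonfatal ICH       -> NNH ≈ 20
print(round(nnt(0.06)))   # combined harm of 6%               -> NNH ≈ 17
```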

v. Suicide attempts in Asperger Syndrome. An interesting finding: “Over 35% of individuals with AS reported that they had attempted suicide in the past.”

Related: Suicidal ideation and suicide plans or attempts in adults with Asperger’s syndrome attending a specialist diagnostic clinic: a clinical cohort study.

“374 adults (256 men and 118 women) were diagnosed with Asperger’s syndrome in the study period. 243 (66%) of 367 respondents self-reported suicidal ideation, 127 (35%) of 365 respondents self-reported plans or attempts at suicide, and 116 (31%) of 368 respondents self-reported depression. Adults with Asperger’s syndrome were significantly more likely to report lifetime experience of suicidal ideation than were individuals from a general UK population sample (odds ratio 9·6 [95% CI 7·6–11·9], p<0·0001), people with one, two, or more medical illnesses (p<0·0001), or people with psychotic illness (p=0·019). […] Lifetime experience of depression (p=0·787), suicidal ideation (p=0·164), and suicide plans or attempts (p=0·06) did not differ significantly between men and women […] Individuals who reported suicide plans or attempts had significantly higher Autism Spectrum Quotient scores than those who did not […] Empathy Quotient scores and ages did not differ between individuals who did or did not report suicide plans or attempts (table 4). Patients with self-reported depression or suicidal ideation did not have significantly higher Autism Spectrum Quotient scores, Empathy Quotient scores, or age than did those without depression or suicidal ideation”.

The fact that people with Asperger’s are more likely to be depressed and contemplate suicide is consistent with previous observations that they’re also more likely to die from suicide – for example, a paper I blogged a while back found that in that particular (large Swedish population-based cohort-) study, people with ASD were more than 7 times as likely to die from suicide as were the comparable controls.

Also related: Suicidal tendencies hard to spot in some people with autism.

This link has some great graphs and tables of suicide data from the US.

Also autism-related: Increased perception of loudness in autism. This is one of the ‘important ones’ for me personally – I am much more sound-sensitive than are most people.

vi. Early versus Delayed Invasive Intervention in Acute Coronary Syndromes.

“Earlier trials have shown that a routine invasive strategy improves outcomes in patients with acute coronary syndromes without ST-segment elevation. However, the optimal timing of such intervention remains uncertain. […] We randomly assigned 3031 patients with acute coronary syndromes to undergo either routine early intervention (coronary angiography ≤24 hours after randomization) or delayed intervention (coronary angiography ≥36 hours after randomization). The primary outcome was a composite of death, myocardial infarction, or stroke at 6 months. A prespecified secondary outcome was death, myocardial infarction, or refractory ischemia at 6 months. […] Early intervention did not differ greatly from delayed intervention in preventing the primary outcome, but it did reduce the rate of the composite secondary outcome of death, myocardial infarction, or refractory ischemia and was superior to delayed intervention in high-risk patients.”

vii. Some wikipedia links:

Behrens–Fisher problem.
Sailing ship tactics (I figured I had to read up on this if I were to get anything out of the Aubrey-Maturin books).
Anatomical terms of muscle.
Phatic expression (“a phatic expression […] is communication which serves a social function such as small talk and social pleasantries that don’t seek or offer any information of value.”)
Three-domain system.
Beringian wolf (featured).
Subdural hygroma.
Cayley graph.
Schur polynomial.
Solar neutrino problem.
Hadamard product (matrices).
True polar wander.
Newton’s cradle.

viii. Determinant versus permanent (mathematics – technical).

ix. Some years ago I wrote a few English-language posts about some of the various statistical/demographic properties of immigrants living in Denmark, based on numbers included in a publication by Statistics Denmark. I did it by translating the observations included in that publication, which was only published in Danish. I was briefly considering doing the same thing again when the 2017 data arrived, but I decided not to do it as I recalled that it took a lot of time to write those posts back then, and it didn’t seem to me to be worth the effort – but Danish readers might be interested to have a look at the data, if they haven’t already – here’s a link to the publication Indvandrere i Danmark 2017.

x. A banter blitz session with grandmaster Peter Svidler, who recently became the first Russian ever to win the Russian Chess Championship 8 times. He’s currently shared-second in the World Rapid Championship after 10 rounds and is now in the top 10 on the live rating list in both classical and rapid – seems like he’s had a very decent year.

xi. I recently discovered Dr. Whitecoat’s blog. The patient encounters are often interesting.

December 28, 2017 Posted by | Astronomy, autism, Biology, Cardiology, Chess, Computer science, History, Mathematics, Medicine, Neurology, Physics, Psychiatry, Psychology, Random stuff, Statistics, Studies, Wikipedia, Zoology | Leave a comment

Radioactivity

A few quotes from the book and some related links below. Here’s my very short goodreads review of the book.

Quotes:

“The main naturally occurring radionuclides of primordial origin are uranium-235, uranium-238, thorium-232, their decay products, and potassium-40. The average abundance of uranium, thorium, and potassium in the terrestrial crust is 2.6 parts per million, 10 parts per million, and 1% respectively. Uranium and thorium produce other radionuclides via neutron- and alpha-induced reactions, particularly deeply underground, where uranium and thorium have a high concentration. […] A weak source of natural radioactivity derives from nuclear reactions of primary and secondary cosmic rays with the atmosphere and the lithosphere, respectively. […] Accretion of extraterrestrial material, intensively exposed to cosmic rays in space, represents a minute contribution to the total inventory of radionuclides in the terrestrial environment. […] Natural radioactivity is [thus] mainly produced by uranium, thorium, and potassium. The total heat content of the Earth, which derives from this radioactivity, is 12.6 × 10²⁴ MJ (one megajoule = 1 million joules), with the crust’s heat content standing at 5.4 × 10²¹ MJ. For comparison, this is significantly more than the 6.4 × 10¹³ MJ globally consumed for electricity generation during 2011. This energy is dissipated, either gradually or abruptly, towards the external layers of the planet, but only a small fraction can be utilized. The amount of energy available depends on the Earth’s geological dynamics, which regulates the transfer of heat to the surface of our planet. The total power dissipated by the Earth is 42 TW (one TW = 1 trillion watts): 8 TW from the crust, 32.3 TW from the mantle, 1.7 TW from the core. This amount of power is small compared to the 174,000 TW arriving to the Earth from the Sun.”

“Charged particles such as protons, beta and alpha particles, or heavier ions that bombard human tissue dissipate their energy locally, interacting with the atoms via the electromagnetic force. This interaction ejects electrons from the atoms, creating a track of electron–ion pairs, or ionization track. The energy that ions lose per unit path, as they move through matter, increases with the square of their charge and decreases linearly with their energy […] The energy deposited in the tissues and organs of your body by ionizing radiation is defined absorbed dose and is measured in gray. The dose of one gray corresponds to the energy of one joule deposited in one kilogram of tissue. The biological damage wrought by a given amount of energy deposited depends on the kind of ionizing radiation involved. The equivalent dose, measured in sievert, is the product of the dose and a factor w related to the effective damage induced into the living matter by the deposit of energy by specific rays or particles. For X-rays, gamma rays, and beta particles, a gray corresponds to a sievert; for neutrons, a dose of one gray corresponds to an equivalent dose of 5 to 20 sievert, and the factor w is equal to 5–20 (depending on the neutron energy). For protons and alpha particles, w is equal to 5 and 20, respectively. There is also another weighting factor taking into account the radiosensitivity of different organs and tissues of the body, to evaluate the so-called effective dose. Sometimes the dose is still quoted in rem, the old unit, with 100 rem corresponding to one sievert.”
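
The equivalent-dose calculation described above is simply a multiplication of the absorbed dose by a radiation weighting factor. A small sketch using the factors given in the quote; the single value of 10 for neutrons is my own simplification of the energy-dependent 5–20 range:

```python
# Radiation weighting factors as given in the quote (dimensionless).
# Neutrons actually span ~5-20 depending on energy; 10 is an assumed midpoint.
W_R = {"gamma": 1, "x-ray": 1, "beta": 1, "proton": 5, "alpha": 20, "neutron": 10}

def equivalent_dose_sv(absorbed_dose_gy, radiation):
    """Equivalent dose (sievert) = weighting factor * absorbed dose (gray)."""
    return W_R[radiation] * absorbed_dose_gy

print(round(equivalent_dose_sv(0.002, "gamma"), 3))   # 2 mGy of gamma rays -> 0.002 Sv
print(round(equivalent_dose_sv(0.002, "alpha"), 3))   # 2 mGy of alphas     -> 0.04 Sv
```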

“Neutrons emitted during fission reactions have a relatively high velocity. When still in Rome, Fermi had discovered that fast neutrons needed to be slowed down to increase the probability of their reaction with uranium. The fission reaction occurs with uranium-235. Uranium-238, the most common isotope of the element, merely absorbs the slow neutrons. Neutrons slow down when they are scattered by nuclei with a similar mass. The process is analogous to the interaction between two billiard balls in a head-on collision, in which the incoming ball stops and transfers all its kinetic energy to the second one. ‘Moderators’, such as graphite and water, can be used to slow neutrons down. […] When Fermi calculated whether a chain reaction could be sustained in a homogeneous mixture of uranium and graphite, he got a negative answer. That was because most neutrons produced by the fission of uranium-235 were absorbed by uranium-238 before inducing further fissions. The right approach, as suggested by Szilárd, was to use separated blocks of uranium and graphite. Fast neutrons produced by the splitting of uranium-235 in the uranium block would slow down, in the graphite block, and then produce fission again in the next uranium block. […] A minimum mass – the critical mass – is required to sustain the chain reaction; furthermore, the material must have a certain geometry. The fissile nuclides, capable of sustaining a chain reaction of nuclear fission with low-energy neutrons, are uranium-235 […], uranium-233, and plutonium-239. The last two don’t occur in nature but can be produced artificially by irradiating with neutrons thorium-232 and uranium-238, respectively – via a reaction called neutron capture. Uranium-238 (99.27%) is fissionable, but not fissile. In a nuclear weapon, the chain reaction occurs very rapidly, releasing the energy in a burst.”

“The basic components of nuclear power reactors, fuel, moderator, and control rods, are the same as in the first system built by Fermi, but the design of today’s reactors includes additional components such as a pressure vessel, containing the reactor core and the moderator, a containment vessel, and redundant and diverse safety systems. Recent technological advances in material developments, electronics, and information technology have further improved their reliability and performance. […] The moderator to slow down fast neutrons is sometimes still the graphite used by Fermi, but water, including ‘heavy water’ – in which the water molecule has a deuterium atom instead of a hydrogen atom – is more widely used. Control rods contain a neutron-absorbing material, such as boron or a combination of indium, silver, and cadmium. To remove the heat generated in the reactor core, a coolant – either a liquid or a gas – is circulating through the reactor core, transferring the heat to a heat exchanger or directly to a turbine. Water can be used as both coolant and moderator. In the case of boiling water reactors (BWRs), the steam is produced in the pressure vessel. In the case of pressurized water reactors (PWRs), the steam generator, which is the secondary side of the heat exchanger, uses the heat produced by the nuclear reactor to make steam for the turbines. The containment vessel is a one-metre-thick concrete and steel structure that shields the reactor.”

“Nuclear energy contributed 2,518 TWh of the world’s electricity in 2011, about 14% of the global supply. As of February 2012, there are 435 nuclear power plants operating in 31 countries worldwide, corresponding to a total installed capacity of 368,267 MW (electrical). There are 63 power plants under construction in 13 countries, with a capacity of 61,032 MW (electrical).”

“Since the first nuclear fusion, more than 60 years ago, many have argued that we need at least 30 years to develop a working fusion reactor, and this figure has stayed the same throughout those years.”

“[I]onizing radiation is […] used to improve many properties of food and other agricultural products. For example, gamma rays and electron beams are used to sterilize seeds, flour, and spices. They can also inhibit sprouting and destroy pathogenic bacteria in meat and fish, increasing the shelf life of food. […] More than 60 countries allow the irradiation of more than 50 kinds of foodstuffs, with 500,000 tons of food irradiated every year. About 200 cobalt-60 sources and more than 10 electron accelerators are dedicated to food irradiation worldwide. […] With the help of radiation, breeders can increase genetic diversity to make the selection process faster. The spontaneous mutation rate (number of mutations per gene, for each generation) is in the range 10⁻⁸–10⁻⁵. Radiation can increase this mutation rate to 10⁻⁵–10⁻². […] Long-lived cosmogenic radionuclides provide unique methods to evaluate the ‘age’ of groundwaters, defined as the mean subsurface residence time after the isolation of the water from the atmosphere. […] Scientists can date groundwater more than a million years old, through chlorine-36, produced in the atmosphere by cosmic-ray reactions with argon.”
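
The groundwater-dating idea mentioned at the end is just the radioactive decay law run backwards. A minimal sketch, assuming the commonly cited chlorine-36 half-life of about 301,000 years and an invented sample ratio:

```python
import math

HALF_LIFE_CL36 = 3.01e5  # years (assumed standard value)

def decay_age(ratio_now_to_initial, half_life):
    """Mean residence time implied by pure radioactive decay:
    N/N0 = exp(-lambda * t)  =>  t = -ln(N/N0) / lambda."""
    lam = math.log(2) / half_life
    return -math.log(ratio_now_to_initial) / lam

# Hypothetical sample in which the Cl-36/Cl ratio has fallen to 10% of the value
# the recharge water had when it was isolated from the atmosphere.
print(f"Apparent age: {decay_age(0.10, HALF_LIFE_CL36):,.0f} years")
# ~1,000,000 years -- the 'more than a million years old' regime mentioned above.
```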

“Radionuclide imaging was developed in the 1950s using special systems to detect the emitted gamma rays. The gamma-ray detectors, called gamma cameras, use flat crystal planes, coupled to photomultiplier tubes, which send the digitized signals to a computer for image reconstruction. Images show the distribution of the radioactive tracer in the organs and tissues of interest. This method is based on the introduction of low-level radioactive chemicals into the body. […] More than 100 diagnostic tests based on radiopharmaceuticals are used to examine bones and organs such as lungs, intestines, thyroids, kidneys, the liver, and gallbladder. They exploit the fact that our organs preferentially absorb different chemical compounds. […] Many radiopharmaceuticals are based on technetium-99m (an excited state of technetium-99 – the ‘m’ stands for ‘metastable’ […]). This radionuclide is used for the imaging and functional examination of the heart, brain, thyroid, liver, and other organs. Technetium-99m is extracted from molybdenum-99, which has a much longer half-life and is therefore more transportable. It is used in 80% of the procedures, amounting to about 40,000 per day, carried out in nuclear medicine. Other radiopharmaceuticals include short-lived gamma-emitters such as cobalt-57, cobalt-58, gallium-67, indium-111, iodine-123, and thallium-201. […] Methods routinely used in medicine, such as X-ray radiography and CAT, are increasingly used in industrial applications, particularly in non-destructive testing of containers, pipes, and walls, to locate defects in welds and other critical parts of the structure.”
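
The reason molybdenum-99’s longer half-life matters is that the technetium-99m keeps growing back in from the parent. Below is a hedged sketch of the standard two-member decay (Bateman) expression for that build-up; the half-lives are the commonly cited ~66 h and ~6 h values, and the roughly 88 per cent branching into the metastable state is ignored.

```python
import math

T_HALF_MO99 = 66.0   # hours (assumed)
T_HALF_TC99M = 6.0   # hours (assumed)
l1 = math.log(2) / T_HALF_MO99   # decay constant of the parent
l2 = math.log(2) / T_HALF_TC99M  # decay constant of the daughter

def tc99m_activity(t, a_mo_initial=1.0):
    """Tc-99m activity (relative units) grown in from an initially pure Mo-99
    source -- the two-member Bateman solution, ignoring the fact that only
    about 88% of Mo-99 decays feed the metastable state."""
    return a_mo_initial * l2 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))

for t in [0, 6, 12, 24, 48]:
    print(f"after {t:2d} h: Tc-99m activity ~ {tc99m_activity(t):.2f} x initial Mo-99 activity")
# The daughter activity builds up over roughly a day and then tracks the slowly
# decaying parent, which is why generators can be shipped and 'milked' daily.
```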

“Today, cancer treatment with radiation is generally based on the use of external radiation beams that can target the tumour in the body. Cancer cells are particularly sensitive to damage by ionizing radiation and their growth can be controlled or, in some cases, stopped. High-energy X-rays produced by a linear accelerator […] are used in most cancer therapy centres, replacing the gamma rays produced from cobalt-60. The LINAC produces photons of variable energy bombarding a target with a beam of electrons accelerated by microwaves. The beam of photons can be modified to conform to the shape of the tumour, which is irradiated from different angles. The main problem with X-rays and gamma rays is that the dose they deposit in the human tissue decreases exponentially with depth. A considerable fraction of the dose is delivered to the surrounding tissues before the radiation hits the tumour, increasing the risk of secondary tumours. Hence, deep-seated tumours must be bombarded from many directions to receive the right dose, while minimizing the unwanted dose to the healthy tissues. […] The problem of delivering the needed dose to a deep tumour with high precision can be solved using collimated beams of high-energy ions, such as protons and carbon. […] Contrary to X-rays and gamma rays, all ions of a given energy have a certain range, delivering most of the dose after they have slowed down, just before stopping. The ion energy can be tuned to deliver most of the dose to the tumour, minimizing the impact on healthy tissues. The ion beam, which does not broaden during the penetration, can follow the shape of the tumour with millimetre precision. Ions with higher atomic number, such as carbon, have a stronger biological effect on the tumour cells, so the dose can be reduced. Ion therapy facilities are [however] still very expensive – in the range of hundreds of millions of pounds – and difficult to operate.”
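
To see why the exponential fall-off of photon dose with depth is such a problem, here is a toy calculation of the fraction of the entrance dose left at various depths for a single beam. The attenuation coefficient is a rough assumed figure for megavoltage photons in tissue, and dose build-up near the surface is ignored; this is an illustration, not a treatment-planning formula.

```python
import math

MU = 0.05  # cm^-1, rough effective attenuation coefficient for megavoltage photons (assumed)

def photon_dose_fraction(depth_cm):
    """Fraction of the entrance dose remaining at a given depth for a single photon
    beam, in the crude 'pure exponential' picture (no build-up, no scatter)."""
    return math.exp(-MU * depth_cm)

for d in [0, 5, 10, 15, 20]:
    print(f"depth {d:2d} cm: {photon_dose_fraction(d):.0%} of entrance dose")
# A tumour at 15-20 cm depth sees only ~40-50% of the entrance dose, which is why
# photon plans combine many beam directions, whereas an ion beam can be tuned to
# dump most of its dose at the tumour depth (the Bragg peak).
```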

“About 50 million years ago, a global cooling trend took our planet from the tropical conditions at the beginning of the Tertiary to the ice ages of the Quaternary, when the Arctic ice cap developed. The temperature decrease was accompanied by a decrease in atmospheric CO2 from 2,000 to 300 parts per million. The cooling was probably caused by a reduced greenhouse effect and also by changes in ocean circulation due to plate tectonics. The drop in temperature was not constant as there were some brief periods of sudden warming. Ocean deep-water temperatures dropped from 12°C, 50 million years ago, to 6°C, 30 million years ago, according to archives in deep-sea sediments (today, deep-sea waters are about 2°C). […] During the last 2 million years, the mean duration of the glacial periods was about 26,000 years, while that of the warm periods – interglacials – was about 27,000 years. Between 2.6 and 1.1 million years ago, a full cycle of glacial advance and retreat lasted about 41,000 years. During the past 1.2 million years, this cycle has lasted 100,000 years. Stable and radioactive isotopes play a crucial role in the reconstruction of the climatic history of our planet”.

Links:

CUORE (Cryogenic Underground Observatory for Rare Events).
Borexino.
Lawrence Livermore National Laboratory.
Marie Curie. Pierre Curie. Henri Becquerel. Wilhelm Röntgen. Joseph Thomson. Ernest Rutherford. Hans Geiger. Ernest Marsden. Niels Bohr.
Ruhmkorff coil.
Electroscope.
Pitchblende (uraninite).
Mache.
Polonium. Becquerel.
Radium.
Alpha decay. Beta decay. Gamma radiation.
Plum pudding model.
Spinthariscope.
Robert Boyle. John Dalton. Dmitri Mendeleev. Frederick Soddy. James Chadwick. Enrico Fermi. Lise Meitner. Otto Frisch.
Periodic Table.
Exponential decay. Decay chain.
Positron.
Particle accelerator. Cockcroft-Walton generator. Van de Graaff generator.
Barn (unit).
Nuclear fission.
Manhattan Project.
Chernobyl disaster. Fukushima Daiichi nuclear disaster.
Electron volt.
Thermoluminescent dosimeter.
Silicon diode detector.
Enhanced geothermal system.
Chicago Pile Number 1. Experimental Breeder Reactor 1. Obninsk Nuclear Power Plant.
Natural nuclear fission reactor.
Gas-cooled reactor.
Generation I reactors. Generation II reactor. Generation III reactor. Generation IV reactor.
Nuclear fuel cycle.
Accelerator-driven subcritical reactor.
Thorium-based nuclear power.
Small, sealed, transportable, autonomous reactor.
Fusion power. P-p (proton-proton) chain reaction. CNO cycle. Tokamak. ITER (International Thermonuclear Experimental Reactor).
Sterile insect technique.
Phase-contrast X-ray imaging. Computed tomography (CT). SPECT (Single-photon emission computed tomography). PET (positron emission tomography).
Boron neutron capture therapy.
Radiocarbon dating. Bomb pulse.
Radioactive tracer.
Radithor. The Radiendocrinator.
Radioisotope heater unit. Radioisotope thermoelectric generator. Seebeck effect.
Accelerator mass spectrometry.
Atomic bombings of Hiroshima and Nagasaki. Treaty on the Non-Proliferation of Nuclear Weapons. IAEA.
Nuclear terrorism.
Swiss light source. Synchrotron.
Chronology of the universe. Stellar evolution. S-process. R-process. Red giant. Supernova. White dwarf.
Victor Hess. Domenico Pacini. Cosmic ray.
Allende meteorite.
Age of the Earth. History of Earth. Geomagnetic reversal. Uranium-lead dating. Clair Cameron Patterson.
Glacials and interglacials.
Taung child. Lucy. Ardi. Ardipithecus kadabba. Acheulean tools. Java Man. Ötzi.
Argon-argon dating. Fission track dating.

November 28, 2017 Posted by | Archaeology, Astronomy, Biology, Books, Cancer/oncology, Chemistry, Engineering, Geology, History, Medicine, Physics | Leave a comment

Isotopes

A decent book. Below some quotes and links.

“[A]ll mass spectrometers have three essential components — an ion source, a mass filter, and some sort of detector […] Mass spectrometers need to achieve high vacuum to allow the uninterrupted transmission of ions through the instrument. However, even high-vacuum systems contain residual gas molecules which can impede the passage of ions. Even at very high vacuum there will still be residual gas molecules in the vacuum system that present potential obstacles to the ion beam. Ions that collide with residual gas molecules lose energy and will appear at the detector at slightly lower mass than expected. This tailing to lower mass is minimized by improving the vacuum as much as possible, but it cannot be avoided entirely. The ability to resolve a small isotope peak adjacent to a large peak is called ‘abundance sensitivity’. A single magnetic sector TIMS has abundance sensitivity of about 1 ppm per mass unit at uranium masses. So, at mass 234, 1 ion in 1,000,000 will actually be 235U not 234U, and this will limit our ability to quantify the rare 234U isotope. […] AMS [accelerator mass spectrometry] instruments use very high voltages to achieve high abundance sensitivity. […] As I write this chapter, the human population of the world has recently exceeded seven billion. […] one carbon atom in 10¹² is mass 14. So, detecting 14C is far more difficult than identifying a single person on Earth, and somewhat comparable to identifying an individual leaf in the Amazon rain forest. Such is the power of isotope ratio mass spectrometry.”
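
A quick back-of-the-envelope check (my numbers, not the book’s) of what ‘one carbon atom in 10¹² is mass 14’ implies: even a whole gram of modern carbon produces only a handful of decays per minute, which is why counting the atoms directly by AMS beats waiting for them to decay. The 14C/C ratio and the 5,730-year half-life used below are commonly cited values I’ve assumed.

```python
import math

AVOGADRO = 6.022e23
MOLAR_MASS_C = 12.0        # g/mol (approx.)
C14_FRACTION = 1.2e-12     # modern 14C/C ratio (assumed rough value)
HALF_LIFE_C14_S = 5730 * 365.25 * 24 * 3600  # seconds

atoms_c = AVOGADRO / MOLAR_MASS_C          # carbon atoms in 1 g
atoms_c14 = atoms_c * C14_FRACTION         # of which are 14C
activity = math.log(2) / HALF_LIFE_C14_S * atoms_c14  # decays per second

print(f"14C atoms per gram of modern carbon: {atoms_c14:.1e}")
print(f"Activity: {activity:.2f} Bq (~{activity*60:.0f} decays per minute)")
# ~6e10 atoms and only ~0.2 Bq -- counting the atoms is far more sensitive
# than counting their decays.
```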

“14C is produced in the Earth’s atmosphere by the interaction between nitrogen and cosmic ray neutrons that releases a free proton turning ¹⁴₇N into ¹⁴₆C in a process that we call an ‘n-p’ reaction […] Because the process is driven by cosmic ray bombardment, we call 14C a ‘cosmogenic’ isotope. The half-life of 14C is about 5,000 years, so we know that all the 14C on Earth is either cosmogenic or has been created by mankind through nuclear reactors and bombs — no ‘primordial’ 14C remains because any that originally existed has long since decayed. 14C is not the only cosmogenic isotope; 16O in the atmosphere interacts with cosmic radiation to produce the isotope 10Be (beryllium). […] The process by which a high energy cosmic ray particle removes several nucleons is called ‘spallation’. 10Be production from 16O is not restricted to the atmosphere but also occurs when cosmic rays impact rock surfaces. […] when cosmic rays hit a rock surface they don’t bounce off but penetrate the top 2 or 3 metres (m) — the actual ‘attenuation’ depth will vary for particles of different energy. Most of the Earth’s crust is made of silicate minerals based on bonds between oxygen and silicon. So, the same spallation process that produces 10Be in the atmosphere also occurs in rock surfaces. […] If we know the flux of cosmic rays impacting a surface, the rate of production of the cosmogenic isotopes with depth below the rock surface, and the rate of radioactive decay, it should be possible to convert the number of cosmogenic atoms into an exposure age. […] Rocks on Earth which are shielded from much of the cosmic radiation have much lower levels of isotopes like 10Be than have meteorites which, before they arrive on Earth, are exposed to the full force of cosmic radiation. […] polar scientists have used cores drilled through ice sheets in Antarctica and Greenland to compare 10Be at different depths and thereby reconstruct 10Be production through time. The 14C and 10Be records are closely correlated indicating the common response to changes in the cosmic ray flux.”

“[O]nce we have credible cosmogenic isotope production rates, […] there are two classes of applications, which we can call ‘exposure’ and ‘burial’ methodologies. Exposure studies simply measure the accumulation of the cosmogenic nuclide. Such studies are simplest when the cosmogenic nuclide is a stable isotope like 3He and 21Ne. These will just accumulate continuously as the sample is exposed to cosmic radiation. Slightly more complicated are cosmogenic isotopes that are radioactive […]. These isotopes accumulate through exposure but will also be destroyed by radioactive decay. Eventually, the isotopes achieve the condition known as ‘secular equilibrium’ where production and decay are balanced and no chronological information can be extracted. Secular equilibrium is achieved after three to four half-lives […] Imagine a boulder that has been transported from its place of origin to another place within a glacier — what we call a glacial erratic. While the boulder was deeply covered in ice, it would not have been exposed to cosmic radiation. Its cosmogenic isotopes will only have accumulated since the ice melted. So a cosmogenic isotope exposure age tells us the date at which the glacier retreated, and, by examining multiple erratics from different locations along the course of the glacier, allows us to construct a retreat history for the de-glaciation. […] Burial methodologies using cosmogenic isotopes work in situations where a rock was previously exposed to cosmic rays but is now located in a situation where it is shielded.”
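
The exposure methodology described above boils down to a production-versus-decay balance, N = (P/λ)(1 − e^(−λt)). The sketch below solves that for the exposure age; the 10Be half-life is a commonly cited value, and the production rate and measured concentration are invented for illustration.

```python
import math

def exposure_age(n_measured, production_rate, half_life):
    """Exposure age from N = (P/lambda) * (1 - exp(-lambda*t)), solved for t.
    n_measured: atoms per gram; production_rate: atoms per gram per year."""
    lam = math.log(2) / half_life
    n_saturation = production_rate / lam     # secular-equilibrium concentration
    if n_measured >= n_saturation:
        raise ValueError("sample is at saturation; no age information left")
    return -math.log(1 - n_measured * lam / production_rate) / lam

# Hypothetical 10Be example: production 5 atoms/g/yr, half-life 1.39 Myr (assumed).
print(f"{exposure_age(60_000, 5.0, 1.39e6):,.0f} years of exposure")
# ~12,000 years -- the sort of number a glacial erratic exposed since the last
# deglaciation might give. For a stable nuclide (3He, 21Ne) the relation is just N = P*t.
```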

“Cosmogenic isotopes are also being used extensively to recreate the seismic histories of tectonically active areas. Earthquakes occur when geological faults give way and rock masses move. A major earthquake is likely to expose new rock to the Earth’s surface. If the field geologist can identify rocks in a fault zone that (s)he is confident were brought to the surface in an earthquake, then a cosmogenic isotope exposure age would date the fault — providing, of course, that subsequent erosion can be ruled out or quantified. Precarious rocks are rock outcrops that could reasonably be expected to topple if subjected to a significant earthquake. Dating the exposed surface of precarious rocks with cosmogenic isotopes can reveal the amount of time that has elapsed since the last earthquake of a magnitude that would have toppled the rock. Constructing records of seismic history is not merely of academic interest; some of the world’s seismically active areas are also highly populated and developed.”

“One aspect of the natural decay series that acts in favour of the preservation of accurate age information is the fact that most of the intermediate isotopes are short-lived. For example, in both the U series the radon (Rn) isotopes, which might be expected to diffuse readily out of a mineral, have half-lives of only seconds or days, too short to allow significant losses. Some decay series isotopes though do have significantly long half-lives which offer the potential to be geochronometers in their own right. […] These techniques depend on the tendency of natural decay series to evolve towards a state of ‘secular equilibrium’ in which the activity of all species in the decay series is equal. […] at secular equilibrium, isotopes with long half-lives (i.e. small decay constants) will have large numbers of atoms whereas short-lived isotopes (high decay constants) will only constitute a relatively small number of atoms. Since decay constants vary by several orders of magnitude, so will the numbers of atoms of each isotope in the equilibrium decay series. […] Geochronological applications of natural decay series depend upon some process disrupting the natural decay series to introduce either a deficiency or an excess of an isotope in the series. The decay series will then gradually return to secular equilibrium and the geochronometer relies on measuring the extent to which equilibrium has been approached.”
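
A small numerical illustration (mine, with commonly cited half-lives) of the point that equal activities at secular equilibrium imply wildly unequal numbers of atoms: since A = λN is the same for every member of the series, N is simply proportional to the half-life.

```python
# Approximate half-lives in years for a few members of the uranium-238
# decay series (assumed standard values).
half_lives = {
    "U-238": 4.47e9,
    "U-234": 2.45e5,
    "Th-230": 7.5e4,
    "Ra-226": 1.6e3,
    "Rn-222": 3.8 / 365.25,   # 3.8 days
}

# At secular equilibrium every member has the same activity A = lambda * N,
# so the number of atoms of each member is proportional to its half-life.
ref = half_lives["U-238"]
for iso, t_half in half_lives.items():
    print(f"{iso:>7}: N relative to U-238 ~ {t_half / ref:.1e}")
# Radon-222 atoms are ~12 orders of magnitude rarer than U-238 atoms even though
# both are decaying at the same rate per unit time.
```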

“The ‘ring of fire’ volcanoes around the margin of the Pacific Ocean are a manifestation of subduction in which the oldest parts of the Pacific Ocean crust are being returned to the mantle below. The oldest parts of the Pacific Ocean crust are about 150 million years (Ma) old, with anything older having already disappeared into the mantle via subduction zones. The Atlantic Ocean doesn’t have a ring of fire because it is a relatively young ocean which started to form about 60 Ma ago, and its oldest rocks are not yet ready to form subduction zones. Thus, while continental crust persists for billions of years, oceanic crust is a relatively transient (in terms of geological time) phenomenon at the Earth’s surface.”

“Mantle rocks typically contain minerals such as olivine, pyroxene, spinel, and garnet. Unlike say ice, which melts to form water, mixtures of minerals do not melt in the proportions in which they occur in the rock. Rather, they undergo partial melting in which some minerals […] melt preferentially leaving a solid residue enriched in refractory minerals […]. We know this from experimentally melting mantle-like rocks in the laboratory, but also because the basalts produced by melting of the mantle are closer in composition to Ca-rich (clino-) pyroxene than to the olivine-rich rocks that dominate the solid pieces (or xenoliths) of mantle that are sometimes transferred to the surface by certain types of volcanic eruptions. […] Thirty years ago geologists fiercely debated whether the mantle was homogeneous or heterogeneous; mantle isotope geochemistry hasn’t yet elucidated all the details but it has put to rest the initial conundrum; Earth’s mantle is compositionally heterogeneous.”

Links:

Frederick Soddy.
Rutherford–Bohr model.
Isotopes of hydrogen.
Radioactive decay. Types of decay. Alpha decay. Beta decay. Electron capture decay. Branching fraction. Gamma radiation. Spontaneous fission.
Promethium.
Lanthanides.
Radiocarbon dating.
Hessel de Vries.
Dendrochronology.
Suess effect.
Bomb pulse.
Delta notation (non-wiki link).
Isotopic fractionation.
C3 carbon fixation. C4 carbon fixation.
Nitrogen-15 tracing.
Isotopes of strontium. Strontium isotope analysis.
Ötzi.
Mass spectrometry.
Geiger counter.
Townsend avalanche.
Gas proportional counter.
Scintillation detector.
Liquid scintillation spectrometry. Photomultiplier tube.
Dynode.
Thallium-doped sodium iodide detectors. Semiconductor-based detectors.
Isotope separation (-enrichment).
Doubly labeled water.
Urea breath test.
Radiation oncology.
Brachytherapy.
Targeted radionuclide therapy.
Iodine-131.
MIBG scan.
Single-photon emission computed tomography.
Positron emission tomography.
Inductively coupled plasma (ICP) mass spectrometry.
Secondary ion mass spectrometry.
Faraday cup (-detector).
δ18O.
Stadials and interstadials. Oxygen isotope ratio cycle.
Insolation.
Gain and phase model.
Milankovitch cycles.
Perihelion and aphelion. Precession.
Equilibrium Clumped-Isotope Effects in Doubly Substituted Isotopologues of Ethane (non-wiki link).
Age of the Earth.
Uranium–lead dating.
Geochronology.
Cretaceous–Paleogene boundary.
Argon-argon dating.
Nuclear chain reaction. Critical mass.
Fukushima Daiichi nuclear disaster.
Natural nuclear fission reactor.
Continental crust. Oceanic crust. Basalt.
Core–mantle boundary.
Chondrite.
Ocean Island Basalt.
Isochron dating.

November 23, 2017 Posted by | Biology, Books, Botany, Chemistry, Geology, Medicine, Physics | Leave a comment

Materials (I)…

“Useful matter is a good definition of materials. […] Materials are materials because inventive people find ingenious things to do with them. Or just because people use them. […] Materials science […] explains how materials are made and how they behave as we use them.”

I recently read this book, which I liked. Below I have added some quotes from the first half of the book, with some hopefully helpful links added, as well as a collection of links at the bottom of the post to other topics covered.

“We understand all materials by knowing about composition and microstructure. Despite their extraordinary minuteness, the atoms are the fundamental units, and they are real, with precise attributes, not least size. Solid materials tend towards crystallinity (for the good thermodynamic reason that it is the arrangement of lowest energy), and they usually achieve it, though often in granular, polycrystalline forms. Processing conditions greatly influence microstructures which may be mobile and dynamic, particularly at high temperatures. […] The idea that we can understand materials by looking at their internal structure in finer and finer detail goes back to the beginnings of microscopy […]. This microstructural view is more than just an important idea, it is the explanatory framework at the core of materials science. Many other concepts and theories exist in materials science, but this is the framework. It says that materials are intricately constructed on many length-scales, and if we don’t understand the internal structure we shall struggle to explain or to predict material behaviour.”

“Oxygen is the most abundant element in the earth’s crust and silicon the second. In nature, silicon occurs always in chemical combination with oxygen, the two forming the strong Si–O chemical bond. The simplest combination, involving no other elements, is silica; and most grains of sand are crystals of silica in the form known as quartz. […] The quartz crystal comes in right- and left-handed forms. Nothing like this happens in metals but arises frequently when materials are built from molecules and chemical bonds. The crystal structure of quartz has to incorporate two different atoms, silicon and oxygen, each in a repeating pattern and in the precise ratio 1:2. There is also the severe constraint imposed by the Si–O chemical bonds which require that each Si atom has four O neighbours arranged around it at the corners of a tetrahedron, every O bonded to two Si atoms. The crystal structure which quartz adopts (which of all possibilities is the one of lowest energy) is made up of triangular and hexagonal units. But within this there are buried helixes of Si and O atoms, and a helix must be either right- or left-handed. Once a quartz crystal starts to grow as right- or left-handed, its structure templates all the other helices with the same handedness. Equal numbers of right- and left-handed crystals occur in nature, but each is unambiguously one or the other.”

“In the living tree, and in the harvested wood that we use as a material, there is a hierarchy of structural levels, climbing all the way from the molecular to the scale of branch and trunk. The stiff cellulose chains are bundled into fibrils, which are themselves bonded by other organic molecules to build the walls of cells; which in turn form channels for the transport of water and nutrients, the whole having the necessary mechanical properties to support its weight and to resist the loads of wind and rain. In the living tree, the structure allows also for growth and repair. There are many things to be learned from biological materials, but the most universal is that biology builds its materials at many structural levels, and rarely makes a distinction between the material and the organism. Being able to build materials with hierarchical architectures is still more or less out of reach in materials engineering. Understanding how materials spontaneously self-assemble is the biggest challenge in contemporary nanotechnology.”

“The example of diamond shows two things about crystalline materials. First, anything we know about an atom and its immediate environment (neighbours, distances, angles) holds for every similar atom throughout a piece of material, however large; and second, everything we know about the unit cell (its size, its shape, and its symmetry) also applies throughout an entire crystal […] and by extension throughout a material made of a myriad of randomly oriented crystallites. These two general propositions provide the basis and justification for lattice theories of material behaviour which were developed from the 1920s onwards. We know that every solid material must be held together by internal cohesive forces. If it were not, it would fly apart and turn into a gas. A simple lattice theory says that if we can work out what forces act on the atoms in one unit cell, then this should be enough to understand the cohesion of the entire crystal. […] In lattice models which describe the cohesion and dynamics of the atoms, the role of the electrons is mainly in determining the interatomic bonding and the stiffness of the bond-spring. But in many materials, and especially in metals and semiconductors, some of the electrons are free to move about within the lattice. A lattice model of electron behaviour combines a geometrical description of the lattice with a more or less mechanical view of the atomic cores, and a fully quantum theoretical description of the electrons themselves. We need only to take account of the outer electrons of the atoms, as the inner electrons are bound tightly into the cores and are not itinerant. The outer electrons are the ones that form chemical bonds, so they are also called the valence electrons.”
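
The ‘bond-spring’ idea can be illustrated with a toy pair potential (this is my sketch, not the book’s model): the equilibrium spacing sits at the minimum of the potential, the cohesive energy per bond is the well depth, and the spring stiffness is the curvature there. The Lennard-Jones parameters below are arbitrary illustrative values.

```python
import math

EPS = 0.5e-19    # well depth in joules (arbitrary illustrative value)
SIGMA = 0.25e-9  # length parameter in metres (arbitrary illustrative value)

def lj(r):
    """Lennard-Jones pair potential U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    return 4 * EPS * ((SIGMA / r) ** 12 - (SIGMA / r) ** 6)

r_eq = 2 ** (1 / 6) * SIGMA   # equilibrium spacing (minimum of U)
h = 1e-13                     # small step for a numerical second derivative
stiffness = (lj(r_eq + h) - 2 * lj(r_eq) + lj(r_eq - h)) / h ** 2

print(f"Equilibrium spacing: {r_eq*1e9:.3f} nm")
print(f"Cohesive energy per bond: {-lj(r_eq)/1.602e-19:.2f} eV")
print(f"Bond 'spring constant': {stiffness:.1f} N/m")
# Pushing atoms together (r < r_eq) costs energy much faster than pulling them
# apart, which is the asymmetry described in the following paragraph.
```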

“It is harder to push atoms closer together than to pull them further apart. While atoms are soft on the outside, they have harder cores, and pushed together the cores start to collide. […] when we bring a trillion atoms together to form a crystal, it is the valence electrons that are disturbed as the atoms approach each other. As the atomic cores come close to the equilibrium spacing of the crystal, the electron states of the isolated atoms morph into a set of collective states […]. These collective electron states have a continuous distribution of energies up to a top level, and form a ‘band’. But the separation of the valence electrons into distinct electron-pair states is preserved in the band structure, so that we find that the collective states available to the entire population of valence electrons in the entire crystal form a set of bands […]. Thus in silicon, there are two main bands.”

“The perfect crystal has atoms occupying all the positions prescribed by the geometry of its crystal lattice. But real crystalline materials fall short of perfection […] For instance, an individual site may be unoccupied (a vacancy). Or an extra atom may be squeezed into the crystal at a position which is not a lattice position (an interstitial). An atom may fall off its lattice site, creating a vacancy and an interstitial at the same time. Sometimes a site is occupied by the wrong kind of atom. Point defects of this kind distort the crystal in their immediate neighbourhood. Vacancies free up diffusional movement, allowing atoms to hop from site to site. Larger scale defects invariably exist too. A complete layer of atoms or unit cells may terminate abruptly within the crystal to produce a line defect (a dislocation). […] There are materials which try their best to crystallize, but find it hard to do so. Many polymer materials are like this. […] The best they can do is to form small crystalline regions in which the molecules lie side by side over limited distances. […] Often the crystalline domains comprise about half the material: it is a semicrystal. […] Crystals can be formed from the melt, from solution, and from the vapour. All three routes are used in industry and in the laboratory. As a rule, crystals that grow slowly are good crystals. Geological time can give wonderful results. Often, crystals are grown on a seed, a small crystal of the same material deliberately introduced into the crystallization medium. If this is a melt, the seed can gradually be pulled out, drawing behind it a long column of new crystal material. This is the Czochralski process, an important method for making semiconductors. […] However it is done, crystals invariably grow by adding material to the surface of a small particle to make it bigger.”

“As we go down the Periodic Table of elements, the atoms get heavier much more quickly than they get bigger. The mass of a single atom of uranium at the bottom of the Table is about 25 times greater than that of an atom of the lightest engineering metal, beryllium, at the top, but its radius is only 40 per cent greater. […] The density of solid materials of every kind is fixed mainly by where the constituent atoms are in the Periodic Table. The packing arrangement in the solid has only a small influence, although the crystalline form of a substance is usually a little denser than the amorphous form […] The range of solid densities available is therefore quite limited. At the upper end we hit an absolute barrier, with nothing denser than osmium (22,590 kg/m³). At the lower end we have some slack, as we can make lighter materials by the trick of incorporating holes to make foams and sponges and porous materials of all kinds. […] in the entire catalogue of available materials there is a factor of about a thousand for ingenious people to play with, from say 20 to 20,000 kg/m³.”
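
A rough check of the uranium-versus-beryllium claim, using standard handbook values that are my additions rather than the book’s:

```python
# Approximate molar masses (g/mol) and metallic radii (pm) -- assumed handbook values.
beryllium = {"molar_mass": 9.01, "radius_pm": 112}
uranium   = {"molar_mass": 238.0, "radius_pm": 156}

mass_ratio = uranium["molar_mass"] / beryllium["molar_mass"]
radius_ratio = uranium["radius_pm"] / beryllium["radius_pm"]
volume_ratio = radius_ratio ** 3

print(f"Mass ratio U/Be:   ~{mass_ratio:.0f}")
print(f"Radius ratio U/Be: ~{radius_ratio:.2f}")
print(f"Implied density ratio (same packing): ~{mass_ratio / volume_ratio:.0f}")
# Mass goes up ~26-fold, radius only ~1.4-fold, so density rises roughly tenfold --
# in line with solid beryllium at ~1,850 kg/m³ versus uranium at ~19,000 kg/m³.
```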

“The expansion of materials as we increase their temperature is a universal tendency. It occurs because as we raise the temperature the thermal energy of the atoms and molecules increases correspondingly, and this fights against the cohesive forces of attraction. The mean distance of separation between atoms in the solid (or the liquid) becomes larger. […] As a general rule, the materials with small thermal expansivities are metals and ceramics with high melting temperatures. […] Although thermal expansion is a smooth process which continues from the lowest temperatures to the melting point, it is sometimes interrupted by sudden jumps […]. Changes in crystal structure at precise temperatures are commonplace in materials of all kinds. […] There is a cluster of properties which describe the thermal behaviour of materials. Besides the expansivity, there is the specific heat, and also the thermal conductivity. These properties show us, for example, that it takes about four times as much energy to increase the temperature of 1 kilogram of aluminium by 1°C as 1 kilogram of silver; and that good conductors of heat are usually also good conductors of electricity. At everyday temperatures there is not a huge difference in specific heat between materials. […] In all crystalline materials, thermal conduction arises from the diffusion of phonons from hot to cold regions. As they travel, the phonons are subject to scattering both by collisions with other phonons, and with defects in the material. This picture explains why the thermal conductivity falls as temperature rises”.
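
The aluminium-versus-silver factor of four follows almost entirely from the molar masses via the Dulong–Petit rule (which turns up in the links below): per mole, simple solids store heat about equally well, so the specific heat per kilogram scales inversely with atomic mass. A quick check with assumed standard values:

```python
R = 8.314              # J/(mol*K)
DULONG_PETIT = 3 * R   # ~24.9 J/(mol*K), classical molar heat capacity of a solid

molar_mass = {"aluminium": 0.0270, "silver": 0.1079}  # kg/mol (assumed values)

specific_heat = {m: DULONG_PETIT / mm for m, mm in molar_mass.items()}
for metal, c in specific_heat.items():
    print(f"{metal}: ~{c:.0f} J per kg per K")
print(f"ratio Al/Ag: ~{specific_heat['aluminium'] / specific_heat['silver']:.1f}")
# ~924 vs ~231 J/(kg*K), a ratio of ~4 -- close to the measured values (~897 and ~235),
# because per *mole* most simple solids store heat about equally well.
```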

 

Materials science.
Metals.
Inorganic compound.
Organic compound.
Solid solution.
Copper. Bronze. Brass. Alloy.
Electrical conductivity.
Steel. Bessemer converter. Gamma iron. Alpha iron. Cementite. Martensite.
Phase diagram.
Equation of state.
Calcite. Limestone.
Birefringence.
Portland cement.
Cellulose.
Wood.
Ceramic.
Mineralogy.
Crystallography.
Laue diffraction pattern.
Silver bromide. Latent image. Photographic film. Henry Fox Talbot.
Graphene. Graphite.
Thermal expansion.
Invar.
Dulong–Petit law.
Wiedemann–Franz law.

 

November 14, 2017 Posted by | Biology, Books, Chemistry, Engineering, Physics | Leave a comment

Organic Chemistry (II)

I have included some observations from the second half of the book below, as well as some links to topics covered.

“[E]nzymes are used routinely to catalyse reactions in the research laboratory, and for a variety of industrial processes involving pharmaceuticals, agrochemicals, and biofuels. In the past, enzymes had to be extracted from natural sources — a process that was both expensive and slow. But nowadays, genetic engineering can incorporate the gene for a key enzyme into the DNA of fast growing microbial cells, allowing the enzyme to be obtained more quickly and in far greater yield. Genetic engineering has also made it possible to modify the amino acids making up an enzyme. Such modified enzymes can prove more effective as catalysts, accept a wider range of substrates, and survive harsher reaction conditions. […] New enzymes are constantly being discovered in the natural world as well as in the laboratory. Fungi and bacteria are particularly rich in enzymes that allow them to degrade organic compounds. It is estimated that a typical bacterial cell contains about 3,000 enzymes, whereas a fungal cell contains 6,000. Considering the variety of bacterial and fungal species in existence, this represents a huge reservoir of new enzymes, and it is estimated that only 3 per cent of them have been investigated so far.”

“One of the most important applications of organic chemistry involves the design and synthesis of pharmaceutical agents — a topic that is defined as medicinal chemistry. […] In the 19th century, chemists isolated chemical components from known herbs and extracts. Their aim was to identify a single chemical that was responsible for the extract’s pharmacological effects — the active principle. […] It was not long before chemists synthesized analogues of active principles. Analogues are structures which have been modified slightly from the original active principle. Such modifications can often improve activity or reduce side effects. This led to the concept of the lead compound — a compound with a useful pharmacological activity that could act as the starting point for further research. […] The first half of the 20th century culminated in the discovery of effective antimicrobial agents. […] The 1960s can be viewed as the birth of rational drug design. During that period there were important advances in the design of effective anti-ulcer agents, anti-asthmatics, and beta-blockers for the treatment of high blood pressure. Much of this was based on trying to understand how drugs work at the molecular level and proposing theories about why some compounds were active and some were not.”

“[R]ational drug design was boosted enormously towards the end of the century by advances in both biology and chemistry. The sequencing of the human genome led to the identification of previously unknown proteins that could serve as potential drug targets. […] Advances in automated, small-scale testing procedures (high-throughput screening) also allowed the rapid testing of potential drugs. In chemistry, advances were made in X-ray crystallography and NMR spectroscopy, allowing scientists to study the structure of drugs and their mechanisms of action. Powerful molecular modelling software packages were developed that allowed researchers to study how a drug binds to a protein binding site. […] the development of automated synthetic methods has vastly increased the number of compounds that can be synthesized in a given time period. Companies can now produce thousands of compounds that can be stored and tested for pharmacological activity. Such stores have been called chemical libraries and are routinely tested to identify compounds capable of binding with a specific protein target. These advances have boosted medicinal chemistry research over the last twenty years in virtually every area of medicine.”

“Drugs interact with molecular targets in the body such as proteins and nucleic acids. However, the vast majority of clinically useful drugs interact with proteins, especially receptors, enzymes, and transport proteins […] Enzymes are […] important drug targets. Drugs that bind to the active site and prevent the enzyme acting as a catalyst are known as enzyme inhibitors. […] Enzymes are located inside cells, and so enzyme inhibitors have to cross cell membranes in order to reach them—an important consideration in drug design. […] Transport proteins are targets for a number of therapeutically important drugs. For example, a group of antidepressants known as selective serotonin reuptake inhibitors prevent serotonin being transported into neurons by transport proteins.”

“The main pharmacokinetic factors are absorption, distribution, metabolism, and excretion. Absorption relates to how much of an orally administered drug survives the digestive enzymes and crosses the gut wall to reach the bloodstream. Once there, the drug is carried to the liver where a certain percentage of it is metabolized by metabolic enzymes. This is known as the first-pass effect. The ‘survivors’ are then distributed round the body by the blood supply, but this is an uneven process. The tissues and organs with the richest supply of blood vessels receive the greatest proportion of the drug. Some drugs may get ‘trapped’ or sidetracked. For example fatty drugs tend to get absorbed in fat tissue and fail to reach their target. The kidneys are chiefly responsible for the excretion of drugs and their metabolites.”
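
The simplest quantitative way these pharmacokinetic factors are usually strung together is a one-compartment model with first-order absorption and elimination. The sketch below is a generic illustration of that model, not anything from the book, and all parameter values are invented.

```python
import math

def concentration(t_h, dose_mg=100, bioavailability=0.5, v_litres=40, ka=1.0, ke=0.1):
    """Plasma concentration (mg/L) in a one-compartment model with first-order
    absorption (ka, per hour) and elimination (ke, per hour). 'bioavailability'
    lumps together incomplete absorption and first-pass metabolism."""
    coeff = bioavailability * dose_mg * ka / (v_litres * (ka - ke))
    return coeff * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

for t in [0.5, 1, 2, 4, 8, 24]:
    print(f"t = {t:4.1f} h: {concentration(t):.2f} mg/L")
# Concentration rises while absorption outpaces elimination, peaks after a couple of
# hours, then decays with the elimination half-life (ln 2 / ke = ~7 h here).
```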

“Having identified a lead compound, it is important to establish which features of the compound are important for activity. This, in turn, can give a better understanding of how the compound binds to its molecular target. Most drugs are significantly smaller than molecular targets such as proteins. This means that the drug binds to quite a small region of the protein — a region known as the binding site […]. Within this binding site, there are binding regions that can form different types of intermolecular interactions such as van der Waals interactions, hydrogen bonds, and ionic interactions. If a drug has functional groups and substituents capable of interacting with those binding regions, then binding can take place. A lead compound may have several groups that are capable of forming intermolecular interactions, but not all of them are necessarily needed. One way of identifying the important binding groups is to crystallize the target protein with the drug bound to the binding site. X-ray crystallography then produces a picture of the complex which allows identification of binding interactions. However, it is not always possible to crystallize target proteins and so a different approach is needed. This involves synthesizing analogues of the lead compound where groups are modified or removed. Comparing the activity of each analogue with the lead compound can then determine whether a particular group is important or not. This is known as an SAR study, where SAR stands for structure–activity relationships. Once the important binding groups have been identified, the pharmacophore for the lead compound can be defined. This specifies the important binding groups and their relative position in the molecule.”

“One way of identifying the active conformation of a flexible lead compound is to synthesize rigid analogues where the binding groups are locked into defined positions. This is known as rigidification or conformational restriction. The pharmacophore will then be represented by the most active analogue. […] A large number of rotatable bonds is likely to have an adverse effect on drug activity. This is because a flexible molecule can adopt a large number of conformations, and only one of these shapes corresponds to the active conformation. […] In contrast, a totally rigid molecule containing the required pharmacophore will bind the first time it enters the binding site, resulting in greater activity. […] It is also important to optimize a drug’s pharmacokinetic properties such that it can reach its target in the body. Strategies include altering the drug’s hydrophilic/hydrophobic properties to improve absorption, and the addition of substituents that block metabolism at specific parts of the molecule. […] The drug candidate must [in general] have useful activity and selectivity, with minimal side effects. It must have good pharmacokinetic properties, lack toxicity, and preferably have no interactions with other drugs that might be taken by a patient. Finally, it is important that it can be synthesized as cheaply as possible”.

“Most drugs that have reached clinical trials for the treatment of Alzheimer’s disease have failed. Between 2002 and 2012, 244 novel compounds were tested in 414 clinical trials, but only one drug gained approval. This represents a failure rate of 99.6 per cent as against a failure rate of 81 per cent for anti-cancer drugs.”

“It takes about ten years and £160 million to develop a new pesticide […] The volume of global sales increased 47 per cent in the ten-year period between 2002 and 2012, while, in 2012, total sales amounted to £31 billion. […] In many respects, agrochemical research is similar to pharmaceutical research. The aim is to find pesticides that are toxic to ‘pests’, but relatively harmless to humans and beneficial life forms. The strategies used to achieve this goal are also similar. Selectivity can be achieved by designing agents that interact with molecular targets that are present in pests, but not other species. Another approach is to take advantage of any metabolic reactions that are unique to pests. An inactive prodrug could then be designed that is metabolized to a toxic compound in the pest, but remains harmless in other species. Finally, it might be possible to take advantage of pharmacokinetic differences between pests and other species, such that a pesticide reaches its target more easily in the pest. […] Insecticides are being developed that act on a range of different targets as a means of tackling resistance. If resistance should arise to an insecticide acting on one particular target, then one can switch to using an insecticide that acts on a different target. […] Several insecticides act as insect growth regulators (IGRs) and target the moulting process rather than the nervous system. In general, IGRs take longer to kill insects but are thought to cause less detrimental effects to beneficial insects. […] Herbicides control weeds that would otherwise compete with crops for water and soil nutrients. More is spent on herbicides than any other class of pesticide […] The synthetic agent 2,4-D […] was synthesized by ICI in 1940 as part of research carried out on biological weapons […] It was first used commercially in 1946 and proved highly successful in eradicating weeds in cereal grass crops such as wheat, maize, and rice. […] The compound […] is still the most widely used herbicide in the world.”

“The type of conjugated system present in a molecule determines the specific wavelength of light absorbed. In general, the more extended the conjugation, the higher the wavelength absorbed. For example, β-carotene […] is the molecule responsible for the orange colour of carrots. It has a conjugated system involving eleven double bonds, and absorbs light in the blue region of the spectrum. It appears red because the reflected light lacks the blue component. Zeaxanthin is very similar in structure to β-carotene, and is responsible for the yellow colour of corn. […] Lycopene absorbs blue-green light and is responsible for the red colour of tomatoes, rose hips, and berries. Chlorophyll absorbs red light and is coloured green. […] Scented molecules interact with olfactory receptors in the nose. […] there are around 400 different olfactory protein receptors in humans […] The natural aroma of a rose is due mainly to 2-phenylethanol, geraniol, and citronellol.”
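
The link between conjugation length and absorbed wavelength can be illustrated with the crude free-electron (‘particle in a box’) picture: more conjugated bonds means a longer box, more closely spaced energy levels, and a smaller gap for the first absorption. The sketch below is mine; the box-length-per-bond figure is a rough assumption, and the model badly overestimates the absolute wavelengths for long chains, so only the trend should be taken seriously.

```python
# Free-electron ("particle in a box") estimate of the absorption wavelength of a
# conjugated chain: N pi electrons fill the lowest N/2 levels, and the first
# absorption promotes an electron from level N/2 to N/2 + 1.
H = 6.626e-34          # Planck constant, J s
M_E = 9.109e-31        # electron mass, kg
C = 2.998e8            # speed of light, m/s
BOND_LENGTH = 1.4e-10  # metres of box length per conjugated bond (rough assumption)

def absorption_wavelength_nm(n_double_bonds):
    n_pi = 2 * n_double_bonds               # two pi electrons per double bond
    box = 2 * n_double_bonds * BOND_LENGTH  # alternating single/double bonds
    delta_e = H ** 2 * (n_pi + 1) / (8 * M_E * box ** 2)
    return H * C / delta_e * 1e9

for n in [3, 5, 7, 9, 11]:
    print(f"{n:2d} conjugated double bonds: ~{absorption_wavelength_nm(n):.0f} nm")
# The wavelength climbs steadily with conjugation length, which is the trend described
# above. The crude model grossly overestimates the absolute values for long chains
# (beta-carotene actually absorbs around 450 nm, not in the infrared), but the
# qualitative point -- more conjugation, longer wavelength -- stands.
```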

“Over the last fifty years, synthetic materials have largely replaced natural materials such as wood, leather, wool, and cotton. Plastics and polymers are perhaps the most visible sign of how organic chemistry has changed society. […] It is estimated that production of global plastics was 288 million tons in 2012 […] Polymerization involves linking small molecules (monomers) into long molecular strands called polymers […]. By varying the nature of the monomer, a huge range of different polymers can be synthesized with widely differing properties. The idea of linking small molecular building blocks into polymers is not a new one. Nature has been at it for millions of years using amino acid building blocks to make proteins, and nucleotide building blocks to make nucleic acids […] The raw materials for plastics come mainly from oil, which is a finite resource. Therefore, it makes sense to recycle or depolymerize plastics to recover that resource. Virtually all plastics can be recycled, but it is not necessarily economically feasible to do so. Traditional recycling of polyesters, polycarbonates, and polystyrene tends to produce inferior plastics that are suitable only for low-quality goods.”

Adipic acid.
Protease. Lipase. Amylase. Cellulase.
Reflectin.
Agonist.
Antagonist.
Prodrug.
Conformational change.
Process chemistry (chemical development).
Clinical trial.
Phenylbutazone.
Pesticide.
Dichlorodiphenyltrichloroethane.
Aldrin.
N-Methyl carbamate.
Organophosphates.
Pyrethrum.
Neonicotinoid.
Colony collapse disorder.
Ecdysone receptor.
Methoprene.
Tebufenozide.
Fungicide.
Quinone outside inhibitors (QoI).
Allelopathy.
Glyphosate.
11-cis retinal.
Chromophore.
Synthetic dyes.
Methylene blue.
Cryptochrome.
Pheromone.
Artificial sweeteners.
Miraculin.
Addition polymer.
Condensation polymer.
Polyethylene.
Polypropylene.
Polyvinyl chloride.
Bisphenol A.
Vulcanization.
Kevlar.
Polycarbonate.
Polyhydroxyalkanoates.
Bioplastic.
Nanochemistry.
Allotropy.
Allotropes of carbon.
Carbon nanotube.
Rotaxane.
π-interactions.
Molecular switch.

November 11, 2017 Posted by | Biology, Books, Botany, Chemistry, Medicine, Pharmacology, Zoology | Leave a comment

Organic Chemistry (I)

This book’s a bit longer than most ‘A very short introduction to…’ publications, and it’s quite dense at times but includes a lot of interesting stuff. It took me a while to finish, as I put it away when I hit some of the more demanding content, but I picked it up again later and really enjoyed most of the coverage. In the end I decided I wouldn’t be doing the book justice by limiting my coverage to a single post, so this will be the first of two posts about it, covering roughly the first half.

As usual I have included in my post both some observations from the book (with a few links added to these quotes where I figured they might be helpful) and some wiki links to topics discussed in the book.

“Organic chemistry is a branch of chemistry that studies carbon-based compounds in terms of their structure, properties, and synthesis. In contrast, inorganic chemistry covers the chemistry of all the other elements in the periodic table […] carbon-based compounds are crucial to the chemistry of life. [However] organic chemistry has come to be defined as the chemistry of carbon-based compounds, whether they originate from a living system or not. […] To date, 16 million compounds have been synthesized in organic chemistry laboratories across the world, with novel compounds being synthesized every day. […] The list of commodities that rely on organic chemistry includes plastics, synthetic fabrics, perfumes, colourings, sweeteners, synthetic rubbers, and many other items that we use every day.”

“For a neutral carbon atom, there are six electrons occupying the space around the nucleus […] The electrons in the outer shell are defined as the valence electrons and these determine the chemical properties of the atom. The valence electrons are easily ‘accessible’ compared to the two electrons in the first shell. […] There is great significance in carbon being in the middle of the periodic table. Elements which are close to the left-hand side of the periodic table can lose their valence electrons to form positive ions. […] Elements on the right-hand side of the table can gain electrons to form negatively charged ions. […] The impetus for elements to form ions is the stability that is gained by having a full outer shell of electrons. […] Ion formation is feasible for elements situated to the left or the right of the periodic table, but it is less feasible for elements in the middle of the table. For carbon to gain a full outer shell of electrons, it would have to lose or gain four valence electrons, but this would require far too much energy. Therefore, carbon achieves a stable, full outer shell of electrons by another method. It shares electrons with other elements to form bonds. Carbon excels in this and can be considered chemistry’s ultimate elemental socialite. […] Carbon’s ability to form covalent bonds with other carbon atoms is one of the principal reasons why so many organic molecules are possible. Carbon atoms can be linked together in an almost limitless way to form a mind-blowing variety of carbon skeletons. […] carbon can form a bond to hydrogen, but it can also form bonds to atoms such as nitrogen, phosphorus, oxygen, sulphur, fluorine, chlorine, bromine, and iodine. As a result, organic molecules can contain a variety of different elements. Further variety can arise because it is possible for carbon to form double bonds or triple bonds to a variety of other atoms. The most common double bonds are formed between carbon and oxygen, carbon and nitrogen, or between two carbon atoms. […] The most common triple bonds are found between carbon and nitrogen, or between two carbon atoms.”

“[C]hirality has huge importance. The two enantiomers of a chiral molecule behave differently when they interact with other chiral molecules, and this has important consequences in the chemistry of life. As an analogy, consider your left and right hands. These are asymmetric in shape and are non-superimposable mirror images. Similarly, a pair of gloves are non-superimposable mirror images. A left hand will fit snugly into a left-hand glove, but not into a right-hand glove. In the molecular world, a similar thing occurs. The proteins in our bodies are chiral molecules which can distinguish between the enantiomers of other molecules. For example, enzymes can distinguish between the two enantiomers of a chiral compound and catalyse a reaction with one of the enantiomers but not the other.”

“A key concept in organic chemistry is the functional group. A functional group is essentially a distinctive arrangement of atoms and bonds. […] Functional groups react in particular ways, and so it is possible to predict how a molecule might react based on the functional groups that are present. […] it is impossible to build a molecule atom by atom. Instead, target molecules are built by linking up smaller molecules. […] The organic chemist needs to have a good understanding of the reactions that are possible between different functional groups when choosing the molecular building blocks to be used for a synthesis. […] There are many […] reasons for carrying out FGTs [functional group transformations], especially when synthesizing complex molecules. For example, a starting material or a synthetic intermediate may lack a functional group at a key position of the molecular structure. Several reactions may then be required to introduce that functional group. On other occasions, a functional group may be added to a particular position then removed at a later stage. One reason for adding such a functional group would be to block an unwanted reaction at that position of the molecule. Another common situation is where a reactive functional group is converted to a less reactive functional group such that it does not interfere with a subsequent reaction. Later on, the original functional group is restored by another functional group transformation. This is known as a protection/deprotection strategy. The more complex the target molecule, the greater the synthetic challenge. Complexity is related to the number of rings, functional groups, substituents, and chiral centres that are present. […] The more reactions that are involved in a synthetic route, the lower the overall yield. […] retrosynthesis is a strategy by which organic chemists design a synthesis before carrying it out in practice. It is called retrosynthesis because the design process involves studying the target structure and working backwards to identify how that molecule could be synthesized from simpler starting materials. […] a key stage in retrosynthesis is identifying a bond that can be ‘disconnected’ to create those simpler molecules.”

“[V]ery few reactions produce the spectacular visual and audible effects observed in chemistry demonstrations. More typically, reactions involve mixing together two colourless solutions to produce another colourless solution. Temperature changes are a bit more informative. […] However, not all reactions generate heat, and monitoring the temperature is not a reliable way of telling whether the reaction has gone to completion or not. A better approach is to take small samples of the reaction solution at various times and to test these by chromatography or spectroscopy. […] If a reaction is taking place very slowly, different reaction conditions could be tried to speed it up. This could involve heating the reaction, carrying out the reaction under pressure, stirring the contents vigorously, ensuring that the reaction is carried out in a dry atmosphere, using a different solvent, using a catalyst, or using one of the reagents in excess. […] There are a large number of variables that can affect how efficiently reactions occur, and organic chemists in industry are often employed to develop the ideal conditions for a specific reaction. This is an area of organic chemistry known as chemical development. […] Once a reaction has been carried out, it is necessary to isolate and purify the reaction product. This often proves more time-consuming than carrying out the reaction itself. Ideally, one would remove the solvent used in the reaction and be left with the product. However, in most reactions this is not possible as other compounds are likely to be present in the reaction mixture. […] it is usually necessary to carry out procedures that will separate and isolate the desired product from these other compounds. This is known as ‘working up’ the reaction.”

“Proteins are large molecules (macromolecules) which serve a myriad of purposes, and are essentially polymers constructed from molecular building blocks called amino acids […]. In humans, there are twenty different amino acids having the same ‘head group’, consisting of a carboxylic acid and an amine attached to the same carbon atom […] The amino acids are linked up by the carboxylic acid of one amino acid reacting with the amine group of another to form an amide link. Since a protein is being produced, the amide bond is called a peptide bond, and the final protein consists of a polypeptide chain (or backbone) with different side chains ‘hanging off’ the chain […]. The sequence of amino acids present in the polypeptide sequence is known as the primary structure. Once formed, a protein folds into a specific 3D shape […] Nucleic acids […] are another form of biopolymer, and are formed from molecular building blocks called nucleotides. These link up to form a polymer chain where the backbone consists of alternating sugar and phosphate groups. There are two forms of nucleic acid — deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). In DNA, the sugar is deoxyribose, whereas the sugar in RNA is ribose. Each sugar ring has a nucleic acid base attached to it. For DNA, there are four different nucleic acid bases called adenine (A), thymine (T), cytosine (C), and guanine (G) […]. These bases play a crucial role in the overall structure and function of nucleic acids. […] DNA is actually made up of two DNA strands […] where the sugar-phosphate backbones are intertwined to form a double helix. The nucleic acid bases point into the centre of the helix, and each nucleic acid base ‘pairs up’ with a nucleic acid base on the opposite strand through hydrogen bonding. The base pairing is specifically between adenine and thymine, or between cytosine and guanine. This means that one polymer strand is complementary to the other, a feature that is crucial to DNA’s function as the storage molecule for genetic information. […] [E]ach strand […] act as the template for the creation of a new strand to produce two identical ‘daughter’ DNA double helices […] [A] genetic alphabet of four letters (A, T, G, C) […] code for twenty amino acids. […] [A]n amino acid is coded, not by one nucleotide, but by a set of three. The number of possible triplet combinations using four ‘letters’ is more than enough to encode all the amino acids.”

“Proteins have a variety of functions. Some proteins, such as collagen, keratin, and elastin, have a structural role. Others catalyse life’s chemical reactions and are called enzymes. They have a complex 3D shape, which includes a cavity called the active site […]. This is where the enzyme binds the molecules (substrates) that undergo the enzyme-catalysed reaction. […] A substrate has to have the correct shape to fit an enzyme’s active site, but it also needs binding groups to interact with that site […]. These interactions hold the substrate in the active site long enough for a reaction to occur, and typically involve hydrogen bonds, as well as van der Waals and ionic interactions. When a substrate binds, the enzyme normally undergoes an induced fit. In other words, the shape of the active site changes slightly to accommodate the substrate, and to hold it as tightly as possible. […] Once a substrate is bound to the active site, amino acids in the active site catalyse the subsequent reaction.”

“Proteins called receptors are involved in chemical communication between cells and respond to chemical messengers called neurotransmitters if they are released from nerves, or hormones if they are released by glands. Most receptors are embedded in the cell membrane, with part of their structure exposed on the outer surface of the cell membrane, and another part exposed on the inner surface. On the outer surface they contain a binding site that binds the molecular messenger. An induced fit then takes place that activates the receptor. This is very similar to what happens when a substrate binds to an enzyme […] The induced fit is crucial to the mechanism by which a receptor conveys a message into the cell — a process known as signal transduction. By changing shape, the protein initiates a series of molecular events that influences the internal chemistry within the cell. For example, some receptors are part of multiprotein complexes called ion channels. When the receptor changes shape, it causes the overall ion channel to change shape. This opens up a central pore allowing ions to flow across the cell membrane. The ion concentration within the cell is altered, and that affects chemical reactions within the cell, which ultimately lead to observable results such as muscle contraction. Not all receptors are membrane-bound. For example, steroid receptors are located within the cell. This means that steroid hormones need to cross the cell membrane in order to reach their target receptors. Transport proteins are also embedded in cell membranes and are responsible for transporting polar molecules such as amino acids into the cell. They are also important in controlling nerve action since they allow nerves to capture released neurotransmitters, such that they have a limited period of action.”

“RNA […] is crucial to protein synthesis (translation). There are three forms of RNA — messenger RNA (mRNA), transfer RNA (tRNA), and ribosomal RNA (rRNA). mRNA carries the genetic code for a particular protein from DNA to the site of protein production. Essentially, mRNA is a single-strand copy of a specific section of DNA. The process of copying that information is known as transcription. tRNA decodes the triplet code on mRNA by acting as a molecular adaptor. At one end of tRNA, there is a set of three bases (the anticodon) that can base pair to a set of three bases on mRNA (the codon). An amino acid is linked to the other end of the tRNA and the type of amino acid present is related to the anticodon that is present. When tRNA with the correct anticodon base pairs to the codon on mRNA, it brings the amino acid encoded by that codon. rRNA is a major constituent of a structure called a ribosome, which acts as the factory for protein production. The ribosome binds mRNA then coordinates and catalyses the translation process.”
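
(Another toy sketch of mine illustrating the decoding step; the codon table below contains only a handful of entries from the standard genetic code, picked purely for illustration.)

CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UGG": "Trp", "UAA": "STOP"}

def translate(mrna):
    # Read the mRNA three bases at a time and stop at a stop codon.
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "?")
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

print(translate("AUGUUUGGCUGGUAA"))   # -> ['Met', 'Phe', 'Gly', 'Trp']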

Organic chemistry.
Carbon.
Stereochemistry.
Delocalization.
Hydrogen bond.
Van der Waals forces.
Ionic bonding.
Chemoselectivity.
Coupling reaction.
Chemical polarity.
Crystallization.
Elemental analysis.
NMR spectroscopy.
Polymerization.
Miller–Urey experiment.
Vester-Ulbricht hypothesis.
Oligonucleotide.
RNA world.
Ribozyme.

November 9, 2017 Posted by | Biology, Books, Chemistry, Genetics | Leave a comment

Molecules

This book is almost exclusively devoted to biochemistry topics. When the coverage is decent I find biochemistry reasonably interesting – for example I really liked Beer, Björk & Beardall’s photosynthesis book – and the coverage here was okay, but not more than that. I think Ball was trying to cover a bit too much ground, or perhaps there was simply too much ground to cover for a book on this particular topic to work in a series like this. I learned a lot though.

As usual I’ve added some quotes from the coverage below, as well as some additional links to topics/concepts/people/etc. covered in the book.

“Most atoms on their own are highly reactive – they have a predisposition to join up with other atoms. Molecules are collectives of atoms, firmly welded together into assemblies that may contain anything up to many millions of them. […] By molecules, we generally mean assemblies of a discrete, countable number of atoms. […] Some pure elements adopt molecular forms; others do not. As a rough rule of thumb, metals are non-molecular […] whereas non-metals are molecular. […] molecules are the smallest units of meaning in chemistry. It is through molecules, not atoms, that one can tell stories in the sub-microscopic world. They are the words; atoms are just the letters. […] most words are distinct aggregates of several letters arranged in a particular order. We often find that longer words convey subtler and more finely nuanced meanings. And in molecules, as in words, the order in which the component parts are put together matters: ‘save’ and ‘vase’ do not mean the same thing.”

“There are something like 60,000 different varieties of protein molecule in human cells, each conducting a highly specialized task. It would generally be impossible to guess what this task is merely by looking at a protein. They are undistinguished in appearance, mostly globular in shape […] and composed primarily of carbon, hydrogen, nitrogen, oxygen, and a little sulphur. […] There are twenty varieties of amino acids in natural proteins. In the chain, one amino acid is linked to the next via a covalent bond called a peptide bond. Both molecules shed a few extraneous atoms to make this linkage, and the remainder – another link in the chain – is called a residue. The chain itself is termed a polypeptide. Any string of amino acid residues is a polypeptide. […] In a protein the order of amino acids along the chain – the sequence – is not arbitrary. It is selected […] to ensure that the chain will collapse and curl up in water into the precisely determined globular form of the protein, with all parts of the chain in the right place. This shape can be destroyed by warming the protein, a process called denaturation. But many proteins will fold up again spontaneously into the same globular structure when cooled. In other words, the chain has a kind of memory of its folded shape. The details of this folding process are still not fully understood – it is, in fact, one of the central unsolved puzzles of molecular biology. […] proteins are made not in the [cell] nucleus but in a different compartment called the endoplasmic reticulum […]. The gene is transcribed first into a molecule related to DNA, called RNA (ribonucleic acid). The RNA molecules travel from the nucleus to the endoplasmic reticulum, where they are translated to proteins. The proteins are then shipped off to where they are needed.”

“[M]icrofibrils aggregate together in various ways. For example, they can gather in a staggered arrangement to form thick strands called banded fibrils. […] Banded fibrils constitute the connective tissues between cells – they are the cables that hold our flesh together. Bone consists of collagen banded fibrils sprinkled with tiny crystals of the mineral hydroxyapatite, which is basically calcium phosphate. Because of the high protein content of bone, it is flexible and resilient as well as hard. […] In contrast to the disorderly tangle of connective tissue, the eye’s cornea contains collagen fibrils packed side by side in an orderly manner. These fibrils are too small to scatter light, and so the material is virtually transparent. The basic design principle – one that recurs often in nature – is that, by tinkering with the chemical composition and, most importantly, the hierarchical arrangement of the same basic molecules, it is possible to extract several different kinds of material properties. […] cross-links determine the strength of the material: hair and fingernail are more highly cross-linked than skin. Curly or frizzy hair can be straightened by breaking some of [the] sulphur cross-links to make the hairs more pliable. […] Many of the body’s structural fabrics are proteins. Unlike enzymes, structural proteins do not have to conduct any delicate chemistry, but must simply be (for instance) tough, or flexible, or waterproof. In principle many other materials besides proteins would suffice; and indeed, plants use cellulose (a sugar-based polymer) to make their tissues.”

“In many ways, it is metabolism and not replication that provides the best working definition of life. Evolutionary biologists would say that we exist in order to reproduce – but we are not, even the most amorous of us, trying to reproduce all the time. Yet, if we stop metabolizing, even for a minute or two, we are done for. […] Whether waking or asleep, our bodies stay close to a healthy temperature of 37 °C. There is only one way of doing this: our cells are constantly pumping out heat, a by-product of metabolism. Heat is not really the point here – it is simply unavoidable, because all conversion of energy from one form to another squanders some of it this way. Our metabolic processes are primarily about making molecules. Cells cannot survive without constantly reinventing themselves: making new amino acids for proteins, new lipids for membranes, new nucleic acids so that they can divide.”

“In the body, combustion takes place in a tightly controlled, graded sequence of steps, and some chemical energy is drawn off and stored at each stage. […] A power station burns coal, oil, or gas […]. Burning is just a means to an end. The heat is used to turn water into steam; the pressure of the steam drives turbines; the turbines spin and send wire coils whirling in the arms of great magnets, which induces an electrical current in the wire. Energy is passed on, from chemical to heat to mechanical to electrical. And every plant has a barrage of regulatory and safety mechanisms. There are manual checks on pressure gauges and on the structural integrity of moving parts. Automatic sensors make the measurements. Failsafe devices avert catastrophic failure. Energy generation in the cell is every bit as complicated. […] The cell seems to have thought of everything, and has protein devices for fine-tuning it all.”

“ATP is the key to the maintenance of cellular integrity and organization, and so the cell puts a great deal of effort into making as much of it as possible from each molecule of glucose that it burns. About 40 per cent of the energy released by the combustion of food is conserved in ATP molecules. ATP is rich in energy because it is like a coiled spring. It contains three phosphate groups, linked like so many train carriages. Each of these phosphate groups has a negative charge; this means that they repel one another. But because they are joined by chemical bonds, they cannot escape one another […]. Straining to get away, the phosphates pull an energetically powerful punch. […] The links between phosphates can be snipped in a reaction that involves water […] called hydrolysis (‘splitting with water’). Each time a bond is hydrolysed, energy is released. Setting free the outermost phosphate converts ATP to adenosine diphosphate (ADP); cleave the second phosphate and it becomes adenosine monophosphate (AMP). Both severances release comparable amounts of energy.”

“Burning sugar is a two-stage process, beginning with its transformation to a molecule called pyruvate in a process known as glycolysis […]. This involves a sequence of ten enzyme-catalysed steps. The first five of these split glucose in half […], powered by the consumption of ATP molecules: two of them are ‘decharged’ to ADP for every glucose molecule split. But the conversion of the fragments to pyruvate […] permits ATP to be recouped from ADP. Four ATP molecules are made this way, so that there is an overall gain of two ATP molecules per glucose molecule consumed. Thus glycolysis charges the cell’s batteries. Pyruvate then normally enters the second stage of the combustion process: the citric acid cycle, which requires oxygen. But if oxygen is scarce – that is, under anaerobic conditions – a contingency plan is enacted whereby pyruvate is instead converted to the molecule lactate. […] The first thing a mitochondrion does is convert pyruvate enzymatically to a molecule called acetyl coenzyme A (CoA). The breakdown of fatty acids and glycerides from fats also eventually generates acetyl CoA. The [citric acid] cycle is a sequence of eight enzyme-catalysed reactions that transform acetyl CoA first to citric acid and then to various other molecules, ending with […] oxaloacetate. This end is a new beginning, for oxaloacetate reacts with acetyl CoA to make citric acid. In some of the steps of the cycle, carbon dioxide is generated as a by-product. It dissolves in the bloodstream and is carried off to the lungs to be exhaled. Thus in effect the carbon in the original glucose molecules is syphoned off into the end product carbon dioxide, completing the combustion process. […] Also syphoned off from the cycle are electrons – crudely speaking, the citric acid cycle sends an electrical current to a different part of the mitochondrion. These electrons are used to convert oxygen molecules and positively charged hydrogen ions to water – an energy-releasing process. The energy is captured and used to make ATP in abundance.”
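
(The ATP arithmetic in that passage can be written out explicitly; the figures below are the ones given in the quote.)

# ATP bookkeeping for glycolysis, per glucose molecule.
atp_invested = 2    # ATP converted to ADP while glucose is split in the first five steps
atp_recouped = 4    # ATP regenerated while the two fragments are converted to pyruvate
print("Net ATP gain per glucose:", atp_recouped - atp_invested)   # -> 2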

“While mammalian cells have fuel-burning factories in the form of mitochondria, the solar-power centres in the cells of plant leaves are compartments called chloroplasts […] chloroplast takes carbon dioxide and water, and from them constructs […] sugar. […] In the first part of photosynthesis, light is used to convert NADP to an electron carrier (NADPH) and to transform ADP to ATP. This is effectively a charging-up process that primes the chloroplast for glucose synthesis. In the second part, ATP and NADPH are used to turn carbon dioxide into sugar, in a cyclic sequence of steps called the Calvin–Benson cycle […] There are several similarities between the processes of aerobic metabolism and photosynthesis. Both consist of two distinct sub-processes with separate evolutionary origins: a linear sequence of reactions coupled to a cyclic sequence that regenerates the molecules they both need. The bridge between glycolysis and the citric acid cycle is the electron-ferrying NAD molecule; the two sub-processes of photosynthesis are bridged by the cycling of an almost identical molecule, NAD phosphate (NADP).”

“Despite the variety of messages that hormones convey, the mechanism by which the signal is passed from a receptor protein at the cell surface to the cell’s interior is the same in almost all cases. It involves a sequence of molecular interactions in which molecules transform one another down a relay chain. In cell biology this is called signal transduction. At the same time as relaying the message, these interactions amplify the signal so that the docking of a single hormone molecule to a receptor creates a big response inside the cell. […] The receptor proteins span the entire width of the membrane; the hormone-binding site protrudes on the outer surface, while the base of the receptor emerges from the inner surface […]. When the receptor binds its target hormone, a shape change is transmitted to the lower face of the protein, which enables it to act as an enzyme. […] The participants of all these processes [G protein, guanosine diphosphate and -triphosphate, adenylate cyclase… – figured it didn’t matter if I left out a few details – US…] are stuck to the cell wall. But cAMP floats freely in the cell’s cytoplasm, and is able to carry the signal into the cell interior. It is called a ‘second messenger’, since it is the agent that relays the signal of the ‘first messenger’ (the hormone) into the community of the cell. Cyclic AMP becomes attached to protein molecules called protein kinases, whereupon they in turn become activated as enzymes. Most protein kinases switch other enzymes on and off by attaching phosphate groups to them – a reaction called phosphorylation. […] The process might sound rather complicated, but it is really nothing more than a molecular relay. The signal is passed from the hormone to its receptor, then to the G protein, on to an enzyme and thence to the second messenger, and further on to a protein kinase, and so forth. The G-protein mechanism of signal transduction was discovered in the 1970s by Alfred Gilman and Martin Rodbell, for which they received the 1994 Nobel Prize for medicine. It represents one of the most widespread means of getting a message across a cell membrane. […] it is not just hormonal signalling that makes use of the G-protein mechanism. Our senses of vision and smell, which also involve the transmission of signals, employ the same switching process.”
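
(The relay and the amplification it provides can be caricatured in a few lines of code. This is an illustrative toy model of my own; the per-step gains are invented, not figures from the book.)

# Toy model of signal amplification along the relay described above.
cascade = [
    ("hormone binds receptor", 1),
    ("active receptor activates G proteins", 10),
    ("G proteins activate adenylate cyclase, which makes cAMP", 100),
    ("cAMP activates protein kinases, which phosphorylate enzymes", 100),
]

molecules = 1
for step, gain in cascade:
    molecules *= gain
    print(f"{step}: {molecules} molecules affected")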

“Although axon signals are electrical, they differ from those in the metal wires of electronic circuitry. The axon is basically a tubular cell membrane decorated along its length with channels that let sodium and potassium ions in and out. Some of these ion channels are permanently open; others are ‘gated’, opening or closing in response to electrical signals. And some are not really channels at all but pumps, which actively transport sodium ions out of the cell and potassium ions in. These sodium-potassium pumps can move ions […] powered by ATP. […] Drugs that relieve pain typically engage with inhibitory receptors. Morphine, the main active ingredient of opium, binds to so-called opioid receptors in the spinal cord, which inhibit the transmission of pain signals to the brain. There are also opioid receptors in the brain itself, which is why morphine and related opiate drugs have a mental as well as a somatic effect. These receptors in the brain are the binding sites of peptide molecules called endorphins, which the brain produces in response to pain. Some of these are themselves extremely powerful painkillers. […] Not all pain-relieving drugs (analgesics) work by blocking the pain signal. Some prevent the signal from ever being sent. Pain signals are initiated by peptides called prostaglandins, which are manufactured and released by distressed cells. Aspirin (acetylsalicylic acid) latches onto and inhibits one of the enzymes responsible for prostaglandin synthesis, cutting off the cry of pain at its source. Unfortunately, prostaglandins are also responsible for making the mucus that protects the stomach lining […], so one of the side effects of aspirin is the risk of ulcer formation.”

“Shape changes […] are common when a receptor binds its target. If binding alone is the objective, a big shape change is not terribly desirable, since the internal rearrangements of the receptor make heavy weather of the binding event and may make it harder to achieve. This is why many supramolecular hosts are designed so that they are ‘pre-organized’ to receive their guests, minimizing the shape change caused by binding.”

“The way that a protein chain folds up is determined by its amino-acid sequence […] so the ‘information’ for making a protein is uniquely specified by this sequence. DNA encodes this information using […] groups of three bases [to] represent each amino acid. This is the genetic code.* How a particular protein sequence determines the way its chain folds is not yet fully understood. […] Nevertheless, the principle of information flow in the cell is clear. DNA is a manual of information about proteins. We can think of each chromosome as a separate chapter, each gene as a word in that chapter (they are very long words!), and each sequential group of three bases in the gene as a character in the word. Proteins are translations of the words into another language, whose characters are amino acids. In general, only when the genetic language is translated can we understand what it means.”

“It is thought that only about 2–3 per cent of the entire human genome codes for proteins. […] Some people object to genetic engineering on the grounds that it is ethically wrong to tamper with the fundamental material of life – DNA – whether it is in bacteria, humans, tomatoes, or sheep. One can understand such objections, and it would be arrogant to dismiss them as unscientific. Nevertheless, they do sit uneasily with what we now know about the molecular basis of life. The idea that our genetic make-up is sacrosanct looks hard to sustain once we appreciate how contingent, not to say arbitrary, that make-up is. Our genomes are mostly parasite-riddled junk, full of the detritus of over three billion years of evolution.”

Links:

Roald Hoffmann.
Molecular solid.
Covalent bond.
Visible spectrum.
X-ray crystallography.
Electron microscope.
Valence (chemistry).
John Dalton.
Isomer.
Lysozyme.
Organic chemistry.
Synthetic dye industry/Alizarin.
Paul Ehrlich (staining).
Retrosynthetic analysis. [I would have added a link to ‘rational synthesis as well here if there’d been a good article on that topic, but I wasn’t able to find one. Anyway: “Organic chemists call [the] kind of procedure […] in which a starting molecule is converted systematically, bit by bit, to the desired product […] a rational synthesis.”]
Paclitaxel synthesis.
Protein.
Enzyme.
Tryptophan synthase.
Ubiquitin.
Amino acid.
Protein folding.
Peptide bond.
Hydrogen bond.
Nucleotide.
Chromosome.
Structural gene. Regulatory gene.
Operon.
Gregor Mendel.
Mitochondrial DNA.
RNA world.
Ribozyme.
Artificial gene synthesis.
Keratin.
Silk.
Vulcanization.
Aramid.
Microtubule.
Tubulin.
Carbon nanotube.
Amylase/pepsin/glycogen/insulin.
Cytochrome c oxidase.
ATP synthase.
Haemoglobin.
Thylakoid membrane.
Chlorophyll.
Liposome.
TNT.
Motor protein. Dynein. Kinesin.
Sarcomere.
Sliding filament theory of muscle action.
Photoisomerization.
Supramolecular chemistry.
Hormone. Endocrine system.
Neurotransmitter.
Ionophore.
DNA.
Mutation.
Intron. Exon.
Transposon.
Molecular electronics.

October 30, 2017 Posted by | Biology, Books, Botany, Chemistry, Genetics, Neurology, Pharmacology | Leave a comment

Physical chemistry

This is a good book; I really liked it, just as I liked the other book in the series by the same author, the one about the laws of thermodynamics (blog coverage here). I know much, much more about physics than I do about chemistry, and even though some of it was review I learned a lot from this one. Recommended, certainly if you find the quotes below interesting. As usual, I’ve added some observations from the book below, as well as some links to topics/people/etc. covered or mentioned in the book.

Some quotes:

“Physical chemists pay a great deal of attention to the electrons that surround the nucleus of an atom: it is here that the chemical action takes place and the element expresses its chemical personality. […] Quantum mechanics plays a central role in accounting for the arrangement of electrons around the nucleus. The early ‘Bohr model’ of the atom, […] with electrons in orbits encircling the nucleus like miniature planets and widely used in popular depictions of atoms, is wrong in just about every respect—but it is hard to dislodge from the popular imagination. The quantum mechanical description of atoms acknowledges that an electron cannot be ascribed to a particular path around the nucleus, that the planetary ‘orbits’ of Bohr’s theory simply don’t exist, and that some electrons do not circulate around the nucleus at all. […] Physical chemists base their understanding of the electronic structures of atoms on Schrödinger’s model of the hydrogen atom, which was formulated in 1926. […] An atom is often said to be mostly empty space. That is a remnant of Bohr’s model in which a point-like electron circulates around the nucleus; in the Schrödinger model, there is no empty space, just a varying probability of finding the electron at a particular location.”

“No more than two electrons may occupy any one orbital, and if two do occupy that orbital, they must spin in opposite directions. […] this form of the principle [the Pauli exclusion principle – US] […] is adequate for many applications in physical chemistry. At its very simplest, the principle rules out all atoms (other than one-electron hydrogen and two-electron helium) having all their electrons in the 1s-orbital. Lithium, for instance, has three electrons: two occupy the 1s orbital, but the third cannot join them, and must occupy the next higher-energy orbital, the 2s-orbital. With that point in mind, something rather wonderful becomes apparent: the structure of the Periodic Table of the elements unfolds, the principal icon of chemistry. […] The first electron can enter the 1s-orbital, and helium’s (He) second electron can join it. At that point, the orbital is full, and lithium’s (Li) third electron must enter the next higher orbital, the 2s-orbital. The next electron, for beryllium (Be), can join it, but then it too is full. From that point on the next six electrons can enter in succession the three 2p-orbitals. After those six are present (at neon, Ne), all the 2p-orbitals are full and the eleventh electron, for sodium (Na), has to enter the 3s-orbital. […] Similar reasoning accounts for the entire structure of the Table, with elements in the same group all having analogous electron arrangements and each successive row (‘period’) corresponding to the next outermost shell of orbitals.”
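
(A compact way to see the filling order at work, in a sketch of my own; the orbital order and capacities are hard-coded and the well-known exceptions such as chromium and copper are ignored.)

# Aufbau filling order; capacities: s = 2, p = 6, d = 10 electrons.
ORBITALS = [("1s", 2), ("2s", 2), ("2p", 6), ("3s", 2), ("3p", 6),
            ("4s", 2), ("3d", 10), ("4p", 6)]

def electron_configuration(z):
    # Fill orbitals in order until z electrons have been placed.
    config, remaining = [], z
    for name, capacity in ORBITALS:
        if remaining <= 0:
            break
        n = min(capacity, remaining)
        config.append(f"{name}{n}")
        remaining -= n
    return " ".join(config)

print(electron_configuration(3))    # lithium: 1s2 2s1
print(electron_configuration(11))   # sodium:  1s2 2s2 2p6 3s1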

“[O]n crossing the [Periodic] Table from left to right, atoms become smaller: even though they have progressively more electrons, the nuclear charge increases too, and draws the clouds in to itself. On descending a group, atoms become larger because in successive periods new outermost shells are started (as in going from lithium to sodium) and each new coating of cloud makes the atom bigger […] the ionization energy [is] the energy needed to remove one or more electrons from the atom. […] The ionization energy more or less follows the trend in atomic radii but in an opposite sense because the closer an electron lies to the positively charged nucleus, the harder it is to remove. Thus, ionization energy increases from left to right across the Table as the atoms become smaller. It decreases down a group because the outermost electron (the one that is most easily removed) is progressively further from the nucleus. […] the electron affinity [is] the energy released when an electron attaches to an atom. […] Electron affinities are highest on the right of the Table […] An ion is an electrically charged atom. That charge comes about either because the neutral atom has lost one or more of its electrons, in which case it is a positively charged cation […] or because it has captured one or more electrons and has become a negatively charged anion. […] Elements on the left of the Periodic Table, with their low ionization energies, are likely to lose electrons and form cations; those on the right, with their high electron affinities, are likely to acquire electrons and form anions. […] ionic bonds […] form primarily between atoms on the left and right of the Periodic Table.”

“Although the Schrödinger equation is too difficult to solve for molecules, powerful computational procedures have been developed by theoretical chemists to arrive at numerical solutions of great accuracy. All the procedures start out by building molecular orbitals from the available atomic orbitals and then setting about finding the best formulations. […] Depictions of electron distributions in molecules are now commonplace and very helpful for understanding the properties of molecules. It is particularly relevant to the development of new pharmacologically active drugs, where electron distributions play a central role […] Drug discovery, the identification of pharmacologically active species by computation rather than in vivo experiment, is an important target of modern computational chemistry.”

“Work […] involves moving against an opposing force; heat […] is the transfer of energy that makes use of a temperature difference. […] the internal energy of a system that is isolated from external influences does not change. That is the First Law of thermodynamics. […] A system possesses energy, it does not possess work or heat (even if it is hot). Work and heat are two different modes for the transfer of energy into or out of a system. […] if you know the internal energy of a system, then you can calculate its enthalpy simply by adding to U the product of pressure and volume of the system (H = U + pV). The significance of the enthalpy […] is that a change in its value is equal to the output of energy as heat that can be obtained from the system provided it is kept at constant pressure. For instance, if the enthalpy of a system falls by 100 joules when it undergoes a certain change (such as a chemical reaction), then we know that 100 joules of energy can be extracted as heat from the system, provided the pressure is constant.”

“In the old days of physical chemistry (well into the 20th century), the enthalpy changes were commonly estimated by noting which bonds are broken in the reactants and which are formed to make the products, so A → B might be the bond-breaking step and B → C the new bond-formation step, each with enthalpy changes calculated from knowledge of the strengths of the old and new bonds. That procedure, while often a useful rule of thumb, often gave wildly inaccurate results because bonds are sensitive entities with strengths that depend on the identities and locations of the other atoms present in molecules. Computation now plays a central role: it is now routine to be able to calculate the difference in energy between the products and reactants, especially if the molecules are isolated as a gas, and that difference easily converted to a change of enthalpy. […] Enthalpy changes are very important for a rational discussion of changes in physical state (vaporization and freezing, for instance) […] If we know the enthalpy change taking place during a reaction, then provided the process takes place at constant pressure we know how much energy is released as heat into the surroundings. If we divide that heat transfer by the temperature, then we get the associated entropy change in the surroundings. […] provided the pressure and temperature are constant, a spontaneous change corresponds to a decrease in Gibbs energy. […] the chemical potential can be thought of as the Gibbs energy possessed by a standard-size block of sample. (More precisely, for a pure substance the chemical potential is the molar Gibbs energy, the Gibbs energy per mole of atoms or molecules.)”
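
(A small worked example of my own, using the 100-joule figure from the earlier quote; the temperature is an assumption.)

# An exothermic change at constant pressure.
delta_H = -100.0                  # J; the enthalpy of the system falls by 100 joules
T = 298.0                         # K; an assumed, constant temperature
q_to_surroundings = -delta_H      # at constant pressure, the heat released equals the fall in enthalpy
delta_S_surroundings = q_to_surroundings / T
print(f"Heat released: {q_to_surroundings:.0f} J")
print(f"Entropy gained by the surroundings: {delta_S_surroundings:.2f} J/K")   # ~0.34 J/K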

“There are two kinds of work. One kind is the work of expansion that occurs when a reaction generates a gas and pushes back the atmosphere (perhaps by pressing out a piston). That type of work is called ‘expansion work’. However, a chemical reaction might do work other than by pushing out a piston or pushing back the atmosphere. For instance, it might do work by driving electrons through an electric circuit connected to a motor. This type of work is called ‘non-expansion work’. […] a change in the Gibbs energy of a system at constant temperature and pressure is equal to the maximum non-expansion work that can be done by the reaction. […] the link of thermodynamics with biology is that one chemical reaction might do the non-expansion work of building a protein from amino acids. Thus, a knowledge of the Gibbs energy changes accompanying metabolic processes is very important in bioenergetics, and much more important than knowing the enthalpy changes alone (which merely indicate a reaction’s ability to keep us warm).”

“[T]he probability that a molecule will be found in a state of particular energy falls off rapidly with increasing energy, so most molecules will be found in states of low energy and very few will be found in states of high energy. […] If the temperature is low, then the distribution declines so rapidly that only the very lowest levels are significantly populated. If the temperature is high, then the distribution falls off very slowly with increasing energy, and many high-energy states are populated. If the temperature is zero, the distribution has all the molecules in the ground state. If the temperature is infinite, all available states are equally populated. […] temperature […] is the single, universal parameter that determines the most probable distribution of molecules over the available states.”
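
(A quick numerical illustration of my own, using arbitrary, evenly spaced energy levels and setting Boltzmann's constant to 1.)

import math

# Boltzmann weights exp(-E/kT) for four evenly spaced levels, normalized to populations.
def populations(energies, temperature):
    weights = [math.exp(-e / temperature) for e in energies]
    total = sum(weights)
    return [w / total for w in weights]

levels = [0.0, 1.0, 2.0, 3.0]
for T in (0.2, 1.0, 10.0):
    print(T, [round(p, 3) for p in populations(levels, T)])
# Low T: essentially everything in the ground state; high T: the four levels approach equal population.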

“Mixing adds disorder and increases the entropy of the system and therefore lowers the Gibbs energy […] In the absence of mixing, a reaction goes to completion; when mixing of reactants and products is taken into account, equilibrium is reached when both are present […] Statistical thermodynamics, through the Boltzmann distribution and its dependence on temperature, allows physical chemists to understand why in some cases the equilibrium shifts towards reactants (which is usually unwanted) or towards products (which is normally wanted) as the temperature is raised. A rule of thumb […] is provided by a principle formulated by Henri Le Chatelier […] that a system at equilibrium responds to a disturbance by tending to oppose its effect. Thus, if a reaction releases energy as heat (is ‘exothermic’), then raising the temperature will oppose the formation of more products; if the reaction absorbs energy as heat (is ‘endothermic’), then raising the temperature will encourage the formation of more product.”

“Model building pervades physical chemistry […] some hold that the whole of science is based on building models of physical reality; much of physical chemistry certainly is.”

“For reasonably light molecules (such as the major constituents of air, N2 and O2) at room temperature, the molecules are whizzing around at an average speed of about 500 m/s (about 1000 mph). That speed is consistent with what we know about the propagation of sound, the speed of which is about 340 m/s through air: for sound to propagate, molecules must adjust their position to give a wave of undulating pressure, so the rate at which they do so must be comparable to their average speeds. […] a typical N2 or O2 molecule in air makes a collision every nanosecond and travels about 1000 molecular diameters between collisions. To put this scale into perspective: if a molecule is thought of as being the size of a tennis ball, then it travels about the length of a tennis court between collisions. Each molecule makes about a billion collisions a second.”
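
(The quoted ~500 m/s figure can be checked against the Maxwell–Boltzmann mean speed, sqrt(8RT/(pi*M)); the check below is mine, not a calculation from the book.)

import math

R = 8.314    # gas constant, J/(mol K)
T = 298.0    # K, roughly room temperature

def mean_speed(molar_mass_kg_per_mol):
    # Maxwell-Boltzmann mean speed: sqrt(8RT / (pi * M))
    return math.sqrt(8 * R * T / (math.pi * molar_mass_kg_per_mol))

print(f"N2: {mean_speed(0.028):.0f} m/s")   # ~475 m/s
print(f"O2: {mean_speed(0.032):.0f} m/s")   # ~444 m/s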

“X-ray diffraction makes use of the fact that electromagnetic radiation (which includes X-rays) consists of waves that can interfere with one another and give rise to regions of enhanced and diminished intensity. This so-called ‘diffraction pattern’ is characteristic of the object in the path of the rays, and mathematical procedures can be used to interpret the pattern in terms of the object’s structure. Diffraction occurs when the wavelength of the radiation is comparable to the dimensions of the object. X-rays have wavelengths comparable to the separation of atoms in solids, so are ideal for investigating their arrangement.”

“For most liquids the sample contracts when it freezes, so […] the temperature does not need to be lowered so much for freezing to occur. That is, the application of pressure raises the freezing point. Water, as in most things, is anomalous, and ice is less dense than liquid water, so water expands when it freezes […] when two gases are allowed to occupy the same container they invariably mix and each spreads uniformly through it. […] the quantity of gas that dissolves in any liquid is proportional to the pressure of the gas. […] When the temperature of [a] liquid is raised, it is easier for a dissolved molecule to gather sufficient energy to escape back up into the gas; the rate of impacts from the gas is largely unchanged. The outcome is a lowering of the concentration of dissolved gas at equilibrium. Thus, gases appear to be less soluble in hot water than in cold. […] the presence of dissolved substances affects the properties of solutions. For instance, the everyday experience of spreading salt on roads to hinder the formation of ice makes use of the lowering of freezing point of water when a salt is present. […] the boiling point is raised by the presence of a dissolved substance [whereas] the freezing point […] is lowered by the presence of a solute.”

“When a liquid and its vapour are present in a closed container the vapour exerts a characteristic pressure (when the escape of molecules from the liquid matches the rate at which they splash back down into it […][)] This characteristic pressure depends on the temperature and is called the ‘vapour pressure’ of the liquid. When a solute is present, the vapour pressure at a given temperature is lower than that of the pure liquid […] The extent of lowering is summarized by yet another limiting law of physical chemistry, ‘Raoult’s law’ [which] states that the vapour pressure of a solvent or of a component of a liquid mixture is proportional to the proportion of solvent or liquid molecules present. […] Osmosis [is] the tendency of solvent molecules to flow from the pure solvent to a solution separated from it by a [semi-]permeable membrane […] The entropy when a solute is present in a solvent is higher than when the solute is absent, so an increase in entropy, and therefore a spontaneous process, is achieved when solvent flows through the membrane from the pure liquid into the solution. The tendency for this flow to occur can be overcome by applying pressure to the solution, and the minimum pressure needed to overcome the tendency to flow is called the ‘osmotic pressure’. If one solution is put into contact with another through a semipermeable membrane, then there will be no net flow if they exert the same osmotic pressures and are ‘isotonic’.”
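
(Raoult’s law fits in one line of code; the numbers below are illustrative rather than taken from the book, the vapour pressure of pure water at 25°C being roughly 3.17 kPa.)

# Raoult's law: vapour pressure of the solvent in solution = (mole fraction of solvent) x (vapour pressure of pure solvent).
def raoult(x_solvent, p_pure):
    return x_solvent * p_pure

p_pure_water = 3.17   # kPa, vapour pressure of pure water at 25 C (approximate)
x_water = 0.98        # mole fraction of water in a dilute solution (illustrative)
print(f"{raoult(x_water, p_pure_water):.2f} kPa")   # ~3.11 kPa, i.e. lower than that of the pure solvent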

“Broadly speaking, the reaction quotient [‘Q’] is the ratio of concentrations, with product concentrations divided by reactant concentrations. It takes into account how the mingling of the reactants and products affects the total Gibbs energy of the mixture. The value of Q that corresponds to the minimum in the Gibbs energy […] is called the equilibrium constant and denoted K. The equilibrium constant, which is characteristic of a given reaction and depends on the temperature, is central to many discussions in chemistry. When K is large (1000, say), we can be reasonably confident that the equilibrium mixture will be rich in products; if K is small (0.001, say), then there will be hardly any products present at equilibrium and we should perhaps look for another way of making them. If K is close to 1, then both reactants and products will be abundant at equilibrium and will need to be separated. […] Equilibrium constants vary with temperature but not […] with pressure. […] van’t Hoff’s equation implies that if the reaction is strongly exothermic (releases a lot of energy as heat when it takes place), then the equilibrium constant decreases sharply as the temperature is raised. The opposite is true if the reaction is strongly endothermic (absorbs a lot of energy as heat). […] Typically it is found that the rate of a reaction [how fast it progresses] decreases as it approaches equilibrium. […] Most reactions go faster when the temperature is raised. […] reactions with high activation energies proceed slowly at low temperatures but respond sharply to changes of temperature. […] The surface area exposed by a catalyst is important for its function, for it is normally the case that the greater that area, the more effective is the catalyst.”
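
(The van’t Hoff behaviour described above can be illustrated with the integrated form of the equation, ln(K2/K1) = -(ΔH/R)(1/T2 - 1/T1); this is my own sketch, with an invented reaction enthalpy.)

import math

R = 8.314             # gas constant, J/(mol K)
delta_H = -80_000.0   # J/mol; a strongly exothermic reaction (invented figure)

def k_ratio(t1, t2):
    # Integrated van't Hoff equation, assuming delta_H is independent of temperature.
    return math.exp(-(delta_H / R) * (1.0 / t2 - 1.0 / t1))

print(f"K(350 K) / K(300 K) = {k_ratio(300.0, 350.0):.3f}")   # well below 1: heating shifts an exothermic equilibrium away from products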

Links:

John Dalton.
Atomic orbital.
Electron configuration.
S,p,d,f orbitals.
Computational chemistry.
Atomic radius.
Covalent bond.
Gilbert Lewis.
Valence bond theory.
Molecular orbital theory.
Orbital hybridisation.
Bonding and antibonding orbitals.
Schrödinger equation.
Density functional theory.
Chemical thermodynamics.
Laws of thermodynamics/Zeroth law/First law/Second law/Third Law.
Conservation of energy.
Thermochemistry.
Bioenergetics.
Spontaneous processes.
Entropy.
Rudolf Clausius.
Chemical equilibrium.
Heat capacity.
Compressibility.
Statistical thermodynamics/statistical mechanics.
Boltzmann distribution.
State of matter/gas/liquid/solid.
Perfect gas/Ideal gas law.
Robert Boyle/Joseph Louis Gay-Lussac/Jacques Charles/Amedeo Avogadro.
Equation of state.
Kinetic theory of gases.
Van der Waals equation of state.
Maxwell–Boltzmann distribution.
Thermal conductivity.
Viscosity.
Nuclear magnetic resonance.
Debye–Hückel equation.
Ionic solids.
Catalysis.
Supercritical fluid.
Liquid crystal.
Graphene.
Benoît Paul Émile Clapeyron.
Phase (matter)/phase diagram/Gibbs’ phase rule.
Ideal solution/regular solution.
Henry’s law.
Chemical kinetics.
Electrochemistry.
Rate equation/First order reactions/Second order reactions.
Rate-determining step.
Arrhenius equation.
Collision theory.
Diffusion-controlled and activation-controlled reactions.
Transition state theory.
Photochemistry/fluorescence/phosphorescence/photoexcitation.
Photosynthesis.
Redox reactions.
Electrochemical cell.
Fuel cell.
Reaction dynamics.
Spectroscopy/emission spectroscopy/absorption spectroscopy/Raman spectroscopy.
Raman effect.
Magnetic resonance imaging.
Fourier-transform spectroscopy.
Electron paramagnetic resonance.
Mass spectrum.
Electron spectroscopy for chemical analysis.
Scanning tunneling microscope.
Chemisorption/physisorption.

October 5, 2017 Posted by | Biology, Books, Chemistry, Pharmacology, Physics | Leave a comment

Earth System Science

I decided not to rate this book. Some parts are great; others I didn’t think were very good.

I’ve added some quotes and links below. First a few links (I’ve tried not to include links here that also appear in the quotes below):

Carbon cycle.
Origin of water on Earth.
Gaia hypothesis.
Albedo (climate and weather).
Snowball Earth.
Carbonate–silicate cycle.
Carbonate compensation depth.
Isotope fractionation.
CLAW hypothesis.
Mass-independent fractionation.
δ13C.
Great Oxygenation Event.
Acritarch.
Grypania.
Neoproterozoic.
Rodinia.
Sturtian glaciation.
Marinoan glaciation.
Ediacaran biota.
Cambrian explosion.
Quaternary.
Medieval Warm Period.
Little Ice Age.
Eutrophication.
Methane emissions.
Keeling curve.
CO2 fertilization effect.
Acid rain.
Ocean acidification.
Earth systems models.
Clausius–Clapeyron relation.
Thermohaline circulation.
Cryosphere.
The limits to growth.
Exoplanet Biosignature Gases.
Transiting Exoplanet Survey Satellite (TESS).
James Webb Space Telescope.
Habitable zone.
Kepler-186f.

A few quotes from the book:

“The scope of Earth system science is broad. It spans 4.5 billion years of Earth history, how the system functions now, projections of its future state, and ultimate fate. […] Earth system science is […] a deeply interdisciplinary field, which synthesizes elements of geology, biology, chemistry, physics, and mathematics. It is a young, integrative science that is part of a wider 21st-century intellectual trend towards trying to understand complex systems, and predict their behaviour. […] A key part of Earth system science is identifying the feedback loops in the Earth system and understanding the behaviour they can create. […] In systems thinking, the first step is usually to identify your system and its boundaries. […] what is part of the Earth system depends on the timescale being considered. […] The longer the timescale we look over, the more we need to include in the Earth system. […] for many Earth system scientists, the planet Earth is really comprised of two systems — the surface Earth system that supports life, and the great bulk of the inner Earth underneath. It is the thin layer of a system at the surface of the Earth […] that is the subject of this book.”

“Energy is in plentiful supply from the Sun, which drives the water cycle and also fuels the biosphere, via photosynthesis. However, the surface Earth system is nearly closed to materials, with only small inputs to the surface from the inner Earth. Thus, to support a flourishing biosphere, all the elements needed by life must be efficiently recycled within the Earth system. This in turn requires energy, to transform materials chemically and to move them physically around the planet. The resulting cycles of matter between the biosphere, atmosphere, ocean, land, and crust are called global biogeochemical cycles — because they involve biological, geological, and chemical processes. […] The global biogeochemical cycling of materials, fuelled by solar energy, has transformed the Earth system. […] It has made the Earth fundamentally different from its state before life and from its planetary neighbours, Mars and Venus. Through cycling the materials it needs, the Earth’s biosphere has bootstrapped itself into a much more productive state.”

“Each major element important for life has its own global biogeochemical cycle. However, every biogeochemical cycle can be conceptualized as a series of reservoirs (or ‘boxes’) of material connected by fluxes (or flows) of material between them. […] When a biogeochemical cycle is in steady state, the fluxes in and out of each reservoir must be in balance. This allows us to define additional useful quantities. Notably, the amount of material in a reservoir divided by the exchange flux with another reservoir gives the average ‘residence time’ of material in that reservoir with respect to the chosen process of exchange. For example, there are around 7 × 10^16 moles of carbon dioxide (CO2) in today’s atmosphere, and photosynthesis removes around 9 × 10^15 moles of CO2 per year, giving each molecule of CO2 a residence time of roughly eight years in the atmosphere before it is taken up, somewhere in the world, by photosynthesis. […] There are 3.8 × 10^19 moles of molecular oxygen (O2) in today’s atmosphere, and oxidative weathering removes around 1 × 10^13 moles of O2 per year, giving oxygen a residence time of around four million years with respect to removal by oxidative weathering. This makes the oxygen cycle […] a geological timescale cycle.”
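
(The residence-time arithmetic is easy to reproduce; the quick check below is mine, using the reservoir and flux figures from the quote.)

# Residence time = reservoir size / removal flux.
def residence_time(reservoir_moles, flux_moles_per_year):
    return reservoir_moles / flux_moles_per_year

co2_years = residence_time(7e16, 9e15)    # atmospheric CO2 removed by photosynthesis
o2_years = residence_time(3.8e19, 1e13)   # atmospheric O2 removed by oxidative weathering
print(f"CO2: roughly {co2_years:.0f} years")              # ~8 years
print(f"O2: roughly {o2_years / 1e6:.1f} million years")  # ~3.8 million years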

“The water cycle is the physical circulation of water around the planet, between the ocean (where 97 per cent is stored), atmosphere, ice sheets, glaciers, sea-ice, freshwaters, and groundwater. […] To change the phase of water from solid to liquid or liquid to gas requires energy, which in the climate system comes from the Sun. Equally, when water condenses from gas to liquid or freezes from liquid to solid, energy is released. Solar heating drives evaporation from the ocean. This is responsible for supplying about 90 per cent of the water vapour to the atmosphere, with the other 10 per cent coming from evaporation on the land and freshwater surfaces (and sublimation of ice and snow directly to vapour). […] The water cycle is intimately connected to other biogeochemical cycles […]. Many compounds are soluble in water, and some react with water. This makes the ocean a key reservoir for several essential elements. It also means that rainwater can scavenge soluble gases and aerosols out of the atmosphere. When rainwater hits the land, the resulting solution can chemically weather rocks. Silicate weathering in turn helps keep the climate in a state where water is liquid.”

“In modern terms, plants acquire their carbon from carbon dioxide in the atmosphere, add electrons derived from water molecules to the carbon, and emit oxygen to the atmosphere as a waste product. […] In energy terms, global photosynthesis today captures about 130 terawatts (1 TW = 10^12 W) of solar energy in chemical form — about half of it in the ocean and about half on land. […] All the breakdown pathways for organic carbon together produce a flux of carbon dioxide back to the atmosphere that nearly balances photosynthetic uptake […] The surface recycling system is almost perfect, but a tiny fraction (about 0.1 per cent) of the organic carbon manufactured in photosynthesis escapes recycling and is buried in new sedimentary rocks. This organic carbon burial flux leaves an equivalent amount of oxygen gas behind in the atmosphere. Hence the burial of organic carbon represents the long-term source of oxygen to the atmosphere. […] the Earth’s crust has much more oxygen trapped in rocks in the form of oxidized iron and sulphur, than it has organic carbon. This tells us that there has been a net source of oxygen to the crust over Earth history, which must have come from the loss of hydrogen to space.”

“The oxygen cycle is relatively simple, because the reservoir of oxygen in the atmosphere is so massive that it dwarfs the reservoirs of organic carbon in vegetation, soils, and the ocean. Hence oxygen cannot get used up by the respiration or combustion of organic matter. Even the combustion of all known fossil fuel reserves can only put a small dent in the much larger reservoir of atmospheric oxygen (there are roughly 4 × 10^17 moles of fossil fuel carbon, which is only about 1 per cent of the O2 reservoir). […] Unlike oxygen, the atmosphere is not the major surface reservoir of carbon. The amount of carbon in global vegetation is comparable to that in the atmosphere and the amount of carbon in soils (including permafrost) is roughly four times that in the atmosphere. Even these reservoirs are dwarfed by the ocean, which stores forty-five times as much carbon as the atmosphere, thanks to the fact that CO2 reacts with seawater. […] The exchange of carbon between the atmosphere and the land is largely biological, involving photosynthetic uptake and release by aerobic respiration (and, to a lesser extent, fires). […] Remarkably, when we look over Earth history there are fluctuations in the isotopic composition of carbonates, but no net drift up or down. This suggests that there has always been roughly one-fifth of carbon being buried in organic form and the other four-fifths as carbonate rocks. Thus, even on the early Earth, the biosphere was productive enough to support a healthy organic carbon burial flux.”

“The two most important nutrients for life are phosphorus and nitrogen, and they have very different biogeochemical cycles […] The largest reservoir of nitrogen is in the atmosphere, whereas the heavier phosphorus has no significant gaseous form. Phosphorus thus presents a greater recycling challenge for the biosphere. All phosphorus enters the surface Earth system from the chemical weathering of rocks on land […]. Phosphorus is concentrated in rocks in grains or veins of the mineral apatite. Natural selection has made plants on land and their fungal partners […] very effective at acquiring phosphorus from rocks, by manufacturing and secreting a range of organic acids that dissolve apatite. […] The average terrestrial ecosystem recycles phosphorus roughly fifty times before it is lost into freshwaters. […] The loss of phosphorus from the land is the ocean’s gain, providing the key input of this essential nutrient. Phosphorus is stored in the ocean as phosphate dissolved in the water. […] removal of phosphorus into the rock cycle balances the weathering of phosphorus from rocks on land. […] Although there is a large reservoir of nitrogen in the atmosphere, the molecules of nitrogen gas (N2) are extremely strongly bonded together, making nitrogen unavailable to most organisms. To split N2 and make nitrogen biologically available requires a remarkable biochemical feat — nitrogen fixation — which uses a lot of energy. In the ocean the dominant nitrogen fixers are cyanobacteria with a direct source of energy from sunlight. On land, various plants form a symbiotic partnership with nitrogen fixing bacteria, making a home for them in root nodules and supplying them with food in return for nitrogen. […] Nitrogen fixation and denitrification form the major input and output fluxes of nitrogen to both the land and the ocean, but there is also recycling of nitrogen within ecosystems. […] There is an intimate link between nutrient regulation and atmospheric oxygen regulation, because nutrient levels and marine productivity determine the source of oxygen via organic carbon burial. However, ocean nutrients are regulated on a much shorter timescale than atmospheric oxygen because their residence times are much shorter—about 2,000 years for nitrogen and 20,000 years for phosphorus.”

“[F]orests […] are vulnerable to increases in oxygen that increase the frequency and ferocity of fires. […] Combustion experiments show that fires only become self-sustaining in natural fuels when oxygen reaches around 17 per cent of the atmosphere. Yet for the last 370 million years there is a nearly continuous record of fossil charcoal, indicating that oxygen has never dropped below this level. At the same time, oxygen has never risen too high for fires to have prevented the slow regeneration of forests. The ease of combustion increases non-linearly with oxygen concentration, such that above 25–30 per cent oxygen (depending on the wetness of fuel) it is hard to see how forests could have survived. Thus oxygen has remained within 17–30 per cent of the atmosphere for at least the last 370 million years.”

“[T]he rate of silicate weathering increases with increasing CO2 and temperature. Thus, if something tends to increase CO2 or temperature it is counteracted by increased CO2 removal by silicate weathering. […] Plants are sensitive to variations in CO2 and temperature, and together with their fungal partners they greatly amplify weathering rates […] the most pronounced change in atmospheric CO2 over Phanerozoic time was due to plants colonizing the land. This started around 470 million years ago and escalated with the first forests 370 million years ago. The resulting acceleration of silicate weathering is estimated to have lowered the concentration of atmospheric CO2 by an order of magnitude […], and cooled the planet into a series of ice ages in the Carboniferous and Permian Periods.”

“The first photosynthesis was not the kind we are familiar with, which splits water and spits out oxygen as a waste product. Instead, early photosynthesis was ‘anoxygenic’ — meaning it didn’t produce oxygen. […] It could have used a range of compounds, in place of water, as a source of electrons with which to fix carbon from carbon dioxide and reduce it to sugars. Potential electron donors include hydrogen (H2) and hydrogen sulphide (H2S) in the atmosphere, or ferrous iron (Fe2+) dissolved in the ancient oceans. All of these are easier to extract electrons from than water. Hence they require fewer photons of sunlight and simpler photosynthetic machinery. The phylogenetic tree of life confirms that several forms of anoxygenic photosynthesis evolved very early on, long before oxygenic photosynthesis. […] If the early biosphere was fuelled by anoxygenic photosynthesis, plausibly based on hydrogen gas, then a key recycling process would have been the biological regeneration of this gas. Calculations suggest that once such recycling had evolved, the early biosphere might have achieved a global productivity up to 1 per cent of the modern marine biosphere. If early anoxygenic photosynthesis used the supply of reduced iron upwelling in the ocean, then its productivity would have been controlled by ocean circulation and might have reached 10 per cent of the modern marine biosphere. […] The innovation that supercharged the early biosphere was the origin of oxygenic photosynthesis using abundant water as an electron donor. This was not an easy process to evolve. To split water requires more energy — i.e. more high-energy photons of sunlight — than any of the earlier anoxygenic forms of photosynthesis. Evolution’s solution was to wire together two existing ‘photosystems’ in one cell and bolt on the front of them a remarkable piece of biochemical machinery that can rip apart water molecules. The result was the first cyanobacterial cell — the ancestor of all organisms performing oxygenic photosynthesis on the planet today. […] Once oxygenic photosynthesis had evolved, the productivity of the biosphere would no longer have been restricted by the supply of substrates for photosynthesis, as water and carbon dioxide were abundant. Instead, the availability of nutrients, notably nitrogen and phosphorus, would have become the major limiting factors on the productivity of the biosphere — as they still are today.” [If you’re curious to know more about how that fascinating ‘biochemical machinery’ works, this is a great book on these and related topics – US].

“On Earth, anoxygenic photosynthesis requires one photon per electron, whereas oxygenic photosynthesis requires two photons per electron. On Earth it took up to a billion years to evolve oxygenic photosynthesis, based on two photosystems that had already evolved independently in different types of anoxygenic photosynthesis. Around a fainter K- or M-type star […] oxygenic photosynthesis is estimated to require three or more photons per electron — and a corresponding number of photosystems — making it harder to evolve. […] However, fainter stars spend longer on the main sequence, giving more time for evolution to occur.”

“There was a lot more energy to go around in the post-oxidation world, because respiration of organic matter with oxygen yields an order of magnitude more energy than breaking food down anaerobically. […] The revolution in biological complexity culminated in the ‘Cambrian Explosion’ of animal diversity 540 to 515 million years ago, in which modern food webs were established in the ocean. […] Since then the most fundamental change in the Earth system has been the rise of plants on land […], beginning around 470 million years ago and culminating in the first global forests by 370 million years ago. This doubled global photosynthesis, increasing flows of materials. Accelerated chemical weathering of the land surface lowered atmospheric carbon dioxide levels and increased atmospheric oxygen levels, fully oxygenating the deep ocean. […] Although grasslands now cover about a third of the Earth’s productive land surface they are a geologically recent arrival. Grasses evolved amidst a trend of declining atmospheric carbon dioxide, and climate cooling and drying, over the past forty million years, and they only became widespread in two phases during the Miocene Epoch around seventeen and six million years ago. […] Since the rise of complex life, there have been several mass extinction events. […] whilst these rolls of the extinction dice marked profound changes in evolutionary winners and losers, they did not fundamentally alter the operation of the Earth system.” [If you’re interested in this kind of stuff, the evolution of food webs and so on, Herrera et al.’s wonderful book is a great place to start – US]

“The Industrial Revolution marks the transition from societies fuelled largely by recent solar energy (via biomass, water, and wind) to ones fuelled by concentrated ‘ancient sunlight’. Although coal had been used in small amounts for millennia, for example for iron making in ancient China, fossil fuel use only took off with the invention and refinement of the steam engine. […] With the Industrial Revolution, food and biomass have ceased to be the main source of energy for human societies. Instead the energy contained in annual food production, which supports today’s population, is, at fifty exajoules (1 EJ = 10¹⁸ joules), only about a tenth of the total energy input to human societies of 500 EJ/yr. This in turn is equivalent to about a tenth of the energy captured globally by photosynthesis. […] solar energy is not very efficiently converted by photosynthesis, which is 1–2 per cent efficient at best. […] The amount of sunlight reaching the Earth’s land surface (2.5 × 10¹⁶ W) dwarfs current total human power consumption (1.5 × 10¹³ W) by more than a factor of a thousand.”
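
The arithmetic behind those ratios is easy to verify from the figures quoted. A quick check in Python (note that the global photosynthesis number, roughly 5,000 EJ/yr, is only implied by the ‘about a tenth’ statement rather than stated directly):

```python
# Quick check of the energy-flow ratios quoted above (figures as given in the text;
# the global-photosynthesis figure is implied by the "about a tenth" statement).
EJ = 1e18  # joules per exajoule

food_energy      = 50 * EJ    # annual food production, J/yr
human_energy_use = 500 * EJ   # total energy input to human societies, J/yr
photosynthesis   = 5000 * EJ  # implied global photosynthetic energy capture, J/yr

sunlight_on_land = 2.5e16     # W reaching the Earth's land surface
human_power      = 1.5e13     # W, current total human power consumption

print(food_energy / human_energy_use)     # ~0.1: food is about a tenth of total energy use
print(human_energy_use / photosynthesis)  # ~0.1: which is about a tenth of photosynthesis
print(sunlight_on_land / human_power)     # ~1667: sunlight exceeds human power use >1000-fold
```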

“The Earth system’s primary energy source is sunlight, which the biosphere converts and stores as chemical energy. The energy-capture devices — photosynthesizing organisms — construct themselves out of carbon dioxide, nutrients, and a host of trace elements taken up from their surroundings. Inputs of these elements and compounds from the solid Earth system to the surface Earth system are modest. Some photosynthesizers have evolved to increase the inputs of the materials they need — for example, by fixing nitrogen from the atmosphere and selectively weathering phosphorus out of rocks. Even more importantly, other heterotrophic organisms have evolved that recycle the materials that the photosynthesizers need (often as a by-product of consuming some of the chemical energy originally captured in photosynthesis). This extraordinary recycling system is the primary mechanism by which the biosphere maintains a high level of energy capture (productivity).”

“[L]ike all stars on the ‘main sequence’ (which generate energy through the nuclear fusion of hydrogen into helium), the Sun is burning inexorably brighter with time — roughly 1 per cent brighter every 100 million years — and eventually this will overheat the planet. […] Over Earth history, the silicate weathering negative feedback mechanism has counteracted the steady brightening of the Sun by removing carbon dioxide from the atmosphere. However, this cooling mechanism is near the limits of its operation, because CO2 has fallen to limiting levels for the majority of plants, which are key amplifiers of silicate weathering. Although a subset of plants have evolved which can photosynthesize down to lower CO2 levels [the author does not go further into this topic, but here’s a relevant link – US], they cannot draw CO2 down lower than about 10 ppm. This means there is a second possible fate for life — running out of CO2. Early models projected either CO2 starvation or overheating […] occurring about a billion years in the future. […] Whilst this sounds comfortingly distant, it represents a much shorter future lifespan for the Earth’s biosphere than its past history. Earth’s biosphere is entering its old age.”
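
As a sanity check on the quoted rate: compounding ‘roughly 1 per cent brighter every 100 million years’ over the billion years the author gives the biosphere works out to about a 10 per cent increase in solar luminosity. The rate and the horizon come from the text; the compounding below is just back-of-envelope arithmetic, not a calculation from the book.

```python
# Back-of-envelope compounding of the quoted solar brightening rate
# (~1 per cent per 100 million years) over the ~1 billion years until
# projected CO2 starvation or overheating.
rate_per_step = 0.01   # ~1% brightening...
step_myr      = 100    # ...per 100 million years
horizon_myr   = 1000   # ~1 billion years, the remaining lifespan given in the text

steps = horizon_myr // step_myr
relative_luminosity = (1 + rate_per_step) ** steps
print(f"Sun ~{(relative_luminosity - 1) * 100:.0f}% brighter in {horizon_myr} Myr")
# -> roughly 10% brighter over the remaining lifespan of the biosphere
```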

September 28, 2017 Posted by | Astronomy, Biology, Books, Botany, Chemistry, Geology, Paleontology, Physics