Econstudentlog

Perception (I)

Here’s my short goodreads review of the book. In this post I’ll include some observations and links related to the first half of the book’s coverage.

“Since the 1960s, there have been many attempts to model the perceptual processes using computer algorithms, and the most influential figure of the last forty years has been David Marr, working at MIT. […] Marr and his colleagues were responsible for developing detailed algorithms for extracting (i) low-level information about the location of contours in the visual image, (ii) the motion of those contours, and (iii) the 3-D structure of objects in the world from binocular disparities and optic flow. In addition, one of his lasting achievements was to encourage researchers to be more rigorous in the way that perceptual tasks are described, analysed, and formulated and to use computer models to test the predictions of those models against human performance. […] Over the past fifteen years, many researchers in the field of perception have characterized perception as a Bayesian process […] According to Bayesian theory, what we perceive is a consequence of probabilistic processes that depend on the likelihood of certain events occurring in the particular world we live in. Moreover, most Bayesian models of perceptual processes assume that there is noise in the sensory signals and the amount of noise affects the reliability of those signals – the more noise, the less reliable the signal. Over the past fifteen years, Bayes theory has been used extensively to model the interaction between different discrepant cues, such as binocular disparity and texture gradients to specify the slant of an inclined surface.”
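
To make the cue-combination idea above a little more concrete, here is a minimal sketch (my own, not from the book) of the standard Bayesian recipe: assuming each cue delivers an estimate corrupted by independent Gaussian noise, the optimal combined estimate weights each cue by its reliability (the inverse of its noise variance). The slant values and noise levels below are made-up numbers, purely for illustration.

    # Toy Bayesian cue combination: two noisy estimates of surface slant,
    # one from binocular disparity and one from texture gradients.
    # Assumes independent Gaussian noise; weights are inverse variances.

    disparity_slant, disparity_sigma = 32.0, 4.0   # degrees; hypothetical, less noisy cue
    texture_slant, texture_sigma = 38.0, 8.0       # degrees; hypothetical, noisier cue

    w_disp = 1 / disparity_sigma**2
    w_tex = 1 / texture_sigma**2

    combined_slant = (w_disp * disparity_slant + w_tex * texture_slant) / (w_disp + w_tex)
    combined_sigma = (1 / (w_disp + w_tex)) ** 0.5

    print(f"combined estimate: {combined_slant:.1f} deg, sd {combined_sigma:.1f} deg")
    # The more reliable disparity cue dominates the estimate, and the combined
    # estimate is less variable than either cue taken on its own.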

“All surfaces have the property of reflectance — that is, the extent to which they reflect (rather than absorb) the incident illumination — and those reflectances can vary between 0 per cent and 100 per cent. Surfaces can also be selective in the particular wavelengths they reflect or absorb. Our colour vision depends on these selective reflectance properties […]. Reflectance characteristics describe the physical properties of surfaces. The lightness of a surface refers to a perceptual judgement of a surface’s reflectance characteristic — whether it appears as black or white or some grey level in between. Note that we are talking about the perception of lightness — rather than brightness — which refers to our estimate of how much light is coming from a particular surface or is emitted by a source of illumination. The perception of surface lightness is one of the most fundamental perceptual abilities because it allows us not only to differentiate one surface from another but also to identify the real-world properties of a particular surface. Many textbooks start with the observation that lightness perception is a difficult task because the amount of light reflected from a particular surface depends on both the reflectance characteristic of the surface and the intensity of the incident illumination. For example, a piece of black paper under high illumination will reflect back more light to the eye than a piece of white paper under dim illumination. As a consequence, lightness constancy — the ability to correctly judge the lightness of a surface under different illumination conditions — is often considered to be an ‘achievement’ of the perceptual system. […] The alternative starting point for understanding lightness perception is to ask whether there is something that remains constant or invariant in the patterns of light reaching the eye with changes of illumination. In this case, it is the relative amount of light reflected off different surfaces. Consider two surfaces that have different reflectances—two shades of grey. The actual amount of light reflected off each of the surfaces will vary with changes in the illumination but the relative amount of light reflected off the two surfaces remains the same. This shows that lightness perception is necessarily a spatial task and hence a task that cannot be solved by considering one particular surface alone. Note that the relative amount of light reflected off different surfaces does not tell us about the absolute lightnesses of different surfaces—only their relative lightnesses […] Can our perception of lightness be fooled? Yes, of course it can and the ways in which we make mistakes in our perception of the lightnesses of surfaces can tell us much about the characteristics of the underlying processes.”
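
A quick numerical illustration of the invariance described above (my own sketch, not the author's): if the luminance reaching the eye is approximated as reflectance times illumination, the ratio between two surfaces does not depend on how strong the illumination is.

    # Luminance reaching the eye ~ surface reflectance x illumination intensity.
    # The ratio of luminances from two surfaces is independent of illumination.

    def luminance(reflectance, illumination):
        return reflectance * illumination

    dark_grey, light_grey = 0.2, 0.6   # reflectances (20% and 60%)

    for illumination in (100.0, 1000.0, 10000.0):   # dim to bright, arbitrary units
        l1 = luminance(dark_grey, illumination)
        l2 = luminance(light_grey, illumination)
        print(f"illumination {illumination:>8.0f}: luminances {l1:>7.0f} / {l2:>7.0f}, ratio {l1/l2:.2f}")

    # The absolute luminances change a hundredfold, but the ratio stays at 0.33,
    # which is why lightness judgements have to rely on spatial comparisons.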

“From a survival point of view, the ability to differentiate objects and surfaces in the world by their ‘colours’ (spectral reflectance characteristics) can be extremely useful […] Most species of mammals, birds, fish, and insects possess several different types of receptor, each of which has a different spectral sensitivity function […] having two types of receptor with different spectral sensitivities is the minimum necessary for colour vision. This is referred to as dichromacy and the majority of mammals are dichromats with the exception of the Old World monkeys and humans. […] The only difference between lightness and colour perception is that in the latter case we have to consider the way a surface selectively reflects (and absorbs) different wavelengths, rather than just a surface’s average reflectance over all wavelengths. […] The similarities between the tasks of extracting lightness and colour information mean that we can ask a similar question about colour perception [as we did about lightness perception] – what is the invariant information that could specify the reflectance characteristic of a surface? […] The information that is invariant under changes of spectral illumination is the relative amounts of long, medium, and short wavelength light reaching our eyes from different surfaces in the scene. […] the successful identification and discrimination of coloured surfaces is dependent on making spatial comparisons between the amounts of short, medium, and long wavelength light reaching our eyes from different surfaces. As with lightness perception, colour perception is necessarily a spatial task. It follows that if a scene is illuminated by the light of just a single wavelength, the appropriate spatial comparisons cannot be made. This can be demonstrated by illuminating a real-world scene containing many different coloured objects with yellow, sodium light that contains only a single wavelength. All objects, whatever their ‘colours’, will only reflect back to the eye different intensities of that sodium light and hence there will only be absolute but no relative differences between the short, medium, and long wavelength lightness records. There is a similar, but less dramatic, effect on our perception of colour when the spectral characteristics of the illumination are restricted to just a few wavelengths, as is the case with fluorescent lighting.”


“Consider a single receptor mechanism, such as a rod receptor in the human visual system, that responds to a limited range of wavelengths—referred to as the receptor’s spectral sensitivity function […]. This hypothetical receptor is more sensitive to some wavelengths (around 550 nm) than others and we might be tempted to think that a single type of receptor could provide information about the wavelength of the light reaching the receptor. This is not the case, however, because an increase or decrease in the response of that receptor could be due to either a change in the wavelength or an increase or decrease in the amount of light reaching the receptor. In other words, the output of a given receptor or receptor type perfectly confounds changes in wavelength with changes in intensity because it has only one way of responding — that is, more or less. This is Rushton’s Principle of Univariance — there is only one way of varying or one degree of freedom. […] On the other hand, if we consider a visual system with two different receptor types, one more sensitive to longer wavelengths (L) and the other more sensitive to shorter wavelengths (S), there are two degrees of freedom in the system and thus the possibility of signalling our two independent variables — wavelength and intensity […] it is quite possible to have a colour visual system that is based on just two receptor types. Such a colour visual system is referred to as dichromatic.”
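
The univariance argument can be illustrated numerically. In the toy sketch below (my own; the Gaussian sensitivity curves and peak wavelengths are invented, not real cone data), a single receptor gives identical outputs for two very different wavelength/intensity combinations, whereas the ratio of two receptor types' outputs depends on wavelength but not on intensity.

    import math

    def sensitivity(wavelength_nm, peak_nm, width_nm=80.0):
        # Made-up Gaussian spectral sensitivity curve (illustrative only).
        return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

    def response(wavelength_nm, intensity, peak_nm):
        # Univariance: the receptor's output is a single number.
        return intensity * sensitivity(wavelength_nm, peak_nm)

    # A single receptor type (peak ~550 nm): two different stimuli, same output.
    print(response(550, 1.0, peak_nm=550))                          # 1.0
    print(response(630, 1.0 / sensitivity(630, 550), peak_nm=550))  # also 1.0

    # Two receptor types (S peaking at 440 nm, L at 560 nm): the ratio of their
    # outputs varies with wavelength but is unchanged by intensity.
    for wavelength, intensity in [(500, 1.0), (500, 5.0), (600, 1.0)]:
        s = response(wavelength, intensity, peak_nm=440)
        l = response(wavelength, intensity, peak_nm=560)
        print(wavelength, intensity, round(l / s, 2))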

“So why is the human visual system trichromatic? The answer can be found in a phenomenon known as metamerism. So far, we have restricted our discussion to the effect of a single wavelength on our dichromatic visual system: for example, a single wavelength of around 550 nm that stimulated both the long and short receptor types about equally […]. But what would happen if we stimulated our dichromatic system with light of two different wavelengths at the same time — one long wavelength and one short wavelength? With a suitable choice of wavelengths, this combination of wavelengths would also have the effect of stimulating the two receptor types about equally […] As a consequence, the output of the system […] with this particular mixture of wavelengths would be indistinguishable from that created by the single wavelength of 550 nm. These two indistinguishable stimulus situations are referred to as metamers and a little thought shows that there would be many thousands of combinations of wavelength that produce the same activity […] in a dichromatic visual system. As a consequence, all these different combinations of wavelengths would be indistinguishable to a dichromatic observer, even though they were produced by very different combinations of wavelengths. […] Is there any way of avoiding the problem of metamerism? The answer is no but we can make things better. If a visual system had three receptor types rather than two, then many of the combinations of wavelengths that produce an identical pattern of activity in two of the mechanisms (L and S) would create a different amount of activity in our third receptor type (M) that is maximally sensitive to medium wavelengths. Hence the number of indistinguishable metameric matches would be significantly reduced but they would never be eliminated. Using the same logic, it follows that a further increase in the number of receptor types (beyond three) would reduce the problem of metamerism even more […]. There would, however, also be a cost. Having more distinct receptor types in a finite-sized retina would increase the average spacing between the receptors of the same type and thus make our acuity for fine detail significantly poorer. There are many species, such as dragonflies, with more than three receptor types in their eyes but the larger number of receptor types typically serves to increase the range of wavelengths to which the animal is sensitive into the infra-red or ultra-violet parts of the spectrum, rather than to reduce the number of metamers. […] the sensitivity of the short wavelength receptors in the human eye only extends to ~540 nm — the S receptors are insensitive to longer wavelengths. This means that human colour vision is effectively dichromatic for combinations of wavelengths above 540 nm. In addition, there are no short wavelength cones in the central fovea of the human retina, which means that we are also dichromatic in the central part of our visual field. The fact that we are unaware of this lack of colour vision is probably due to the fact that our eyes are constantly moving. […] It is […] important to appreciate that the description of the human colour visual system as trichromatic is not a description of the number of different receptor types in the retina – it is a property of the whole visual system.”
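
Continuing with the same kind of made-up sensitivity curves, the sketch below illustrates metamerism as described above: a mixture of a short and a long wavelength can be chosen (by solving a 2x2 linear system) so that it excites the S and L receptors exactly like a single 550 nm light, making the two stimuli indistinguishable to a dichromat, while a third, M receptor would respond differently to them. All numbers are illustrative assumptions, not measured values.

    import math

    def sens(wl, peak, width=80.0):
        # Made-up Gaussian spectral sensitivity curve (illustrative only).
        return math.exp(-((wl - peak) / width) ** 2)

    S_PEAK, M_PEAK, L_PEAK = 440, 530, 560

    # Target stimulus: a single 550 nm light of unit intensity, as seen by S and L.
    target_s, target_l = sens(550, S_PEAK), sens(550, L_PEAK)

    # Mixture of a short (480 nm) and a long (620 nm) wavelength: solve
    # a*S(480) + b*S(620) = target_s and a*L(480) + b*L(620) = target_l.
    s1, s2 = sens(480, S_PEAK), sens(620, S_PEAK)
    l1, l2 = sens(480, L_PEAK), sens(620, L_PEAK)
    det = s1 * l2 - s2 * l1
    a = (target_s * l2 - s2 * target_l) / det
    b = (s1 * target_l - target_s * l1) / det

    # Same S and L responses -> indistinguishable to a dichromat ...
    print(a * s1 + b * s2, target_s)
    print(a * l1 + b * l2, target_l)
    # ... but the M receptor responds differently, so a trichromat can tell them apart.
    print(a * sens(480, M_PEAK) + b * sens(620, M_PEAK), sens(550, M_PEAK))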

“Recent research has shown that although the majority of humans are trichromatic there can be significant differences in the precise matches that individuals make when matching colour patches […] the absence of one receptor type will result in a greater number of colour confusions than normal and this does have a significant effect on an observer’s colour vision. Protanopia is the absence of long wavelength receptors, deuteranopia the absence of medium wavelength receptors, and tritanopia the absence of short wavelength receptors. These three conditions are often described as ‘colour blindness’ but this is a misnomer. We are all colour blind to some extent because we all suffer from colour metamerism and fail to make discriminations that would be very apparent to any biological or machine vision system with a greater number of receptor types. For example, most stomatopod crustaceans (mantis shrimps) have twelve different visual pigments and they also have the ability to detect both linear and circularly polarized light. What I find interesting is that we believe, as trichromats, that we have the ability to discriminate all the possible shades of colour (reflectance characteristics) that exist in our world. […] we are typically unaware of the limitations of our visual systems because we have no way of comparing what we see normally with what would be seen by a ‘better’ visual system.”

“We take it for granted that we are able to segregate the visual input into separate objects and distinguish objects from their backgrounds and we rarely make mistakes except under impoverished conditions. How is this possible? In many cases, the boundaries of objects are defined by changes of luminance and colour and these changes allow us to separate or segregate an object from its background. But luminance and colour changes are also present in the textured surfaces of many objects and therefore we need to ask how it is that our visual system does not mistake these luminance and colour changes for the boundaries of objects. One answer is that object boundaries have special characteristics. In our world, most objects and surfaces are opaque and hence they occlude (cover) the surface of the background. As a consequence, the contours of the background surface typically end—they are ‘terminated’—at the boundary of the occluding object or surface. Quite often, the occluded contours of the background are also revealed at the opposite side of the occluding surface because they are physically continuous. […] The impression of occlusion is enhanced if the occluded contours contain a range of different lengths, widths, and orientations. In the natural world, many animals use colour and texture to camouflage their boundaries as well as to fool potential predators about their identity. […] There is an additional source of information — relative motion — that can be used to segregate a visual scene into objects and their backgrounds and to break any camouflage that might exist in a static view. A moving, opaque object will progressively occlude and dis-occlude (reveal) the background surface so that even a well-camouflaged, moving animal will give away its location. Hence it is not surprising that a very common and successful strategy of many animals is to freeze in order not to be seen. Unless the predator has a sophisticated visual system to break the pattern or colour camouflage, the prey will remain invisible.”

Some links:

Perception.
Ames room. Inverse problem in optics.
Hermann von Helmholtz. Richard Gregory. Irvin Rock. James Gibson. David Marr. Ewald Hering.
Optical flow.
La dioptrique.
Necker cube. Rubin’s vase.
Perceptual constancy. Texture gradient.
Ambient optic array.
Affordance.
Luminance.
Checker shadow illusion.
Shape from shading/Photometric stereo.
Colour vision. Colour constancy. Retinex model.
Cognitive neuroscience of visual object recognition.
Motion perception.
Horace Barlow. Bernhard Hassenstein. Werner E. Reichardt. Sigmund Exner. Jan Evangelista Purkyně.
Phi phenomenon.
Motion aftereffect.
Induced motion.

October 14, 2018 | Biology, Books, Ophthalmology, Physics, Psychology

Oncology (I)

I really disliked the ‘Pocket…’ part of this book, so I’ll mostly try to overlook that aspect in my coverage here as well. This will be hard to do, given the way the book is written – I refer to my goodreads review for details; here I’ll include only one illustrative quote from that review:

“In terms of content, the book probably compares favourably with many significantly longer oncology texts (mainly, but certainly not only, because of the publication date). In terms of readability, it compares unfavourably to an Egyptian translation of Alan Sokal’s 1996 article in Social Text, if it were translated by a 12-year-old dyslexic girl.”

I don’t yet know how much detail I’ll go into when blogging the book; this may end up being the only post about it, or I may decide to post a longer sequence of posts. The book is hard to blog, which is an argument against covering it in detail – and also the reason why I haven’t already blogged it – but some of the content is really, really nice stuff to know, which is a strong argument in favour of covering at least some of the material here. The book contains a lot of material, so regardless of the level of detail of my future coverage, a lot of interesting content will of necessity be left out.

My coverage below includes some observations and links related to the first 100 pages of the book.

“Understanding Radiation Response: The 4 Rs of Radiobiology
Repair of sublethal damage
Reassortment of cells w/in the cell cycle
Repopulation of cells during the course of radiotherapy
Reoxygenation of hypoxic cells […]

*Oxygen enhances DNA damage induced by free radicals, thereby facilitating the indirect action of IR [ionizing radiation, US] *Biologically equivalent dose can vary by a factor of 2–3 depending upon the presence or absence of oxygen (referred to as the oxygen enhancement ratio) *Poorly oxygenated postoperative beds frequently require higher doses of RT than preoperative RT [radiation therapy] […] Chemotherapy is frequently used sequentially or concurrently w/radiotherapy to maximize therapeutic benefit. This has improved pt outcomes although also a/w ↑ overall tox. […] [Many chemotherapeutic agents] show significant synergy with RT […] Mechanisms for synergy vary widely: Include cell cycle effects, hypoxic cell sensitization, & modulation of the DNA damage response”.

“Specific dose–volume relationships have been linked to the risk of late organ tox. […] *Dose, volume, underlying genetics, and age of the pt at the time of RT are critical determinants of the risk for 2° malignancy *The likelihood of 2° CA is correlated w/dose, but there is no threshold dose below which there is no additional risk of 2° malignancy *Latent period for radiation-induced solid tumors is generally between 10 and 60 y […]. Latent period for leukemias […] is shorter — peak between 5 & 7 y.”

“The immune system plays an important role in CA surveillance; Rx’s that modulate & amplify the immune system are referred to as immunotherapies […] tumors escape the immune system via loss of molecules on tumor cells important for immune activation […]; tumors can secrete immunosuppressing cytokines (IL-10 & TGF-β) & downregulate IFN-γ; in addition, tumors often express nonmutated self-Ag, w/c the immune system will, by definition, not react against; tumors can express molecules that inhibit T-cell function […] Ubiquitous CD47 (Don’t eat me signal) with ↑ expression on tumor cells mediates escape from phagocytosis. *Tumor microenvironment — immune cells are found in tumors, the exact composition of these cells has been a/w [associated with, US] pt outcomes; eg, high concentration of tumor-infiltrating lymphocytes (CD8+ cells) are a/w better outcomes & ↑ response to chemotherapy, Tregs & myeloid-derived suppressor cells are a/w worse outcomes, the exact role of Th17 in tumors is still being elucidated; the milieu of cytokines & chemokines also plays a role in outcome; some cytokines (VEGF, IL-1, IL-8) lead to endothelial cell proliferation, migration, & activation […] Expression of PD-L1 in tumor microenvironment can be indicator of improved likelihood of response to immune checkpoint blockade. […] Tumor mutational load correlates w/increased response to immunotherapy (NEJM; 2014;371:2189.).”

“Over 200 hereditary CA susceptibility syndromes, most are rare […]. Inherited CAs arise from highly penetrant germline mts [mutations, US]; “familial” CAs may be caused by interaction of low-penetrance genes, gene–environment interactions, or both. […] Genetic testing should be done based on individual’s probability of being a mt carrier & after careful discussion & informed consent”.

“Pharmacogenetics: Effect of heritable genes on response to drugs. Study of single genes & interindividual differences in drug metabolizing enzymes. Pharmacogenomics: Effect of inherited & acquired genetic variation on drug response. Study of the functions & interactions of all genes in the genome & how the overall variability of drug response may be used to predict the right tx in individual pts & to design new drugs. Polymorphisms: Common variations in a DNA sequence that may lead to ↓ or ↑ activity of the encoded gene (SNP, micro- & minisatellites). SNPs: Single nucleotide polymorphisms that may cause an amino acid exchange in the encoded protein, account for >90% of genetic variation in the human genome.”

“Tumor lysis syndrome [TLS] is an oncologic emergency caused by electrolyte abnormalities a/w spontaneous and/or tx-induced cell death that can be potentially fatal. […] 4 key electrolyte abnormalities 2° to excessive tumor/cell lysis: *Hyperkalemia *Hyperphosphatemia *Hypocalcemia *Hyperuricemia (2° to catabolism of nucleic acids) […] Common Malignancies Associated with a High Risk of Developing TLS in Adult Patients [include] *Acute leukemias [and] *High-grade lymphomas such as Burkitt lymphoma & DLBCL […] [Disease] characteristics a/w TLS risk: Rapidly progressive, chemosensitive, myelo- or lymphoproliferative [disease] […] [Patient] characteristics a/w TLS risk: *Baseline impaired renal function, oliguria, exposure to nephrotoxins, hyperuricemia *Volume depletion/inadequate hydration, acidic urine”.

“Hypercalcemia [affects] ~10–30% of all pts w/malignancy […] Symptoms: Polyuria/polydipsia, intravascular volume depletion, AKI, lethargy, AMS [Altered Mental Status, US], rarely coma/seizures; N/V [nausea/vomiting, US] […] Osteolytic Bone Lesions [are seen in] ~20% of all hyperCa of malignancy […] [Treat] underlying malignancy, only way to effectively treat, all other tx are temporizing”.

“National Consensus Project definition: Palliative care means patient and family-centered care that optimizes quality of life by anticipating, preventing, and treating suffering. Palliative care throughout the continuum of illness involves addressing physical, intellectual, emotional, social, and spiritual needs to facilitate patient autonomy, access to information, and choice.” […] *Several RCTs have supported the integration of palliative care w/oncologic care, but specific interventions & models of care have varied. Expert panels at NCCN & ASCO recently reviewed the data to release evidence-based guidelines. *NCCN guidelines (2016): “Palliative care should be initiated by the primary oncology team and then augmented by collaboration with an interdisciplinary team of palliative care experts… All cancer patients should be screened for palliative care needs at their initial visit, at appropriate intervals, and as clinically indicated.” *ASCO guideline update (2016): “Inpatients and outpatients with advanced cancer should receive dedicated palliative care services, early in the disease course, concurrent with active tx. Referral of patients to interdisciplinary palliative care teams is optimal […] Essential Components of Palliative Care (ASCO) *Rapport & relationship building w/pts & family caregivers *Symptom, distress, & functional status mgmt (eg, pain, dyspnea, fatigue, sleep disturbance, mood, nausea, or constipation) *Exploration of understanding & education about illness & prognosis *Clarification of tx goals *Assessment & support of coping needs (eg, provision of dignity therapy) *Assistance w/medical decision making *Coordination w/other care providers *Provision of referrals to other care providers as indicated […] Useful Communication Tips *Use open-ended questions to elicit pt concerns *Clarify how much information the pt would like to know […] Focus on what can be done (not just what can’t be done) […] Remove the phrase “do everything” from your medical vocabulary […] Redefine hope by supporting realistic & achievable goals […] make empathy explicit”.

Some links:

Radiation therapy.
Brachytherapy.
External beam radiotherapy.
Image-guided radiation therapy.
Stereotactic Radiosurgery.
Total body irradiation.
Cancer stem cell.
Cell cycle.
Carcinogenesis. Oncogene. Tumor suppressor gene. Principles of Cancer Therapy: Oncogene and Non-oncogene Addiction.
Cowden syndrome. Peutz–Jeghers syndrome. Familial Atypical Multiple Mole Melanoma Syndrome. Li–Fraumeni syndrome. Lynch syndrome. Turcot syndrome. Muir–Torre syndrome. Von Hippel–Lindau disease. Gorlin syndrome. Werner syndrome. Birt–Hogg–Dubé syndrome. Neurofibromatosis type I. Neurofibromatosis type II.
Knudson hypothesis.
DNA sequencing.
Cytogenetics.
Fluorescence in situ hybridization.
CAR T Cell therapy.
Antimetabolite. Alkylating antineoplastic agent. Antimicrotubule agents/mitotic inhibitors. Chemotherapeutic agents. Topoisomerase inhibitor. Monoclonal antibodies. Bisphosphonates. Proteasome inhibitors. [The book covers all of these agents, and others I for one reason or another decided not to include, in great detail, listing many different types of agents and including notes on dosing, pharmacokinetics & pharmacodynamics, associated adverse events and drug interactions etc. These parts of the book were very interesting, but they are impossible to blog – US]
Syndrome of inappropriate antidiuretic hormone secretion.
Acute lactic acidosis (“Often seen w/liver mets or rapidly dividing heme malignancies […] High mortality despite aggressive tx [treatment]”).
Superior vena cava syndrome.

October 12, 2018 | Biology, Books, Cancer/oncology, Genetics, Immunology, Medicine, Pharmacology

Principles of memory (II)

I have added a few more quotes from the book below:

“Watkins and Watkins (1975, p. 443) noted that cue overload is “emerging as a general principle of memory” and defined it as follows: “The efficiency of a functional retrieval cue in effecting recall of an item declines as the number of items it subsumes increases.” As an analogy, think of a person’s name as a cue. If you know only one person named Katherine, the name by itself is an excellent cue when asked how Katherine is doing. However, if you also know Cathryn, Catherine, and Kathryn, then it is less useful in specifying which person is the focus of the question. More formally, a number of studies have shown experimentally that memory performance systematically decreases as the number of items associated with a particular retrieval cue increases […] In many situations, a decrease in memory performance can be attributed to cue overload. This may not be the ultimate explanation, as cue overload itself needs an explanation, but it does serve to link a variety of otherwise disparate findings together.”

“Memory, like all other cognitive processes, is inherently constructive. Information from encoding and cues from retrieval, as well as generic information, are all exploited to construct a response to a cue. Work in several areas has long established that people will use whatever information is available to help reconstruct or build up a coherent memory of a story or an event […]. However, although these strategies can lead to successful and accurate remembering in some circumstances, the same processes can lead to distortion or even confabulation in others […]. There are a great many studies demonstrating the constructive and reconstructive nature of memory, and the literature is quite well known. […] it is clear that recall of events is deeply influenced by a tendency to reconstruct them using whatever information is relevant and to repair holes or fill in the gaps that are present in memory with likely substitutes. […] Given that memory is a reconstructive process, it should not be surprising to find that there is a large literature showing that people have difficulty distinguishing between memories of events that happened and memories of events that did not happen […]. In a typical reality monitoring experiment […], subjects are shown pictures of common objects. Every so often, instead of a picture, the subjects are shown the name of an object and are asked to create a mental image of the object. The test involves presenting a list of object names, and the subject is asked to judge whether they saw the item (i.e., judge the memory as “real”) or whether they saw the name of the object and only imagined seeing it (i.e., judge the memory as “imagined”). People are more likely to judge imagined events as real than real events as imagined. The likelihood that a memory will be judged as real rather than imagined depends upon the vividness of the memory in terms of its sensory quality, detail, plausibility, and coherence […]. What this means is that there is not a firm line between memories for real and imagined events: if an imagined event has enough qualitative features of a real event it is likely to be judged as real.”

“One hallmark of reconstructive processes is that in many circumstances they aid in memory retrieval because they rely on regularities in the world. If we know what usually happens in a given circumstance, we can use that information to fill in gaps that may be present in our memory for that episode. This will lead to a facilitation effect in some cases but will lead to errors in cases in which the most probable response is not the correct one. However, if we take this standpoint, we must predict that the errors that are made when using reconstructive processes will not be random; in fact, they will display a bias toward the most likely event. This sort of mechanism has been demonstrated many times in studies of schema-based representations […], and language production errors […] but less so in immediate recall. […] Each time an event is recalled, the memory is slightly different. Because of the interaction between encoding and retrieval, and because of the variations that occur between two different retrieval attempts, the resulting memories will always differ, even if only slightly.”

“In this chapter we discuss the idea that a task or a process can be a “pure” measure of memory, without contamination from other hypothetical memory stores or structures, and without contributions from other processes. Our impurity principle states that tasks and processes are not pure, and therefore one cannot separate out the contributions of different memory stores by using tasks thought to tap only one system; one cannot count on subjects using only one process for a particular task […]. Our principle follows from previous arguments articulated by Kolers and Roediger (1984) and Crowder (1993), among others, that because every event recruits slightly different encoding and retrieval processes, there is no such thing as “pure” memory. […] The fundamental issue is the extent to which one can determine the contribution of a particular memory system or structure or process to performance on a particular memory task. There are numerous ways of assessing memory, and many different ways of classifying tasks. […] For example, if you are given a word fragment and asked to complete it with the first word that pops in your head, you are free to try a variety of strategies. […] Very different types of processing can be used by subjects even when given the same type of test or cue. People will use any and all processes to help them answer a question.”

“A free recall test typically provides little environmental support. A list of items is presented, and the subject is asked to recall which items were on the list. […] The experimenter simply says, “Recall the words that were on the list,” […] A typical recognition test provides more environmental support. Although a comparable list of items might have been presented, and although the subject is asked again about memory for an item in context, the subject is provided with a more specific cue, and knows exactly how many items to respond to. Some tests, such as word fragment completion and general knowledge questions, offer more environmental support. These tests provide more targeted cues, and often the cues are unique […] One common processing distinction involves the aspects of the stimulus that are focused on or are salient at encoding and retrieval: Subjects can focus more on an item’s physical appearance (data driven processing) or on an item’s meaning (conceptually driven processing […]). In general, performance on tasks such as free recall that offer little environmental support is better if the rememberer uses conceptual rather than perceptual processing at encoding. Although there is perceptual information available at encoding, there is no perceptual information provided at test so data-driven processes tend not to be appropriate. Typical recognition and cued-recall tests provide more specific cues, and as such, data-driven processing becomes more appropriate, but these tasks still require the subject to discriminate which items were presented in a particular specific context; this is often better accomplished using conceptually driven processing. […] In addition to distinctions between data driven and conceptually driven processing, another common distinction is between an automatic retrieval process, which is usually referred to as familiarity, and a nonautomatic process, usually called recollection […]. Additional distinctions abound. Our point is that very different types of processing can be used by subjects on a particular task, and that tasks can differ from one another on a variety of different dimensions. In short, people can potentially use almost any combination of processes on any particular task.”

“Immediate serial recall is basically synonymous with memory span. In one of the first reviews of this topic, Blankenship (1938, p. 2) noted that “memory span refers to the ability of an individual to reproduce immediately, after one presentation, a series of discrete stimuli in their original order.” The primary use of memory span was not so much to measure the capacity of a short-term memory system, but rather as a measure of intellectual abilities […]. Early on, however, it was recognized that memory span, whatever it was, varied as a function of a large number of variables […], and could even be increased substantially by practice […]. Nonetheless, memory span became increasingly seen as a measure of the capacity of a short-term memory system that was distinct from long-term memory. Generally, most individuals can recall about 7 ± 2 items (Miller, 1956) or the number of items that can be pronounced in about 2 s (Baddeley, 1986) without making any mistakes. Does immediate serial recall (or memory span) measure the capacity of short-term (or working) memory? The currently available evidence suggests that it does not. […] The main difficulty in attempting to construct a “pure” measure of immediate memory capacity is that […] the influence of previously acquired knowledge is impossible to avoid. There are numerous contributions of long-term knowledge not only to memory span and immediate serial recall […] but to other short-term tasks as well […] Our impurity principle predicts that when distinctions are made between types of processing (e.g., conceptually driven versus data driven; familiarity versus recollection; automatic versus conceptual; item specific versus relational), each of those individual processes will not be pure measures of memory.”

“Over the past 20 years great strides have been made in noninvasive techniques for measuring brain activity. In particular, PET and fMRI studies have allowed us to obtain an on-line glimpse into the hemodynamic changes that occur in the brain as stimuli are being processed, memorized, manipulated, and recalled. However, many of these studies rely on subtractive logic that explicitly assumes that (a) there are different brain areas (structures) subserving different cognitive processes and (b) we can subtract out background or baseline activity and determine which areas are responsible for performing a particular task (or process) by itself. There have been some serious challenges to these underlying assumptions […]. A basic assumption is that there is some baseline activation that is present all of the time and that the baseline is built upon by adding more activation. Thus, when the baseline is subtracted out, what is left is a relatively pure measure of the brain areas that are active in completing the higher-level task. One assumption of this method is that adding a second component to the task does not affect the simple task. However, this assumption does not always hold true. […] Even if the additive factors logic were correct, these studies often assume that a task is a pure measure of one process or another. […] Again, the point is that humans will utilize whatever resources they can recruit in order to perform a task. Individuals using different retrieval strategies (e.g., visualization, verbalization, lax or strict decision criteria, etc.) show very different patterns of brain activation even when performing the same memory task (Miller & Van Horn, 2007). This makes it extremely dangerous to assume that any task is made up of purely one process. Even though many researchers involved in neuroimaging do not make task purity assumptions, these examples “illustrate the widespread practice in functional neuroimaging of interpreting activations only in terms of the particular cognitive function being investigated (Cabeza et al., 2003, p. 390).” […] We do not mean to suggest that these studies have no value — they clearly do add to our knowledge of how cognitive functioning works — but, instead, would like to urge more caution in the interpretation of localization studies, which are sometimes taken as showing that an activated area is where some unique process takes place.”

October 6, 2018 | Biology, Books, Psychology

Circadian Rhythms (I)

“Circadian rhythms are found in nearly every living thing on earth. They help organisms time their daily and seasonal activities so that they are synchronized to the external world and the predictable changes in the environment. These biological clocks provide a cross-cutting theme in biology and they are incredibly important. They influence everything, from the way growing sunflowers track the sun from east to west, to the migration timing of monarch butterflies, to the morning peaks in cardiac arrest in humans. […] Years of work underlie most scientific discoveries. Explaining these discoveries in a way that can be understood is not always easy. We have tried to keep the general reader in mind but in places perseverance on the part of the reader may be required. In the end we were guided by one of our reviewers, who said: ‘If you want to understand calculus you have to show the equations.’”

The above quote is from the book’s foreword. I really liked this book and I was close to giving it five stars on goodreads. Below I have added some observations and links related to the first few chapters of the book’s coverage (as noted in my review on goodreads, the second half of the book is somewhat technical, and I’ve not yet decided if I’ll be blogging that part of the book in much detail, if at all).

“There have been over a trillion dawns and dusks since life began some 3.8 billion years ago. […] This predictable daily solar cycle results in regular and profound changes in environmental light, temperature, and food availability as day follows night. Almost all life on earth, including humans, employs an internal biological timer to anticipate these daily changes. The possession of some form of clock permits organisms to optimize physiology and behaviour in advance of the varied demands of the day/night cycle. Organisms effectively ‘know’ the time of day. Such internally generated daily rhythms are called ‘circadian rhythms’ […] Circadian rhythms are embedded within the genomes of just about every plant, animal, fungus, algae, and even cyanobacteria […] Organisms that use circadian rhythms to anticipate the rotation of the earth are thought to have a major advantage over both their competitors and predators. For example, it takes about 20–30 minutes for the eyes of fish living among coral reefs to switch vision from the night to daytime state. A fish whose eyes are prepared in advance for the coming dawn can exploit the new environment immediately. The alternative would be to wait for the visual system to adapt and miss out on valuable activity time, or emerge into a world where it would be more difficult to avoid predators or catch prey until the eyes have adapted. Efficient use of time to maximize survival almost certainly provides a large selective advantage, and consequently all organisms seem to be led by such anticipation. A circadian clock also stops everything happening within an organism at the same time, ensuring that biological processes occur in the appropriate sequence or ‘temporal framework’. For cells to function properly they need the right materials in the right place at the right time. Thousands of genes have to be switched on and off in order and in harmony. […] All of these processes, and many others, take energy and all have to be timed to best effect by the millisecond, second, minute, day, and time of year. Without this internal temporal compartmentalization and its synchronization to the external environment our biology would be in chaos. […] However, to be biologically useful, these rhythms must be synchronized or entrained to the external environment, predominantly by the patterns of light produced by the earth’s rotation, but also by other rhythmic changes within the environment such as temperature, food availability, rainfall, and even predation. These entraining signals, or time-givers, are known as zeitgebers. The key point is that circadian rhythms are not driven by an external cycle but are generated internally, and then entrained so that they are synchronized to the external cycle.”

“It is worth emphasizing that the concept of an internal clock, as developed by Richter and Bünning, has been enormously powerful in furthering our understanding of biological processes in general, providing a link between our physiological understanding of homeostatic mechanisms, which try to maintain a constant internal environment despite unpredictable fluctuations in the external environment […], versus the circadian system which enables organisms to anticipate periodic changes in the external environment. The circadian system provides a predictive 24-hour baseline in physiological parameters, which is then either defended or temporarily overridden by homeostatic mechanisms that accommodate an acute environmental challenge. […] Zeitgebers and the entrainment pathway synchronize the internal day to the astronomical day, usually via the light/dark cycle, and multiple output rhythms in physiology and behaviour allow appropriately timed activity. The multitude of clocks within a multicellular organism can all potentially tick with a different phase angle […], but usually they are synchronized to each other and by a central pacemaker which is in turn entrained to the external world via appropriate zeitgebers. […] Most biological reactions vary greatly with temperature and show a Q10 temperature coefficient of about 2 […]. This means that the biological process or reaction rate doubles as a consequence of increasing the temperature by 10°C up to a maximum temperature at which the biological reaction stops. […] a 10°C temperature increase doubles muscle performance. By contrast, circadian rhythms exhibit a Q10 close to 1 […] Clocks without temperature compensation are useless. […] Although we know that circadian clocks show temperature compensation, and that this phenomenon is a conserved feature across all circadian rhythms, we have little idea how this is achieved.”
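
For reference, the Q10 coefficient mentioned above is conventionally computed as Q10 = (R2 / R1) ** (10 / (T2 - T1)), where R1 and R2 are the reaction (or cycling) rates at temperatures T1 and T2. A small sketch with illustrative numbers (the circadian period values are made up for the example):

    def q10(rate1, temp1_c, rate2, temp2_c):
        # Q10 temperature coefficient: factor by which a rate changes per 10 degC rise.
        return (rate2 / rate1) ** (10.0 / (temp2_c - temp1_c))

    # A typical biochemical reaction: the rate doubles between 20 degC and 30 degC.
    print(q10(rate1=1.0, temp1_c=20.0, rate2=2.0, temp2_c=30.0))                # -> 2.0

    # A temperature-compensated circadian clock: the free-running period (and hence
    # the cycling rate) barely changes over the same range (illustrative values).
    print(q10(rate1=1.0 / 24.2, temp1_c=20.0, rate2=1.0 / 24.0, temp2_c=30.0))  # -> ~1.01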

“The systematic study of circadian rhythms only really started in the 1950s, and the pioneering studies of Colin Pittendrigh brought coherence to this emerging new discipline. […] From [a] mass of emerging data, Pittendrigh had key insights and defined the essential properties of circadian rhythms across all life. Namely that: all circadian rhythms are endogenous and show near 24-hour rhythms in a biological process (biochemistry, physiology, or behaviour); they persist under constant conditions for several cycles; they are entrained to the astronomical day via synchronizing zeitgebers; and they show temperature compensation such that the period of the oscillation does not alter appreciably with changes in environmental temperature. Much of the research since the 1950s has been the translation of these formalisms into biological structures and processes, addressing such questions as: What is the clock and where is it located within the intracellular processes of the cell? How can a set of biochemical reactions produce a regular self-sustaining rhythm that persists under constant conditions and has a period of about 24 hours? How is this internal oscillation synchronized by zeitgebers such as light to the astronomical day? Why is the clock not altered by temperature, speeding up when the environment gets hotter and slowing down in the cold? How is the information of the near 24-hour rhythm communicated to the rest of the organism?”

“There have been hundreds of studies showing that a broad range of activities, both physical and cognitive, vary across the 24-hour day: tooth pain is lowest in the morning; proofreading is best performed in the evening; labour pains usually begin at night and most natural births occur in the early morning hours. The accuracy of short and long badminton serves is higher in the afternoon than in the morning and evening. Accuracy of first serves in tennis is better in the morning and afternoon than in the evening, although speed is higher in the evening than in the morning. Swimming velocity over 50 metres is higher in the evening than in the morning and afternoon. […] The majority of studies report that performance increases from morning to afternoon or evening. […] Typical ‘optimal’ times of day for physical or cognitive activity are gathered routinely from population studies […]. However, there is considerable individual variation. Peak performance will depend upon age, chronotype, time zone, and for behavioural tasks how many hours the participant has been awake when conducting the task, and even the nature of the task itself. As a general rule, the circadian modulation of cognitive functioning results in an improved performance over the day for younger adults, while in older subjects it deteriorates. […] On average the circadian rhythms of an individual in their late teens will be delayed by around two hours compared with an individual in their fifties. As a result the average teenager experiences considerable social jet lag, and asking a teenager to get up at 07.00 in the morning is the equivalent of asking a 50-year-old to get up at 05.00 in the morning.”

“Day versus night variations in blood pressure and heart rate are among the best-known circadian rhythms of physiology. In humans, there is a 24-hour variation in blood pressure with a sharp rise before awakening […]. Many cardiovascular events, such as sudden cardiac death, myocardial infarction, and stroke, display diurnal variations with an increased incidence between 06.00 and 12.00 in the morning. Both atrial and ventricular arrhythmias appear to exhibit circadian patterning as well, with a higher frequency during the day than at night. […] Myocardial infarction (MI) is two to three times more frequent in the morning than at night. In the early morning, the increased systolic blood pressure and heart rate results in an increased energy and oxygen demand by the heart, while the vascular tone of the coronary artery rises in the morning, resulting in a decreased coronary blood flow and oxygen supply. This mismatch between supply and demand underpins the high frequency of onset of MI. Plaque blockages are more likely to occur in the morning as platelet surface activation markers have a circadian pattern producing a peak of thrombus formation and platelet aggregation. The resulting hypercoagulability partially underlies the morning onset of MI.”

“A critical area where time of day matters to the individual is the optimum time to take medication, a branch of medicine that has been termed ‘chronotherapy’. Statins are a family of cholesterol-lowering drugs which inhibit HMG-CoA reductase (HMGCR) […] HMGCR is under circadian control and is highest at night. Hence those statins with a short half-life, such as simvastatin and lovastatin, are most effective when taken before bedtime. In another clinical domain entirely, recent studies have shown that anti-flu vaccinations given in the morning provoke a stronger immune response than those given in the afternoon. The idea of using chronotherapy to improve the efficacy of anti-cancer drugs has been around for the best part of 30 years. […] In experimental models more than thirty anti-cancer drugs have been found to vary in toxicity and efficacy by as much as 50 per cent as a function of time of administration. Although Lévi and others have shown the advantages to treating individual patients by different timing regimes, few hospitals have taken it up. One reason is that the best time to apply many of these treatments is late in the day or during the night, precisely when most hospitals lack the infrastructure and personnel to deliver such treatments.”

“Flying across multiple time zones and shift work has significant economic benefits, but the costs in terms of ill health are only now becoming clear. Sleep and circadian rhythm disruption (SCRD) is almost always associated with poor health. […] The impact of jet lag has long been known by elite athletes […] even when superbly fit individuals fly across time zones there is a very prolonged disturbance of circadian-driven rhythmic physiology. […] Horses also suffer from jet lag. […] Even bees can get jet lag. […] The misalignments that occur as a result of the occasional transmeridian flight are transient. Shift working represents a chronic misalignment. […] Nurses are one of the best-studied groups of night shift workers. Years of shift work in these individuals has been associated with a broad range of health problems including type II diabetes, gastrointestinal disorders, and even breast and colorectal cancers. Cancer risk increases with the number of years of shift work, the frequency of rotating work schedules, and the number of hours per week working at night [For people who are interested to know more about this, I previously covered a text devoted exclusively to these topics here and here.]. The correlations are so strong that shift work is now officially classified as ‘probably carcinogenic [Group 2A]’ by the World Health Organization. […] the partners and families of night shift workers need to be aware that mood swings, loss of empathy, and irritability are common features of working at night.”

“There are some seventy sleep disorders recognized by the medical community, of which four have been labelled as ‘circadian rhythm sleep disorders’ […] (1) Advanced sleep phase disorder (ASPD) […] is characterized by difficulty staying awake in the evening and difficulty staying asleep in the morning. Typically individuals go to bed and rise about three or more hours earlier than the societal norm. […] (2) Delayed sleep phase disorder (DSPD) is a far more frequent condition and is characterized by a 3-hour delay or more in sleep onset and offset and is a sleep pattern often found in some adolescents and young adults. […] ASPD and DSPD can be considered as pathological extremes of morning or evening preferences […] (3) Freerunning or non-24-hour sleep/wake rhythms occur in blind individuals who have either had their eyes completely removed or who have no neural connection from the retina to the brain. These people are not only visually blind but are also circadian blind. Because they have no means of detecting the synchronizing light signals they cannot reset their circadian rhythms, which freerun with a period of about 24 hours and 10 minutes. So, after six days, internal time is on average 1 hour behind environmental time. (4) Irregular sleep timing has been observed in individuals who lack a circadian clock as a result of a tumour in their anterior hypothalamus […]. Irregular sleep timing is [also] commonly found in older people suffering from dementia. It is an extremely important condition because one of the major factors in caring for those with dementia is the exhaustion of the carers which is often a consequence of the poor sleep patterns of those for whom they are caring. Various protocols have been attempted in nursing homes using increased light in the day areas and darkness in the bedrooms to try and consolidate sleep. Such approaches have been very successful in some individuals […] Although insomnia is the commonly used term to describe sleep disruption, technically insomnia is not a ‘circadian rhythm sleep disorder’ but rather a general term used to describe irregular or disrupted sleep. […] Insomnia is described as a ‘psychophysiological’ condition, in which mental and behavioural factors play predisposing, precipitating, and perpetuating roles. The factors include anxiety about sleep, maladaptive sleep habits, and the possibility of an underlying vulnerability in the sleep-regulating mechanism. […] Even normal ‘healthy ageing’ is associated with both circadian rhythm sleep disorders and insomnia. Both the generation and regulation of circadian rhythms have been shown to become less robust with age, with blunted amplitudes and abnormal phasing of key physiological processes such as core body temperature, metabolic processes, and hormone release. Part of the explanation may relate to a reduced light signal to the clock […]. In the elderly, the photoreceptors of the eye are often exposed to less light because of the development of cataracts and other age-related eye disease. Both these factors have been correlated with increased SCRD.”

“Circadian rhythm research has mushroomed in the past twenty years, and has provided a much greater understanding of the impact of both imposed and illness-related SCRD. We now appreciate that our increasingly 24/7 society and social disregard for biological time is having a major impact upon our health. Understanding has also been gained about the relationship between SCRD and a spectrum of different illnesses. SCRD in illness is not simply the inconvenience of being unable to sleep at an appropriate time but is an agent that exacerbates or causes serious health problems.”

Links:

Circadian rhythm.
Acrophase.
Phase (waves). Phase angle.
Jean-Jacques d’Ortous de Mairan.
Heliotropism.
Kymograph.
John Harrison.
Munich Chronotype Questionnaire.
Chronotype.
Seasonal affective disorder. Light therapy.
Parkinson’s disease. Multiple sclerosis.
Melatonin.

August 25, 2018 | Biology, Books, Cancer/oncology, Cardiology, Medicine

Developmental Biology (II)

Below I have included some quotes from the middle chapters of the book and some links related to the topic coverage. As I pointed out earlier, this is an excellent book on these topics.

“Germ cells have three key functions: the preservation of the genetic integrity of the germline; the generation of genetic diversity; and the transmission of genetic information to the next generation. In all but the simplest animals, the cells of the germline are the only cells that can give rise to a new organism. So, unlike body cells, which eventually all die, germ cells in a sense outlive the bodies that produced them. They are, therefore, very special cells […] In order that the number of chromosomes is kept constant from generation to generation, germ cells are produced by a specialized type of cell division, called meiosis, which halves the chromosome number. Unless this reduction by meiosis occurred, the number of chromosomes would double each time the egg was fertilized. Germ cells thus contain a single copy of each chromosome and are called haploid, whereas germ-cell precursor cells and the other somatic cells of the body contain two copies and are called diploid. The halving of chromosome number at meiosis means that when egg and sperm come together at fertilization, the diploid number of chromosomes is restored. […] An important property of germ cells is that they remain pluripotent—able to give rise to all the different types of cells in the body. Nevertheless, eggs and sperm in mammals have certain genes differentially switched off during germ-cell development by a process known as genomic imprinting […] Certain genes in eggs and sperm are imprinted, so that the activity of the same gene is different depending on whether it is of maternal or paternal origin. Improper imprinting can lead to developmental abnormalities in humans. At least 80 imprinted genes have been identified in mammals, and some are involved in growth control. […] A number of developmental disorders in humans are associated with imprinted genes. Infants with Prader-Willi syndrome fail to thrive and later can become extremely obese; they also show mental retardation and mental disturbances […] Angelman syndrome results in severe motor and mental retardation. Beckwith-Wiedemann syndrome is due to a generalized disruption of imprinting on a region of chromosome 11 and leads to excessive foetal overgrowth and an increased predisposition to cancer.”

“Sperm are motile cells, typically designed for activating the egg and delivering their nucleus into the egg cytoplasm. They essentially consist of a nucleus, mitochondria to provide an energy source, and a flagellum for movement. The sperm contributes virtually nothing to the organism other than its chromosomes. In mammals, sperm mitochondria are destroyed following fertilization, and so all mitochondria in the animal are of maternal origin. […] Different organisms have different ways of ensuring fertilization by only one sperm. […] Early development is similar in both male and female mammalian embryos, with sexual differences only appearing at later stages. The development of the individual as either male or female is genetically fixed at fertilization by the chromosomal content of the egg and sperm that fuse to form the fertilized egg. […] Each sperm carries either an X or Y chromosome, while the egg has an X. The genetic sex of a mammal is thus established at the moment of conception, when the sperm introduces either an X or a Y chromosome into the egg. […] In the absence of a Y chromosome, the default development of tissues is along the female pathway. […] Unlike animals, plants do not set aside germ cells in the embryo and germ cells are only specified when a flower develops. Any meristem cell can, in principle, give rise to a germ cell of either sex, and there are no sex chromosomes. The great majority of flowering plants give rise to flowers that contain both male and female sexual organs, in which meiosis occurs. The male sexual organs are the stamens; these produce pollen, which contains the male gamete nuclei corresponding to the sperm of animals. At the centre of the flower are the female sex organs, which consist of an ovary of two carpels, which contain the ovules. Each ovule contains an egg cell.”

“The character of specialized cells such as nerve, muscle, or skin is the result of a particular pattern of gene activity that determines which proteins are synthesized. There are more than 200 clearly recognizable differentiated cell types in mammals. How these particular patterns of gene activity develop is a central question in cell differentiation. Gene expression is under a complex set of controls that include the actions of transcription factors, and chemical modification of DNA. External signals play a key role in differentiation by triggering intracellular signalling pathways that affect gene expression. […] the central feature of cell differentiation is a change in gene expression, which brings about a change in the proteins in the cells. The genes expressed in a differentiated cell include not only those for a wide range of ‘housekeeping’ proteins, such as the enzymes involved in energy metabolism, but also genes encoding cell-specific proteins that characterize a fully differentiated cell: hemoglobin in red blood cells, keratin in skin epidermal cells, and muscle-specific actin and myosin protein filaments in muscle. […] several thousand different genes are active in any given cell in the embryo at any one time, though only a small number of these may be involved in specifying cell fate or differentiation. […] Cell differentiation is known to be controlled by a wide range of external signals but it is important to remember that, while these external signals are often referred to as being ‘instructive’, they are ‘selective’, in the sense that the number of developmental options open to a cell at any given time is limited. These options are set by the cell’s internal state which, in turn, reflects its developmental history. External signals cannot, for example, convert an endodermal cell into a muscle or nerve cell. Most of the molecules that act as developmentally important signals between cells during development are proteins or peptides, and their effect is usually to induce a change in gene expression. […] The same external signals can be used again and again with different effects because the cells’ histories are different. […] At least 1,000 different transcription factors are encoded in the genomes of the fly and the nematode, and as many as 3,000 in the human genome. On average, around five different transcription factors act together at a control region […] In general, it can be assumed that activation of each gene involves a unique combination of transcription factors.”
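
The claim that ‘activation of each gene involves a unique combination of transcription factors’ is easy to make plausible with a quick count: the number of possible five-factor combinations dwarfs the number of genes. A minimal sketch using the figures quoted above (the unordered-combination assumption and the roughly 20,000-gene comparison are mine):

```python
from math import comb

# Number of distinct unordered sets of five transcription factors that can be
# drawn from a repertoire of n factors.
for n_factors in (1_000, 3_000):
    n_combos = comb(n_factors, 5)
    print(f"{n_factors} transcription factors, 5 per control region: "
          f"{n_combos:.2e} possible combinations")
# ~8e12 and ~2e15 combinations respectively -- vastly more than the roughly
# 20,000 protein-coding genes that need a unique regulatory 'address'
```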

“Stem cells involve some special features in relation to differentiation. A single stem cell can divide to produce two daughter cells, one of which remains a stem cell while the other gives rise to a lineage of differentiating cells. This occurs in our skin and gut all the time and also in the production of blood cells. It also occurs in the embryo. […] Embryonic stem (ES) cells from the inner cell mass of the early mammalian embryo when the primitive streak forms, can, in culture, differentiate into a wide variety of cell types, and have potential uses in regenerative medicine. […] it is now possible to make adult body cells into stem cells, which has important implications for regenerative medicine. […] The goal of regenerative medicine is to restore the structure and function of damaged or diseased tissues. As stem cells can proliferate and differentiate into a wide range of cell types, they are strong candidates for use in cell-replacement therapy, the restoration of tissue function by the introduction of new healthy cells. […] The generation of insulin-producing pancreatic β cells from ES cells to replace those destroyed in type 1 diabetes is a prime medical target. Treatments that direct the differentiation of ES cells towards making endoderm derivatives such as pancreatic cells have been particularly difficult to find. […] The neurodegenerative Parkinson disease is another medical target. […] To generate […] stem cells of the patient’s own tissue type would be a great advantage, and the recent development of induced pluripotent stem cells (iPS cells) offers […] exciting new opportunities. […] There is [however] risk of tumour induction in patients undergoing cell-replacement therapy with ES cells or iPS cells; undifferentiated pluripotent cells introduced into the patient could cause tumours. Only stringent selection procedures that ensure no undifferentiated cells are present in the transplanted cell population will overcome this problem. And it is not yet clear how stable differentiated ES cells and iPS cells will be in the long term.”

“In general, the success rate of cloning by body-cell nuclear transfer in mammals is low, and the reasons for this are not yet well understood. […] Most cloned mammals derived from nuclear transplantation are usually abnormal in some way. The cause of failure is incomplete reprogramming of the donor nucleus to remove all the earlier modifications. A related cause of abnormality may be that the reprogrammed genes have not gone through the normal imprinting process that occurs during germ-cell development, where different genes are silenced in the male and female parents. The abnormalities in adults that do develop from cloned embryos include early death, limb deformities and hypertension in cattle, and immune impairment in mice. All these defects are thought to be due to abnormalities of gene expression that arise from the cloning process. Studies have shown that some 5% of the genes in cloned mice are not correctly expressed and that almost half of the imprinted genes are incorrectly expressed.”

“Organ development involves large numbers of genes and, because of this complexity, general principles can be quite difficult to distinguish. Nevertheless, many of the mechanisms used in organogenesis are similar to those of earlier development, and certain signals are used again and again. Pattern formation in development in a variety of organs can be specified by position information, which is specified by a gradient in some property. […] Not surprisingly, the vascular system, including blood vessels and blood cells, is among the first organ systems to develop in vertebrate embryos, so that oxygen and nutrients can be delivered to the rapidly developing tissues. The defining cell type of the vascular system is the endothelial cell, which forms the lining of the entire circulatory system, including the heart, veins, and arteries. Blood vessels are formed by endothelial cells and these vessels are then covered by connective tissue and smooth muscle cells. Arteries and veins are defined by the direction of blood flow as well as by structural and functional differences; the cells are specified as arterial or venous before they form blood vessels but they can switch identity. […] Differentiation of the vascular cells requires the growth factor VEGF (vascular endothelial growth factor) and its receptors, and VEGF stimulates their proliferation. Expression of the Vegf gene is induced by lack of oxygen and thus an active organ using up oxygen promotes its own vascularization. New blood capillaries are formed by sprouting from pre-existing blood vessels and proliferation of cells at the tip of the sprout. […] During their development, blood vessels navigate along specific paths towards their targets […]. Many solid tumours produce VEGF and other growth factors that stimulate vascular development and so promote the tumour’s growth, and blocking new vessel formation is thus a means of reducing tumour growth. […] In humans, about 1 in 100 live-born infants has some congenital heart malformation, while in utero, heart malformation leading to death of the embryo occurs in between 5 and 10% of conceptions.”

“Separation of the digits […] is due to the programmed cell death of the cells between these digits’ cartilaginous elements. The webbed feet of ducks and other waterfowl are simply the result of less cell death between the digits. […] the death of cells between the digits is essential for separating the digits. The development of the vertebrate nervous system also involves the death of large numbers of neurons.”

Links:

Budding.
Gonad.
Down Syndrome.
Fertilization. In vitro fertilisation. Preimplantation genetic diagnosis.
SRY gene.
X-inactivation. Dosage compensation.
Cellular differentiation.
MyoD.
Signal transduction. Enhancer (genetics).
Epigenetics.
Hematopoiesis. Hematopoietic stem cell transplantation. Hemoglobin. Sickle cell anemia.
Skin. Dermis. Fibroblast. Epidermis.
Skeletal muscle. Myogenesis. Myoblast.
Cloning. Dolly.
Organogenesis.
Limb development. Limb bud. Progress zone model. Apical ectodermal ridge. Polarizing region/Zone of polarizing activity. Sonic hedgehog.
Imaginal disc. Pax6. Aniridia. Neural tube.
Branching morphogenesis.
Pistil.
ABC model of flower development.

July 16, 2018 Posted by | Biology, Books, Botany, Cancer/oncology, Diabetes, Genetics, Medicine, Molecular biology, Ophthalmology | Leave a comment

Oceans (II)

In this post I have added some more observations from the book and some more links related to the book’s coverage.

“Almost all the surface waves we observe are generated by wind stress, acting either locally or far out to sea. Although the wave crests appear to move forwards with the wind, this does not occur. Mechanical energy, created by the original disturbance that caused the wave, travels through the ocean at the speed of the wave, whereas water does not. Individual molecules of water simply move back and forth, up and down, in a generally circular motion. […] The greater the wind force, the bigger the wave, the more energy stored within its bulk, and the more energy released when it eventually breaks. The amount of energy is enormous. Over long periods of time, whole coastlines retreat before the pounding waves – cliffs topple, rocks are worn to pebbles, pebbles to sand, and so on. Individual storm waves can exert instantaneous pressures of up to 30,000 kilograms […] per square metre. […] The rate at which energy is transferred across the ocean is the same as the velocity of the wave. […] waves typically travel at speeds of 30-40 kilometres per hour, and […] waves with a greater wavelength will travel faster than those with a shorter wavelength. […] With increasing wind speed and duration over which the wind blows, the wave height, period, and length all increase. The distance over which the wind blows is known as fetch, and is critical in influencing the growth of waves — the greater the area of ocean over which a storm blows, then the larger and more powerful the waves generated. The three stages in wave development are known as sea, swell, and surf. […] The ocean is highly efficient at transmitting energy. Water offers so little resistance to the small orbital motion of water particles in waves that individual wave trains may continue for thousands of kilometres. […] When the wave train encounters shallow water — say 50 metres for a 100-metre wavelength — the waves first feel the bottom and begin to slow down in response to frictional resistance. Wavelength decreases, the crests bunch closer together, and wave height increases until the wave becomes unstable and topples forwards as surf. […] Very often, waves approach obliquely to the coast and set up a significant transfer of water and sediment along the shoreline. The long-shore currents so developed can be very powerful, removing beach sand and building out spits and bars across the mouths of estuaries.” (People who’re interested in knowing more about these topics will probably enjoy Fredric Raichlen’s book on these topics – I did, US.)
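
The statement that longer waves travel faster follows from the standard deep-water dispersion relation, where the phase speed is c = √(gλ/2π); a hedged sketch (the formula is textbook wave physics rather than something spelled out in the book):

```python
from math import pi, sqrt

g = 9.81  # gravitational acceleration, m/s^2

def deep_water_phase_speed(wavelength_m: float) -> float:
    """Phase speed of a surface wave in deep water: c = sqrt(g * L / (2*pi))."""
    return sqrt(g * wavelength_m / (2 * pi))

for wavelength in (25, 50, 100, 200):   # metres
    c = deep_water_phase_speed(wavelength)
    print(f"wavelength {wavelength:3d} m: {c:5.1f} m/s  ({c * 3.6:5.1f} km/h)")
# a 50-100 m wave travels at roughly 30-45 km/h, consistent with the speeds
# quoted above; the wave starts to 'feel the bottom' once the water depth
# falls below about half its wavelength (50 m for a 100 m wave, as in the quote)
```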

“Wind is the principal force that drives surface currents, but the pattern of circulation results from a more complex interaction of wind drag, pressure gradients, and Coriolis deflection. Wind drag is a very inefficient process by which the momentum of moving air molecules is transmitted to water molecules at the ocean surface setting them in motion. The speed of water molecules (the current), initially in the direction of the wind, is only about 3–4 per cent of the wind speed. This means that a wind blowing constantly over a period of time at 50 kilometres per hour will produce a water current of about 1 knot (2 kilometres per hour). […] Although the movement of wind may seem random, changing from one day to the next, surface winds actually blow in a very regular pattern on a planetary scale. The subtropics are known for the trade winds with their strong easterly component, and the mid-latitudes for persistent westerlies. Wind drag by such large-scale wind systems sets the ocean waters in motion. The trade winds produce a pair of equatorial currents moving to the west in each ocean, while the westerlies drive a belt of currents that flow to the east at mid-latitudes in both hemispheres. […] Deflection by the Coriolis force and ultimately by the position of the continents creates very large oval-shaped gyres in each ocean.”
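
A quick check of the wind-drag rule of thumb quoted above (the 3–4 per cent figure and the 50 km/h example are from the quote; the conversion to knots uses the standard 1.852 km/h per knot):

```python
KM_H_PER_KNOT = 1.852

def surface_current_kmh(wind_kmh: float, fraction: float) -> float:
    """Surface current speed as a fixed fraction of the wind speed."""
    return wind_kmh * fraction

wind = 50.0  # km/h, as in the quoted example
for fraction in (0.03, 0.04):
    current = surface_current_kmh(wind, fraction)
    print(f"{fraction:.0%} of a {wind:.0f} km/h wind -> {current:.1f} km/h "
          f"(~{current / KM_H_PER_KNOT:.1f} knots)")
# 1.5-2.0 km/h, i.e. roughly the '1 knot (2 kilometres per hour)' current quoted
```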

“The control exerted by the oceans is an integral and essential part of the global climate system. […] The oceans are one of the principal long-term stores on Earth for carbon and carbon dioxide […] The oceans are like a gigantic sponge holding fifty times more carbon dioxide than the atmosphere […] the sea surface acts as a two-way control valve for gas transfer, which opens and closes in response to two key properties – gas concentration and ocean stirring. First, the difference in gas concentration between the air and sea controls the direction and rate of gas exchange. Gas concentration in water depends on temperature—cold water dissolves more carbon dioxide than warm water, and on biological processes—such as photosynthesis and respiration by microscopic plants, animals, and bacteria that make up the plankton. These transfer processes affect all gases […]. Second, the strength of the ocean-stirring process, caused by wind and foaming waves, affects the ease with which gases are absorbed at the surface. More gas is absorbed during stormy weather and, once dissolved, is quickly mixed downwards by water turbulence. […] The transfer of heat, moisture, and other gases between the ocean and atmosphere drives small-scale oscillations in climate. The El Niño Southern Oscillation (ENSO) is the best known, causing 3–7-year climate cycles driven by the interaction of sea-surface temperature and trade winds along the equatorial Pacific. The effects are worldwide in their impact through a process of atmospheric teleconnection — causing floods in Europe and North America, monsoon failure and severe drought in India, South East Asia, and Australia, as well as decimation of the anchovy fishing industry off Peru.”

“Earth’s climate has not always been as it is today […] About 100 million years ago, for example, palm trees and crocodiles lived as far north as 80°N – the equivalent of Arctic Canada or northern Greenland today. […] Most of the geological past has enjoyed warm conditions. These have been interrupted at irregular intervals by cold and glacial climates of altogether shorter duration […][,] the last [of them] beginning around 3 million years ago. We are still in the grip of this last icehouse state, although in one of its relatively brief interglacial phases. […] Sea level has varied in the past in close consort with climate change […]. Around twenty-five thousand years ago, at the height of the last Ice Age, the global sea level was 120 metres lower than today. Huge tracts of the continental shelves that rim today’s landmasses were exposed. […] Further back in time, 80 million years ago, the sea level was around 250–350 metres higher than today, so that 82 per cent of the planet was ocean and only 18 per cent remained as dry land. Such changes have been the norm throughout geological history and entirely the result of natural causes.”

“Most of the solar energy absorbed by seawater is converted directly to heat, and water temperature is vital for the distribution and activity of life in the oceans. Whereas mean temperature ranges from 0 to 40 degrees Celsius, 90 per cent of the oceans are permanently below 5°C. Most marine animals are ectotherms (cold-blooded), which means that they obtain their body heat from their surroundings. They generally have narrow tolerance limits and are restricted to particular latitudinal belts or water depths. Marine mammals and birds are endotherms (warm-blooded), which means that their metabolism generates heat internally thereby allowing the organism to maintain constant body temperature. They can tolerate a much wider range of external conditions. Coping with the extreme (hydrostatic) pressure exerted at depth within the ocean is a challenge. For every 30 metres of water, the pressure increases by 3 atmospheres – roughly equivalent to the weight of an elephant.”
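
The ‘3 atmospheres per 30 metres’ figure is just hydrostatics, P = ρgh; a minimal check (the seawater density and the formula are standard physics, not taken from the book):

```python
RHO_SEAWATER = 1025.0   # kg/m^3, typical near-surface seawater
G = 9.81                # m/s^2
PA_PER_ATM = 101_325.0  # pascals per atmosphere

def added_pressure_atm(depth_m: float) -> float:
    """Hydrostatic pressure above atmospheric at a given depth, in atmospheres."""
    return RHO_SEAWATER * G * depth_m / PA_PER_ATM

for depth in (10, 30, 100, 1_000, 4_000):
    print(f"depth {depth:5d} m: +{added_pressure_atm(depth):6.1f} atm")
# roughly 1 atm per 10 m, hence ~3 atm per 30 m as quoted; by 4,000 m the
# added pressure is on the order of 400 atmospheres
```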

“There are at least 6000 different species of diatom. […] An average litre of surface water from the ocean contains over half a million diatoms and other unicellular phytoplankton and many thousands of zooplankton.”

“Several different styles of movement are used by marine organisms. These include floating, swimming, jet propulsion, creeping, crawling, and burrowing. […] The particular physical properties of water that most affect movement are density, viscosity, and buoyancy. Seawater is about 800 times denser than air and nearly 100 times more viscous. Consequently there is much more resistance on movement than on land […] Most large marine animals, including all fishes and mammals, have adopted some form of active swimming […]. Swimming efficiency in fishes has been achieved by minimizing the three types of drag resistance created by friction, turbulence, and body form. To reduce surface friction, the body must be smooth and rounded like a sphere. The scales of most fish are also covered with slime as further lubrication. To reduce form drag, the cross-sectional area of the body should be minimal — a pencil shape is ideal. To reduce the turbulent drag as water flows around the moving body, a rounded front end and tapered rear is required. […] Fins play a versatile role in the movement of a fish. There are several types including dorsal fins along the back, caudal or tail fins, and anal fins on the belly just behind the anus. Operating together, the beating fins provide stability and steering, forwards and reverse propulsion, and braking. They also help determine whether the motion is up or down, forwards or backwards.”

Links:

Rip current.
Rogue wave. Agulhas Current. Kuroshio Current.
Tsunami.
Tide. Tidal range.
Geostrophic current.
Ekman Spiral. Ekman transport. Upwelling.
Global thermohaline circulation system. Antarctic bottom water. North Atlantic Deep Water.
Rio Grande Rise.
Denmark Strait. Denmark Strait cataract (/waterfall?).
Atmospheric circulation. Jet streams.
Monsoon.
Cyclone. Tropical cyclone.
Ozone layer. Ozone depletion.
Milankovitch cycles.
Little Ice Age.
Oxygen Isotope Stratigraphy of the Oceans.
Contourite.
Earliest known life forms. Cyanobacteria. Prokaryote. Eukaryote. Multicellular organism. Microbial mat. Ediacaran. Cambrian explosion. Pikaia. Vertebrate. Major extinction events. Permian–Triassic extinction event. (The author seems to disagree with the authors of this article about potential causes, in particular in so far as they relate to the formation of Pangaea – as I felt uncertain about the accuracy of the claims made in the book I decided against covering this topic in this post, even though I find it interesting).
Tethys Ocean.
Plesiosauria. Pliosauroidea. Ichthyosaur. Ammonoidea. Belemnites. Pachyaena. Cetacea.
Pelagic zone. Nekton. Benthic zone. Neritic zone. Oceanic zone. Bathyal zone. Hadal zone.
Phytoplankton. Silicoflagellates. Coccolithophore. Dinoflagellate. Zooplankton. Protozoa. Tintinnid. Radiolaria. Copepods. Krill. Bivalves.
Elasmobranchii.
Ampullae of Lorenzini. Lateral line.
Baleen whale. Humpback whale.
Coral reef.
Box jellyfish. Stonefish.
Horseshoe crab.
Greenland shark. Giant squid.
Hydrothermal vent. Pompeii worms.
Atlantis II Deep. Aragonite. Phosphorite. Deep sea mining. Oil platform. Methane clathrate.
Ocean thermal energy conversion. Tidal barrage.
Mariculture.
Exxon Valdez oil spill.
Bottom trawling.

June 24, 2018 Posted by | Biology, Books, Engineering, Geology, Paleontology, Physics | Leave a comment

Developmental Biology (I)

On goodreads I called the book “[a]n excellent introduction to the field of developmental biology” and I gave it five stars.

Below I have included some sample observations from the first third of the book or so, as well as some supplementary links.

“The major processes involved in development are: pattern formation; morphogenesis or change in form; cell differentiation by which different types of cell develop; and growth. These processes involve cell activities, which are determined by the proteins present in the cells. Genes control cell behaviour by controlling where and when proteins are synthesized, and cell behaviour provides the link between gene action and developmental processes. What a cell does is determined very largely by the proteins it contains. The hemoglobin in red blood cells enables them to transport oxygen; the cells lining the vertebrate gut secrete specialized digestive enzymes. These activities require specialized proteins […] In development we are concerned primarily with those proteins that make cells different from one another and make them carry out the activities required for development of the embryo. Developmental genes typically code for proteins involved in the regulation of cell behaviour. […] An intriguing question is how many genes out of the total genome are developmental genes – that is, genes specifically required for embryonic development. This is not easy to estimate. […] Some studies suggest that in an organism with 20,000 genes, about 10% of the genes may be directly involved in development.”

“The fate of a group of cells in the early embryo can be determined by signals from other cells. Few signals actually enter the cells. Most signals are transmitted through the space outside of cells (the extracellular space) in the form of proteins secreted by one cell and detected by another. Cells may interact directly with each other by means of molecules located on their surfaces. In both these cases, the signal is generally received by receptor proteins in the cell membrane and is subsequently relayed through other signalling proteins inside the cell to produce the cellular response, usually by turning genes on or off. This process is known as signal transduction. These pathways can be very complex. […] The complexity of the signal transduction pathway means that it can be altered as the cell develops so the same signal can have a different effect on different cells. How a cell responds to a particular signal depends on its internal state and this state can reflect the cell’s developmental history — cells have good memories. Thus, different cells can respond to the same signal in very different ways. So the same signal can be used again and again in the developing embryo. There are thus rather few signalling proteins.”

“All vertebrates, despite their many outward differences, have a similar basic body plan — the segmented backbone or vertebral column surrounding the spinal cord, with the brain at the head end enclosed in a bony or cartilaginous skull. These prominent structures mark the antero-posterior axis with the head at the anterior end. The vertebrate body also has a distinct dorso-ventral axis running from the back to the belly, with the spinal cord running along the dorsal side and the mouth defining the ventral side. The antero-posterior and dorso-ventral axes together define the left and right sides of the animal. Vertebrates have a general bilateral symmetry around the dorsal midline so that outwardly the right and left sides are mirror images of each other though some internal organs such as the heart and liver are arranged asymmetrically. How these axes are specified in the embryo is a key issue. All vertebrate embryos pass through a broadly similar set of developmental stages and the differences are partly related to how and when the axes are set up, and how the embryo is nourished. […] A quite rare but nevertheless important event before gastrulation in mammalian embryos, including humans, is the splitting of the embryo into two, and identical twins can then develop. This shows the remarkable ability of the early embryo to regulate [in this context, regulation refers to ‘the ability of an embryo to restore normal development even if some portions are removed or rearranged very early in development’ – US] and develop normally when half the normal size […] In mammals, there is no sign of axes or polarity in the fertilized egg or during early development, and it only occurs later by an as yet unknown mechanism.”

“How is left–right established? Vertebrates are bilaterally symmetric about the midline of the body for many structures, such as eyes, ears, and limbs, but most internal organs are asymmetric. In mice and humans, for example, the heart is on the left side, the right lung has more lobes than the left, the stomach and spleen lie towards the left, and the bulk of the liver is towards the right. This handedness of organs is remarkably consistent […] Specification of left and right is fundamentally different from specifying the other axes of the embryo, as left and right have meaning only after the antero-posterior and dorso-ventral axes have been established. If one of these axes were reversed, then so too would be the left–right axis and this is the reason that handedness is reversed when you look in a mirror—your dorsoventral axis is reversed, and so left becomes right and vice versa. The mechanisms by which left–right symmetry is initially broken are still not fully understood, but the subsequent cascade of events that leads to organ asymmetry is better understood. The ‘leftward’ flow of extracellular fluid across the embryonic midline by a population of ciliated cells has been shown to be critical in mouse embryos in inducing asymmetric expression of genes involved in establishing left versus right. The antero-posterior patterning of the mesoderm is most clearly seen in the differences in the somites that form vertebrae: each individual vertebra has well defined anatomical characteristics depending on its location along the axis. Patterning of the skeleton along the body axis is based on the somite cells acquiring a positional value that reflects their position along the axis and so determines their subsequent development. […] It is the Hox genes that define positional identity along the antero-posterior axis […]. The Hox genes are members of the large family of homeobox genes that are involved in many aspects of development and are the most striking example of a widespread conservation of developmental genes in animals. The name homeobox comes from their ability to bring about a homeotic transformation, converting one region into another. Most vertebrates have clusters of Hox genes on four different chromosomes. A very special feature of Hox gene expression in both insects and vertebrates is that the genes in the clusters are expressed in the developing embryo in a temporal and spatial order that reflects their order on the chromosome. Genes at one end of the cluster are expressed in the head region, while those at the other end are expressed in the tail region. This is a unique feature in development, as it is the only known case where a spatial arrangement of genes on a chromosome corresponds to a spatial pattern in the embryo. The Hox genes provide the somites and adjacent mesoderm with positional values that determine their subsequent development.”

“Many of the genes that control the development of flies are similar to those controlling development in vertebrates, and indeed in many other animals. It seems that once evolution finds a satisfactory way of developing animal bodies, it tends to use the same mechanisms and molecules over and over again with, of course, some important modifications. […] The insect body is bilaterally symmetrical and has two distinct and largely independent axes: the antero-posterior and dorso-ventral axes, which are at right angles to each other. These axes are already partly set up in the fly egg, and become fully established and patterned in the very early embryo. Along the antero-posterior axis the embryo becomes divided into a number of segments, which will become the head, thorax, and abdomen of the larva. A series of evenly spaced grooves forms more or less simultaneously and these demarcate parasegments, which later give rise to the segments of the larva and adult. Of the fourteen larval parasegments, three contribute to mouthparts of the head, three to the thoracic region, and eight to the abdomen. […] Development is initiated by a gradient of the protein Bicoid, along the axis running from anterior to posterior in the egg; this provides the positional information required for further patterning along this axis. Bicoid is a transcription factor and acts as a morphogen—a graded concentration of a molecule that switches on particular genes at different threshold concentrations, thereby initiating a new pattern of gene expression along the axis. Bicoid activates anterior expression of the gene hunchback […]. The hunchback gene is switched on only when Bicoid is present above a certain threshold concentration. The protein of the hunchback gene, in turn, is instrumental in switching on the expression of the other genes, along the antero-posterior axis. […] The dorso-ventral axis is specified by a different set of maternal genes from those that specify the anterior-posterior axis, but by a similar mechanism. […] Once each parasegment is delimited, it behaves as an independent developmental unit, under the control of a particular set of genes. The parasegments are initially similar but each will soon acquire its own unique identity mainly due to Hox genes.”
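
The Bicoid-as-morphogen logic described above (a graded concentration switching hunchback on only above a threshold) can be sketched with a toy exponential gradient; the exponential profile, decay length, and threshold value below are illustrative assumptions, not figures from the book:

```python
import math

def bicoid(x: float, c0: float = 1.0, decay_length: float = 0.2) -> float:
    """Toy Bicoid concentration at fractional egg length x (0 = anterior pole),
    assuming a simple exponentially decaying gradient."""
    return c0 * math.exp(-x / decay_length)

HUNCHBACK_THRESHOLD = 0.2   # illustrative threshold concentration

for i in range(11):                      # anterior -> posterior
    x = i / 10
    c = bicoid(x)
    state = "ON" if c >= HUNCHBACK_THRESHOLD else "off"
    print(f"x = {x:.1f}   Bicoid = {c:.3f}   hunchback {state}")
# hunchback is switched on only in the anterior region where Bicoid exceeds
# the threshold -- the essence of a morphogen acting through threshold responses
```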

“Because plant cells have rigid cell walls and, unlike animal cells, cannot move, a plant’s development is very much the result of patterns of oriented cell divisions and increase in cell size. Despite this difference, cell fate in plant development is largely determined by similar means as in animals – by a combination of positional signals and intercellular communication. […] The logic behind the spatial layouts of gene expression that pattern a developing flower is similar to that of Hox gene action in patterning the body axis in animals, but the genes involved are completely different. One general difference between plant and animal development is that most of the development occurs not in the embryo but in the growing plant. Unlike an animal embryo, the mature plant embryo inside a seed is not simply a smaller version of the organism it will become. All the ‘adult’ structures of the plant – shoots, roots, stalks, leaves, and flowers – are produced in the adult plant from localized groups of undifferentiated cells known as meristems. […] Another important difference between plant and animal cells is that a complete, fertile plant can develop from a single differentiated somatic cell and not just from a fertilized egg. This suggests that, unlike the differentiated cells of adult animals, some differentiated cells of the adult plant may retain totipotency and so behave like animal embryonic stem cells. […] The small organic molecule auxin is one of the most important and ubiquitous chemical signals in plant development and plant growth.”

“All animal embryos undergo a dramatic change in shape during their early development. This occurs primarily during gastrulation, the process that transforms a two-dimensional sheet of cells into the complex three-dimensional animal body, and involves extensive rearrangements of cell layers and the directed movement of cells from one location to another. […] Change in form is largely a problem in cell mechanics and requires forces to bring about changes in cell shape and cell migration. Two key cellular properties involved in changes in animal embryonic form are cell contraction and cell adhesiveness. Contraction in one part of a cell can change the cell’s shape. Changes in cell shape are generated by forces produced by the cytoskeleton, an internal protein framework of filaments. Animal cells stick to one another, and to the external support tissue that surrounds them (the extracellular matrix), through interactions involving cell-surface proteins. Changes in the adhesion proteins at the cell surface can therefore determine the strength of cell–cell adhesion and its specificity. These adhesive interactions affect the surface tension at the cell membrane, a property that contributes to the mechanics of the cell behaviour. Cells can also migrate, with contraction again playing a key role. An additional force that operates during morphogenesis, particularly in plants but also in a few aspects of animal embryogenesis, is hydrostatic pressure, which causes cells to expand. In plants there is no cell movement or change in shape, and changes in form are generated by oriented cell division and cell expansion. […] Localized contraction can change the shape of the cells as well as the sheet they are in. For example, folding of a cell sheet—a very common feature in embryonic development—is caused by localized changes in cell shape […]. Contraction on one side of a cell results in it acquiring a wedge-like form; when this occurs among a few cells locally in a sheet, a bend occurs at the site, deforming the sheet.”

“The integrity of tissues in the embryo is maintained by adhesive interactions between cells and between cells and the extracellular matrix; differences in cell adhesiveness also help maintain the boundaries between different tissues and structures. Cells stick to each other by means of cell adhesion molecules, such as cadherins, which are proteins on the cell surface that can bind strongly to proteins on other cell surfaces. About 30 different types of cadherins have been identified in vertebrates. […] Adhesion of a cell to the extracellular matrix, which contains proteins such as collagen, is by the binding of integrins in the cell membrane to these matrix molecules. […] Convergent extension plays a key role in gastrulation of [some] animals and […] morphogenetic processes. It is a mechanism for elongating a sheet of cells in one direction while narrowing its width, and occurs by rearrangement of cells within the sheet, rather than by cell migration or cell division. […] For convergent extension to take place, the axes along which the cells will intercalate and extend must already have been defined. […] Gastrulation in vertebrates involves a much more dramatic and complex rearrangement of tissues than in sea urchins […] But the outcome is the same: the transformation of a two-dimensional sheet of cells into a three-dimensional embryo, with ectoderm, mesoderm, and endoderm in the correct positions for further development of body structure. […] Directed dilation is an important force in plants, and results from an increase in hydrostatic pressure inside a cell. Cell enlargement is a major process in plant growth and morphogenesis, providing up to a fiftyfold increase in the volume of a tissue. The driving force for expansion is the hydrostatic pressure exerted on the cell wall as a result of the entry of water into cell vacuoles by osmosis. Plant-cell expansion involves synthesis and deposition of new cell-wall material, and is an example of directed dilation. The direction of cell growth is determined by the orientation of the cellulose fibrils in the cell wall.”

Links:

Developmental biology.
August Weismann. Hans Driesch. Hans Spemann. Hilde Mangold. Spemann-Mangold organizer.
Induction. Cleavage.
Developmental model organisms.
Blastula. Embryo. Ectoderm. Mesoderm. Endoderm.
Gastrulation.
Xenopus laevis.
Notochord.
Neurulation.
Organogenesis.
DNA. Gene. Protein. Transcription factor. RNA polymerase.
Epiblast. Trophoblast/trophectoderm. Inner cell mass.
Pluripotency.
Polarity in embryogenesis/animal-vegetal axis.
Primitive streak.
Hensen’s node.
Neural tube. Neural fold. Neural crest cells.
Situs inversus.
Gene silencing. Morpholino.
Drosophila embryogenesis.
Pair-rule gene.
Cell polarity.
Mosaic vs regulative development.
Caenorhabditis elegans.
Fate mapping.
Plasmodesmata.
Arabidopsis thaliana.
Apical-basal axis.
Hypocotyl.
Phyllotaxis.
Primordium.
Quiescent centre.
Filopodia.
Radial cleavage. Spiral cleavage.

June 11, 2018 Posted by | Biology, Books, Botany, Genetics, Molecular biology | Leave a comment

Blood (II)

Below I have added some quotes from the chapters of the book I did not cover in my first post, as well as some supplementary links.

“Haemoglobin is of crucial biological importance; it is also easy to obtain safely in large quantities from donated blood. These properties have resulted in its becoming the most studied protein in human history. Haemoglobin played a key role in the history of our understanding of all proteins, and indeed the science of biochemistry itself. […] Oxygen transport defines the primary biological function of blood. […] Oxygen gas consists of two atoms of oxygen bound together to form a symmetrical molecule. However, oxygen cannot be transported in the plasma alone. This is because water is very poor at dissolving oxygen. Haemoglobin’s primary function is to increase this solubility; it does this by binding the oxygen gas on to the iron in its haem group. Every haem can bind one oxygen molecule, increasing the amount of oxygen able to dissolve in the blood.”

“An iron atom can exist in a number of different forms depending on how many electrons it has in its atomic orbitals. In its ferrous (iron II) state iron can bind oxygen readily. The haemoglobin protein has therefore evolved to stabilize its haem iron cofactor in this ferrous state. The result is that over fifty times as much oxygen is stored inside the confines of the red blood cell compared to outside in the watery plasma. However, using iron to bind oxygen comes at a cost. Iron (II) can readily lose one of its electrons to the bound oxygen, a process called ‘oxidation’. So the same form of iron that can bind oxygen avidly (ferrous) also readily reacts with that same oxygen forming an unreactive iron III state, called ‘ferric’. […] The complex structure of the protein haemoglobin is required to protect the ferrous iron from oxidizing. The haem iron is held in a precise configuration within the protein. Specific amino acids are ideally positioned to stabilize the iron–oxygen bond and prevent it from oxidizing. […] the iron stays ferrous despite the presence of the nearby oxygen. Having evolved over many hundreds of millions of years, this stability is very difficult for chemists to mimic in the laboratory. This is one reason why, desirable as it might be in terms of cost and convenience, it is not currently possible to replace blood transfusions with a simple small chemical iron oxygen carrier.”

“Given the success of the haem iron and globin combination in haemoglobin, it is no surprise that organisms have used this basic biochemical architecture for a variety of purposes throughout evolution, not just oxygen transport in blood. One example is the protein myoglobin. This protein resides inside animal cells; in the human it is found in the heart and skeletal muscle. […] Myoglobin has multiple functions. Its primary role is as an aid to oxygen diffusion. Whereas haemoglobin transports oxygen from the lung to the cell, myoglobin transports it once it is inside the cell. As oxygen is so poorly soluble in water, having a chain of molecules inside the cell that can bind and release oxygen rapidly significantly decreases the time it takes the gas to get from the blood capillary to the part of the cell—the mitochondria—where it is needed. […] Myoglobin can also act as an emergency oxygen backup store. In humans this is trivial and of questionable importance. Not so in diving mammals such as whales and dolphins that have as much as thirty times the myoglobin content of the terrestrial equivalent; indeed those mammals that dive for the longest duration have the most myoglobin. […] The third known function of myoglobin is to protect the muscle cells from damage by nitric oxide gas.”

“The heart is the organ that pumps blood around the body. If the heart stops functioning, blood does not flow. The driving force for this flow is the pressure difference between the arterial blood leaving the heart and the returning venous blood. The decreasing pressure in the venous side explains the need for unidirectional valves within veins to prevent the blood flowing in the wrong direction. Without them the return of the blood through the veins to the heart would be too slow, especially when standing up, when the venous pressure struggles to overcome gravity. […] normal [blood pressure] ranges rise slowly with age. […] high resistance in the arterial circulation at higher blood pressures [places] additional strain on the left ventricle. If the heart is weak, it may fail to achieve the extra force required to pump against this resistance, resulting in heart failure. […] in everyday life, a low blood pressure is rarely of concern. Indeed, it can be a sign of fitness as elite athletes have a much lower resting blood pressure than the rest of the population. […] the effect of exercise training is to thicken the muscles in the walls of the heart and enlarge the chambers. This enables more blood to be pumped per beat during intense exercise. The consequence of this extra efficiency is that when an athlete is resting—and therefore needs no more oxygen than a more sedentary person—the heart rate and blood pressure are lower than average. Most people’s experience of hypotension will be reflected by dizzy spells and lack of balance, especially when moving quickly to an upright position. This is because more blood pools in the legs when you stand up, meaning there is less blood for the heart to pump. The immediate effect should be for the heart to beat faster to restore the pressure. If there is a delay, the decrease in pressure can decrease the blood flow to the brain and cause dizziness; in extreme cases this can lead to fainting.”

“If hypertension is persistent, patients are most likely to be treated with drugs that target specific pathways that the body uses to control blood pressure. For example angiotensin is a protein that can trigger secretion of the hormone aldosterone from the adrenal gland. In its active form angiotensin can directly constrict blood vessels, while aldosterone enhances salt and water retention, so raising blood volume. Both these effects increase blood pressure. Angiotensin is converted into its active form by an enzyme called ‘Angiotensin Converting Enzyme’ (ACE). An ACE inhibitor drug prevents this activity, keeping angiotensin in its inactive form; this will therefore drop the patient’s blood pressure. […] The metal calcium controls many processes in the body. Its entry into muscle cells triggers muscle contraction. Preventing this entry can therefore reduce the force of contraction of the heart and the ability of arteries to constrict. Both of these will have the effect of decreasing blood pressure. Calcium enters muscle cells via specific protein-based channels. Drugs that block these channels (calcium channel blockers) are therefore highly effective at treating hypertension.”

“Autoregulation is a homeostatic process designed to ensure that blood flow remains constant [in settings where constancy is desirable]. However, there are many occasions when an organism actively requires a change in blood flow. It is relatively easy to imagine what these are. In the short term, blood supplies oxygen and nutrients. When these are used up rapidly, or their supply becomes limited, the response will be to increase blood flow. The most obvious example is the twenty-fold increase in oxygen and glucose consumption that occurs in skeletal muscle during exercise when compared to rest. If there were no accompanying increase in blood flow to the muscle the oxygen supply would soon run out. […] There are hundreds of molecules known that have the ability to increase or decrease blood flow […] The surface of all blood vessels is lined by a thin layer of cells, the ‘endothelium’. Endothelial cells form a barrier between the blood and the surrounding tissue, controlling access of materials into and out of the blood. For example white blood cells can enter or leave the circulation via interacting with the endothelium; this is the route by which neutrophils migrate from the blood to the site of tissue damage or bacterial/viral attack as part of the innate immune response. However, the endothelium is not just a selective barrier. It also plays an active role in blood physiology and biochemistry.”

“Two major issues [related to blood transfusions] remained at the end of the 19th century: the problem of clotting, which all were aware of; and the problem of blood group incompatibility, which no one had the slightest idea even existed. […] For blood transfusions to ever make a recovery the key issues of blood clotting and adverse side effects needed to be resolved. In 1875 the Swedish biochemist Olof Hammarsten showed that adding calcium accelerated the rate of blood clotting (we now know the mechanism for this is that key enzymes in blood platelets that catalyse fibrin formation require calcium for their function). It therefore made sense to use chemicals that bind calcium to try to prevent clotting. Calcium ions are positively charged; adding negatively charged ions such as oxalate and citrate neutralized the calcium, preventing its clot-promoting action. […] At the same time as anticoagulants were being discovered, the reason why some blood transfusions failed even when there were no clots was becoming clear. It had been shown that animal blood given to humans tended to clump together or agglutinate, eventually bursting and releasing free haemoglobin and causing kidney damage. In the early 1900s, working in Vienna, Karl Landsteiner showed the same effect could occur with human-to-human transfusion. The trick was the ability to separate blood cells from serum. This enabled mixing blood cells from a variety of donors with plasma from a variety of participants. Using his laboratory staff as subjects, Landsteiner showed that only some combinations caused the agglutination reaction. Some donor cells (now known as type O) never clumped. Others clumped depending on the nature of the plasma in a reproducible manner. A careful study of Landsteiner’s results revealed the ABO blood type distinctions […]. Versions of these agglutination tests still form the basis of checking transfused blood today.”
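
Landsteiner’s agglutination pattern is captured by the antigen-antibody rule that became the ABO system: red cells carry A and/or B antigens, plasma carries antibodies against the antigens its owner lacks, and clumping occurs when antibody meets antigen. A small sketch of that logic (the rule is standard immunohaematology; the table and function names are mine):

```python
# Antigens on red cells and antibodies in plasma for each ABO type.
CELL_ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}
PLASMA_ANTIBODIES = {"O": {"A", "B"}, "A": {"B"}, "B": {"A"}, "AB": set()}

def agglutinates(donor_cells: str, recipient_plasma: str) -> bool:
    """True if the recipient's plasma antibodies react with the donor's cell antigens."""
    return bool(CELL_ANTIGENS[donor_cells] & PLASMA_ANTIBODIES[recipient_plasma])

for donor in ("O", "A", "B", "AB"):
    clumps_with = [p for p in ("O", "A", "B", "AB") if agglutinates(donor, p)]
    print(f"type {donor:>2} cells clump with plasma of type: {clumps_with or 'none'}")
# type O cells never clump, reproducing the donor cells Landsteiner found never agglutinated
```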

“No blood product can be made completely sterile, no matter how carefully it is processed. The best that can be done is to ensure that no new bacteria or viruses are added during the purification, storage, and transportation processes. Nothing can be done to inactivate any viruses that are already present in the donor’s blood, for the harsh treatments necessary to do this would inevitably damage the viability of the product or be prohibitively expensive to implement on the industrial scale that the blood market has become. […] In the 1980s over half the US haemophiliac population was HIV positive.”

“Three fundamentally different ways have been attempted to replace red blood cell transfusions. The first uses a completely chemical approach and makes use of perfluorocarbons, inert chemicals that, in liquid form, can dissolve gasses without reacting with them. […] Perfluorocarbons can dissolve oxygen much more effectively than water. […] The problem with their use as a blood substitute is that the amount of oxygen dissolved in these solutions is linear with increasing pressure. This means that the solution lacks the advantages of the sigmoidal binding curve of haemoglobin, which has evolved to maximize the amount of oxygen captured from the limited fraction found in air (20 per cent oxygen). However, to deliver the same amount of oxygen as haemoglobin, patients using the less efficient perfluorocarbons in their blood need to breathe gas that is almost 100 per cent pure oxygen […]; this restricts the use of these compounds. […] The second type of blood substitute makes use of haemoglobin biology. Initial attempts used purified haemoglobin itself. […] there is no haemoglobin-based blood substitute in general use today […] The problem for the lack of uptake is not that blood substitutes cannot replace red blood cell function. A variety of products have been shown to stay in the vasculature for several days, provide volume support, and deliver oxygen. However, they have suffered due to adverse side effects, most notably cardiac complications. […] In nature the plasma proteins haptoglobin and haemopexin bind and detoxify any free haemoglobin and haem released from red blood cells. The challenge for blood substitute research is to mimic these effects in a product that can still deliver oxygen. […] Despite ongoing research, these problems may prove to be insurmountable. There is therefore interest in a third approach. This is to grow artificial red blood cells using stem cell technology.”
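
The contrast drawn above between haemoglobin’s sigmoidal binding curve and the linear uptake of perfluorocarbons can be made concrete with the Hill equation; the sketch below uses typical textbook values (P50 ≈ 26 mmHg, Hill coefficient ≈ 2.7 for adult haemoglobin) and an arbitrary linear constant for the perfluorocarbon, none of which are taken from the book:

```python
def hb_saturation(pO2: float, p50: float = 26.0, n: float = 2.7) -> float:
    """Fractional haemoglobin saturation from the Hill equation (sigmoidal)."""
    return pO2**n / (p50**n + pO2**n)

def pfc_relative_o2(pO2: float, k: float = 1.0 / 160.0) -> float:
    """Perfluorocarbon: dissolved O2 rises linearly with partial pressure
    (Henry's law); k is an arbitrary scale chosen for comparison."""
    return k * pO2

for pO2 in (10, 26, 40, 100, 160):   # mmHg; ~100 in the lungs, ~40 in tissues
    print(f"pO2 {pO2:3d} mmHg:  Hb saturation {hb_saturation(pO2):.2f}   "
          f"PFC (relative) {pfc_relative_o2(pO2):.2f}")
# haemoglobin is nearly saturated at lung pO2 yet releases much of its oxygen
# at tissue pO2, whereas a linear carrier only loads well when the patient
# breathes near-pure oxygen -- the limitation noted in the quote
```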

Links:

Porphyrin. Globin.
Felix Hoppe-Seyler. Jacques Monod. Jeffries Wyman. Jean-Pierre Changeux.
Allosteric regulation. Monod-Wyman-Changeux model.
Structural Biochemistry/Hemoglobin (wikibooks). (Many of the topics covered in this link – e.g. comments on affinity, T/R-states, oxygen binding curves, the Bohr effect, etc. – are also covered in the book, so although I do link to some of the other topics also covered in this link below it should be noted that I did in fact leave out quite a few potentially relevant links on account of those topics being covered in the above link).
1,3-Bisphosphoglycerate.
Erythrocruorin.
Haemerythrin.
Hemocyanin.
Cytoglobin.
Neuroglobin.
Sickle cell anemia. Thalassaemia. Hemoglobinopathy. Porphyria.
Pulse oximetry.
Daniel Bernoulli. Hydrodynamica. Stephen Hales. Karl von Vierordt.
Arterial line.
Sphygmomanometer. Korotkoff sounds. Systole. Diastole. Blood pressure. Mean arterial pressure. Hypertension. Antihypertensive drugs. Atherosclerosis Pathology. Beta blocker. Diuretic.
Autoregulation.
Guanylate cyclase. Glyceryl trinitrate.
Blood transfusion. Richard Lower. Jean-Baptiste Denys. James Blundell.
Parabiosis.
Penrose Inquiry.
ABLE (Age of Transfused Blood in Critically Ill Adults) trial.
RECESS trial.

June 7, 2018 Posted by | Biology, Books, Cardiology, Chemistry, History, Medicine, Molecular biology, Pharmacology, Studies | Leave a comment

Molecular biology (III)

Below I have added a few quotes and links related to the last few chapters of the book’s coverage.

“Normal ageing results in part from exhaustion of stem cells, the cells that reside in most organs to replenish damaged tissue. As we age DNA damage accumulates and this eventually causes the cells to enter a permanent non-dividing state called senescence. This protective ploy however has its downside as it limits our lifespan. When too many stem cells are senescent the body is compromised in its capacity to renew worn-out tissue, causing the effects of ageing. This has a knock-on effect of poor intercellular communication, mitochondrial dysfunction, and loss of protein balance (proteostasis). Low levels of chronic inflammation also increase with ageing and could be the trigger for changes associated with many age-related disorders.”

“There has been a dramatic increase in ageing research using yeast and invertebrates, leading to the discovery of more ‘ageing genes’ and their pathways. These findings can be extrapolated to humans since longevity pathways are conserved between species. The major pathways known to influence ageing have a common theme, that of sensing and metabolizing nutrients. […] The field was advanced by identification of the mammalian Target Of Rapamycin, aptly named mTOR. mTOR acts as a molecular sensor that integrates growth stimuli with nutrient and oxygen availability. Small molecules such as rapamycin that reduce mTOR signalling act in a similar way to severe dietary restriction in slowing the ageing process in organisms such as yeast and worms. […] Rapamycin and its derivatives (rapalogs) have been involved in clinical trials on reducing age-related pathologies […] Another major ageing pathway is telomere maintenance. […] Telomere attrition is a hallmark of ageing and studies have established an association between shorter telomere length (TL) and the risk of various common age-related ailments […] Telomere loss is accelerated by known determinants of ill health […] The relationship between TL and cancer appears complex.”

“Cancer is not a single disease but a range of diseases caused by abnormal growth and survival of cells that have the capacity to spread. […] One of the early stages in the acquisition of an invasive phenotype is epithelial-mesenchymal transition (EMT). Epithelial cells form skin and membranes and for this they have a strict polarity (a top and a bottom) and are bound in position by close connections with adjacent cells. Mesenchymal cells on the other hand are loosely associated, have motility, and lack polarization. The transition between epithelial and mesenchymal cells is a normal process during embryogenesis and wound healing but is deregulated in cancer cells. EMT involves transcriptional reprogramming in which epithelial structural proteins are lost and mesenchymal ones acquired. This facilitates invasion of a tumour into surrounding tissues. […] Cancer is a genetic disease but mostly not inherited from the parents. Normal cells evolve to become cancer cells by acquiring successive mutations in cancer-related genes. There are two main classes of cancer genes, the proto-oncogenes and the tumour suppressor genes. The proto-oncogenes code for protein products that promote cell proliferation. […] A mutation in a proto-oncogene changes it to an ‘oncogene’ […] One gene above all others is associated with cancer suppression and that is TP53. […] approximately half of all human cancers carry a mutated TP53 and in many more, p53 is deregulated. […] p53 plays a key role in eliminating cells that have either acquired activating oncogenes or excessive genomic damage. Thus mutations in the TP53 gene allows cancer cells to survive and divide further by escaping cell death […] A mutant p53 not only lacks the tumour suppressor functions of the normal or wild type protein but in many cases it also takes on the role of an oncogene. […] Overall 5-10 per cent of cancers occur due to inherited or germ line mutations that are passed from parents to offspring. Many of these genes code for DNA repair enzymes […] The vast majority of cancer mutations are not inherited; instead they are sporadic with mutations arising in somatic cells. […] At least 15 per cent of cancers are attributable to infectious agents, examples being HPV and cervical cancer, H. pylori and gastric cancer, and also hepatitis B or C and liver cancer.”

“There are about 10 million different sites at which people can vary in their DNA sequence within the 3 billion bases in our DNA. […] A few, but highly variable, sequences or minisatellites are chosen for DNA profiling. These give a highly sensitive procedure suitable for use with small amounts of body fluids […] even shorter sequences called microsatellite repeats [are also] used. Each marker or microsatellite is a short tandem repeat (STR) of two to five base pairs of DNA sequence. A single STR will be shared by up to 20 per cent of the population but by using a dozen or so identification markers in a profile, the error is minuscule. […] Microsatellites are extremely useful for analysing low-quality or degraded DNA left at a crime scene as their short sequences are usually preserved. However, DNA in specimens that have not been optimally preserved persists in exceedingly small amounts and is also highly fragmented. It is probably also riddled with contamination and chemical damage. Such sources of DNA are too degraded to obtain a profile using genomic STRs and in these cases mitochondrial DNA, being more abundant, is more useful than nuclear DNA for DNA profiling. […] Mitochondrial DNA profiling is the method of choice for determining the identities of missing or unknown people when a maternally linked relative can be found. Molecular biologists can amplify hypervariable regions of mitochondrial DNA by PCR to obtain enough material for analysis. The DNA products are sequenced and single nucleotide differences are sought with a reference DNA from a maternal relative. […] It has now become possible for […] ancient DNA to reveal much more than genotype matches. […] Pigmentation characteristics can now be determined from ancient DNA since skin, hair, and eye colour are some of the easiest characteristics to predict. This is due to the limited number of base differences or SNPs required to explain most of the variability.”
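The ‘dozen or so identification markers’ remark is easy to make concrete with the product rule: if the markers are treated as independent and each is shared by at most 20 per cent of the population, the probability that an unrelated person matches on all of them falls off geometrically. The Python sketch below is only a back-of-the-envelope illustration – real forensic match probabilities are computed from per-allele frequencies, with corrections for population structure and relatedness:

```python
# Rough check of the claim quoted above: if a single STR marker is shared by
# (at most) 20 per cent of the population, and markers are treated as
# independent, the chance that an unrelated person matches on all of them
# shrinks geometrically with the number of markers used in the profile.
per_marker_match = 0.20   # worst-case share of the population matching one STR
for n_markers in (1, 6, 13, 20):
    random_match = per_marker_match ** n_markers
    print(f"{n_markers:>2} markers: random match probability ~ {random_match:.2e}")
```

With thirteen markers the worst-case random match probability is already below one in a billion, which is the sense in which ‘the error is minuscule’.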

“A broad range of debilitating and fatal conditions, none of which can be cured, are associated with mitochondrial DNA mutations. […] [M]itochondrial DNA mutates ten to thirty times faster than nuclear DNA […] Mitochondrial DNA mutates at a higher rate than nuclear DNA due to higher numbers of DNA molecules and reduced efficiency in controlling DNA replication errors. […] Over 100,000 copies of mitochondrial DNA are present in the cytoplasm of the human egg or oocyte. After fertilization, only maternal mitochondria survive; the small numbers of the father’s mitochondria in the zygote are targeted for destruction. Thus all mitochondrial DNA for all cell types in the resulting embryo is maternally derived. […] Patients affected by mitochondrial disease usually have a mixture of wild type (normal) and mutant mitochondrial DNA and the disease severity depends on the ratio of the two. Importantly the actual level of mutant DNA in a mother’s heteroplas[m]y […curiously the authors throughout the coverage insist on spelling this ‘heteroplasty’, which according to google is something quite different – I decided to correct the spelling error (?) here – US] is not inherited and offspring can be better or worse off than the mother. This also causes uncertainty since the ratio of wild type to mutant mitochondria may change during development. […] Over 700 mutations in mitochondrial DNA have been found leading to myopathies, neurodegeneration, diabetes, cancer, and infertility.”

Links:

Dementia. Alzheimer’s disease. Amyloid hypothesis. Tau protein. Proteopathy. Parkinson’s disease. TP53-inducible glycolysis and apoptosis regulator (TIGAR).
Progeria. Progerin. Werner’s syndrome. Xeroderma pigmentosum. Cockayne syndrome.
Shelterin.
Telomerase.
Alternative lengthening of telomeres: models, mechanisms and implications (Nature).
Coats plus syndrome.
Neoplasia. Tumor angiogenesis. Inhibitor protein MDM2.
Li–Fraumeni syndrome.
Non-coding RNA networks in cancer (Nature).
Cancer stem cell. (“The reason why current cancer therapies often fail to eradicate the disease is that the CSCs survive current DNA damaging treatments and repopulate the tumour.” See also this IAS lecture which covers closely related topics – US.)
Imatinib.
Restriction fragment length polymorphism (RFLP).
CODIS.
MC1R.
Archaic human admixture with modern humans.
El Tor strain.
DNA barcoding.
Hybrid breakdown/-inviability.
Trastuzumab.
Digital PCR.
Pearson’s syndrome.
Mitochondrial replacement therapy.
Synthetic biology.
Artemisinin.
Craig Venter.
Genome editing.
Indel.
CRISPR.
Tyrosinemia.

June 3, 2018 Posted by | Biology, Books, Cancer/oncology, Genetics, Medicine, Molecular biology | Leave a comment

Blood (I)

As I also mentioned on goodreads, I was far from impressed with the first few pages of this book – but I read on, and the book actually turned out to include a decent amount of very reasonable coverage. Considering how the author started out, the three-star rating should be regarded as a high rating; in some parts of the book the author covers very complicated material in a really quite decent manner, given the format of the book and its target group.

Below I have added some quotes and some links to topics/people/ideas/etc. covered in the first half of the book.

“[Clotting] makes it difficult to study the components of blood. It also [made] it impossible to store blood for transfusion [in the past]. So there was a need to find a way to prevent clotting. Fortunately the discovery that the metal calcium accelerated the rate of clotting enabled the development of a range of compounds that bound calcium and therefore prevented this process. One of them, citrate, is still in common use today [here’s a relevant link, US] when blood is being prepared for storage, or to stop blood from clotting while it is being pumped through kidney dialysis machines and other extracorporeal circuits. Adding citrate to blood, and leaving it alone, will result in gravity gradually separating the blood into three layers; the process can be accelerated by rapid spinning in a centrifuge […]. The top layer is clear and pale yellow or straw-coloured in appearance. This is the plasma, and it contains no cells. The bottom layer is bright red and contains the dense pellet of red cells that have sunk to the bottom of the tube. In-between these two layers is a very narrow layer, called the ‘buffy coat’ because of its pale yellow-brown appearance. This contains white blood cells and platelets. […] red cells, white cells, and platelets […] define the primary functions of blood: oxygen transport, immune defence, and coagulation.”

“The average human has about five trillion red blood cells per litre of blood or thirty trillion […] in total, making up a quarter of the total number of cells in the body. […] It is clear that the red cell has primarily evolved to perform a single function, oxygen transportation. Lacking a nucleus, and the requisite machinery to control the synthesis of new proteins, there is a limited ability for reprogramming or repair. […] each cell [makes] a complete traverse of the body’s circulation about once a minute. In its three- to four-month lifetime, this means every cell will do the equivalent of 150,000 laps around the body. […] Red cells lack mitochondria; they get their energy by fermenting glucose. […] A prosaic explanation for their lack of mitochondria is that it prevents the loss of any oxygen picked up from the lungs on the cells’ journey to the tissues that need it. The shape of the red cell is both deformable and elastic. In the bloodstream each cell is exposed to large shear forces. Yet, due to the properties of the membrane, they are able to constrict to enter blood vessels smaller in diameter than their normal size, bouncing back to their original shape on exiting the vessel the other side. This ability to safely enter very small openings allows capillaries to be very small. This in turn enables every cell in the body to be close to a capillary. Oxygen consequently only needs to diffuse a short distance from the blood to the surrounding tissue; this is vital as oxygen diffusion outside the bloodstream is very slow. Various pathologies, such as diabetes, peripheral vascular disease, and septic shock disturb this deformability of red blood cells, with deleterious consequences.”
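A quick sanity check of the ‘150,000 laps’ figure – one circuit of the circulation per minute over a three- to four-month lifespan – in a couple of lines of Python:

```python
# One circuit per minute over a red cell's three- to four-month lifespan.
minutes_per_day = 60 * 24
for lifespan_days in (90, 120):                 # roughly three and four months
    laps = minutes_per_day * lifespan_days
    print(f"{lifespan_days} days -> ~{laps:,} circuits")
# 90 days -> ~129,600 circuits; 120 days -> ~172,800 circuits,
# bracketing the ~150,000 laps mentioned in the quote.
```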

“Over thirty different substances, proteins and carbohydrates, contribute to an individual’s blood group. By far the best known are the ABO and Rhesus systems. This is not because the proteins and carbohydrates that comprise these particular blood group types are vitally important for red cell function, but rather because a failure to account for these types during a blood transfusion can have catastrophic consequences. The ABO blood group is sugar-based […] blood from an O person can be safely given to anyone (with no sugar antigens this person is a ‘universal’ donor). […] As all that is needed to convert A and B to O is to remove a sugar, there is commercial and medical interest in devising ways to do this […] the Rh system […] is protein-based rather than sugar based. […] Rh proteins sit in the lipid membrane of the cell and control the transport of molecules into and out of the cell, most probably carbon dioxide and ammonia. The situation is complex, with over thirty different subgroups relating to subtle differences in the protein structure.”

“Unlike the red cells, all white cell subtypes contain nuclei. Some also contain on their surface a set of molecules called the ‘major histocompatibility complex’ (MHC). In humans, these receptors are also called ‘human leucocyte antigens’ (HLA). Their role is to recognize fragments of protein from pathogens and trigger the immune response that will ultimately destroy the invaders. Crudely, white blood cells can be divided into those that attack ‘on sight’ any foreign material — whether it be a fragment of inanimate material such as a splinter or an invading microorganism — and those that form part of a defence mechanism that recognizes specific biomolecules and marshals a slower, but equally devastating response. […] cells of the non-specific (or innate) immune system […] are divided into those that have nuclei with multiple lobed shapes (polymorphonuclear leukocytes or PMN) and those that have a single lobe nucleus ([…] ‘mononuclear leucocytes‘ or ‘MN’). PMN contain granules inside them and so are sometimes called ‘granulocytes‘.”

“Neutrophils are by far the most abundant PMN, making up over half of the total white blood cell count. The primary role of a neutrophil is to engulf a foreign object such as an invading microorganism. […] Eosinophils and basophils are the least abundant PMN cell type, each making up less than 2 per cent of white blood cells. The role of basophils is to respond to tissue injury by triggering an inflammatory response. […] When activated, basophils and mast cells degranulate, releasing molecules such as histamine, leukotrienes, and cytokines. Some of these molecules trigger an increase in blood flow causing redness and heat in the damaged site, others sensitize the area to pain. Greater permeability of the blood vessels results in plasma leaking out of the vessels and into the surrounding tissue at an increased rate, causing swelling. […] This is probably an evolutionary adaption to prevent overuse of a damaged part of the body but also helps to bring white cells and proteins to the damaged, inflamed area. […] The main function of eosinophils is to tackle invaders too large to be engulfed by neutrophils, such as the multicellular parasitic tapeworms and nematodes. […] Monocytes are a type of mononuclear leucocyte (MN) making up about 5 per cent of white blood cells. They spend even less time in the circulation than neutrophils, generally less than ten hours, but their time in the blood circulation does not end in death. Instead, they are converted into a cell called a ‘macrophage’ […] Their role is similar to the neutrophil, […] the ultimate fate of both the red blood cell and the neutrophil is to be engulfed by a macrophage. An excess of monocytes in a blood count (monocytosis) is an indicator of chronic inflammation”.

“Blood has to flow freely. Therefore, the red cells, white cells, and platelets are all suspended in a watery solution called ‘plasma’. But plasma is more than just water. In fact if it were only water all the cells would burst. Plasma has to have a very similar concentration of molecules and ions as the cells. This is because cells are permeable to water. So if the concentration of dissolved substances in the plasma was significantly higher than that in the cells, water would flow from the cells to the plasma in an attempt to equalize this gradient by diluting the plasma; this would result in cell shrinkage. Even worse, if the concentration in the plasma was lower than in the cells, water would flow into the cells from the plasma, and the resulting pressure increase would burst the cells, releasing all their contents into the plasma in the process. […] Plasma contains much more than just the ions required to prevent cells bursting or shrinking. It also contains key components designed to assist in cellular function. The protein clotting factors that are part of the coagulation cascade are always present in low concentrations […] Low levels of antibodies, produced by the lymphocytes, circulate […] In addition to antibodies, the plasma contains C-reactive proteins, Mannose-binding lectin and complement proteins that function as ‘opsonins‘ […] A host of other proteins perform roles independent of oxygen delivery or immune defence. By far the most abundant protein in serum is albumin. […] Blood is the transport infrastructure for any molecule that needs to be moved around the body. Some, such as the water-soluble fuel glucose, and small hormones like insulin, dissolve freely in the plasma. Others that are less soluble hitch a ride on proteins [….] Dangerous reactive molecules, such as iron, are also bound to proteins, in this case transferrin.”

“Immunoglobulins are produced by B lymphocytes and either remain bound on the surface of the cell (as part of the B cell receptor) or circulate freely in the plasma (as antibodies). Whatever their location, their purpose is the same – to bind to and capture foreign molecules (antigens). […] To perform the twin role of binding the antigen and the phagocytosing cell, immunoglobulins need to have two distinct parts to their structure — one that recognizes the foreign antigen and one that can be recognized — and destroyed — by the host defence system. The host defence system does not vary; a specific type of immunoglobulin will be recognized by one of the relatively few types of immune cells or proteins. Therefore this part of the immunoglobulin structure is not variable. But the nature of the foreign antigen will vary greatly; so the antigen-recognizing part of the structure must be highly variable. It is this that leads to the great variety of immunoglobulins. […] within the blood there is an army of potential binding sites that can recognize and bind to almost any conceivable chemical structure. Such variety is why the body is able to adapt and kill even organisms it has never encountered before. Indeed the ability to make an immunoglobulin recognize almost any structure has resulted in antibody binding assays being used historically in diagnostic tests ranging from pregnancy to drugs testing.”

“[I]mmunoglobulins consist of two different proteins — a heavy chain and a light chain. In the human heavy chain there are about forty different V (variable) segments, twenty-five different D (Diversity) segments, and six J (Joining) segments. The light chain also contains variable V and J segments. A completed immunoglobulin has a heavy chain with only one V, D, and J segment, and a light chain with only one V and J segment. It is the shuffling of these segments during development of the mature B lymphocyte that creates the diversity required […] the hypervariable regions are particularly susceptible to mutation during development. […] A separate class of immunoglobulin-like molecules also provide the key to cell-to-cell communication in the immune system. In humans, with the exception of the egg and sperm cells, all cells that possess a nucleus also have a protein on their surface called ‘Human Leucocyte Antigen (HLA) Class I’. The function of HLA Class I is to display fragments (antigens) of all the proteins currently being made inside the cell. It therefore acts like a billboard displaying the current highlights of cellular activity. Any proteins recognized as non-self by cytotoxic T cell lymphocytes will result in the whole cell being targeted for destruction […]. Another form of HLA, Class II, is only present on the surface of specialized cells of the immune system termed antigen presenting cells. In contrast to HLA Class I, the surface of HLA Class II cells displays antigens that originate from outside of the cell.”
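The segment counts quoted above are enough to illustrate how much diversity segment shuffling alone can generate. In the sketch below the heavy-chain counts come from the quote, while the light-chain counts are merely assumed for illustration; the junctional diversity and hypermutation mentioned in the quote push the real repertoire far beyond these numbers:

```python
# Combinatorial diversity from V(D)J segment shuffling alone.
heavy_V, heavy_D, heavy_J = 40, 25, 6   # approximate counts from the quote
light_V, light_J = 40, 5                # illustrative light-chain counts (assumed)

heavy_combinations = heavy_V * heavy_D * heavy_J
light_combinations = light_V * light_J
print("heavy-chain combinations:", heavy_combinations)                   # 6,000
print("light-chain combinations:", light_combinations)                   # 200
print("paired combinations:", heavy_combinations * light_combinations)   # 1,200,000
```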

Galen.
Bloodletting.
Marcello Malpighi.
William Harvey. De Motu Cordis.
Andreas Vesalius. De humani corporis fabrica.
Ibn al-Nafis. Michael Servetus. Realdo Colombo. Andrea Cesalpino.
Pulmonary circulation.
Hematopoietic stem cell. Bone marrow. Erythropoietin.
Hemoglobin.
Anemia.
Peroxidase.
Lymphocytes. NK cells. Granzyme. B lymphocytes. T lymphocytes. Antibody/Immunoglobulin. Lymphoblast.
Platelet. Coagulation cascade. Fibrinogen. Fibrin. Thrombin. Haemophilia. Hirudin. Von Willebrand disease. Haemophilia A. -ll- B.
Tonicity. Colloid osmotic pressure.
Adaptive immune system. Vaccination. Variolation. Antiserum. Agostino Bassi. Muscardine. Louis Pasteur. Élie Metchnikoff. Paul Ehrlich.
Humoral immunity. Membrane attack complex.
Niels Kaj Jerne. David Talmage. Frank Burnet. Clonal selection theory. Peter Medawar.
Susumu Tonegawa.

June 2, 2018 Posted by | Biology, Books, Immunology, Medicine, Molecular biology | Leave a comment

Molecular biology (II)

Below I have added some more quotes and links related to the book’s coverage:

“[P]roteins are the most abundant molecules in the body except for water. […] Proteins make up half the dry weight of a cell whereas DNA and RNA make up only 3 per cent and 20 per cent respectively. […] The approximately 20,000 protein-coding genes in the human genome can, by alternative splicing, multiple translation starts, and post-translational modifications, produce over 1,000,000 different proteins, collectively called ‘the proteome‘. It is the size of the proteome and not the genome that defines the complexity of an organism. […] For simple organisms, such as viruses, all the proteins coded by their genome can be deduced from its sequence and these comprise the viral proteome. However for higher organisms the complete proteome is far larger than the genome […] For these organisms not all the proteins coded by the genome are found in any one tissue at any one time and therefore a partial proteome is usually studied. What are of interest are those proteins that are expressed in specific cell types under defined conditions.”

“Enzymes are proteins that catalyze or alter the rate of chemical reactions […] Enzymes can speed up reactions […] but they can also slow some reactions down. Proteins play a number of other critical roles. They are involved in maintaining cell shape and providing structural support to connective tissues like cartilage and bone. Specialized proteins such as actin and myosin are required [for] muscular movement. Other proteins act as ‘messengers’ relaying signals to regulate and coordinate various cell processes, e.g. the hormone insulin. Yet another class of protein is the antibodies, produced in response to foreign agents such as bacteria, fungi, and viruses.”

“Proteins are composed of amino acids. Amino acids are organic compounds with […] an amino group […] and a carboxyl group […] In addition, amino acids carry various side chains that give them their individual functions. The twenty-two amino acids found in proteins are called proteinogenic […] but other amino acids exist that are non-protein functioning. […] A peptide bond is formed between two amino acids by the removal of a water molecule. […] each individual unit in a peptide or protein is known as an amino acid residue. […] Chains of less than 50-70 amino acid residues are known as peptides or polypeptides and >50-70 as proteins, although many proteins are composed of more than one polypeptide chain. […] Proteins are macromolecules consisting of one or more strings of amino acids folded into highly specific 3D-structures. Each amino acid has a different size and carries a different side group. It is the nature of the different side groups that facilitates the correct folding of a polypeptide chain into a functional tertiary protein structure.”

“Atoms scatter the waves of X-rays mainly through their electrons, thus forming secondary or reflected waves. The pattern of X-rays diffracted by the atoms in the protein can be captured on a photographic plate or an image sensor such as a charge coupled device placed behind the crystal. The pattern and relative intensity of the spots on the diffraction image are then used to calculate the arrangement of atoms in the original protein. Complex data processing is required to convert the series of 2D diffraction or scatter patterns into a 3D image of the protein. […] The continued success and significance of this technique for molecular biology is witnessed by the fact that almost 100,000 structures of biological molecules have been determined this way, of which most are proteins.”

“The number of proteins in higher organisms far exceeds the number of known coding genes. The fact that many proteins carry out multiple functions but in a regulated manner is one way a complex proteome arises without increasing the number of genes. Proteins that performed a single role in the ancestral organism have acquired extra and often disparate functions through evolution. […] The active site of an enzyme employed in catalysis is only a small part of the protein, leaving spare capacity for acquiring a second function. […] The glycolytic pathway is involved in the breakdown of sugars such as glucose to release energy. Many of the highly conserved and ancient enzymes from this pathway have developed secondary or ‘moonlighting’ functions. Proteins often change their location in the cell in order to perform a ‘second job’. […] The limited size of the genome may not be the only evolutionary pressure for proteins to moonlight. Combining two functions in one protein can have the advantage of coordinating multiple activities in a cell, enabling it to respond quickly to changes in the environment without the need for lengthy transcription and translational processes.”

“Post-translational modifications (PTMs) […] is [a] process that can modify the role of a protein by addition of chemical groups to amino acids in the peptide chain after translation. Addition of phosphate groups (phosphorylation), for example, is a common mechanism for activating or deactivating an enzyme. Other common PTMs include addition of acetyl groups (acetylation), glucose (glucosylation), or methyl groups (methylation). […] Some additions are reversible, facilitating the switching between active and inactive states, and others are irreversible such as marking a protein for destruction by ubiquitin. [The difference between reversible and irreversible modifications can be quite important in pharmacology, and if you’re curious to know more, Coleman’s drug metabolism text provides great coverage of related topics – US.] Diseases caused by malfunction of these modifications highlight the importance of PTMs. […] in diabetes [h]igh blood glucose lead[s] to unwanted glucosylation of proteins. At the high glucose concentrations associated with diabetes, an unwanted irreversible chemical reaction binds the glucose to amino acid residues such as lysines exposed on the protein surface. The glucosylated proteins then behave badly, cross-linking themselves to the extracellular matrix. This is particularly dangerous in the kidney where it decreases function and can lead to renal failure.”

“Twenty thousand protein-coding genes make up the human genome but for any given cell only about half of these are expressed. […] Many genes get switched off during differentiation and a major mechanism for this is epigenetics. […] an epigenetic trait […] is ‘a stably heritable phenotype resulting from changes in the chromosome without alterations in the DNA sequence’. Epigenetics involves the chemical alteration of DNA by methyl or other small molecular groups to affect the accessibility of a gene by the transcription machinery […] Epigenetics can […] act on gene expression without affecting the stability of the genetic code by modifying the DNA, the histones in chromatin, or a whole chromosome. […] Epigenetic signatures are not only passed on to somatic daughter cells but they can also be transferred through the germline to the offspring. […] At first the evidence appeared circumstantial but more recent studies have provided direct proof of epigenetic changes involving gene methylation being inherited. Rodent models have provided mechanistic evidence. […] the importance of epigenetics in development is highlighted by the fact that low dietary folate, a nutrient essential for methylation, has been linked to higher risk of birth defects in the offspring.” […on the other hand, well…]

“The cell cycle is divided into phases […] Transition from G1 into S phase commits the cell to division and is therefore a very tightly controlled restriction point. Withdrawal of growth factors, insufficient nucleotides, or energy to complete DNA replication, or even a damaged template DNA, would compromise the process. Problems are therefore detected and the cell cycle halted by cell cycle inhibitors before the cell has committed to DNA duplication. […] The cell cycle inhibitors inactivate the kinases that promote transition through the phases, thus halting the cell cycle. […] The cell cycle can also be paused in S phase to allow time for DNA repairs to be carried out before cell division. The consequences of uncontrolled cell division are so catastrophic that evolution has provided complex checks and balances to maintain fidelity. The price of failure is apoptosis […] 50 to 70 billion cells die every day in a human adult by the controlled molecular process of apoptosis.”

“There are many diseases that arise because a particular protein is either absent or a faulty protein is produced. Administering a correct version of that protein can treat these patients. The first commercially available recombinant protein to be produced for medical use was human insulin to treat diabetes mellitus. […] (FDA) approved the recombinant insulin for clinical use in 1982. Since then over 300 protein-based recombinant pharmaceuticals have been licensed by the FDA and the European Medicines Agency (EMA) […], and many more are undergoing clinical trials. Therapeutic proteins can be produced in bacterial cells but more often mammalian cells such as the Chinese hamster ovary cell line and human fibroblasts are used as these hosts are better able to produce fully functional human protein. However, using mammalian cells is extremely expensive and an alternative is to use live animals or plants. This is called molecular pharming and is an innovative way of producing large amounts of protein relatively cheaply. […] In plant pharming, tobacco, rice, maize, potato, carrots, and tomatoes have all been used to produce therapeutic proteins. […] [One] class of proteins that can be engineered using gene-cloning technology is therapeutic antibodies. […] Therapeutic antibodies are designed to be monoclonal, that is, they are engineered so that they are specific for a particular antigen to which they bind, to block the antigen’s harmful effects. […] Monoclonal antibodies are at the forefront of biological therapeutics as they are highly specific and tend not to induce major side effects.”

“In gene therapy the aim is to restore the function of a faulty gene by introducing a correct version of that gene. […] a cloned gene is transferred into the cells of a patient. Once inside the cell, the protein encoded by the gene is produced and the defect is corrected. […] there are major hurdles to be overcome for gene therapy to be effective. One is the gene construct has to be delivered to the diseased cells or tissues. This can often be difficult […] Mammalian cells […] have complex mechanisms that have evolved to prevent unwanted material such as foreign DNA getting in. Second, introduction of any genetic construct is likely to trigger the patient’s immune response, which can be fatal […] once delivered, expression of the gene product has to be sustained to be effective. One approach to delivering genes to the cells is to use genetically engineered viruses constructed so that most of the viral genome is deleted […] Once inside the cell, some viral vectors such as the retroviruses integrate into the host genome […]. This is an advantage as it provides long-lasting expression of the gene product. However, it also poses a safety risk, as there is little control over where the viral vector will insert into the patient’s genome. If the insertion occurs within a coding gene, this may inactivate gene function. If it integrates close to transcriptional start sites, where promoters and enhancer sequences are located, inappropriate gene expression can occur. This was observed in early gene therapy trials [where some patients who got this type of treatment developed cancer as a result of it. A few more details here – US] […] Adeno-associated viruses (AAVs) […] are often used in gene therapy applications as they are non-infectious, induce only a minimal immune response, and can be engineered to integrate into the host genome […] However, AAVs can only carry a small gene insert and so are limited to use with genes that are of a small size. […] An alternative delivery system to viruses is to package the DNA into liposomes that are then taken up by the cells. This is safer than using viruses as liposomes do not integrate into the host genome and are not very immunogenic. However, liposome uptake by the cells can be less efficient, resulting in lower expression of the gene.”

Links:

One gene–one enzyme hypothesis.
Molecular chaperone.
Protein turnover.
Isoelectric point.
Gel electrophoresis. Polyacrylamide.
Two-dimensional gel electrophoresis.
Mass spectrometry.
Proteomics.
Peptide mass fingerprinting.
Worldwide Protein Data Bank.
Nuclear magnetic resonance spectroscopy of proteins.
Immunoglobulins. Epitope.
Western blot.
Immunohistochemistry.
Crystallin. β-catenin.
Protein isoform.
Prion.
Gene expression. Transcriptional regulation. Chromatin. Transcription factor. Gene silencing. Histone. NF-κB. Chromatin immunoprecipitation.
The agouti mouse model.
X-inactive specific transcript (Xist).
Cell cycle. Cyclin. Cyclin-dependent kinase.
Retinoblastoma protein pRb.
Cytochrome c. Caspase. Bcl-2 family. Bcl-2-associated X protein.
Hybridoma technology. Muromonab-CD3.
Recombinant vaccines and the development of new vaccine strategies.
Knockout mouse.
Adenovirus Vectors for Gene Therapy, Vaccination and Cancer Gene Therapy.
Genetically modified food. Bacillus thuringiensis. Golden rice.

May 29, 2018 Posted by | Biology, Books, Chemistry, Diabetes, Engineering, Genetics, Immunology, Medicine, Molecular biology, Pharmacology | Leave a comment

Molecular biology (I?)

“This is a great publication, considering the format. These authors in my opinion managed to get quite close to what I’d consider to be ‘the ideal level of coverage’ for books of this nature.”

The above was what I wrote in my short goodreads review of the book. In this post I’ve added some quotes from the first chapters of the book and some links to topics covered.

Quotes:

“Once the base-pairing double helical structure of DNA was understood it became apparent that by holding and preserving the genetic code DNA is the source of heredity. The heritable material must also be capable of faithful duplication every time a cell divides. The DNA molecule is ideal for this. […] The effort then concentrated on how the instructions held by the DNA were translated into the choice of the twenty different amino acids that make up proteins. […] George Gamov [yes, that George Gamov! – US] made the suggestion that information held in the four bases of DNA (A, T, C, G) must be read as triplets, called codons. Each codon, made up of three nucleotides, codes for one amino acid or a ‘start’ or ‘stop’ signal. This information, which determines an organism’s biochemical makeup, is known as the genetic code. An encryption based on three nucleotides means that there are sixty-four possible three-letter combinations. But there are only twenty amino acids that are universal. […] some amino acids can be coded for by more than one codon.”
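The ‘sixty-four possible three-letter combinations’ is just 4³ = 64, which a couple of lines of Python make explicit:

```python
# Enumerate all triplets over a four-letter alphabet: 4**3 = 64 codons,
# more than enough for twenty amino acids plus start/stop signals - hence
# the redundancy (several codons per amino acid) mentioned in the quote.
from itertools import product

bases = "ACGU"                      # RNA alphabet; the DNA version would use T
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]
print(len(codons))                  # 64
print(codons[:8])                   # ['AAA', 'AAC', 'AAG', 'AAU', ...]
```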

“The mechanism of gene expression whereby DNA transfers its information into proteins was determined in the early 1960s by Sydney Brenner, Francois Jacob, and Matthew Meselson. […] Francis Crick proposed in 1958 that information flowed in one direction only: from DNA to RNA to protein. This was called the ‘Central Dogma‘ and describes how DNA is transcribed into RNA, which then acts as a messenger carrying the information to be translated into proteins. Thus the flow of information goes from DNA to RNA to proteins and information can never be transferred back from protein to nucleic acid. DNA can be copied into more DNA (replication) or into RNA (transcription) but only the information in mRNA [messenger RNA] can be translated into protein”.

“The genome is the entire DNA contained within the forty-six chromosomes located in the nucleus of each human somatic (body) cell. […] The complete human genome is composed of over 3 billion bases and contains approximately 20,000 genes that code for proteins. This is much lower than earlier estimates of 80,000 to 140,000 and astonished the scientific community when revealed through human genome sequencing. Equally surprising was the finding that genomes of much simpler organisms sequenced at the same time contained a higher number of protein-coding genes than humans. […] It is now clear that the size of the genome does not correspond with the number of protein-coding genes, and these do not determine the complexity of an organism. Protein-coding genes can be viewed as ‘transcription units’. These are made up of sequences called exons that code for amino acids, and separated by non-coding sequences called introns. Associated with these are additional sequences termed promoters and enhancers that control the expression of that gene.”

“Some sections of the human genome code for RNA molecules that do not have the capacity to produce proteins. […] it is now becoming apparent that many play a role in controlling gene expression. Despite the importance of proteins, less than 1.5 per cent of the genome is made up of exon sequences. A recent estimate is that about 80 per cent of the genome is transcribed or involved in regulatory functions with the rest mainly composed of repetitive sequences. […] Satellite DNA […] is a short sequence repeated many thousands of times in tandem […] A second type of repetitive DNA is the telomere sequence. […] Their role is to prevent chromosomes from shortening during DNA replication […] Repetitive sequences can also be found distributed or interspersed throughout the genome. These repeats have the ability to move around the genome and are referred to as mobile or transposable DNA. […] Such movements can be harmful sometimes as gene sequences can be disrupted causing disease. […] The vast majority of transposable sequences are no longer able to move around and are considered to be ‘silent’. However, these movements have contributed, over evolutionary time, to the organization and evolution of the genome, by creating new or modified genes leading to the production of proteins with novel functions.”

“A very important property of DNA is that it can make an accurate copy of itself. This is necessary since cells die during the normal wear and tear of tissues and need to be replenished. […] DNA replication is a highly accurate process with an error occurring every 10,000 to 1 million bases in human DNA. This low frequency is because the DNA polymerases carry a proofreading function. If an incorrect nucleotide is incorporated during DNA synthesis, the polymerase detects the error and excises the incorrect base. Following excision, the polymerase reinserts the correct base and replication continues. Any errors that are not corrected through proofreading are repaired by an alternative mismatch repair mechanism. In some instances, proofreading and repair mechanisms fail to correct errors. These become permanent mutations after the next cell division cycle as they are no longer recognized as errors and are therefore propagated each time the DNA replicates.”

“DNA sequencing identifies the precise linear order of the nucleotide bases A, C, G, T, in a DNA fragment. It is possible to sequence individual genes, segments of a genome, or whole genomes. Sequencing information is fundamental in helping us understand how our genome is structured and how it functions. […] The Human Genome Project, which used Sanger sequencing, took ten years to sequence and cost 3 billion US dollars. Using high-throughput sequencing, the entire human genome can now be sequenced in a few days at a cost of 3,000 US dollars. These costs are continuing to fall, making it more feasible to sequence whole genomes. The human genome sequence published in 2003 was built from DNA pooled from a number of donors to generate a ‘reference’ or composite genome. However, the genome of each individual is unique and so in 2005 the Personal Genome Project was launched in the USA aiming to sequence and analyse the genomes of 100,000 volunteers across the world. Soon after, similar projects followed in Canada and Korea and, in 2013, in the UK. […] To store and analyze the huge amounts of data, computational systems have developed in parallel. This branch of biology, called bioinformatics, has become an extremely important collaborative research area for molecular biologists drawing on the expertise of computer scientists, mathematicians, and statisticians.”

“[T]he structure of RNA differs from DNA in three fundamental ways. First, the sugar is a ribose, whereas in DNA it is a deoxyribose. Secondly, in RNA the nucleotide bases are A, G, C, and U (uracil) instead of A, G, C, and T. […] Thirdly, RNA is a single-stranded molecule unlike double-stranded DNA. It is not helical in shape but can fold to form a hairpin or stem-loop structure by base-pairing between complementary regions within the same RNA molecule. These two-dimensional secondary structures can further fold to form complex three-dimensional, tertiary structures. An RNA molecule is able to interact not only with itself, but also with other RNAs, with DNA, and with proteins. These interactions, and the variety of conformations that RNAs can adopt, enables them to carry out a wide range of functions. […] RNAs can influence many normal cellular and disease processes by regulating gene expression. RNA interference […] is one of the main ways in which gene expression is regulated.”

“Translation of the mRNA to a protein takes place in the cell cytoplasm on ribosomes. Ribosomes are cellular structures made up primarily of rRNA and proteins. At the ribosomes, the mRNA is decoded to produce a specific protein according to the rules defined by the genetic code. The correct amino acids are brought to the mRNA at the ribosomes by molecules called transfer RNAs (tRNAs). […] At the start of translation, a tRNA binds to the mRNA at the start codon AUG. This is followed by the binding of a second tRNA matching the adjacent mRNA codon. The two neighbouring amino acids linked to the tRNAs are joined together by a chemical bond called the peptide bond. Once the peptide bond forms, the first tRNA detaches leaving its amino acid behind. The ribosome then moves one codon along the mRNA and a third tRNA binds. In this way, tRNAs sequentially bind to the mRNA as the ribosome moves from codon to codon. Each time a tRNA molecule binds, the linked amino acid is transferred to the growing amino acid chain. Thus the mRNA sequence is translated into a chain of amino acids connected by peptide bonds to produce a polypeptide chain. Translation is terminated when the ribosome encounters a stop codon […]. After translation, the chain is folded and very often modified by the addition of sugar or other molecules to produce fully functional proteins.”
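The decoding loop described above – read the mRNA three bases at a time from the start codon, look each codon up in the genetic code, stop at a stop codon – translates almost directly into code. The sketch below uses only a handful of codons for illustration; a real implementation would use the full 64-entry table:

```python
# Minimal sketch of translation: scan from the first AUG, decode codon by
# codon, and stop at a stop codon. Only a fragment of the genetic code is
# included here for illustration.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "GAU": "Asp", "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list:
    """Return the amino acid sequence encoded from the first AUG onwards."""
    start = mrna.find("AUG")
    if start == -1:
        return []                         # no start codon found
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("GGAUGUUUGGCAAAGAUUAAGG"))   # ['Met', 'Phe', 'Gly', 'Lys', 'Asp']
```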

“The naturally occurring RNAi pathway is now extensively exploited in the laboratory to study the function of genes. It is possible to design synthetic siRNA molecules with a sequence complementary to the gene under study. These double-stranded RNA molecules are then introduced into the cell by special techniques to temporarily knock down the expression of that gene. By studying the phenotypic effects of this severe reduction of gene expression, the function of that gene can be identified. Synthetic siRNA molecules also have the potential to be used to treat diseases. If a disease is caused or enhanced by a particular gene product, then siRNAs can be designed against that gene to silence its expression. This prevents the protein which drives the disease from being produced. […] One of the major challenges to the use of RNAi as therapy is directing siRNA to the specific cells in which gene silencing is required. If released directly into the bloodstream, enzymes in the bloodstream degrade siRNAs. […] Other problems are that siRNAs can stimulate the body’s immune response and can produce off-target effects by silencing RNA molecules other than those against which they were specifically designed. […] considerable attention is currently focused on designing carrier molecules that can transport siRNA through the bloodstream to the diseased cell.”
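The design step described here – making a synthetic siRNA ‘with a sequence complementary to the gene under study’ – essentially amounts to taking the reverse complement of a stretch of the target mRNA. A minimal sketch (the target sequence below is invented, and real siRNA design also has to weigh length, GC content, and off-target matches):

```python
# Guide strand of an siRNA = reverse complement of the targeted mRNA stretch.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def sirna_guide(target_mrna: str) -> str:
    """Return the reverse complement (guide strand) of a target mRNA stretch."""
    return "".join(COMPLEMENT[base] for base in reversed(target_mrna))

target = "AGCUGACCUUGAAGCUCAGAU"        # hypothetical 21-nt stretch of an mRNA
print(sirna_guide(target))              # AUCUGAGCUUCAAGGUCAGCU
```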

“Both Northern blotting and RT-PCR enable the expression of one or a few genes to be measured simultaneously. In contrast, the technique of microarrays allows gene expression to be measured across the full genome of an organism in a single step. This massive scale genome analysis technique is very useful when comparing gene expression profiles between two samples. […] This can identify gene subsets that are under- or over-expressed in one sample relative to the second sample to which it is compared.”
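A toy version of the two-sample comparison described above: compute a log2 fold change per gene and flag genes that look clearly over- or under-expressed. The numbers are invented, and real microarray analysis adds normalization, replicates, and multiple-testing correction:

```python
# Compare expression between two samples gene by gene via log2 fold change.
import math

expression = {              # gene -> (sample A signal, sample B signal); made up
    "GENE1": (120.0, 1150.0),
    "GENE2": (800.0, 790.0),
    "GENE3": (950.0, 60.0),
}

for gene, (a, b) in expression.items():
    log2_fc = math.log2(b / a)
    if abs(log2_fc) >= 1:                      # at least a two-fold difference
        direction = "up" if log2_fc > 0 else "down"
        print(f"{gene}: log2 fold change {log2_fc:+.2f} ({direction} in sample B)")
    else:
        print(f"{gene}: log2 fold change {log2_fc:+.2f} (no clear change)")
```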

Links:

Molecular biology.
Charles Darwin. Alfred Wallace. Gregor Mendel. Wilhelm Johannsen. Heinrich Waldeyer. Theodor Boveri. Walter Sutton. Friedrich Miescher. Phoebus Levene. Oswald Avery. Colin MacLeod. Maclyn McCarty. James Watson. Francis Crick. Rosalind Franklin. Andrew Fire. Craig Mello.
Gene. Genotype. Phenotype. Chromosome. Nucleotide. DNA. RNA. Protein.
Chargaff’s rules.
Photo 51.
Human Genome Project.
Long interspersed nuclear elements (LINEs). Short interspersed nuclear elements (SINEs).
Histone. Nucleosome.
Chromatin. Euchromatin. Heterochromatin.
Mitochondrial DNA.
DNA replication. Helicase. Origin of replication. DNA polymerase. Okazaki fragments. Leading strand and lagging strand. DNA ligase. Semiconservative replication.
Mutation. Point mutation. Indel. Frameshift mutation.
Genetic polymorphism. Single-nucleotide polymorphism (SNP).
Genome-wide association study (GWAS).
Molecular cloning. Restriction endonuclease. Multiple cloning site (MCS). Bacterial artificial chromosome.
Gel electrophoresis. Southern blot. Polymerase chain reaction (PCR). Reverse transcriptase PCR (RT-PCR). Quantitative PCR (qPCR).
GenBank. European Molecular Biology Laboratory (EMBL). Encyclopedia of DNA Elements (ENCODE).
RNA polymerase II. TATA box. Transcription factor IID. Stop codon.
Protein biosynthesis.
snRNA (small nuclear RNA).
Untranslated region (/UTR sequences).
Transfer RNA.
Micro RNA (miRNA).
Dicer (enzyme).
RISC (RNA-induced silencing complex).
Argonaute.
Lipid-Based Nanoparticles for siRNA Delivery in Cancer Therapy.
Long non-coding RNA.
Ribozyme/catalytic RNA.
RNA-sequencing (RNA-seq).

May 5, 2018 Posted by | Biology, Books, Chemistry, Genetics, Medicine, Molecular biology | Leave a comment

A few diabetes papers of interest

i. Economic Costs of Diabetes in the U.S. in 2017.

“This study updates previous estimates of the economic burden of diagnosed diabetes and quantifies the increased health resource use and lost productivity associated with diabetes in 2017. […] The total estimated cost of diagnosed diabetes in 2017 is $327 billion, including $237 billion in direct medical costs and $90 billion in reduced productivity. For the cost categories analyzed, care for people with diagnosed diabetes accounts for 1 in 4 health care dollars in the U.S., and more than half of that expenditure is directly attributable to diabetes. People with diagnosed diabetes incur average medical expenditures of ∼$16,750 per year, of which ∼$9,600 is attributed to diabetes. People with diagnosed diabetes, on average, have medical expenditures ∼2.3 times higher than what expenditures would be in the absence of diabetes. Indirect costs include increased absenteeism ($3.3 billion) and reduced productivity while at work ($26.9 billion) for the employed population, reduced productivity for those not in the labor force ($2.3 billion), inability to work because of disease-related disability ($37.5 billion), and lost productivity due to 277,000 premature deaths attributed to diabetes ($19.9 billion). […] After adjusting for inflation, economic costs of diabetes increased by 26% from 2012 to 2017 due to the increased prevalence of diabetes and the increased cost per person with diabetes. The growth in diabetes prevalence and medical costs is primarily among the population aged 65 years and older, contributing to a growing economic cost to the Medicare program.”

The paper includes a lot of detail about how they went about estimating these things, but I decided against including it all here – read the full paper if you’re interested. I did however want to add a few additional observations, so here goes:

“Absenteeism is defined as the number of work days missed due to poor health among employed individuals, and prior research finds that people with diabetes have higher rates of absenteeism than the population without diabetes. Estimates from the literature range from no statistically significant diabetes effect on absenteeism to studies reporting 1–6 extra missed work days (and odds ratios of more absences ranging from 1.5 to 3.3) (12–14). Analyzing 2014–2016 NHIS data and using a negative binomial regression to control for overdispersion in self-reported missed work days, we estimate that people with diabetes have statistically higher missed work days—ranging from 1.0 to 4.2 additional days missed per year by demographic group, or 1.7 days on average — after controlling for age-group, sex, race/ethnicity, diagnosed hypertension status (yes/no), and body weight status (normal, overweight, obese, unknown). […] Presenteeism is defined as reduced productivity while at work among employed individuals and is generally measured through worker responses to surveys. Multiple recent studies report that individuals with diabetes display higher rates of presenteeism than their peers without diabetes (12,15–17). […] We model productivity loss associated with diabetes-attributed presenteeism using the estimate (6.6%) from the 2012 study—which is toward the lower end of the 1.8–38% range reported in the literature. […] Reduced performance at work […] accounted for 30% of the indirect cost of diabetes.”
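For readers unfamiliar with the methodology mentioned in the quote: a negative binomial regression is a standard choice for count outcomes (like missed work days) whose variance exceeds their mean. The Python sketch below is emphatically not the study’s model – the data are simulated and the covariates are a made-up subset – it just shows what such a regression looks like in practice:

```python
# Illustrative negative binomial regression for overdispersed count data.
# All variables and coefficients below are invented for the simulation.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "diabetes": rng.binomial(1, 0.10, n),          # hypothetical diagnosis flag
    "age": rng.integers(18, 65, n),
    "female": rng.binomial(1, 0.5, n),
})
# Simulate overdispersed counts with a higher mean for the diabetes group.
mean_days = np.exp(0.3 + 0.35 * df["diabetes"] + 0.01 * (df["age"] - 40))
df["missed_days"] = rng.negative_binomial(n=2, p=2 / (2 + mean_days))

model = smf.glm("missed_days ~ diabetes + age + female", data=df,
                family=sm.families.NegativeBinomial()).fit()
print(model.summary().tables[1])
# exp(coefficient on 'diabetes') is the ratio of expected missed work days
# for people with vs without diabetes, holding the other covariates fixed.
```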

It is of note that even with a somewhat conservative estimate of presenteeism, this cost component is an order of magnitude larger than the absenteeism variable. It is worth keeping in mind that this ratio is likely to be different elsewhere; due to the way the American health care system is structured/financed – health insurance is to a significant degree linked to employment – you’d expect the estimated ratio to be different from what you might observe in countries like the UK or Denmark. Some more related numbers from the paper:

“Inability to work associated with diabetes is estimated using a conservative approach that focuses on unemployment related to long-term disability. Logistic regression with 2014–2016 NHIS data suggests that people aged 18–65 years with diabetes are significantly less likely to be in the workforce than people without diabetes. […] we use a conservative approach (which likely underestimates the cost associated with inability to work) to estimate the economic burden associated with reduced labor force participation. […] Study results suggest that people with diabetes have a 3.1 percentage point higher rate of being out of the workforce and receiving disability payments compared with their peers without diabetes. The diabetes effect increases with age and varies by demographic — ranging from 2.1 percentage points for non-Hispanic white males aged 60–64 years to 10.6 percentage points for non-Hispanic black females aged 55–59 years.”

“In 2017, an estimated 24.7 million people in the U.S. are diagnosed with diabetes, representing ∼7.6% of the total population (and 9.7% of the adult population). The estimated national cost of diabetes in 2017 is $327 billion, of which $237 billion (73%) represents direct health care expenditures attributed to diabetes and $90 billion (27%) represents lost productivity from work-related absenteeism, reduced productivity at work and at home, unemployment from chronic disability, and premature mortality. Particularly noteworthy is that excess costs associated with medications constitute 43% of the total direct medical burden. This includes nearly $15 billion for insulin, $15.9 billion for other antidiabetes agents, and $71.2 billion in excess use of other prescription medications attributed to higher disease prevalence associated with diabetes. […] A large portion of medical costs associated with diabetes costs is for comorbidities.”

Insulin is ~$15 billion/year, out of a total estimated cost of $327 billion – less than 5% of the total. Take note also of the ~$70 billion in excess use of other prescription medications. I know I’ve said this before, but it bears repeating: Most diabetes-related costs are not related to insulin.

“…of the projected 162 million hospital inpatient days in the U.S. in 2017, an estimated 40.3 million days (24.8%) are incurred by people with diabetes [who make up ~7.6% of the population – see above], of which 22.6 million days are attributed to diabetes. About one-fourth of all nursing/residential facility days are incurred by people with diabetes. About half of all physician office visits, emergency department visits, hospital outpatient visits, and medication prescriptions (excluding insulin and other antidiabetes agents) incurred by people with diabetes are attributed to their diabetes. […] The largest contributors to the cost of diabetes are higher use of prescription medications beyond antihyperglycemic medications ($71.2 billion), higher use of hospital inpatient services ($69.7 billion), medications and supplies to directly treat diabetes ($34.6 billion), and more office visits to physicians and other health providers ($30.0 billion). Approximately 61% of all health care expenditures attributed to diabetes are for health resources used by the population aged ≥65 years […] we estimate the average annual excess expenditures for the population aged <65 years and ≥65 years, respectively, at $6,675 and $13,239. Health care expenditures attributed to diabetes generally increase with age […] The population with diabetes is older and sicker than the population without diabetes, and consequently annual medical expenditures are much higher (on average) than for people without diabetes“.

“Of the estimated 24.7 million people with diagnosed diabetes, analysis of NHIS data suggests that ∼8.1 million are in the workforce. If people with diabetes participated in the labor force at rates similar to their peers without diabetes, there would be ∼2 million additional people aged 18–64 years in the workforce.”

“Comparing the 2017 estimates with those produced for 2012, the overall cost of diabetes appears to have increased by ∼25% after adjusting for inflation, reflecting an 11% increase in national prevalence of diagnosed diabetes and a 13% increase in the average annual diabetes-attributed cost per person with diabetes.”

ii. Current Challenges and Opportunities in the Prevention and Management of Diabetic Foot Ulcers.

“Diabetic foot ulcers remain a major health care problem. They are common, result in considerable suffering, frequently recur, and are associated with high mortality, as well as considerable health care costs. While national and international guidance exists, the evidence base for much of routine clinical care is thin. It follows that many aspects of the structure and delivery of care are susceptible to the beliefs and opinion of individuals. It is probable that this contributes to the geographic variation in outcome that has been documented in a number of countries. This article considers these issues in depth and emphasizes the urgent need to improve the design and conduct of clinical trials in this field, as well as to undertake systematic comparison of the results of routine care in different health economies. There is strong suggestive evidence to indicate that appropriate changes in the relevant care pathways can result in a prompt improvement in clinical outcomes.”

“Despite considerable advances made over the last 25 years, diabetic foot ulcers (DFUs) continue to present a very considerable health care burden — one that is widely unappreciated. DFUs are common, the median time to healing without surgery is of the order of 12 weeks, and they are associated with a high risk of limb loss through amputation (1–4). The 5-year survival following presentation with a new DFU is of the order of only 50–60% and hence worse than that of many common cancers (4,5). While there is evidence that mortality is improving with more widespread use of cardiovascular risk reduction (6), the most recent data — derived from a Veterans Health Administration population—reported that 1-, 2-, and 5-year survival was only 81, 69, and 29%, respectively, and the association between mortality and DFU was stronger than that of any macrovascular disease (7). […] There is […] wide variation in clinical outcome within the same country (13–15), suggesting that some people are being managed considerably less well than others.”

“Data on community-wide ulcer incidence are very limited. Overall incidences of 5.8 and 6.0% have been reported in selected populations of people with diabetes in the U.S. (2,12,20) while incidences of 2.1 and 2.2% have been reported from less selected populations in Europe—either in all people with diabetes (21) or in those with type 2 disease alone (22). It is not known whether the incidence is changing […] Although a number of risk factors associated with the development of ulceration are well recognized (23), there is no consensus on which dominate, and there are currently no reports of any studies that might justify the adoption of any specific strategy for population selection in primary prevention.”

“The incidence of major amputation is used as a surrogate measure of the failure of DFUs to heal. Its main value lies in the relative ease of data capture, but its value is limited because it is essentially a treatment and not a true measure of disease outcome. In no other major disease (including malignancies, cardiovascular disease, or cerebrovascular disease) is the number of treatments used as a measure of outcome. But despite this and other limitations of major amputation as an outcome measure (36), there is evidence that the overall incidence of major amputation is falling in some countries with nationwide databases (37,38). Perhaps the most convincing data come from the U.K., where the unadjusted incidence has fallen dramatically from about 3.0–3.5 per 1,000 people with diabetes per year in the mid-1990s to 1.0 or less per 1,000 per year in both England and Scotland (14,39).”

New ulceration after healing is high, with ∼40% of people having a new ulcer (whether at the same site or another) within 12 months (10). This is a critical aspect of diabetic foot disease—emphasizing that when an ulcer heals, foot disease must be regarded not as cured, but in remission (10). In this respect, diabetic foot disease is directly analogous to malignancy. It follows that the person whose foot disease is in remission should receive the same structured follow-up as a person who is in remission following treatment for cancer. Of all areas concerned with the management of DFUs, this long-term need for specialist surveillance is arguably the one that should command the greatest attention.

“There is currently little evidence to justify the adoption of very many of the products and procedures currently promoted for use in clinical practice. Guidelines are required to encourage clinicians to adopt only those treatments that have been shown to be effective in robust studies and principally in RCTs. The design and conduct of such RCTs needs improved governance because many are of low standard and do not always provide the evidence that is claimed.”

Incidence numbers like the ones included above will not always give you the full picture when there are a lot of overlapping data points in the sample (due to recurrence), but sometimes that's all you have. In the type 1 context, however, we do have some additional numbers that make it easier to appreciate the scale of the problem. Here are a few additional data from a related publication I blogged some time ago (do keep in mind that estimates are likely to be lower in community samples of type 2 diabetics, even if nobody actually knows precisely how much lower):

“The rate of nontraumatic amputation in T1DM is high, occurring at 0.4–7.2% per year (28). By 65 years of age, the cumulative probability of lower-extremity amputation in a Swedish administrative database was 11% for women with T1DM and 20.7% for men (10). In this Swedish population, the rate of lower-extremity amputation among those with T1DM was nearly 86-fold that of the general population.” (link)

Do keep in mind that people don't stop getting ulcers once they reach retirement age (the 11%/20.7% figures are not lifetime risks; they are downward-biased estimates of those, i.e. lower bounds).

iii. Excess Mortality in Patients With Type 1 Diabetes Without Albuminuria — Separating the Contribution of Early and Late Risks.

“The current study investigated whether the risk of mortality in patients with type 1 diabetes without any signs of albuminuria is different than in the general population and matched control subjects without diabetes.”

“Despite significant improvements in management, type 1 diabetes remains associated with an increase in mortality relative to the age- and sex-matched general population (1,2). Acute complications of diabetes may initially account for this increased risk (3,4). However, with increasing duration of disease, the leading contributor to excess mortality is its vascular complications including diabetic kidney disease (DKD) and cardiovascular disease (CVD). Consequently, patients who subsequently remain free of complications may have little or no increased risk of mortality (1,2,5).”

“Mortality was evaluated in a population-based cohort of 10,737 children (aged 0–14 years) with newly diagnosed type 1 diabetes in Finland who were listed on the National Public Health Institute diabetes register, Central Drug Register, and Hospital Discharge Register in 1980–2005 […] We excluded patients with type 2 diabetes and diabetes occurring secondary to other conditions, such as steroid use, Down syndrome, and congenital malformations of the pancreas. […] FinnDiane participants who died were more likely to be male, older, have a longer duration of diabetes, and later age of diabetes onset […]. Notably, none of the conventional variables associated with complications (e.g., HbA1c, hypertension, smoking, lipid levels, or AER) were associated with all-cause mortality in this cohort of patients without albuminuria. […] The most frequent cause of death in the FinnDiane cohort was IHD [ischaemic heart disease, US] […], largely driven by events in patients with long-standing diabetes and/or previously established CVD […]. The mortality rate ratio for IHD was 4.34 (95% CI 2.49–7.57, P < 0.0001). There remained a number of deaths due to acute complications of diabetes, including ketoacidosis and hypoglycemia. This was most significant in patients with a shorter duration of diabetes but still apparent in those with long-standing diabetes[…]. Notably, deaths due to “risk-taking behavior” were lower in adults with type 1 diabetes compared with matched individuals without diabetes: mortality rate ratio was 0.42 (95% CI 0.22–0.79, P = 0.006) […] This was largely driven by the 80% reduction (95% CI 0.06–0.66) in deaths due to alcohol and drugs in males with type 1 diabetes (Table 3). No reduction was observed in female patients (rate ratio 0.90 [95% CI 0.18–4.44]), although the absolute event rate was already more than seven times lower in Finnish women than in men.”

“The chief determinant of excess mortality in patients with type 1 diabetes is its complications. In the first 10 years of type 1 diabetes, the acute complications of diabetes dominate and result in excess mortality — more than twice that observed in the age- and sex-matched general population. This early excess explains why registry studies following patients with type 1 diabetes from diagnosis have consistently reported reduced life expectancy, even in patients free of chronic complications of diabetes (6–8). By contrast, studies of chronic complications, like FinnDiane and the Pittsburgh Epidemiology of Diabetes Complications Study (1,2), have followed participants with, usually, >10 years of type 1 diabetes at baseline. In these patients, the presence or absence of chronic complications of diabetes is critical for survival. In particular, the presence and severity of albuminuria (as a marker of vascular burden) is strongly associated with mortality outcomes in type 1 diabetes (1). […] the FinnDiane normoalbuminuric patients showed increased all-cause mortality compared with the control subjects without diabetes in contrast to when the comparison was made with the Finnish general population, as in our previous publication (1). Two crucial causes behind the excess mortality were acute diabetes complications and IHD. […] Comparisons with the general population, rather than matched control subjects, may overestimate expected mortality, diluting the SMR estimate”.

“Despite major improvements in the delivery of diabetes care and other technological advances, acute complications remain a major cause of death both in children and in adults with type 1 diabetes. Indeed, the proportion of deaths due to acute events has not changed significantly over the last 30 years. […] Even in patients with long-standing diabetes (>20 years), the risk of death due to hypoglycemia or ketoacidosis remains a constant companion. […] If it were possible to eliminate all deaths from acute events, the observed mortality rate would have been no different from the general population in the early cohort. […] In long-term diabetes, avoiding chronic complications may be associated with mortality rates comparable with those of the general population; although death from IHD remains increased, this is offset by reduced risk-taking behavior, especially in men.”

“It is well-known that CVD is strongly associated with DKD (15). However, in the current study, mortality from IHD remained higher in adults with type 1 diabetes without albuminuria compared with matched control subjects in both men and women. This is concordant with other recent studies also reporting increased mortality from CVD in patients with type 1 diabetes in the absence of DKD (7,8) and reinforces the need for aggressive cardiovascular risk reduction even in patients without signs of microvascular disease. However, it is important to note that the risk of death from CVD, though significant, is still at least 10-fold lower than observed in patients with albuminuria (1). Alcohol- and drug-related deaths were substantially lower in patients with type 1 diabetes compared with the age-, sex-, and region-matched control subjects. […] This may reflect a selection bias […] Nonparticipation in health studies is associated with poorer health, stress, and lower socioeconomic status (17,18), which are in turn associated with increased risk of premature mortality. It can be speculated that with inclusion of patients with risk-taking behavior, the mortality rate in patients with diabetes would be even higher and, consequently, the SMR would also be significantly higher compared with the general population. Selection of patients who despite long-standing diabetes remained free of albuminuria may also have included individuals more accepting of general health messages and less prone to depression and nihilism arising from treatment failure.”

I think the selection bias problem is likely to be quite significant, as these results don't really match what I've seen in the past. For example, a recent Norwegian study of young type 1 diabetics found high mortality in its sample, to a significant degree due to alcohol-related causes and suicide: “A relatively high proportion of deaths were related to alcohol. […] Death was related to alcohol in 15% of cases. SMR for alcohol-related death was 6.8 (95% CI 4.5–10.3), for cardiovascular death was 7.3 (5.4–10.0), and for violent death was 3.6 (2.3–5.3).” That doesn't sound much like the results of the study above, even though that study is also from Scandinavia. And in this study, which used data from diabetic organ donors, a large proportion of the included diabetics used illegal drugs: “we observed a high rate of illicit substance abuse: 32% of donors reported or tested positive for illegal substances (excluding marijuana), and multidrug use was common.”

Do keep in mind that one of the main reasons why 'alcohol-related' deaths are higher in diabetes is likely to be that 'drinking while diabetic' is a lot more risky than 'drinking while not diabetic'. On a related note, diabetics may not appreciate the level of risk they're actually exposed to while drinking, due to community norms etc., so there might be a disconnect between risk preferences and observed behaviour (i.e., a diabetic might be risk averse but still engage in risky behaviours because he doesn't know how risky those behaviours actually are).

Although the illicit-drugs study indicates that diabetics, at least in some samples, are not averse to engaging in risky behaviours, a note of caution is probably warranted in the alcohol context: high mortality from alcohol-mediated acute complications needn't be an indication that diabetics drink more than non-diabetics; that's a separate question, and you might see numbers like these even if diabetics in general drink less. And a young type 1 diabetic who suffers a cardiac arrhythmia secondary to long-standing nocturnal hypoglycemia and subsequently is found 'dead in bed' after a bout of drinking is conceptually very different from a 50-year-old alcoholic dying from a variceal bleed or acute pancreatitis. Parenthetically, if it is true that illicit drug use is common in type 1 diabetics, one reason might be that they are aware of the risks associated with alcohol (which is particularly nasty in terms of its metabolic/glycemic consequences in diabetes, compared to some other drugs) and so deliberately substitute it with drugs less likely to cause acute complications like severe hypoglycemic episodes or DKA (depending on the setting and the specifics, alcohol might contribute to both of these complications). If so, classical 'risk behaviours' may not always be 'risk behaviours' in diabetes. You need to be careful; this stuff's complicated.

iv. Are All Patients With Type 1 Diabetes Destined for Dialysis if They Live Long Enough? Probably Not.

“Over the past three decades there have been numerous innovations, supported by large outcome trials that have resulted in improved blood glucose and blood pressure control, ultimately reducing cardiovascular (CV) risk and progression to nephropathy in type 1 diabetes (T1D) (1,2). The epidemiological data also support the concept that 25–30% of people with T1D will progress to end-stage renal disease (ESRD). Thus, not everyone develops progressive nephropathy that ultimately requires dialysis or transplantation. This is a result of numerous factors […] Data from two recent studies reported in this issue of Diabetes Care examine the long-term incidence of chronic kidney disease (CKD) in T1D. Costacou and Orchard (7) examined a cohort of 932 people evaluated for 50-year cumulative kidney complication risk in the Pittsburgh Epidemiology of Diabetes Complications study. They used both albuminuria levels and ESRD/transplant data for assessment. By 30 years’ duration of diabetes, ESRD affected 14.5% and by 40 years it affected 26.5% of the group with onset of T1D between 1965 and 1980. For those who developed diabetes between 1950 and 1964, the proportions developing ESRD were substantially higher at 34.6% at 30 years, 48.5% at 40 years, and 61.3% at 50 years. The authors called attention to the fact that ESRD decreased by 45% after 40 years’ duration between these two cohorts, emphasizing the beneficial roles of improved glycemic control and blood pressure control. It should also be noted that at 40 years even in the later cohort (those diagnosed between 1965 and 1980), 57.3% developed >300 mg/day albuminuria (7).”

Numbers like these may seem like ancient history (data from the 60s and 70s), but it’s important to keep in mind that many type 1 diabetics are diagnosed in early childhood, and that they don’t ‘get better’ later on – if they’re still alive, they’re still diabetic. …And very likely macroalbuminuric, at least if they’re from Pittsburgh. I was diagnosed in ’87.

“Gagnum et al. (8), using data from a Norwegian registry, also examined the incidence of CKD development over a 42-year follow-up period in people with childhood-onset (<15 years of age) T1D (8). The data from the Norwegian registry noted that the cumulative incidence of ESRD was 0.7% after 20 years and 5.3% after 40 years of T1D. Moreover, the authors noted the risk of developing ESRD was lower in women than in men and did not identify any difference in risk of ESRD between those diagnosed with diabetes in 1973–1982 and those diagnosed in 1989–2012. They concluded that there is a very low incidence of ESRD among patients with childhood-onset T1D diabetes in Norway, with a lower risk in women than men and among those diagnosed at a younger age. […] Analyses of population-based studies, similar to the Pittsburgh and Norway studies, showed that after 30 years of T1D the cumulative incidences of ESRD were only 10% for those diagnosed with T1D in 1961–1984 and 3% for those diagnosed in 1985–1999 in Japan (11), 3.3% for those diagnosed with T1D in 1977–2007 in Sweden (12), and 7.8% for those diagnosed with T1D in 1965–1999 in Finland (13) (Table 1).”

Do note that ESRD (end stage renal disease) is not the same thing as DKD (diabetic kidney disease), and that e.g. many of the Norwegians who did not develop ESRD nevertheless likely have kidney complications from their diabetes. That 5.3% is not the proportion of diabetics in that cohort who developed diabetes-related kidney complications; it's the proportion who did and as a result needed dialysis or a kidney transplant in order not to die very soon. Do also keep in mind that both microalbuminuria and macroalbuminuria substantially increase the risk of cardiovascular disease and cardiac death. I recall a study which looked at the various endpoints and found that more diabetics with microalbuminuria eventually died of cardiovascular disease than ever developed kidney failure; cardiac risk goes up a lot long before end-stage renal disease develops. ESRD estimates thus don't capture the full risk profile, and even in terms of mortality risk alone the ESRD numbers perhaps account for less than half of the total risk attributable to DKD. One thing the ESRD diagnosis does have going for it is that it's a much more reliable indicator of significant pathology than is e.g. microalbuminuria (see e.g. this paper). The paper under discussion here is short and not very detailed, but the authors do briefly discuss these issues:

“…there is a substantive difference between the numbers of people with stage 3 CKD (estimated glomerular filtration rate [eGFR] 30–59 mL/min/1.73 m2) versus those with stages 4 and 5 CKD (eGFR <30 mL/min/1.73 m2): 6.7% of the National Health and Nutrition Examination Survey (NHANES) population compared with 0.1–0.3%, respectively (14). This is primarily because of competing risks, such as death from CV disease that occurs in stage 3 CKD; hence, only the survivors are progressing into stages 4 and 5 CKD. Overall, these studies are very encouraging. Since the 1980s, risk of ESRD has been greatly reduced, while risk of CKD progression persists but at a slower rate. This reduced ESRD rate and slowed CKD progression is largely due to improvements in glycemic and blood pressure control and probably also to the institution of RAAS blockers in more advanced CKD. These data portend even better future outcomes if treatment guidance is followed. […] many medications are effective in blood pressure control, but RAAS blockade should always be a part of any regimen when very high albuminuria is present.”
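
As a side note, the eGFR cut-offs in the quote follow the conventional CKD staging scheme. Here is a minimal sketch of that classification in Python; the full set of stage boundaries is the standard one and is my addition rather than something spelled out in the paper, but the 30–59 band (stage 3) and the <30 band (stages 4 and 5) match the figures quoted above:

```python
def ckd_stage(egfr):
    """Classify kidney function by eGFR (mL/min/1.73 m2) using conventional
    cut-offs; stage 3 corresponds to the 30-59 band and stages 4-5 to
    eGFR < 30 mentioned in the quote above."""
    if egfr >= 90:
        return "stage 1 (normal/high; CKD only if other markers of damage)"
    if egfr >= 60:
        return "stage 2 (mildly decreased)"
    if egfr >= 30:
        return "stage 3 (moderately decreased)"
    if egfr >= 15:
        return "stage 4 (severely decreased)"
    return "stage 5 (kidney failure)"

for value in (95, 50, 22, 8):
    print(value, "->", ckd_stage(value))
```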

v. New Understanding of β-Cell Heterogeneity and In Situ Islet Function.

“Insulin-secreting β-cells are heterogeneous in their regulation of hormone release. While long known, recent technological advances and new markers have allowed the identification of novel subpopulations, improving our understanding of the molecular basis for heterogeneity. This includes specific subpopulations with distinct functional characteristics, developmental programs, abilities to proliferate in response to metabolic or developmental cues, and resistance to immune-mediated damage. Importantly, these subpopulations change in disease or aging, including in human disease. […] We will discuss recent findings revealing functional β-cell subpopulations in the intact islet, the underlying basis for these identified subpopulations, and how these subpopulations may influence in situ islet function.”

I won’t cover this one in much detail, but this part was interesting:

“Gap junction (GJ) channels electrically couple β-cells within mouse and human islets (25), serving two main functions. First, GJ channels coordinate oscillatory dynamics in electrical activity and Ca2+ under elevated glucose or GLP-1, allowing pulsatile insulin secretion (26,27). Second, GJ channels lower spontaneous elevations in Ca2+ under low glucose levels (28). GJ coupling is also heterogeneous within the islet (29), leading to some β-cells being highly coupled and others showing negligible coupling. Several studies have examined how electrically heterogeneous cells interact via GJ channels […] This series of experiments indicate a “bistability” in islet function, where a threshold number of poorly responsive β-cells is sufficient to totally suppress islet function. Notably, when islets lacking GJ channels are treated with low levels of the KATP activator diazoxide or the GCK inhibitor mannoheptulose, a subpopulation of cells are silenced, presumably corresponding to the less functional population (30). Only diazoxide/mannoheptulose concentrations capable of silencing >40% of these cells will fully suppress Ca2+ elevations in normal islets. […] this indicates that a threshold number of poorly responsive cells can inhibit the whole islet. Thus, if there exists a threshold number of functionally competent β-cells (∼60–85%), then the islet will show coordinated elevations in Ca2+ and insulin secretion.

Below this threshold number, the islet will lack Ca2+ elevation and insulin secretion (Fig. 2). The precise threshold depends on the characteristics of the excitable and inexcitable populations: small numbers of inexcitable cells will increase the number of functionally competent cells required for islet activity, whereas small numbers of highly excitable cells will do the opposite. However, if GJ coupling is lowered, then inexcitable cells will exert a reduced suppression, also decreasing the threshold required. […] Paracrine communication between β-cells and other endocrine cells is also important for regulating insulin secretion. […] Little is known how these paracrine and juxtacrine mechanisms impact heterogeneous cells.”
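
To get an intuition for the threshold/'bistability' behaviour described above, here is a deliberately crude toy model. It is nothing like the biophysical models used in this literature; it simply mixes each cell's own excitability with the islet-wide average according to a coupling strength, and all numbers are made up for illustration:

```python
import random

def active_fraction(n_cells, frac_responsive, coupling, seed=0):
    """Toy mean-field caricature of an islet: each cell's drive mixes its own
    excitability (1 = responsive, 0 = poorly responsive) with the islet-wide
    average, weighted by the gap-junction coupling strength; a cell 'fires'
    if its drive exceeds 0.5. Returns the fraction of cells that fire."""
    rng = random.Random(seed)
    excitability = [1.0 if rng.random() < frac_responsive else 0.0
                    for _ in range(n_cells)]
    mean_exc = sum(excitability) / n_cells
    drives = [(1 - coupling) * e + coupling * mean_exc for e in excitability]
    return sum(d > 0.5 for d in drives) / n_cells

for frac in (0.35, 0.60, 0.85):
    coupled = active_fraction(10_000, frac, coupling=0.9)    # well-coupled islet
    uncoupled = active_fraction(10_000, frac, coupling=0.0)  # no GJ coupling
    print(f"{int(frac * 100)}% responsive cells: "
          f"coupled islet fires {coupled:.2f}, uncoupled islet fires {uncoupled:.2f}")
```

In the coupled case the toy islet is all-or-none: below a critical fraction of responsive cells nothing fires, above it everything fires together, whereas without coupling the responsive subpopulation simply does its own thing. That is roughly the flavour of the experimental findings summarized above, nothing more.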

vi. Closing in on the Mechanisms of Pulsatile Insulin Secretion.

“Insulin secretion from pancreatic islet β-cells occurs in a pulsatile fashion, with a typical period of ∼5 min. The basis of this pulsatility in mouse islets has been investigated for more than four decades, and the various theories have been described as either qualitative or mathematical models. In many cases the models differ in their mechanisms for rhythmogenesis, as well as other less important details. In this Perspective, we describe two main classes of models: those in which oscillations in the intracellular Ca2+ concentration drive oscillations in metabolism, and those in which intrinsic metabolic oscillations drive oscillations in Ca2+ concentration and electrical activity. We then discuss nine canonical experimental findings that provide key insights into the mechanism of islet oscillations and list the models that can account for each finding. Finally, we describe a new model that integrates features from multiple earlier models and is thus called the Integrated Oscillator Model. In this model, intracellular Ca2+ acts on the glycolytic pathway in the generation of oscillations, and it is thus a hybrid of the two main classes of models. It alone among models proposed to date can explain all nine key experimental findings, and it serves as a good starting point for future studies of pulsatile insulin secretion from human islets.”

This one covers material closely related to the study above, so if you find one of these papers interesting you might want to check out the other one as well. The paper is quite technical, but if you were wondering why people are interested in this kind of stuff, one reason is that there's good evidence at this point that insulin pulsatility is disturbed in type 2 diabetics, and it would be nice to know why so that new drugs can be developed to correct it.

April 25, 2018 Posted by | Biology, Cardiology, Diabetes, Epidemiology, Health Economics, Medicine, Nephrology, Pharmacology, Studies

Networks

I actually think this was a really nice book, considering the format – I gave it four stars on goodreads. One of the things I noticed people didn't like about it in the reviews is that it 'jumps' a bit in terms of topic coverage; it covers a wide variety of applications and analytical settings. I mostly don't consider this a weakness of the book – even if occasionally it does get a bit excessive – and I can definitely understand the authors' choice of approach; it's hard to illustrate the potential of the analytical techniques described in this book if you're not allowed to talk about all the areas in which they have been – or could gainfully be – applied. A related point is that many people who read the book might be familiar with the application of these tools in specific contexts but have perhaps not thought about the fact that similar methods are applied in many other areas (and they might all be a bit annoyed that the authors don't talk more about computer science applications, or foodweb analyses, or infectious disease applications, or perhaps sociometry…). Most of the book is about graph-theory-related stuff, but a very decent amount of the coverage deals with applications, in a broad sense of the word at least, rather than theory. The discussion of theoretical constructs in the book always felt to me driven to a large degree by their usefulness in specific contexts.

I have covered related topics before here on the blog, also quite recently; for example, there's at least some overlap between this book and Holland's book about complexity theory in the same series (I incidentally think these books go well together). As I found the book slightly difficult to blog, I decided against covering it in as much detail as I sometimes do with these texts, which means that I have left out the links I usually include in posts like these.

Below some quotes from the book.

“The network approach focuses all the attention on the global structure of the interactions within a system. The detailed properties of each element on its own are simply ignored. Consequently, systems as different as a computer network, an ecosystem, or a social group are all described by the same tool: a graph, that is, a bare architecture of nodes bounded by connections. […] Representing widely different systems with the same tool can only be done by a high level of abstraction. What is lost in the specific description of the details is gained in the form of universality – that is, thinking about very different systems as if they were different realizations of the same theoretical structure. […] This line of reasoning provides many insights. […] The network approach also sheds light on another important feature: the fact that certain systems that grow without external control are still capable of spontaneously developing an internal order. […] Network models are able to describe in a clear and natural way how self-organization arises in many systems. […] In the study of complex, emergent, and self-organized systems (the modern science of complexity), networks are becoming increasingly important as a universal mathematical framework, especially when massive amounts of data are involved. […] networks are crucial instruments to sort out and organize these data, connecting individuals, products, news, etc. to each other. […] While the network approach eliminates many of the individual features of the phenomenon considered, it still maintains some of its specific features. Namely, it does not alter the size of the system — i.e. the number of its elements — or the pattern of interaction — i.e. the specific set of connections between elements. Such a simplified model is nevertheless enough to capture the properties of the system. […] The network approach [lies] somewhere between the description by individual elements and the description by big groups, bridging the two of them. In a certain sense, networks try to explain how a set of isolated elements are transformed, through a pattern of interactions, into groups and communities.”

“[T]he random graph model is very important because it quantifies the properties of a totally random network. Random graphs can be used as a benchmark, or null case, for any real network. This means that a random graph can be used in comparison to a real-world network, to understand how much chance has shaped the latter, and to what extent other criteria have played a role. The simplest recipe for building a random graph is the following. We take all the possible pairs of vertices. For each pair, we toss a coin: if the result is heads, we draw a link; otherwise we pass to the next pair, until all the pairs are finished (this means drawing the link with a probability p = ½, but we may use whatever value of p). […] Nowadays [the random graph model] is a benchmark of comparison for all networks, since any deviations from this model suggest the presence of some kind of structure, order, regularity, and non-randomness in many real-world networks.”
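
The coin-tossing recipe translates almost directly into code. Here's a minimal sketch in Python (plain standard library, nothing book-specific; the sizes and the seed are arbitrary):

```python
import random

def random_graph(n, p=0.5, seed=None):
    """Erdős–Rényi-style construction: visit every possible pair of vertices
    and 'toss a coin', drawing an edge with probability p (p = 1/2 for a fair
    coin, but any value of p works)."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                edges.add((i, j))
    return edges

edges = random_graph(100, p=0.5, seed=1)
print(len(edges), "edges drawn out of", 100 * 99 // 2, "possible pairs")
```

A graph built this way can then serve as the null case against which a real-world network of the same size is compared.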

“…in networks, topology is more important than metrics. […] In the network representation, the connections between the elements of a system are much more important than their specific positions in space and their relative distances. The focus on topology is one of the biggest strengths of the network approach, useful whenever topology is more relevant than metrics. […] In social networks, the relevance of topology means that social structure matters. […] Sociology has classified a broad range of possible links between individuals […]. The tendency to have several kinds of relationships in social networks is called multiplexity. But this phenomenon appears in many other networks: for example, two species can be connected by different strategies of predation, two computers by different cables or wireless connections, etc. We can modify a basic graph to take into account this multiplexity, e.g. by attaching specific tags to edges. […] Graph theory [also] allows us to encode in edges more complicated relationships, as when connections are not reciprocal. […] If a direction is attached to the edges, the resulting structure is a directed graph […] In these networks we have both in-degree and out-degree, measuring the number of inbound and outbound links of a node, respectively. […] in most cases, relations display a broad variation of intensity [i.e. they are not binary/dichotomous]. […] Weighted networks may arise, for example, as a result of different frequencies of interactions between individuals or entities.”
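
A small sketch of how these refinements (direction, multiplexity tags, weights) can be encoded in practice, here using the networkx library, which is my choice and not something the book mentions; the names and numbers are made up:

```python
import networkx as nx

# A directed multigraph: each edge has a direction, a 'relation' tag
# (capturing multiplexity) and a weight (capturing interaction intensity).
G = nx.MultiDiGraph()
G.add_edge("Alice", "Bob", relation="colleague", weight=3)
G.add_edge("Alice", "Bob", relation="friend", weight=7)   # a second, parallel tie
G.add_edge("Bob", "Carol", relation="colleague", weight=1)

print("Alice's out-degree:", G.out_degree("Alice"))               # outbound links
print("Bob's in-degree:", G.in_degree("Bob"))                     # inbound links
print("Bob's weighted in-degree:", G.in_degree("Bob", weight="weight"))
for u, v, data in G.edges(data=True):
    print(u, "->", v, data)
```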

“An organism is […] the outcome of several layered networks and not only the deterministic result of the simple sequence of genes. Genomics has been joined by epigenomics, transcriptomics, proteomics, metabolomics, etc., the disciplines that study these layers, in what is commonly called the omics revolution. Networks are at the heart of this revolution. […] The brain is full of networks where various web-like structures provide the integration between specialized areas. In the cerebellum, neurons form modules that are repeated again and again: the interaction between modules is restricted to neighbours, similarly to what happens in a lattice. In other areas of the brain, we find random connections, with a more or less equal probability of connecting local, intermediate, or distant neurons. Finally, the neocortex — the region involved in many of the higher functions of mammals — combines local structures with more random, long-range connections. […] typically, food chains are not isolated, but interwoven in intricate patterns, where a species belongs to several chains at the same time. For example, a specialized species may predate on only one prey […]. If the prey becomes extinct, the population of the specialized species collapses, giving rise to a set of co-extinctions. An even more complicated case is where an omnivore species predates a certain herbivore, and both eat a certain plant. A decrease in the omnivore’s population does not imply that the plant thrives, because the herbivore would benefit from the decrease and consume even more plants. As more species are taken into account, the population dynamics can become more and more complicated. This is why a more appropriate description than ‘foodchains’ for ecosystems is the term foodwebs […]. These are networks in which nodes are species and links represent relations of predation. Links are usually directed (big fishes eat smaller ones, not the other way round). These networks provide the interchange of food, energy, and matter between species, and thus constitute the circulatory system of the biosphere.”

“In the cell, some groups of chemicals interact only with each other and with nothing else. In ecosystems, certain groups of species establish small foodwebs, without any connection to external species. In social systems, certain human groups may be totally separated from others. However, such disconnected groups, or components, are a strikingly small minority. In all networks, almost all the elements of the systems take part in one large connected structure, called a giant connected component. […] In general, the giant connected component includes not less than 90 to 95 per cent of the system in almost all networks. […] In a directed network, the existence of a path from one node to another does not guarantee that the journey can be made in the opposite direction. Wolves eat sheep, and sheep eat grass, but grass does not eat sheep, nor do sheep eat wolves. This restriction creates a complicated architecture within the giant connected component […] according to an estimate made in 1999, more than 90 per cent of the WWW is composed of pages connected to each other, if the direction of edges is ignored. However, if we take direction into account, the proportion of nodes mutually reachable is only 24 per cent, the giant strongly connected component. […] most networks are sparse, i.e. they tend to be quite frugal in connections. Take, for example, the airport network: the personal experience of every frequent traveller shows that direct flights are not that common, and intermediate stops are necessary to reach several destinations; thousands of airports are active, but each city is connected to less than 20 other cities, on average. The same happens in most networks. A measure of this is given by the mean number of connections of their nodes, that is, their average degree.”

“[A] puzzling contradiction — a sparse network can still be very well connected — […] attracted the attention of the Hungarian mathematicians […] Paul Erdős and Alfréd Rényi. They tackled it by producing different realizations of their random graph. In each of them, they changed the density of edges. They started with a very low density: less than one edge per node. It is natural to expect that, as the density increases, more and more nodes will be connected to each other. But what Erdős and Rényi found instead was a quite abrupt transition: several disconnected components coalesced suddenly into a large one, encompassing almost all the nodes. The sudden change happened at one specific critical density: when the average number of links per node (i.e. the average degree) was greater than one, then the giant connected component suddenly appeared. This result implies that networks display a very special kind of economy, intrinsic to their disordered structure: a small number of edges, even randomly distributed between nodes, is enough to generate a large structure that absorbs almost all the elements. […] Social systems seem to be very tightly connected: in a large enough group of strangers, it is not unlikely to find pairs of people with quite short chains of relations connecting them. […] The small-world property consists of the fact that the average distance between any two nodes (measured as the shortest path that connects them) is very small. Given a node in a network […], few nodes are very close to it […] and few are far from it […]: the majority are at the average — and very short — distance. This holds for all networks: starting from one specific node, almost all the nodes are at very few steps from it; the number of nodes within a certain distance increases exponentially fast with the distance. Another way of explaining the same phenomenon […] is the following: even if we add many nodes to a network, the average distance will not increase much; one has to increase the size of a network by several orders of magnitude to notice that the paths to new nodes are (just a little) longer. The small-world property is crucial to many network phenomena. […] The small-world property is something intrinsic to networks. Even the completely random Erdős-Renyi graphs show this feature. By contrast, regular grids do not display it. If the Internet was a chessboard-like lattice, the average distance between two routers would be of the order of 1,000 jumps, and the Net would be much slower [the authors note elsewhere that “The Internet is composed of hundreds of thousands of routers, but just about ten ‘jumps’ are enough to bring an information packet from one of them to any other.”] […] The key ingredient that transforms a structure of connections into a small world is the presence of a little disorder. No real network is an ordered array of elements. On the contrary, there are always connections ‘out of place’. It is precisely thanks to these connections that networks are small worlds. […] Shortcuts are responsible for the small-world property in many […] situations.”
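
The sudden appearance of the giant connected component at an average degree of one is easy to reproduce numerically. Here's a sketch using networkx (my choice of library; sizes, degrees, and seeds are arbitrary illustrations):

```python
import networkx as nx

def giant_component_fraction(n, avg_degree, seed=0):
    """Build a random (Erdős–Rényi) graph with the given average degree and
    return the share of nodes sitting in its largest connected component."""
    p = avg_degree / (n - 1)              # average degree is roughly p * (n - 1)
    G = nx.gnp_random_graph(n, p, seed=seed)
    return len(max(nx.connected_components(G), key=len)) / n

for k in (0.5, 0.9, 1.1, 1.5, 3.0):
    print(f"average degree {k}: giant component holds "
          f"{giant_component_fraction(2000, k):.2f} of the nodes")

# Small-world flavour: even a sparse random graph has short paths inside
# its giant component.
G = nx.gnp_random_graph(2000, 3.0 / 1999, seed=0)
core = G.subgraph(max(nx.connected_components(G), key=len))
print("average shortest path:", round(nx.average_shortest_path_length(core), 2))
```

Running something like this shows the fraction of nodes in the largest component jumping from a few per cent to most of the graph as the average degree crosses one, and the average shortest path staying small even though the graph is sparse.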

“Body size, IQ, road speed, and other magnitudes have a characteristic scale: that is, an average value that in the large majority of cases is a rough predictor of the actual value that one will find. […] While height is a homogeneous magnitude, the number of social connection[s] is a heterogeneous one. […] A system with this feature is said to be scale-free or scale-invariant, in the sense that it does not have a characteristic scale. This can be rephrased by saying that the individual fluctuations with respect to the average are too large for us to make a correct prediction. […] In general, a network with heterogeneous connectivity has a set of clear hubs. When a graph is small, it is easy to find whether its connectivity is homogeneous or heterogeneous […]. In the first case, all the nodes have more or less the same connectivity, while in the latter it is easy to spot a few hubs. But when the network to be studied is very big […] things are not so easy. […] the distribution of the connectivity of the nodes of the […] network […] is the degree distribution of the graph. […] In homogeneous networks, the degree distribution is a bell curve […] while in heterogeneous networks, it is a power law […]. The power law implies that there are many more hubs (and much more connected) in heterogeneous networks than in homogeneous ones. Moreover, hubs are not isolated exceptions: there is a full hierarchy of nodes, each of them being a hub compared with the less connected ones.”

“Looking at the degree distribution is the best way to check if a network is heterogeneous or not: if the distribution is fat tailed, then the network will have hubs and heterogeneity. A mathematically perfect power law is never found, because this would imply the existence of hubs with an infinite number of connections. […] Nonetheless, a strongly skewed, fat-tailed distribution is a clear signal of heterogeneity, even if it is never a perfect power law. […] While the small-world property is something intrinsic to networked structures, hubs are not present in all kind of networks. For example, power grids usually have very few of them. […] hubs are not present in random networks. A consequence of this is that, while random networks are small worlds, heterogeneous ones are ultra-small worlds. That is, the distance between their vertices is relatively smaller than in their random counterparts. […] Heterogeneity is not equivalent to randomness. On the contrary, it can be the signature of a hidden order, not imposed by a top-down project, but generated by the elements of the system. The presence of this feature in widely different networks suggests that some common underlying mechanism may be at work in many of them. […] the Barabási–Albert model gives an important take-home message. A simple, local behaviour, iterated through many interactions, can give rise to complex structures. This arises without any overall blueprint”.
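
A quick way to see the difference between homogeneous and heterogeneous connectivity is to grow a preferential-attachment (Barabási–Albert) graph and compare its degrees with a random graph of roughly the same average degree. A sketch with networkx (library choice and parameters are mine):

```python
import networkx as nx

n = 10_000
ba = nx.barabasi_albert_graph(n, m=3, seed=42)         # preferential attachment
er = nx.gnp_random_graph(n, p=6 / (n - 1), seed=42)    # random graph, similar mean degree

def degree_summary(G, label):
    degrees = [d for _, d in G.degree()]
    print(f"{label}: mean degree {sum(degrees) / len(degrees):.1f}, "
          f"max degree {max(degrees)}")

degree_summary(ba, "heterogeneous (Barabási–Albert)")
degree_summary(er, "homogeneous (random graph)")
# The two mean degrees are comparable, but the best-connected node of the
# preferential-attachment graph (its biggest hub) is typically far better
# connected than the best-connected node of the random graph.
```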

“Homogamy, the tendency of like to marry like, is very strong […] Homogamy is a specific instance of homophily: this consists of a general trend of like to link to like, and is a powerful force in shaping social networks […] assortative mixing [is] a special form of homophily, in which nodes tend to connect with others that are similar to them in the number of connections. By contrast [when] high- and low-degree nodes are more connected to each other [it] is called disassortative mixing. Both cases display a form of correlation in the degrees of neighbouring nodes. When the degrees of neighbours are positively correlated, then the mixing is assortative; when negatively, it is disassortative. […] In random graphs, the neighbours of a given node are chosen completely at random: as a result, there is no clear correlation between the degrees of neighbouring nodes […]. On the contrary, correlations are present in most real-world networks. Although there is no general rule, most natural and technological networks tend to be disassortative, while social networks tend to be assortative. […] Degree assortativity and disassortativity are just an example of the broad range of possible correlations that bias how nodes tie to each other.”
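
Degree assortativity is just the correlation between the degrees found at the two ends of each edge, and networkx has it built in. A small sketch (the graphs and parameters are chosen by me purely for illustration):

```python
import networkx as nx

# Positive coefficient -> assortative mixing (well-connected nodes attach to
# other well-connected nodes); negative -> disassortative mixing.
ba = nx.barabasi_albert_graph(3000, 3, seed=1)
er = nx.gnp_random_graph(3000, 6 / 2999, seed=1)

print("preferential-attachment graph:",
      round(nx.degree_assortativity_coefficient(ba), 3))
print("random graph (no systematic correlation expected):",
      round(nx.degree_assortativity_coefficient(er), 3))
```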

“[N]etworks (neither ordered lattices nor random graphs), can have both large clustering and small average distance at the same time. […] in almost all networks, the clustering of a node depends on the degree of that node. Often, the larger the degree, the smaller the clustering coefficient. Small-degree nodes tend to belong to well-interconnected local communities. Similarly, hubs connect with many nodes that are not directly interconnected. […] Central nodes usually act as bridges or bottlenecks […]. For this reason, centrality is an estimate of the load handled by a node of a network, assuming that most of the traffic passes through the shortest paths (this is not always the case, but it is a good approximation). For the same reason, damaging central nodes […] can impair radically the flow of a network. Depending on the process one wants to study, other definitions of centrality can be introduced. For example, closeness centrality computes the distance of a node to all others, and reach centrality factors in the portion of all nodes that can be reached in one step, two steps, three steps, and so on.”
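
Clustering and the centrality measures mentioned here are likewise a few lines of networkx. A sketch on a Watts–Strogatz small-world graph, which has the kind of local clustering the quote talks about (all parameters are arbitrary illustrations):

```python
import networkx as nx

G = nx.watts_strogatz_graph(n=1000, k=10, p=0.05, seed=7)  # clustered small world

# Clustering coefficient: how interconnected a node's neighbours are.
clustering = nx.clustering(G)
print("average clustering:", round(sum(clustering.values()) / len(clustering), 3))

# Betweenness centrality: the share of shortest paths passing through a node,
# i.e. the 'load' it would carry if traffic followed shortest paths.
betweenness = nx.betweenness_centrality(G, k=200, seed=7)   # sampled for speed
# Closeness centrality: based on a node's average distance to all other nodes.
closeness = nx.closeness_centrality(G)

hub = max(betweenness, key=betweenness.get)
print("most central node:", hub,
      "| betweenness:", round(betweenness[hub], 4),
      "| closeness:", round(closeness[hub], 3))
```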

“Domino effects are not uncommon in foodwebs. Networks in general provide the backdrop for large-scale, sudden, and surprising dynamics. […] most of the real-world networks show a double-edged kind of robustness. They are able to function normally even when a large fraction of the network is damaged, but suddenly certain small failures, or targeted attacks, bring them down completely. […] networks are very different from engineered systems. In an airplane, damaging one element is enough to stop the whole machine. In order to make it more resilient, we have to use strategies such as duplicating certain pieces of the plane: this makes it almost 100 per cent safe. In contrast, networks, which are mostly not blueprinted, display a natural resilience to a broad range of errors, but when certain elements fail, they collapse. […] A random graph of the size of most real-world networks is destroyed after the removal of half of the nodes. On the other hand, when the same procedure is performed on a heterogeneous network (either a map of a real network or a scale-free model of a similar size), the giant connected component resists even after removing more than 80 per cent of the nodes, and the distance within it is practically the same as at the beginning. The scene is different when researchers simulate a targeted attack […] In this situation the collapse happens much faster […]. However, now the most vulnerable is the second: while in the homogeneous network it is necessary to remove about one-fifth of its more connected nodes to destroy it, in the heterogeneous one this happens after removing the first few hubs. Highly connected nodes seem to play a crucial role, in both errors and attacks. […] hubs are mainly responsible for the overall cohesion of the graph, and removing a few of them is enough to destroy it.”
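
This robustness asymmetry (resilient to random failures, fragile to targeted attacks) is easy to simulate yourself. A sketch with networkx, removing nodes either at random or hubs-first from a heterogeneous graph and tracking the giant component (sizes and fractions are arbitrary):

```python
import random
import networkx as nx

def giant_fraction(G):
    """Relative size of the largest connected component."""
    if G.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(G), key=len)) / G.number_of_nodes()

def survives(G, fraction, targeted, seed=0):
    """Remove a fraction of nodes (at random, or hubs first) and report the
    relative size of the giant component that remains."""
    H = G.copy()
    n_remove = int(fraction * H.number_of_nodes())
    if targeted:
        victims = [n for n, _ in sorted(H.degree(), key=lambda x: x[1],
                                        reverse=True)[:n_remove]]
    else:
        victims = random.Random(seed).sample(list(H.nodes()), n_remove)
    H.remove_nodes_from(victims)
    return giant_fraction(H)

G = nx.barabasi_albert_graph(5000, 3, seed=3)   # heterogeneous network with hubs
for f in (0.05, 0.2, 0.5, 0.8):
    print(f"remove {int(f * 100):2d}% of nodes: "
          f"random errors -> {survives(G, f, targeted=False):.2f}, "
          f"targeted attack -> {survives(G, f, targeted=True):.2f}")
```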

“Studies of errors and attacks have shown that hubs keep different parts of a network connected. This implies that they also act as bridges for spreading diseases. Their numerous ties put them in contact with both infected and healthy individuals: so hubs become easily infected, and they infect other nodes easily. […] The vulnerability of heterogeneous networks to epidemics is bad news, but understanding it can provide good ideas for containing diseases. […] if we can immunize just a fraction, it is not a good idea to choose people at random. Most of the times, choosing at random implies selecting individuals with a relatively low number of connections. Even if they block the disease from spreading in their surroundings, hubs will always be there to put it back into circulation. A much better strategy would be to target hubs. Immunizing hubs is like deleting them from the network, and the studies on targeted attacks show that eliminating a small fraction of hubs fragments the network: thus, the disease will be confined to a few isolated components. […] in the epidemic spread of sexually transmitted diseases the timing of the links is crucial. Establishing an unprotected link with a person before they establish an unprotected link with another person who is infected is not the same as doing so afterwards.”
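
The immunization point can be illustrated with a crude spreading simulation: immunize a fixed budget of nodes either at random or hubs-first, seed an infection, and compare outbreak sizes. This is a toy SIR-style sketch with made-up parameters, averaged over a handful of stochastic runs, and not a serious epidemic model:

```python
import random
import networkx as nx

def outbreak_size(G, immunized, beta=0.2, seed=0):
    """Crude discrete-time SIR run: immunized nodes are removed up front, one
    random node is infected, and each infected node passes the infection to
    each susceptible neighbour with probability beta before recovering."""
    rng = random.Random(seed)
    H = G.copy()
    H.remove_nodes_from(immunized)
    infected = {rng.choice(list(H.nodes()))}
    recovered = set()
    while infected:
        newly = {v for u in infected for v in H.neighbors(u)
                 if v not in infected and v not in recovered and rng.random() < beta}
        recovered |= infected
        infected = newly - recovered
    return len(recovered)

def average_outbreak(G, immunized, runs=20):
    return sum(outbreak_size(G, immunized, seed=s) for s in range(runs)) / runs

G = nx.barabasi_albert_graph(3000, 3, seed=5)        # network with hubs
budget = 300                                         # immunize 10% of the nodes
at_random = random.Random(1).sample(list(G.nodes()), budget)
hubs_first = [n for n, _ in sorted(G.degree(), key=lambda x: x[1],
                                   reverse=True)[:budget]]

print("mean outbreak size, random immunization:", average_outbreak(G, at_random))
print("mean outbreak size, hub immunization:   ", average_outbreak(G, hubs_first))
```

With the same immunization budget, targeting the hubs typically confines the spread far more effectively than immunizing randomly chosen nodes, which is exactly the point made in the quote.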

April 3, 2018 Posted by | Biology, Books, Ecology, Engineering, Epidemiology, Genetics, Mathematics, Statistics

Marine Biology (II)

Below some observations and links related to the second half of the book’s coverage:

“[C]oral reefs occupy a very small proportion of the planet's surface – about 284,000 square kilometres – roughly equivalent to the size of Italy [yet they] are home to an incredible diversity of marine organisms – about a quarter of all marine species […]. Coral reef systems provide food for hundreds of millions of people, with about 10 per cent of all fish consumed globally caught on coral reefs. […] Reef-building corals thrive best at sea temperatures above about 23°C and few exist where sea temperatures fall below 18°C for significant periods of time. Thus coral reefs are absent at tropical latitudes where upwelling of cold seawater occurs, such as the west coasts of South America and Africa. […] they are generally restricted to areas of clear water less than about 50 metres deep. Reef-building corals are very intolerant of any freshening of seawater […] and so do not occur in areas exposed to intermittent influxes of freshwater, such as near the mouths of rivers, or in areas where there are high amounts of rainfall run-off. This is why coral reefs are absent along much of the tropical Atlantic coast of South America, which is exposed to freshwater discharge from the Amazon and Orinoco Rivers. Finally, reef-building corals flourish best in areas with moderate to high wave action, which keeps the seawater well aerated […]. Spectacular and productive coral reef systems have developed in those parts of the Global Ocean where this special combination of physical conditions converges […] Each colony consists of thousands of individual animals called polyps […] all reef-building corals have entered into an intimate relationship with plant cells. The tissues lining the inside of the tentacles and stomach cavity of the polyps are packed with photosynthetic cells called zooxanthellae, which are photosynthetic dinoflagellates […] Depending on the species, corals receive anything from about 50 per cent to 95 per cent of their food from their zooxanthellae. […] Healthy coral reefs are very productive marine systems. This is in stark contrast to the nutrient-poor and unproductive tropical waters adjacent to reefs. Coral reefs are, in general, roughly one hundred times more productive than the surrounding environment”.

“Overfishing constitutes a significant threat to coral reefs at this time. About an eighth of the world’s population – roughly 875 million people – live within 100 kilometres of a coral reef. Most of the people live in developing countries and island nations and depend greatly on fish obtained from coral reefs as a food source. […] Some of the fishing practices are very harmful. Once the large fish are removed from a coral reef, it becomes increasingly more difficult to make a living harvesting the more elusive and lower-value smaller fish that remain. Fishers thus resort to more destructive techniques such as dynamiting parts of the reef and scooping up the dead and stunned fish that float to the surface. People capturing fish for the tropical aquarium trade will often poison parts of the reef with sodium cyanide which paralyses the fish, making them easier to catch. An unfortunate side effect of this practice is that the poison kills corals. […] Coral reefs have only been seriously studied since the 1970s, which in most cases was well after human impacts had commenced. This makes it difficult to define what might actually constitute a ‘natural’ and healthy coral reef system, as would have existed prior to extensive human impacts.”

“Mangrove is a collective term applied to a diverse group of trees and scrubs that colonize protected muddy intertidal areas in tropical and subtropical regions, creating mangrove forests […] Mangroves are of great importance from a human perspective. The sheltered waters of a mangrove forest provide important nursery areas for juvenile fish, crabs, and shrimp. Many commercial fisheries depend on the existence of healthy mangrove forests, including blue crab, shrimp, spiny lobster, and mullet fisheries. Mangrove forests also stabilize the foreshore and protect the adjacent land from erosion, particularly from the effects of large storms and tsunamis. They also act as biological filters by removing excess nutrients and trapping sediment from land run-off before it enters the coastal environment, thereby protecting other habitats such as seagrass meadows and coral reefs. […] [However] mangrove forests are disappearing rapidly. In a twenty-year period between 1980 and 2000 the area of mangrove forest globally declined from around 20 million hectares to below 15 million hectares. In some specific regions the rate of mangrove loss is truly alarming. For example, Puerto Rico lost about 89 per cent of its mangrove forests between 1930 and 1985, while the southern part of India lost about 96 per cent of its mangroves between 1911 and 1989.”

“[A]bout 80 per cent of the entire volume of the Global Ocean, or roughly one billion cubic kilometres, consists of seawater with depths greater than 1,000 metres […] The deep ocean is a permanently dark environment devoid of sunlight, the last remnants of which cannot penetrate much beyond 200 metres in most parts of the Global Ocean, and no further than 800 metres or so in even the clearest oceanic waters. The only light present in the deep ocean is of biological origin […] Except in a few very isolated places, the deep ocean is a permanently cold environment, with sea temperatures ranging from about 2° to 4°C. […] Since there is no sunlight, there is no plant life, and thus no primary production of organic matter by photosynthesis. The base of the food chain in the deep ocean consists mostly of a ‘rain’ of small particles of organic material sinking down through the water column from the sunlit surface waters of the ocean. This reasonably constant rain of organic material is supplemented by the bodies of large fish and marine mammals that sink more rapidly to the bottom following death, and which provide sporadic feasts for deep-ocean bottom dwellers. […] Since food is a scarce commodity for deep-ocean fish, full advantage must be taken of every meal encountered. This has resulted in a number of interesting adaptations. Compared to fish in the shallow ocean, many deep-ocean fish have very large mouths capable of opening very wide, and often equipped with numerous long, sharp, inward-pointing teeth. […] These fish can capture and swallow whole prey larger than themselves so as not to pass up a rare meal simply because of its size. These fish also have greatly extensible stomachs to accommodate such meals.”

“In the pelagic environment of the deep ocean, animals must be able to keep themselves within an appropriate depth range without using up energy in their food-poor habitat. This is often achieved by reducing the overall density of the animal to that of seawater so that it is neutrally buoyant. Thus the tissues and bones of deep-sea fish are often rather soft and watery. […] There is evidence that deep-ocean organisms have developed biochemical adaptations to maintain the functionality of their cell membranes under pressure, including adjusting the kinds of lipid molecules present in membranes to retain membrane fluidity under high pressure. High pressures also affect protein molecules, often preventing them from folding up into the correct shapes for them to function as efficient metabolic enzymes. There is evidence that deep-ocean animals have evolved pressure-resistant variants of common enzymes that mitigate this problem. […] The pattern of species diversity of the deep-ocean benthos appears to differ from that of other marine communities, which are typically dominated by a small number of abundant and highly visible species which overshadow the presence of a large number of rarer and less obvious species which are also present. In the deep-ocean benthic community, in contrast, no one group of species tends to dominate, and the community consists of a high number of different species all occurring in low abundance. […] In general, species diversity increases with the size of a habitat – the larger the area of a habitat, the more species that have developed ways to successfully live in that habitat. Since the deep-ocean bottom is the largest single habitat on the planet, it follows that species diversity would be expected to be high.”

“Seamounts represent a special kind of biological hotspot in the deep ocean. […] In contrast to the surrounding flat, soft-bottomed abyssal plains, seamounts provide a complex rocky platform that supports an abundance of organisms that are distinct from the surrounding deep-ocean benthos. […] Seamounts support a great diversity of fish species […] This [has] triggered the creation of new deep-ocean fisheries focused on seamounts. […] [However these species are generally] very slow-growing and long-lived and mature at a late age, and thus have a low reproductive potential. […] Seamount fisheries have often been described as mining operations rather than sustainable fisheries. They typically collapse within a few years of the start of fishing and the trawlers then move on to other unexplored seamounts to maintain the fishery. The recovery of localized fisheries will inevitably be very slow, if achievable at all, because of the low reproductive potential of these deep-ocean fish species. […] Comparisons of 'fished' and 'unfished' seamounts have clearly shown the extent of habitat damage and loss of species diversity brought about by trawl fishing, with the dense coral habitats reduced to rubble over much of the area investigated. […] Unfortunately, most seamounts exist in areas beyond national jurisdiction, which makes it very difficult to regulate fishing activities on them, although some efforts are underway to establish international treaties to better manage and protect seamount ecosystems.”

“Hydrothermal vents are unstable and ephemeral features of the deep ocean. […] The lifespan of a typical vent is likely in the order of tens of years. Thus the rich communities surrounding vents have a very limited lifespan. Since many vent animals can live only near vents, and the distance between vent systems can be hundreds to thousands of kilometres, it is a puzzle as to how vent animals escape a dying vent and colonize other distant vents or newly created vents. […] Hydrothermal vents are [however] not the only source of chemical-laden fluids supporting unique chemosynthetic-based communities in the deep ocean. Hydrogen sulphide and methane also ooze from the ocean bottom at some locations at temperatures similar to the surrounding seawater. These so-called 'cold seeps' are often found along continental margins […] The communities associated with cold seeps are similar to hydrothermal vent communities […] Cold seeps appear to be more permanent sources of fluid compared to the ephemeral nature of hot water vents.”

“Seepage of crude oil into the marine environment occurs naturally from oil-containing geological formations below the seabed. It is estimated that around 600,000 tonnes of crude oil seeps into the marine environment each year, which represents almost half of all the crude oil entering the oceans. […] The human activities associated with exploring for and producing oil result in the release on average of an estimated 38,000 tonnes of crude oil into the oceans each year, which is about 6 per cent of the total anthropogenic input of oil into the oceans worldwide. Although small in comparison to natural seepage, crude oil pollution from this source can cause serious damage to coastal ecosystems because it is released near the coast and sometimes in very large, concentrated amounts. […] The transport of oil and oil products around the globe in tankers results in the release of about 150,000 tonnes of oil worldwide each year on average, or about 22 per cent of the total anthropogenic input. […] About 480,000 tonnes of oil make their way into the marine environment each year worldwide from leakage associated with the consumption of oil-derived products in cars and trucks, and to a lesser extent in boats. Oil lost from the operation of cars and trucks collects on paved urban areas from where it is washed off into streams and rivers, and from there into the oceans. Surprisingly, this represents the most significant source of human-derived oil pollution into the marine environment – about 72 per cent of the total. Because it is a very diffuse source of pollution, it is the most difficult to control.”

“Today it has been estimated that virtually all of the marine food resources in the Mediterranean sea have been reduced to less than 50 per cent of their original abundance […] The greatest impact has been on the larger predatory fish, which were the first to be targeted by fishers. […] It is estimated that, collectively, the European fish stocks of today are just one-tenth of their size in 1900. […] In 1950 the total global catch of marine seafood was just less than twenty million tonnes fresh weight. This increased steadily and rapidly until by the late 1980s more than eighty million tonnes were being taken each year […] Starting in the early 1990s, however, yields began to show signs of levelling off. […] By far the most heavily exploited marine fishery in the world is the Peruvian anchoveta (Engraulis ringens) fishery, which can account for 10 per cent or more of the global marine catch of seafood in any particular year. […] The anchoveta is a very oily fish, which makes it less desirable for direct consumption by humans. However, the high oil content makes it ideal for the production of fish meal and fish oil […] the demand for fish meal and fish oil is huge and about a third of the entire global catch of fish is converted into these products rather than consumed directly by humans. Feeding so much fish protein to livestock comes with a considerable loss of potential food energy (around 25 per cent) compared to if it was eaten directly by humans. This could be viewed as a potential waste of available energy for a rapidly growing human population […] around 90 per cent of the fish used to produce fish meal and oil is presently unpalatable to most people and thus unmarketable in large quantities as a human food”.

“On heavily fished areas of the continental shelves, the same parts of the sea floor can be repeatedly trawled many times per year. Such intensive bottom trawling causes great cumulative damage to seabed habitats. The trawls scrape and pulverize rich and complex bottom habitats built up over centuries by living organisms such as tube worms, cold-water corals, and oysters. These habitats are eventually reduced to uniform stretches of rubble and sand. For all intents and purposes these areas are permanently altered and become occupied by a much changed and much less rich community adapted to frequent disturbance.”

“The eighty million tonnes or so of marine seafood caught each year globally equates to about eleven kilograms of wild-caught marine seafood per person on the planet. […] What is perfectly clear […] on the basis of theory backed up by real data on marine fish catches, is that marine fisheries are now fully exploited and that there is little if any headroom for increasing the amount of wild-caught fish humans can extract from the oceans to feed a burgeoning human population. […] This conclusion is solidly supported by the increasingly precarious state of global marine fishery resources. The most recent information from the Food and Agriculture Organization of the United Nations (The State of World Fisheries and Aquaculture 2010) shows that over half (53 per cent) of all fish stocks are fully exploited – their current catches are at or close to their maximum sustainable levels of production and there is no scope for further expansion. Another 32 per cent are overexploited and in decline. Of the remaining 15 per cent of stocks, 12 per cent are considered moderately exploited and only 3 per cent underexploited. […] in the mid 1970s 40 per cent of all fish stocks were in [the moderately exploited or unexploited] category as opposed to around 15 per cent now. […] the real question is not so much whether we can get more fish from the sea but whether we can sustain the amount of fish we are harvesting at present”.

Links:

Scleractinia.
Atoll. Fringing reef. Barrier reef.
Corallivore.
Broadcast spawning.
Acanthaster planci.
Coral bleaching. Ocean acidification.
Avicennia germinans. Pneumatophores. Lenticel.
Photophore. Lanternfish. Anglerfish. Black swallower.
Deep scattering layer. Taylor column.
Hydrothermal vent. Black smokers and white smokers. Chemosynthesis. Siboglinidae.
Intertidal zone. Tides. Tidal range.
Barnacle. Mussel.
Clupeidae. Gadidae. Scombridae.

March 16, 2018 Posted by | Biology, Books, Chemistry, Ecology, Evolutionary biology, Geology | Leave a comment

Marine Biology (I)

This book was ‘okay’.

Some quotes and links related to the first half of the book below.

Quotes:

“The Global Ocean has come to be divided into five regional oceans – the Pacific, Atlantic, Indian, Arctic, and Southern Oceans […] These oceans are large, seawater-filled basins that share characteristic structural features […] The edge of each basin consists of a shallow, gently sloping extension of the adjacent continental land mass and is termed the continental shelf or continental margin. Continental shelves typically extend off-shore to depths of a couple of hundred metres and vary from several kilometres to hundreds of kilometres in width. […] At the outer edge of the continental shelf, the seafloor drops off abruptly and steeply to form the continental slope, which extends down to depths of 2–3 kilometres. The continental slope then flattens out and gives way to a vast expanse of flat, soft, ocean bottom — the abyssal plain — which extends over depths of about 3–5 kilometres and accounts for about 76 per cent of the Global Ocean floor. The abyssal plains are transected by extensive mid-ocean ridges—underwater mountain chains […]. Mid-ocean ridges form a continuous chain of mountains that extend linearly for 65,000 kilometres across the floor of the Global Ocean basins […]. In some places along the edges of the abyssal plains the ocean bottom is cut by narrow, oceanic trenches or canyons which plunge to extraordinary depths — 3–4 kilometres below the surrounding seafloor — and are thousands of kilometres long but only tens of kilometres wide. […] Seamounts are another distinctive and dramatic feature of ocean basins. Seamounts are typically extinct volcanoes that rise 1,000 or more metres above the surrounding ocean but do not reach the surface of the ocean. […] Seamounts generally occur in chains or clusters in association with mid-ocean ridges […] The Global Ocean contains an estimated 100,000 or so seamounts that rise more than 1,000 metres above the surrounding deep-ocean floor. […] on a planetary scale, the surface of the Global Ocean is moving in a series of enormous, roughly circular, wind-driven current systems, or gyres […] These gyres transport enormous volumes of water and heat energy from one part of an ocean basin to another”

“We now know that the oceans are literally teeming with life. Viruses […] are astoundingly abundant – there are around ten million viruses per millilitre of seawater. Bacteria and other microorganisms occur at concentrations of around 1 million per millilitre”

“The water in the oceans is in the form of seawater, a dilute brew of dissolved ions, or salts […] Chloride and sodium ions are the predominant salts in seawater, along with smaller amounts of other ions such as sulphate, magnesium, calcium, and potassium […] The total amount of dissolved salts in seawater is termed its salinity. Seawater typically has a salinity of roughly 35 – equivalent to about 35 grams of salts in one kilogram of seawater. […] Most marine organisms are exposed to seawater that, compared to the temperature extremes characteristic of terrestrial environments, ranges within a reasonably moderate range. Surface waters in tropical parts of ocean basins are consistently warm throughout the year, ranging from about 20–27°C […]. On the other hand, surface seawater in polar parts of ocean basins can get as cold as −1.9°C. Sea temperatures typically decrease with depth, but not in a uniform fashion. A distinct zone of rapid temperature transition is often present that separates warm seawater at the surface from cooler deeper seawater. This zone is called the thermocline layer […]. In tropical ocean waters the thermocline layer is a strong, well-defined and permanent feature. It may start at around 100 metres and be a hundred or so metres thick. Sea temperatures above the thermocline can be a tropical 25°C or more, but only 6–7°C just below the thermocline. From there the temperature drops very gradually with increasing depth. Thermoclines in temperate ocean regions are a more seasonal phenomenon, becoming well established in the summer as the sun heats up the surface waters, and then breaking down in the autumn and winter. Thermoclines are generally absent in the polar regions of the Global Ocean. […] As a rule of thumb, in the clearest ocean waters some light will penetrate to depths of 150-200 metres, with red light being absorbed within the first few metres and green and blue light penetrating the deepest. At certain times of the year in temperate coastal seas light may penetrate only a few tens of metres […] In the oceans, pressure increases by an additional atmosphere every 10 metres […] Thus, an organism living at a depth of 100 metres on the continental shelf experiences a pressure ten times greater than an organism living at sea level; a creature living at 5 kilometres depth on an abyssal plain experiences pressures some 500 times greater than at the surface”.
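The pressure rule of thumb in that last quote (roughly one extra atmosphere per 10 metres of depth) is simple enough to turn into a small calculator. A minimal sketch, using the approximate rule rather than a proper equation of state for seawater:

```python
# Approximate pressure at depth, using the rule of thumb from the excerpt:
# pressure rises by roughly one atmosphere for every 10 metres of seawater.

def total_pressure_atm(depth_m: float) -> float:
    """Total pressure in atmospheres at a given depth, including the one
    atmosphere already present at the sea surface."""
    return 1.0 + depth_m / 10.0

for depth in (0, 100, 1_000, 5_000):
    print(f"{depth:>5} m: ~{total_pressure_atm(depth):,.0f} atm")

# 100 m on the continental shelf -> ~11 atm (roughly ten times surface pressure);
# 5 km on an abyssal plain -> ~500 atm, the "some 500 times greater" of the excerpt.
```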

“With very few exceptions, dissolved oxygen is reasonably abundant throughout all parts of the Global Ocean. However, the amount of oxygen in seawater is much less than in air — seawater at 20°C contains about 5.4 millilitres of oxygen per litre of seawater, whereas air at this temperature contains about 210 millilitres of oxygen per litre. The colder the seawater, the more oxygen it contains […]. Oxygen is not distributed evenly with depth in the oceans. Oxygen levels are typically high in a thin surface layer 10–20 metres deep. Here oxygen from the atmosphere can freely diffuse into the seawater […] Oxygen concentration then decreases rapidly with depth and reaches very low levels, sometimes close to zero, at depths of around 200–1,000 metres. This region is referred to as the oxygen minimum zone […] This zone is created by the low rates of replenishment of oxygen diffusing down from the surface layer of the ocean, combined with the high rates of depletion of oxygen by decaying particulate organic matter that sinks from the surface and accumulates at these depths. Beneath the oxygen minimum zone, oxygen content increases again with depth such that the deep oceans contain quite high levels of oxygen, though not generally as high as in the surface layer. […] In contrast to oxygen, carbon dioxide (CO₂) dissolves readily in seawater. Some of it is then converted into carbonic acid (H₂CO₃), bicarbonate ion (HCO₃⁻), and carbonate ion (CO₃²⁻), with all four compounds existing in equilibrium with one another […] The pH of seawater is inversely proportional to the amount of carbon dioxide dissolved in it. […] the warmer the seawater, the less carbon dioxide it can absorb. […] Seawater is naturally slightly alkaline, with a pH ranging from about 7.5 to 8.5, and marine organisms have become well adapted to life within this stable pH range. […] In the oceans, carbon is never a limiting factor to marine plant photosynthesis and growth, as it is for terrestrial plants.”

“Since the beginning of the industrial revolution, the average pH of the Global Ocean has dropped by about 0.1 pH unit, making it 30 per cent more acidic than in pre-industrial times. […] As a result, more and more parts of the oceans are falling below a pH of 7.5 for longer periods of time. This trend, termed ocean acidification, is having profound impacts on marine organisms and the overall functioning of the marine ecosystem. For example, many types of marine organisms such as corals, clams, oysters, sea urchins, and starfish manufacture external shells or internal skeletons containing calcium carbonate. When the pH of seawater drops below about 7.5, calcium carbonate starts to dissolve, and thus the shells and skeletons of these organisms begin to erode and weaken, with obvious impacts on the health of the animal. Also, these organisms produce their calcium carbonate structures by combining calcium dissolved in seawater with carbonate ion. As the pH decreases, more of the carbonate ions in seawater become bound up with the increasing numbers of hydrogen ions, making fewer carbonate ions available to the organisms for shell-forming purposes. It thus becomes more difficult for these organisms to secrete their calcium carbonate structures and grow.”
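The ‘0.1 pH unit ≈ 30 per cent more acidic’ claim follows directly from pH being the negative base-10 logarithm of the hydrogen-ion concentration. A quick back-of-the-envelope check:

```python
# pH is the negative base-10 logarithm of the hydrogen-ion concentration,
# so a drop of 0.1 pH unit multiplies [H+] by 10**0.1.

factor = 10 ** 0.1
print(f"10^0.1 = {factor:.3f}")                                  # ~1.259
print(f"increase in [H+]: ~{(factor - 1) * 100:.0f} per cent")   # ~26 per cent
# ~26 per cent more hydrogen ions, commonly rounded to the "about 30 per cent
# more acidic" figure used for the post-industrial 0.1-unit pH drop.
```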

“Roughly half of the planet’s primary production — the synthesis of organic compounds by chlorophyll-bearing organisms using energy from the sun—is produced within the Global Ocean. On land the primary producers are large, obvious, and comparatively long-lived — the trees, shrubs, and grasses characteristic of the terrestrial landscape. The situation is quite different in the oceans where, for the most part, the primary producers are minute, short-lived microorganisms suspended in the sunlit surface layer of the oceans. These energy-fixing microorganisms — the oceans’ invisible forest — are responsible for almost all of the primary production in the oceans. […] A large amount, perhaps 30-50 per cent, of marine primary production is produced by bacterioplankton comprising tiny marine photosynthetic bacteria ranging from about 0.5 to 2 μm in size. […] light availability and the strength of vertical mixing are important factors limiting primary production in the oceans. Nutrient availability is the other main factor limiting the growth of primary producers. One important nutrient is nitrogen […] nitrogen is a key component of amino acids, which are the building blocks of proteins. […] Photosynthetic marine organisms also need phosphorus, which is a requirement for many important biological functions, including the synthesis of nucleic acids, a key component of DNA. Phosphorus in the oceans comes naturally from the erosion of rocks and soils on land, and is transported into the oceans by rivers, much of it in the form of dissolved phosphate (PO₄³⁻), which can be readily absorbed by marine photosynthetic organisms. […] Inorganic nitrogen and phosphorus compounds are abundant in deep-ocean waters. […] In practice, inorganic nitrogen and phosphorus compounds are not used up at exactly the same rate. Thus one will be depleted before the other and becomes the limiting nutrient at the time, preventing further photosynthesis and growth of marine primary producers until it is replenished. Nitrogen is often considered to be the rate-limiting nutrient in most oceanic environments, particularly in the open ocean. However, in coastal waters phosphorus is often the rate-limiting nutrient.”

“The overall pattern of primary production in the Global Ocean depends greatly on latitude […] In polar oceans primary production is a boom-and-bust affair driven by light availability. Here the oceans are well mixed throughout the year so nutrients are rarely limiting. However, during the polar winter there is no light, and thus no primary production is taking place. […] Although limited to a short seasonal pulse, the total amount of primary production can be quite high, especially in the polar Southern Ocean […] In tropical open oceans, primary production occurs at a low level throughout the year. Here light is never limiting but the permanent tropical thermocline prevents the mixing of deep, nutrient-rich seawater with the surface waters. […] open-ocean tropical waters are often referred to as ‘marine deserts’, with productivity […] comparable to a terrestrial desert. In temperate open-ocean regions, primary productivity is linked closely to seasonal events. […] Although occurring in a number of pulses, primary productivity in temperate oceans [is] similar to [that of] a temperate forest or grassland. […] Some of the most productive marine environments occur in coastal ocean above the continental shelves. This is the result of a phenomenon known as coastal upwelling which brings deep, cold, nutrient-rich seawater to the ocean surface, creating ideal conditions for primary productivity […], comparable to a terrestrial rainforest or cultivated farmland. These hotspots of marine productivity are created by wind acting in concert with the planet’s rotation. […] Coastal upwelling can occur when prevailing winds move in a direction roughly parallel to the edge of a continent so as to create offshore Ekman transport. Coastal upwelling is particularly prevalent along the west coasts of continents. […] Since coastal upwelling is dependent on favourable winds, it tends to be a seasonal or intermittent phenomenon and the strength of upwelling will depend on the strength of the winds. […] Important coastal upwelling zones around the world include the coasts of California, Oregon, northwest Africa, and western India in the northern hemisphere; and the coasts of Chile, Peru, and southwest Africa in the southern hemisphere. These regions are amongst the most productive marine ecosystems on the planet.”

“Considering the Global Ocean as a whole, it is estimated that total marine primary production is about 50 billion tonnes of carbon per year. In comparison, the total production of land plants, which can also be estimated using satellite data, is estimated at around 52 billion tonnes per year. […] Primary production in the oceans is spread out over a much larger surface area and so the average productivity per unit of surface area is much smaller than on land. […] the energy of primary production in the oceans flows to higher trophic levels through several different pathways of various lengths […]. Some energy is lost along each step of the pathway — on average the efficiency of energy transfer from one trophic level to the next is about 10 per cent. Hence, shorter pathways are more efficient. Via these pathways, energy ultimately gets transferred to large marine consumers such as large fish, marine mammals, marine turtles, and seabirds.”
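The point that shorter pathways are more efficient can be made concrete with the ~10 per cent transfer rule quoted above. A minimal sketch; the two pathway lengths below are illustrative assumptions, not figures from the book:

```python
# With roughly 10 per cent of the energy passed on at each trophic step,
# the fraction of primary production that reaches a top consumer falls off
# geometrically with the number of steps in the food chain.

TRANSFER_EFFICIENCY = 0.10  # ~10 per cent per trophic level (rule of thumb)

def fraction_delivered(transfers: int) -> float:
    """Fraction of primary production left after a given number of transfers."""
    return TRANSFER_EFFICIENCY ** transfers

# Illustrative pathway lengths (assumptions for the example, not from the book):
pathways = {
    "short chain, e.g. diatoms -> krill -> whale (2 transfers)": 2,
    "long chain, e.g. picoplankton -> microzooplankton -> copepods -> fish -> tuna (4 transfers)": 4,
}

for label, transfers in pathways.items():
    print(f"{label}: {fraction_delivered(transfers):.4%} of primary production")
# 1.0000% vs 0.0100% -- a hundred-fold difference in favour of the short pathway.
```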

“…it has been estimated that in the 17th century, somewhere between fifty million and a hundred million green turtles inhabited the Caribbean Sea, but numbers are now down to about 300,000. Since their numbers are now so low, their impact on seagrass communities is currently small, but in the past, green turtles would have been extraordinarily abundant grazers of seagrasses. It appears that in the past, green turtles thinned out seagrass beds, thereby reducing direct competition among different species of seagrass and allowing several species of seagrass to coexist. Without green turtles in the system, seagrass beds are generally overgrown monocultures of one dominant species. […] Seagrasses are of considerable importance to human society. […] It is therefore of great concern that seagrass meadows are in serious decline globally. In 2003 it was estimated that 15 per cent of the planet’s existing seagrass beds had disappeared in the preceding ten years. Much of this is the result of increasing levels of coastal development and dredging of the seabed, activities which release excessive amounts of sediment into coastal waters which smother seagrasses. […] The number of marine dead zones in the Global Ocean has roughly doubled every decade since the 1960s”.

“Sea ice is habitable because, unlike solid freshwater ice, it is a very porous substance. As sea ice forms, tiny spaces between the ice crystals become filled with a highly saline brine solution resistant to freezing. Through this process a three-dimensional network of brine channels and spaces, ranging from microscopic to several centimetres in size, is created within the sea ice. These channels are physically connected to the seawater beneath the ice and become colonized by a great variety of marine organisms. A significant amount of the primary production in the Arctic Ocean, perhaps up to 50 per cent in those areas permanently covered by sea ice, takes place in the ice. […] Large numbers of zooplanktonic organisms […] swarm about on the under surface of the ice, grazing on the ice community at the ice-seawater interface, and sheltering in the brine channels. […] These under-ice organisms provide the link to higher trophic levels in the Arctic food web […] They are an important food source for fish such as Arctic cod and glacial cod that graze along the bottom of the ice. These fish are in turn fed on by squid, seals, and whales.”

“[T]he Antarctic marine system consists of a ring of ocean about 10° of latitude wide – roughly 1,000 km. […] The Arctic and Antarctic marine systems can be considered geographic opposites. In contrast to the largely landlocked Arctic Ocean, the Southern Ocean surrounds the Antarctic continental land mass and is in open contact with the Atlantic, Indian, and Pacific Oceans. Whereas the Arctic Ocean is strongly influenced by river inputs, the Antarctic continent has no rivers, and so hard-bottomed seabed is common in the Southern Ocean, and there is no low-saline surface layer, as in the Arctic Ocean. Also, in contrast to the Arctic Ocean with its shallow, broad continental shelves, the Antarctic continental shelf is very narrow and steep. […] Antarctic waters are extremely nutrient rich, fertilized by a permanent upwelling of seawater that has its origins at the other end of the planet. […] This continuous upwelling of cold, nutrient-rich seawater, in combination with the long Antarctic summer day length, creates ideal conditions for phytoplankton growth, which drives the productivity of the Antarctic marine system. As in the Arctic, a well-developed sea-ice community is present. Antarctic ice algae are even more abundant and productive than in the Arctic Ocean because the sea ice is thinner, and there is thus more available light for photosynthesis. […] Antarctica’s most important marine species [is] the Antarctic krill […] Krill are very adept at surviving many months under starvation conditions — in the laboratory they can endure more than 200 days without food. During the winter months they lower their metabolic rate, shrink in body size, and revert back to a juvenile state. When food once again becomes abundant in the spring, they grow rapidly […] As the sea ice breaks up they leave the ice and begin feeding directly on the huge blooms of free-living diatoms […]. With so much food available they grow and reproduce quickly, and start to swarm in large numbers, often at densities in excess of 10,000 individuals per cubic metre — dense enough to colour the seawater a reddish-brown. Krill swarms are patchy and vary greatly in size […] Because the Antarctic marine system covers a large area, krill numbers are enormous, estimated at about 600 billion animals on average, or 500 million tonnes of krill. This makes Antarctic krill one of the most abundant animal species on the planet […] Antarctic krill are the main food source for many of Antarctica’s large marine animals, and a key link in a very short and efficient food chain […]. Krill comprise the staple diet of icefish, squid, baleen whales, leopard seals, fur seals, crabeater seals, penguins, and seabirds, including albatross. Thus, a very simple and efficient three-step food chain is in operation — diatoms eaten by krill in turn eaten by a suite of large consumers — which supports the large numbers of large marine animals living in the Southern Ocean.”

Links:

Ocean gyre. North Atlantic Gyre. Thermohaline circulation. North Atlantic Deep Water. Antarctic bottom water.
Cyanobacteria. Diatom. Dinoflagellate. Coccolithophore.
Trophic level.
Nitrogen fixation.
High-nutrient, low-chlorophyll regions.
Light and dark bottle method of measuring primary productivity. Carbon-14 method for estimating primary productivity.
Ekman spiral.
Peruvian anchoveta.
El Niño. El Niño–Southern Oscillation.
Copepod.
Dissolved organic carbon. Particulate organic matter. Microbial loop.
Kelp forest. Macrocystis. Sea urchin. Urchin barren. Sea otter.
Seagrass.
Green sea turtle.
Manatee.
Demersal fish.
Eutrophication. Harmful algal bloom.
Comb jelly. Asterias amurensis.
Great Pacific garbage patch.
Eelpout. Sculpin.
Polynya.
Crabeater seal.
Adélie penguin.
Anchor ice mortality.

March 13, 2018 Posted by | Biology, Books, Botany, Chemistry, Ecology, Geology, Zoology | Leave a comment

Systems Biology (III)

Some observations from chapter 4 below:

The need to maintain a steady state ensuring homeostasis is an essential concern in nature while the negative feedback loop is the fundamental way to ensure that this goal is met. The regulatory system determines the interdependences between individual cells and the organism, subordinating the former to the latter. In trying to maintain homeostasis, the organism may temporarily upset the steady state conditions of its component cells, forcing them to perform work for the benefit of the organism. […] On a cellular level signals are usually transmitted via changes in concentrations of reaction substrates and products. This simple mechanism is made possible due to the limited volume of each cell. Such signaling plays a key role in maintaining homeostasis and ensuring cellular activity. On the level of the organism signal transmission is performed by hormones and the nervous system. […] Most intracellular signal pathways work by altering the concentrations of selected substances inside the cell. Signals are registered by forming reversible complexes consisting of a ligand (reaction product) and an allosteric receptor complex. When coupled to the ligand, the receptor inhibits the activity of its corresponding effector, which in turn shuts down the production of the controlled substance ensuring the steady state of the system. Signals coming from outside the cell are usually treated as commands (covalent modifications), forcing the cell to adjust its internal processes […] Such commands can arrive in the form of hormones, produced by the organism to coordinate specialized cell functions in support of general homeostasis (in the organism). These signals act upon cell receptors and are usually amplified before they reach their final destination (the effector).”

“Each concentration-mediated signal must first be registered by a detector. […] Intracellular detectors are typically based on allosteric proteins. Allosteric proteins exhibit a special property: they have two stable structural conformations and can shift from one form to the other as a result of changes in ligand concentrations. […] The concentration of a product (or substrate) which triggers structural realignment in the allosteric protein (such as a regulatory enzyme) depends on the genetically-determined affinity of the active site to its ligand. Low affinity results in high target concentration of the controlled substance while high affinity translates into lower concentration […]. In other words, high concentration of the product is necessary to trigger a low-affinity receptor (and vice versa). Most intracellular regulatory mechanisms rely on noncovalent interactions. Covalent bonding is usually associated with extracellular signals, generated by the organism and capable of overriding the cell’s own regulatory mechanisms by modifying the sensitivity of receptors […]. Noncovalent interactions may be compared to requests while covalent signals are treated as commands. Signals which do not originate in the receptor’s own feedback loop but modify its affinity are known as steering signals […] Hormones which act upon cells are, by their nature, steering signals […] Noncovalent interactions — dependent on substance concentrations — impose spatial restrictions on regulatory mechanisms. Any increase in cell volume requires synthesis of additional products in order to maintain stable concentrations. The volume of a spherical cell is given as V = 4/3 π r³, where r indicates cell radius. Clearly, even a slight increase in r translates into a significant increase in cell volume, diluting any products dispersed in the cytoplasm. This implies that cells cannot expand without incurring great energy costs. It should also be noted that cell expansion reduces the efficiency of intracellular regulatory mechanisms because signals and substrates need to be transported over longer distances. Thus, cells are universally small, regardless of whether they make up a mouse or an elephant.”
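The cell-size argument is just the cubic growth of volume with radius, V = 4/3·π·r³. A minimal sketch of the dilution problem (the radii used are arbitrary illustrative values):

```python
import math

# How fast dilution bites as a spherical cell grows: volume scales with r**3.

def cell_volume(radius_um: float) -> float:
    """Volume of a spherical cell (cubic micrometres) for a given radius."""
    return (4.0 / 3.0) * math.pi * radius_um ** 3

r_small, r_large = 10.0, 12.0  # a 20 per cent increase in radius (illustrative values)
ratio = cell_volume(r_large) / cell_volume(r_small)
print(f"volume ratio: {ratio:.2f}")  # ~1.73

# To keep every dissolved product at the same concentration, the enlarged cell
# would have to synthesize ~73 per cent more of each of them -- one reason cells
# stay small regardless of the size of the organism they build.
```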

An effector is an element of a regulatory loop which counteracts changes in the regulated quantity […] Synthesis and degradation of biological compounds often involves numerous enzymes acting in sequence. The product of one enzyme is a substrate for another enzyme. With the exception of the initial enzyme, each step of this cascade is controlled by the availability of the supplied substrate […] The effector consists of a chain of enzymes, each of which depends on the activity of the initial regulatory enzyme […] as well as on the activity of its immediate predecessor which supplies it with substrates. The function of all enzymes in the effector chain is indirectly dependent on the initial enzyme […]. This coupling between the receptor and the first link in the effector chain is a universal phenomenon. It can therefore be said that the initial enzyme in the effector chain is, in fact, a regulatory enzyme. […] Most cell functions depend on enzymatic activity. […] It seems that a set of enzymes associated with a specific process which involves a negative feedback loop is the most typical form of an intracellular regulatory effector. Such effectors can be controlled through activation or inhibition of their associated enzymes.”

“The organism is a self-contained unit represented by automatic regulatory loops which ensure homeostasis. […] Effector functions are conducted by cells which are usually grouped and organized into tissues and organs. Signal transmission occurs by way of body fluids, hormones or nerve connections. Cells can be treated as automatic and potentially autonomous elements of regulatory loops, however their specific action is dependent on the commands issued by the organism. This coercive property of organic signals is an integral requirement of coordination, allowing the organism to maintain internal homeostasis. […] Activities of the organism are themselves regulated by their own negative feedback loops. Such regulation differs however from the mechanisms observed in individual cells due to its place in the overall hierarchy and differences in signal properties, including in particular:
• Significantly longer travel distances (compared to intracellular signals);
• The need to maintain hierarchical superiority of the organism;
• The relative autonomy of effector cells. […]
The relatively long distance travelled by the organism’s signals and their dilution (compared to intracellular ones) call for amplification. As a consequence, any errors or random distortions in the original signal may be drastically exacerbated. A solution to this problem comes in the form of encoding, which provides the signal with sufficient specificity while enabling it to be selectively amplified. […] a loudspeaker can […] assist in acoustic communication, but due to the lack of signal encoding it cannot compete with radios in terms of communication distance. The same reasoning applies to organism-originated signals, which is why information regarding blood glucose levels is not conveyed directly by glucose but instead by adrenalin, glucagon or insulin. Information encoding is handled by receptors and hormone-producing cells. Target cells are capable of decoding such signals, thus completing the regulatory loop […] Hormonal signals may be effectively amplified because the hormone itself does not directly participate in the reaction it controls — rather, it serves as an information carrier. […] strong amplification invariably requires encoding in order to render the signal sufficiently specific and unambiguous. […] Unlike organisms, cells usually do not require amplification in their internal regulatory loops — even the somewhat rare instances of intracellular amplification only increase signal levels by a small amount. Without the aid of an amplifier, messengers coming from the organism level would need to be highly concentrated at their source, which would result in decreased efficiency […] Most signals originating at the organism level travel with body fluids; however if a signal has to reach its destination very rapidly (for instance in muscle control) it is sent via the nervous system”.

“Two types of amplifiers are observed in biological systems:
1. cascade amplifier,
2. positive feedback loop. […]
A cascade amplifier is usually a collection of enzymes which perform their action by activation in strict sequence. This mechanism resembles multistage (sequential) synthesis or degradation processes, however instead of exchanging reaction products, amplifier enzymes communicate by sharing activators or by directly activating one another. Cascade amplifiers are usually contained within cells. They often consist of kinases. […] Amplification effects occurring at each stage of the cascade contribute to its final result. […] While the kinase amplification factor is estimated to be on the order of 10³, the phosphorylase cascade results in 10¹⁰-fold amplification. It is a stunning value, though it should also be noted that the hormones involved in this cascade produce particularly powerful effects. […] A positive feedback loop is somewhat analogous to a negative feedback loop, however in this case the input and output signals work in the same direction — the receptor upregulates the process instead of inhibiting it. Such upregulation persists until the available resources are exhausted.
Positive feedback loops can only work in the presence of a control mechanism which prevents them from spiraling out of control. They cannot be considered self-contained and only play a supportive role in regulation. […] In biological systems positive feedback loops are sometimes encountered in extracellular regulatory processes where there is a need to activate slowly-migrating components and greatly amplify their action in a short amount of time. Examples include blood coagulation and complement factor activation […] Positive feedback loops are often coupled to negative loop-based control mechanisms. Such interplay of loops may impart the signal with desirable properties, for instance by transforming a flat signal into a sharp spike required to overcome the activation threshold for the next stage in a signalling cascade. An example is the ejection of calcium ions from the endoplasmic reticulum in the phospholipase C cascade, itself subject to a negative feedback loop.”
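The amplification figures quoted above compound multiplicatively along a cascade. A minimal sketch of that arithmetic; only the order-of-magnitude figures (roughly 10³ per kinase stage, roughly 10¹⁰ for the whole phosphorylase cascade) come from the book, and the stage counts below are illustrative assumptions:

```python
import math

# How per-stage amplification compounds along an enzymatic cascade.
# The per-stage gain (~10**3 per kinase stage) and the ~10**10 overall figure
# come from the excerpt; the stage counts below are illustrative assumptions.

PER_STAGE_GAIN = 1e3  # ~10**3-fold amplification per strongly amplifying stage

def overall_gain(amplifying_stages: int, per_stage: float = PER_STAGE_GAIN) -> float:
    """Overall amplification of a cascade with multiplicative stages."""
    return per_stage ** amplifying_stages

for n in (1, 2, 3, 4):
    print(f"{n} amplifying stage(s): ~10^{math.log10(overall_gain(n)):.0f}-fold")
# Three to four strongly amplifying stages are enough to reach the ~10^10-fold
# overall gain quoted for the phosphorylase cascade.
```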

“Strong signal amplification carries an important drawback: it tends to “overshoot” its target activity level, causing wild fluctuations in the process it controls. […] Nature has evolved several means of signal attenuation. The most typical mechanism superimposes two regulatory loops which affect the same parameter but act in opposite directions. An example is the stabilization of blood glucose levels by two contradictory hormones: glucagon and insulin. Similar strategies are exploited in body temperature control and many other biological processes. […] The coercive properties of signals coming from the organism carry risks associated with the possibility of overloading cells. The regulatory loop of an autonomous cell must therefore include an “off switch”, controlled by the cell. An autonomous cell may protect itself against excessive involvement in processes triggered by external signals (which usually incur significant energy expenses). […] The action of such mechanisms is usually timer-based, meaning that they inactivate signals following a set amount of time. […] The ability to interrupt signals protects cells from exhaustion. Uncontrolled hormone-induced activity may have detrimental effects upon the organism as a whole. This is observed e.g. in the case of the Vibrio cholerae toxin which causes prolonged activation of intestinal epithelial cells by locking protein G in its active state (resulting in severe diarrhea which can dehydrate the organism).”
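The glucagon/insulin example can be caricatured as two negative feedback loops pulling the same variable in opposite directions. A toy discrete-time sketch; the set point, gains, and disturbances are invented for illustration and this is not meant as a physiological model:

```python
# Toy model of signal attenuation by two opposing regulatory loops acting on
# the same variable. The set point, gains, and disturbances are invented for
# illustration; this is a caricature of the glucagon/insulin example, not a
# physiological model.

SET_POINT = 5.0   # target "glucose" level (arbitrary units)
GAIN_DOWN = 0.3   # strength of the "insulin-like" loop (acts when the level is high)
GAIN_UP = 0.3     # strength of the "glucagon-like" loop (acts when the level is low)

def step(level: float, disturbance: float) -> float:
    """One time step: apply a disturbance, then let the appropriate loop correct it."""
    level += disturbance
    error = level - SET_POINT
    if error > 0:
        level -= GAIN_DOWN * error   # level too high -> pull it down
    else:
        level -= GAIN_UP * error     # level too low -> push it up (error is negative)
    return level

level = SET_POINT
disturbances = [2.0, 0.0, 0.0, -1.5, 0.0, 0.0, 0.0]  # a "meal", then a "fast"
for t, d in enumerate(disturbances):
    level = step(level, d)
    print(f"t={t}: level = {level:.2f}")
# The level is nudged back toward the set point from either direction --
# the attenuation-by-opposing-loops idea described in the quote.
```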

“Biological systems in which information transfer is affected by high entropy of the information source and ambiguity of the signal itself must include discriminatory mechanisms. These mechanisms usually work by eliminating weak signals (which are less specific and therefore introduce ambiguities). They create additional obstacles (thresholds) which the signals must overcome. A good example is the mechanism which eliminates the ability of weak, random antigens to activate lymphatic cells. It works by inhibiting blastic transformation of lymphocytes until a so-called receptor cap has accumulated on the surface of the cell […]. Only under such conditions can the activation signal ultimately reach the cell nucleus […] and initiate gene transcription. […] weak, reversible nonspecific interactions do not permit sufficient aggregation to take place. This phenomenon can be described as a form of discrimination against weak signals. […] Discrimination may also be linked to effector activity. […] Cell division is counterbalanced by programmed cell death. The most typical example of this process is apoptosis […] Each cell is prepared to undergo controlled death if required by the organism, however apoptosis is subject to tight control. Cells protect themselves against accidental triggering of the process via IAP proteins. Only strong proapoptotic signals may overcome this threshold and initiate cellular suicide”.

Simply knowing the sequences, structures or even functions of individual proteins does not provide sufficient insight into the biological machinery of living organisms. The complexity of individual cells and entire organisms calls for functional classification of proteins. This task can be accomplished with a proteome — a theoretical construct where individual elements (proteins) are grouped in a way which acknowledges their mutual interactions and interdependencies, characterizing the information pathways in a complex organism.
Most ongoing proteome construction projects focus on individual proteins as the basic building blocks […] [We would instead argue in favour of a model in which] [t]he basic unit of the proteome is one negative feedback loop (rather than a single protein) […]
Due to the relatively large number of proteins (between 25 and 40 thousand in the human organism), presenting them all on a single graph, with vertex lengths corresponding to the relative durations of interactions, would be unfeasible. This is why proteomes are often subdivided into functional subgroups such as the metabolome (proteins involved in metabolic processes), interactome (complex-forming proteins), kinomes (proteins which belong to the kinase family) etc.”

February 18, 2018 Posted by | Biology, Books, Chemistry, Genetics, Medicine, Molecular biology | Leave a comment

Systems Biology (II)

Some observations from the book’s chapter 3 below:

“Without regulation biological processes would become progressively more and more chaotic. In living cells the primary source of information is genetic material. Studying the role of information in biology involves signaling (i.e. spatial and temporal transfer of information) and storage (preservation of information). Regarding the role of the genome we can distinguish three specific aspects of biological processes: steady-state genetics, which ensures cell-level and body homeostasis; genetics of development, which controls cell differentiation and genesis of the organism; and evolutionary genetics, which drives speciation. […] The ever growing demand for information, coupled with limited storage capacities, has resulted in a number of strategies for minimizing the quantity of the encoded information that must be preserved by living cells. In addition to combinatorial approaches based on noncontiguous gene structure, self-organization plays an important role in cellular machinery. Nonspecific interactions with the environment give rise to coherent structures despite the lack of any overt information store. These mechanisms, honed by evolution and ubiquitous in living organisms, reduce the need to directly encode large quantities of data by adopting a systemic approach to information management.”

Information is commonly understood as a transferable description of an event or object. Information transfer can be either spatial (communication, messaging or signaling) or temporal (implying storage). […] The larger the set of choices, the lower the likelihood [of] making the correct choice by accident and — correspondingly — the more information is needed to choose correctly. We can therefore state that an increase in the cardinality of a set (the number of its elements) corresponds to an increase in selection indeterminacy. This indeterminacy can be understood as a measure of “a priori ignorance”. […] Entropy determines the uncertainty inherent in a given system and therefore represents the relative difficulty of making the correct choice. For a set of possible events it reaches its maximum value if the relative probabilities of each event are equal. Any information input reduces entropy — we can therefore say that changes in entropy are a quantitative measure of information. […] Physical entropy is highest in a state of equilibrium, i.e. lack of spontaneity (ΔG = 0) which effectively terminates the given reaction. Regulatory processes which counteract the tendency of physical systems to reach equilibrium must therefore oppose increases in entropy. It can be said that a steady inflow of information is a prerequisite of continued function in any organism. As selections are typically made at the entry point of a regulatory process, the concept of entropy may also be applied to information sources. This approach is useful in explaining the structure of regulatory systems which must be “designed” in a specific way, reducing uncertainty and enabling accurate, error-free decisions.

The fire ant exudes a pheromone which enables it to mark sources of food and trace its own path back to the colony. In this way, the ant conveys pathing information to other ants. The intensity of the chemical signal is proportional to the abundance of the source. Other ants can sense the pheromone from a distance of several (up to a dozen) centimeters and thus locate the source themselves. […] As can be expected, an increase in the entropy of the information source (i.e. the measure of ignorance) results in further development of regulatory systems — in this case, receptors capable of receiving signals and processing them to enable accurate decisions. Over time, the evolution of regulatory mechanisms increases their performance and precision. The purpose of various structures involved in such mechanisms can be explained on the grounds of information theory. The primary goal is to select the correct input signal, preserve its content and avoid or eliminate any errors.”
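The claim that uncertainty is maximal when all outcomes are equally likely, and that information input reduces it, is exactly what Shannon’s entropy formula H = −Σ pᵢ·log₂(pᵢ) captures. A small sketch (the example distributions are mine):

```python
import math

def shannon_entropy(probabilities) -> float:
    """Shannon entropy in bits: H = -sum(p * log2(p)) over nonzero p."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Four equally likely outcomes: maximal uncertainty for a four-element set.
print(f"uniform (0.25 x 4)          : {shannon_entropy([0.25] * 4):.2f} bits")            # 2.00

# A partial signal that makes one outcome much more likely reduces the entropy.
print(f"skewed  (0.7, 0.1, 0.1, 0.1): {shannon_entropy([0.7, 0.1, 0.1, 0.1]):.2f} bits")  # ~1.36

# A fully informative signal removes the uncertainty entirely.
print(f"certain (1, 0, 0, 0)        : {shannon_entropy([1.0, 0.0, 0.0, 0.0]):.2f} bits")  # 0.00
```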

Genetic information stored in nucleotide sequences can be expressed and transmitted in two ways:
a. via replication (in cell division);
b. via transcription and translation (also called gene expression […])
Both processes act as effectors and can be triggered by certain biological signals transferred on request.
Gene expression can be defined as a sequence of events which lead to the synthesis of proteins or their products required for a particular function. In cell division, the goal of this process is to generate a copy of the entire genetic code (S phase), whereas in gene expression only selected fragments of DNA (those involved in the requested function) are transcribed and translated. […] Transcription calls for exposing a section of the cell’s genetic code and although its product (RNA) is short-lived, it can be recreated on demand, just like a carbon copy of a printed text. On the other hand, replication affects the entire genetic material contained in the cell and must conform to stringent precision requirements, particularly as the size of the genome increases.”

The magnitude of effort involved in replication of genetic code can be visualized by comparing the DNA chain to a zipper […]. Assuming that the zipper consists of three pairs of interlocking teeth per centimeter (300 per meter) and that the human genome is made up of 3 billion […] base pairs, the total length of our uncoiled DNA in “zipper form” would be equal to […] 10,000 km […] If we were to unfasten the zipper at a rate of 1 m per second, the entire unzipping process would take approximately 3 months […]. This comparison should impress upon the reader the length of the DNA chain and the precision with which individual nucleotides must be picked to ensure that the resulting code is an exact copy of the source. It should also be noted that for each base pair the polymerase enzyme needs to select an appropriate matching nucleotide from among four types of nucleotides present in the solution, and attach it to the chain (clearly, no such problem occurs in zippers). The reliability of an average enzyme is on the order of 10⁻³–10⁻⁴, meaning that one error occurs for every 1,000–10,000 interactions between the enzyme and its substrate. Given this figure, replication of 3×10⁹ base pairs would introduce approximately 3 million errors (mutations) per genome, resulting in a highly inaccurate copy. Since the observed reliability of replication is far higher, we may assume that some corrective mechanisms are involved. Really, the remarkable precision of genetic replication is ensured by DNA repair processes, and in particular by the corrective properties of polymerase itself.
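The zipper numbers and the expected error count are easy to reproduce. A quick check using only figures from the quote above:

```python
# Back-of-the-envelope check of the "zipper" comparison and of the expected
# number of uncorrected replication errors, using only figures from the quote.

base_pairs = 3e9            # human genome, base pairs
pairs_per_metre = 300       # three interlocking pairs per centimetre
unzip_speed_m_per_s = 1.0   # one metre of zipper opened per second

zipper_length_m = base_pairs / pairs_per_metre
print(f"zipper length : {zipper_length_m / 1000:,.0f} km")                            # 10,000 km
print(f"unzipping time: ~{zipper_length_m / unzip_speed_m_per_s / 86400:.0f} days")   # ~116 days

# Expected errors for an enzyme with reliability between 10^-3 and 10^-4 per interaction:
for error_rate in (1e-3, 1e-4):
    errors = base_pairs * error_rate
    print(f"error rate {error_rate:.0e}: ~{errors:,.0f} mutations per genome copy")
# ~3 million errors at 10^-3 -- hence the need for proofreading and DNA repair.
```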

Many mutations are caused by the inherent chemical instability of nucleic acids: for example, cytosine may spontaneously convert to uracil. In the human genome such an event occurs approximately 100 times per day; however uracil is not normally encountered in DNA and its presence alerts defensive mechanisms which correct the error. Another type of mutation is spontaneous depurination, which also triggers its own, dedicated error correction procedure. Cells employ a large number of corrective mechanisms […] DNA repair mechanisms may be treated as an “immune system” which protects the genome from loss or corruption of genetic information. The unavoidable mutations which sometimes occur despite the presence of error-correction mechanisms can be masked due to doubled presentation (alleles) of genetic information. Thus, most mutations are recessive and not expressed in the phenotype. As the length of the DNA chain increases, mutations become more probable. It should be noted that the number of nucleotides in DNA is greater than the relative number of amino acids participating in polypeptide chains. This is due to the fact that each amino acid is encoded by exactly three nucleotides — a general principle which applies to all living organisms. […] Fidelity is, of course, fundamentally important in DNA replication as any harmful mutations introduced in its course are automatically passed on to all successive generations of cells. In contrast, transcription and translation processes can be more error-prone as their end products are relatively short-lived. Of note is the fact that faulty transcripts appear in relatively low quantities and usually do not affect cell functions, since regulatory processes ensure continued synthesis of the required substances until a suitable level of activity is reached. Nevertheless, it seems that reliable transcription of genetic material is sufficiently significant for cells to have developed appropriate proofreading mechanisms, similar to those which assist replication. […] the entire information pathway — starting with DNA and ending with active proteins — is protected against errors. We can conclude that fallibility is an inherent property of genetic information channels, and that in order to perform their intended function, these channels require error correction mechanisms.”

The discrete nature of genetic material is an important property which distinguishes prokaryotes from eukaryotes. […] The ability to select individual nucleotide fragments and construct sequences from predetermined “building blocks” results in high adaptability to environmental stimuli and is a fundamental aspect of evolution. The discontinuous nature of genes is evidenced by the presence of fragments which do not convey structural information (introns), as opposed to structure-encoding fragments (exons). The initial transcript (pre-mRNA) contains introns as well as exons. In order to provide a template for protein synthesis, it must undergo further processing (also known as splicing): introns must be cleaved and exon fragments attached to one another. […] Recognition of intron-exon boundaries is usually very precise, while the reattachment of adjacent exons is subject to some variability. Under certain conditions, alternative splicing may occur, where the ordering of the final product does not reflect the order in which exon sequences appear in the source chain. This greatly increases the number of potential mRNA combinations and thus the variety of resulting proteins. […] While access to energy sources is not a major problem, sources of information are usually far more difficult to manage — hence the universal tendency to limit the scope of direct (genetic) information storage. Reducing the length of genetic code enables efficient packing and enhances the efficiency of operations while at the same time decreasing the likelihood of errors. […] The number of genes identified in the human genome is lower than the number of distinct proteins by a factor of 4; a difference which can be attributed to alternative splicing. […] This mechanism increases the variety of protein structures without affecting core information storage, i.e. DNA sequences. […] Primitive organisms often possess nearly as many genes as humans, despite the essential differences between both groups. Interspecies diversity is primarily due to the properties of regulatory sequences.”
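The way alternative splicing multiplies transcript variety without enlarging the genome can be illustrated with a toy combinatorial example; the exon layout below is invented purely for illustration:

```python
from itertools import combinations

# Toy illustration of how alternative splicing multiplies transcript variety.
# The exon layout is invented: two constitutive exons (E1, E5) flank three
# optional "cassette" exons (E2-E4) that may each be included or skipped.

constitutive_start, constitutive_end = "E1", "E5"
cassette_exons = ["E2", "E3", "E4"]

isoforms = []
for k in range(len(cassette_exons) + 1):
    for included in combinations(cassette_exons, k):
        isoforms.append([constitutive_start, *included, constitutive_end])

print(f"{len(isoforms)} possible mRNA isoforms from a single gene:")
for isoform in isoforms:
    print("  " + "-".join(isoform))
# 2**3 = 8 isoforms from one stretch of DNA -- one way a genome with roughly
# four times fewer genes than distinct proteins can still encode them all.
```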

The discontinuous nature of genes is evolutionarily advantageous but comes at the expense of having to maintain a nucleus where such splicing processes can be safely conducted, in addition to efficient transport channels allowing transcripts to penetrate the nuclear membrane. While it is believed that at early stages of evolution RNA was the primary repository of genetic information, its present function can best be described as an information carrier. Since unguided proteins cannot ensure sufficient specificity of interaction with nucleic acids, protein-RNA complexes are used often in cases where specific fragments of genetic information need to be read. […] The use of RNA in protein complexes is common across all domains of the living world as it bridges the gap between discrete and continuous storage of genetic information.”

Epigenetic differentiation mechanisms are particularly important in embryonic development. […] Unlike the function of mature organisms, embryonic programming refers to structures which do not yet exist but which need to be created through cell proliferation and differentiation. […] Differentiation of cells results in phenotypic changes. This phenomenon is the primary difference between development genetics and steady-state genetics. Functional differences are not, however, associated with genomic changes: instead they are mediated by the transcriptome where certain genes are preferentially selected for transcription while others are suppressed. […] In a mature, specialized cell only a small portion of the transcribable genome is actually expressed. The remainder of the cell’s genetic material is said to be silenced. Gene silencing is a permanent condition. Under normal circumstances mature cells never alter their function, although such changes may be forced in a laboratory setting […] Cells which make up the embryo at a very early stage of development are pluripotent, meaning that their purpose can be freely determined and that all of their genetic information can potentially be expressed (under certain conditions). […] At each stage of the development process the scope of pluripotency is reduced until, ultimately, the cell becomes monopotent. Monopotency implies that the final function of the cell has already been determined, although the cell itself may still be immature. […] functional dissimilarities between specialized cells are not associated with genetic mutations but rather with selective silencing of genes. […] Most genes which determine biological functions have a biallelic representation (i.e. a representation consisting of two alleles). The remainder (approximately 10 % of genes) is inherited from one specific parent, as a result of partial or complete silencing of their sister alleles (called paternal or maternal imprinting) which occurs during gametogenesis. The suppression of a single copy of the X chromosome is a special case of this phenomenon.”

Evolutionary genetics is subject to two somewhat contradictory criteria. On the one hand, there is clear pressure on accurate and consistent preservation of biological functions and structures while on the other hand it is also important to permit gradual but persistent changes. […] the observable progression of adaptive traits which emerge as a result of evolution suggests a mechanism which promotes constructive changes over destructive ones. Mutational diversity cannot be considered truly random if it is limited to certain structures or functions. […] Approximately 50 % of the human genome consists of mobile segments, capable of migrating to various positions in the genome. These segments are called transposons and retrotransposons […] The mobility of genome fragments not only promotes mutations (by increasing the variability of DNA) but also affects the stability and packing of chromatin strands wherever such mobile sections are reintegrated with the genome. Under normal circumstances the activity of mobile sections is tempered by epigenetic mechanisms […]; however in certain situations gene mobility may be upregulated. In particular, it seems that in “prehistoric” (remote evolutionary) times such events occurred at a much faster pace, accelerating the rate of genetic changes and promoting rapid evolution. Cells can actively promote mutations by way of the so-called AID process (activity-dependent cytosine deamination). It is an enzymatic mechanism which converts cytosine into uracil, thereby triggering repair mechanisms and increasing the likelihood of mutations […] The existence of AID proves that cells themselves may trigger evolutionary changes and that the role of mutations in the emergence of new biological structures is not strictly passive.”

Regulatory mechanisms which receive signals characterized by high degrees of uncertainty must be able to make informed choices to reduce the overall entropy of the system they control. This property is usually associated with development of information channels. Special structures ought to be exposed within information channels connecting systems of different character, for example those linking transcription to translation or enabling transduction of signals through the cellular membrane. Examples of structures which convey highly entropic information are receptor systems associated with blood coagulation and immune responses. The regulatory mechanism which triggers an immune response relies on relatively simple effectors (complement factor enzymes, phages and killer cells) coupled to a highly evolved receptor system, represented by specific antibodies and an organized set of cells. Compared to such advanced receptors the structures which register the concentration of a given product (e.g. glucose in blood) are rather primitive. Advanced receptors enable the immune system to recognize and verify information characterized by high degrees of uncertainty. […] In sequential processes it is usually the initial stage which poses the most problems and requires the most information to complete successfully. It should come as no surprise that the most advanced control loops are those associated with initial stages of biological pathways.”

February 10, 2018 Posted by | Biology, Books, Chemistry, Evolutionary biology, Genetics, Immunology, Medicine, Molecular biology | Leave a comment

Systems Biology (I)

This book is really dense and is somewhat tough for me to blog. One significant problem is that “The authors assume that the reader is already familiar with the material covered in a classic biochemistry course.” I know enough biochem to follow most of the stuff in this book, and I was definitely quite happy to have recently read John Finney’s book on the biochemical properties of water and Christopher Hall’s introduction to materials science, as both of those books’ coverage turned out to be highly relevant (these are far from the only relevant books I’ve read semi-recently – Atkins’ introduction to thermodynamics is another book that springs to mind) – but even so, what do you leave out when writing a post like this? I decided to leave out a lot. Posts covering books like this one are hard to write because it’s so easy for them to blow up in your face: you have to include so many details for the material included in the post to even start to make sense to people who didn’t read the original text. And if you leave out all the details, what’s really left? It’s difficult…

Anyway, some observations from the first chapters of the book below.

“[T]he biological world consists of self-managing and self-organizing systems which owe their existence to a steady supply of energy and information. Thermodynamics introduces a distinction between open and closed systems. Reversible processes occurring in closed systems (i.e. independent of their environment) automatically gravitate toward a state of equilibrium which is reached once the velocity of a given reaction in both directions becomes equal. When this balance is achieved, we can say that the reaction has effectively ceased. In a living cell, a similar condition occurs upon death. Life relies on certain spontaneous processes acting to unbalance the equilibrium. Such processes can only take place when substrates and products of reactions are traded with the environment, i.e. they are only possible in open systems. In turn, achieving a stable level of activity in an open system calls for regulatory mechanisms. When the reaction consumes or produces resources that are exchanged with the outside world at an uneven rate, the stability criterion can only be satisfied via a negative feedback loop […] cells and living organisms are thermodynamically open systems […] all structures which play a role in balanced biological activity may be treated as components of a feedback loop. This observation enables us to link and integrate seemingly unrelated biological processes. […] the biological structures most directly involved in the functions and mechanisms of life can be divided into receptors, effectors, information conduits and elements subject to regulation (reaction products and action results). Exchanging these elements with the environment requires an inflow of energy. Thus, living cells are — by their nature — open systems, requiring an energy source […] A thermodynamically open system lacking equilibrium due to a steady inflow of energy in the presence of automatic regulation is […] a good theoretical model of a living organism. […] Pursuing growth and adapting to changing environmental conditions calls for specialization which comes at the expense of reduced universality. A specialized cell is no longer self-sufficient. As a consequence, a need for higher forms of intercellular organization emerges. The structure which provides cells with suitable protection and ensures continued homeostasis is called an organism.”
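To make the feedback-loop framing a bit more concrete, here is a minimal toy simulation of my own (not from the book): an ‘open system’ whose internal concentration is perturbed by an uneven exchange with the environment, plus a negative feedback loop – a receptor comparing the state to a set point and an effector whose output is adjusted accordingly – which keeps the system near a steady state. All parameter values are arbitrary and purely illustrative.

```python
import random

def simulate(steps=500, setpoint=1.0, gain=0.5, dt=0.1, seed=1):
    """Toy open system: uneven exchange with the environment, stabilized by negative feedback."""
    random.seed(seed)
    x = setpoint          # internal concentration (arbitrary units)
    production = 0.5      # effector output, adjusted by the feedback loop
    for _ in range(steps):
        outflow = random.uniform(0.3, 0.7) * x      # uneven exchange with the environment
        error = setpoint - x                        # receptor: compare state to the set point
        production = max(production + gain * error * dt, 0.0)  # negative feedback on the effector
        x += (production - outflow) * dt            # open-system balance of inflow and outflow
    return x

print(f"concentration after simulation: {simulate():.2f} (set point 1.0)")
```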

“In biology, structure and function are tightly interwoven. This phenomenon is closely associated with the principles of evolution. Evolutionary development has produced structures which enable organisms to develop and maintain their architecture, perform actions and store the resources needed to survive. For this reason we introduce a distinction between support structures (which are akin to construction materials), function-related structures (fulfilling the role of tools and machines), and storage structures (needed to store important substances, achieving a compromise between tight packing and ease of access). […] Biology makes extensive use of small-molecule structures and polymers. The physical properties of polymer chains make them a key building block in biological structures. There are several reasons as to why polymers are indispensable in nature […] Sequestration of resources is subject to two seemingly contradictory criteria: 1. Maximize storage density; 2. Perform sequestration in such a way as to allow easy access to resources. […] In most biological systems, storage applies to energy and information. Other types of resources are only occasionally stored […]. Energy is stored primarily in the form of saccharides and lipids. Saccharides are derivatives of glucose, rendered insoluble (and thus easy to store) via polymerization. Their polymerized forms, stabilized with α-glycosidic bonds, include glycogen (in animals) and starch (in plants). […] It should be noted that the somewhat loose packing of polysaccharides […] makes them unsuitable for storing large amounts of energy. In a typical human organism only ca. 600 kcal of energy is stored in the form of glycogen, while (under normal conditions) more than 100,000 kcal exists as lipids. Lipid deposits usually assume the form of triglycerides (triacylglycerols). Their properties can be traced to the similarities between fatty acids and hydrocarbons. Storage efficiency (i.e. the amount of energy stored per unit of mass) is twice that of polysaccharides, while access remains adequate owing to the relatively large surface area and high volume of lipids in the organism.”
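A quick back-of-the-envelope check of those storage figures (my own arithmetic, using the standard textbook energy densities of roughly 4 kcal/g for carbohydrate and 9 kcal/g for fat; the daily energy expenditure is likewise an assumed round number, none of this is from the book):

```python
# Rough bookkeeping of human energy stores; all inputs are standard round
# numbers (assumed), not figures taken from the book.
KCAL_PER_G_CARB = 4.0      # polysaccharide, ignoring the water bound to glycogen
KCAL_PER_G_FAT = 9.0       # triglycerides
DAILY_EXPENDITURE = 2000   # kcal/day, assumed

glycogen_kcal = 600
lipid_kcal = 100_000

print(f"glycogen store ~ {glycogen_kcal / KCAL_PER_G_CARB:.0f} g "
      f"({glycogen_kcal / DAILY_EXPENDITURE:.1f} days of energy)")
print(f"lipid store    ~ {lipid_kcal / KCAL_PER_G_FAT / 1000:.1f} kg "
      f"({lipid_kcal / DAILY_EXPENDITURE:.0f} days of energy)")
# Per gram, fat stores ~2.25x the energy of polysaccharide -- the 'storage
# efficiency twice that of polysaccharides' noted above, and the gap widens
# further in practice because glycogen is stored heavily hydrated.
```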

“Most living organisms store information in the form of tightly-packed DNA strands. […] It should be noted that only a small percentage of DNA (a few per cent) conveys biologically relevant information. The purpose of the remaining ballast is to enable suitable packing and exposure of these important fragments. If all of DNA were to consist of useful code, it would be nearly impossible to devise a packing strategy guaranteeing access to all of the stored information.”

“The seemingly endless diversity of biological functions frustrates all but the most persistent attempts at classification. For the purpose of this handbook we assume that each function can be associated either with a single cell or with a living organism. In both cases, biological functions are strictly subordinate to automatic regulation, based — in a stable state — on negative feedback loops, and in processes associated with change (for instance in embryonic development) — on automatic execution of predetermined biological programs. Individual components of a cell cannot perform regulatory functions on their own […]. Thus, each element involved in the biological activity of a cell or organism must necessarily participate in a regulatory loop based on processing information.”

“Proteins are among the most basic active biological structures. Most of the well-known proteins studied thus far perform effector functions: this group includes enzymes, transport proteins, certain immune system components (complement factors) and myofibrils. Their purpose is to maintain biological systems in a steady state. Our knowledge of receptor structures is somewhat poorer […] Simple structures, including individual enzymes and components of multienzyme systems, can be treated as “tools” available to the cell, while advanced systems, consisting of many mechanically-linked tools, resemble machines. […] Machinelike mechanisms are readily encountered in living cells. A classic example is fatty acid synthesis, performed by dedicated machines called synthases. […] Multiunit structures acting as machines can be encountered wherever complex biochemical processes need to be performed in an efficient manner. […] If the purpose of a machine is to generate motion then a thermally powered machine can accurately be called a motor. This type of action is observed e.g. in myocytes, where transmission involves reordering of protein structures using the energy generated by hydrolysis of high-energy bonds.”

“In biology, function is generally understood as specific physicochemical action, almost universally mediated by proteins. Most such actions are reversible which means that a single protein molecule may perform its function many times. […] Since spontaneous noncovalent surface interactions are very infrequent, the shape and structure of active sites — with high concentrations of hydrophobic residues — make them the preferred area of interaction between functional proteins and their ligands. They alone provide the appropriate conditions for the formation of hydrogen bonds; moreover, their structure may determine the specific nature of interaction. The functional bond between a protein and a ligand is usually noncovalent and therefore reversible.”

“In general terms, we can state that enzymes accelerate reactions by lowering activation energies for processes which would otherwise occur very slowly or not at all. […] The activity of enzymes goes beyond synthesizing a specific protein-ligand complex (as in the case of antibodies or receptors) and involves an independent catalytic attack on a selected bond within the ligand, precipitating its conversion into the final product. The relative independence of both processes (binding of the ligand in the active site and catalysis) is evidenced by the phenomenon of noncompetitive inhibition […] Kinetic studies of enzymes have provided valuable insight into the properties of enzymatic inhibitors — an important field of study in medicine and drug research. Some inhibitors, particularly competitive ones (i.e. inhibitors which outcompete substrates for access to the enzyme), are now commonly used as drugs. […] Physical and chemical processes may only occur spontaneously if they generate energy, or non-spontaneously if they consume it. However, all processes occurring in a cell must have a spontaneous character because only these processes may be catalyzed by enzymes. Enzymes merely accelerate reactions; they do not provide energy. […] The change in enthalpy associated with a chemical process may be calculated as a net difference in the sum of molecular binding energies prior to and following the reaction. Entropy is a measure of the likelihood that a physical system will enter a given state. Since chaotic distribution of elements is considered the most probable, physical systems exhibit a general tendency to gravitate towards chaos. Any form of ordering is thermodynamically disadvantageous.”
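The competitive/noncompetitive distinction mentioned above has a standard kinetic signature – this is ordinary Michaelis–Menten algebra, not something specific to this book: a competitive inhibitor raises the apparent Km but leaves Vmax unchanged (so it can be outcompeted by adding more substrate), whereas a pure noncompetitive inhibitor lowers Vmax. A small sketch with made-up parameter values:

```python
def rate_competitive(S, I, Vmax, Km, Ki):
    """Michaelis-Menten rate with a competitive inhibitor (raises the apparent Km)."""
    return Vmax * S / (Km * (1 + I / Ki) + S)

def rate_noncompetitive(S, I, Vmax, Km, Ki):
    """Michaelis-Menten rate with a pure noncompetitive inhibitor (lowers Vmax)."""
    return (Vmax / (1 + I / Ki)) * S / (Km + S)

# Made-up parameter values (arbitrary units), purely illustrative:
Vmax, Km, Ki, I = 1.0, 0.5, 0.2, 0.4

for S in (0.1, 1.0, 10.0, 100.0):
    print(f"[S] = {S:6.1f}   competitive: {rate_competitive(S, I, Vmax, Km, Ki):.3f}   "
          f"noncompetitive: {rate_noncompetitive(S, I, Vmax, Km, Ki):.3f}")
# At high [S] the competitive case approaches Vmax (the substrate outcompetes the
# inhibitor), while the noncompetitive case plateaus well below Vmax.
```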

“The chemical reactions which power biological processes are characterized by varying degrees of efficiency. In general, they tend to be on the lower end of the efficiency spectrum, compared to energy sources which drive matter transformation processes in our universe. In search of a common criterion to describe the efficiency of various energy sources, we can refer to the net loss of mass associated with a release of energy, according to Einstein’s formula:
E = mc²
The ΔM/M coefficient (relative loss of mass, given e.g. in %) allows us to compare the efficiency of energy sources. The most efficient processes are those involved in the gravitational collapse of stars. Their efficiency may reach 40 %, which means that 40 % of the stationary mass of the system is converted into energy. In comparison, nuclear reactions have an approximate efficiency of 0.8 %. The efficiency of chemical energy sources available to biological systems is incomparably lower and amounts to approximately 10^-7 % […]. Among chemical reactions, the most potent sources of energy are found in oxidation processes, commonly exploited by biological systems. Oxidation tends to result in the largest net release of energy per unit of mass, although the efficiency of specific types of oxidation varies. […] given unrestricted access to atmospheric oxygen and to hydrogen atoms derived from hydrocarbons — the combustion of hydrogen (i.e. the synthesis of water; H2 + 1/2O2 = H2O) has become a principal source of energy in nature, next to photosynthesis, which exploits the energy of solar radiation. […] The basic process associated with the release of hydrogen and its subsequent oxidation (called the Krebs cycle) is carried out by processes which transfer electrons onto oxygen atoms […]. Oxidation occurs in stages, enabling optimal use of the released energy. An important byproduct of water synthesis is the universal energy carrier known as ATP (synthesized separately). As water synthesis is a highly spontaneous process, it can be exploited to cover the energy debt incurred by endergonic synthesis of ATP, as long as both processes are thermodynamically coupled, enabling spontaneous catalysis of anhydride bonds in ATP. Water synthesis is a universal source of energy in heterotrophic systems. In contrast, autotrophic organisms rely on the energy of light which is exploited in the process of photosynthesis. Both processes yield ATP […] Preparing nutrients (hydrogen carriers) for participation in water synthesis follows different paths for sugars, lipids and proteins. This is perhaps obvious given their relative structural differences; however, in all cases the final form, which acts as a substrate for dehydrogenases, is acetyl-CoA”.
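Out of curiosity I checked the ~10^-7 % figure for hydrogen combustion using E = mc² (my own arithmetic; the ~286 kJ/mol enthalpy of water formation is a standard value, and I express the mass loss relative to the mass of the hydrogen fuel):

```python
# Relative mass loss (deltaM/M) for hydrogen combustion, via E = mc^2.
# Standard reference values (assumed, not from the book):
E_per_mol = 286e3      # J released per mole of H2 burned (liquid water formed)
m_fuel = 2.016e-3      # kg of hydrogen fuel per mole
c = 2.998e8            # speed of light, m/s

delta_m = E_per_mol / c**2                            # mass equivalent of the released energy
print(f"deltaM/M = {delta_m / m_fuel * 100:.1e} %")   # ~1.6e-07 %, i.e. roughly the 10^-7 % quoted
```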

“Photosynthesis is a process which — from the point of view of electron transfer — can be treated as a counterpart of the respiratory chain. In heterotrophic organisms, mitochondria transport electrons from hydrogenated compounds (sugars, lipids, proteins) onto oxygen molecules, synthesizing water in the process, whereas in the course of photosynthesis electrons released by breaking down water molecules are used as a means of reducing oxidised carbon compounds […]. In heterotrophic organisms the respiratory chain has a spontaneous quality (owing to its oxidative properties); however any reverse process requires energy to occur. In the case of photosynthesis this energy is provided by sunlight […] Hydrogen combustion and photosynthesis are the basic sources of energy in the living world. […] For an energy source to become useful, non-spontaneous reactions must be coupled to its operation, resulting in a thermodynamically unified system. Such coupling can be achieved by creating a coherent framework in which the spontaneous and non-spontaneous processes are linked, either physically or chemically, using a bridging component which affects them both. If the properties of both reactions are different, the bridging component must also enable suitable adaptation and mediation. […] Direct exploitation of the energy released via the hydrolysis of ATP is usually possible by introducing an active binding carrier mediating the energy transfer. […] Carriers are considered active as long as their concentration ensures a sufficient release of energy to synthesize a new chemical bond by way of a non-spontaneous process. Active carriers are relatively short-lived […] Any active carrier which performs its function outside of the active site must be sufficiently stable to avoid breaking up prior to participating in the synthesis reaction. Such mobile carriers are usually produced when the required synthesis consists of several stages or cannot be conducted in the active site of the enzyme for steric reasons. Contrary to ATP, active energy carriers are usually reaction-specific. […] Mobile energy carriers are usually formed as a result of hydrolysis of two high-energy ATP bonds. In many cases this is the minimum amount of energy required to power a reaction which synthesizes a single chemical bond. […] Expelling a mobile or unstable reaction component in order to increase the spontaneity of active energy carrier synthesis is a process which occurs in many biological mechanisms […] The action of active energy carriers may be compared to a ball rolling down a hill. The descending ball gains sufficient energy to traverse another, smaller mound, adjacent to its starting point. In our case, the smaller hill represents the final synthesis reaction […] Understanding the role of active carriers is essential for the study of metabolic processes.”
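The idea of thermodynamic coupling via a shared (‘bridging’) component can be illustrated with the standard textbook example of glucose phosphorylation – not an example taken from this book: on its own the phosphorylation is endergonic, but coupled to ATP hydrolysis through a common intermediate the summed free-energy change is negative, so the overall process can proceed spontaneously (and thus be enzyme-catalyzed). Standard ΔG°′ values are used below.

```python
# Coupling a non-spontaneous synthesis to ATP hydrolysis (standard deltaG values, kJ/mol):
dG_glucose_phosphorylation = +13.8   # glucose + Pi -> glucose-6-phosphate + H2O
dG_ATP_hydrolysis = -30.5            # ATP + H2O -> ADP + Pi

dG_coupled = dG_glucose_phosphorylation + dG_ATP_hydrolysis
print(f"coupled reaction deltaG = {dG_coupled:+.1f} kJ/mol "
      f"({'spontaneous' if dG_coupled < 0 else 'non-spontaneous'})")
```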

“A second category of processes, directly dependent on energy sources, involves structural reconfiguration of proteins, which can be further differentiated into low and high-energy reconfiguration. Low-energy reconfiguration occurs in proteins which form weak, easily reversible bonds with ligands. In such cases, structural changes are powered by the energy released in the creation of the complex. […] Important low-energy reconfiguration processes may occur in proteins which consist of subunits. Structural changes resulting from relative motion of subunits typically do not involve significant expenditures of energy. Of particular note are the so-called allosteric proteins […] whose rearrangement is driven by a weak and reversible bond between the protein and an oxygen molecule. Allosteric proteins are genetically conditioned to possess two stable structural configurations, easily swapped as a result of binding or releasing ligands. Thus, they tend to have two comparable energy minima (separated by a low threshold), each of which may be treated as a global minimum corresponding to the native form of the protein. Given such properties, even a weakly interacting ligand may trigger significant structural reconfiguration. This phenomenon is of critical importance to a variety of regulatory proteins. In many cases, however, the second potential minimum in which the protein may achieve relative stability is separated from the global minimum by a high threshold requiring a significant expenditure of energy to overcome. […] In contrast to low-energy reconfigurations, the relative difference in ligand concentrations is insufficient to cover the cost of a difficult structural change. Such processes are therefore coupled to highly exergonic reactions such as ATP hydrolysis. […] The link between a biological process and an energy source does not have to be immediate. Indirect coupling occurs when the process is driven by relative changes in the concentration of reaction components. […] In general, high-energy reconfigurations exploit direct coupling mechanisms while indirect coupling is more typical of low-energy processes”.

“Muscle action requires a major expenditure of energy. There is a nonlinear dependence between the degree of physical exertion and the corresponding energy requirements. […] Training may improve the power and endurance of muscle tissue. Muscle fibers subjected to regular exertion may improve their glycogen storage capacity, ATP production rate, oxidative metabolism and the use of fatty acids as fuel.”

February 4, 2018 Posted by | Biology, Books, Chemistry, Genetics, Molecular biology, Pharmacology, Physics | Leave a comment

Lakes (II)

(I have had some computer issues over the last couple of weeks, which explains my brief blogging hiatus, but they should be resolved by now, and as I’m already starting to fall quite a bit behind on my intended coverage of the books I’ve read this year, I hope to get rid of some of the backlog in the days to come.)

I have added some more observations from the second half of the book, as well as some related links, below.

“[R]ecycling of old plant material is especially important in lakes, and one way to appreciate its significance is to measure the concentration of CO2, an end product of decomposition, in the surface waters. This value is often above, sometimes well above, the value to be expected from equilibration of this gas with the overlying air, meaning that many lakes are net producers of CO2 and that they emit this greenhouse gas to the atmosphere. How can that be? […] Lakes are not sealed microcosms that function as stand-alone entities; on the contrary, they are embedded in a landscape and are intimately coupled to their terrestrial surroundings. Organic materials are produced within the lake by the phytoplankton, photosynthetic cells that are suspended in the water and that fix CO2, release oxygen (O2), and produce biomass at the base of the aquatic food web. Photosynthesis also takes place by attached algae (the periphyton) and submerged water plants (aquatic macrophytes) that occur at the edge of the lake where enough sunlight reaches the bottom to allow their growth. But additionally, lakes are the downstream recipients of terrestrial runoff from their catchments […]. These continuous inputs include not only water, but also subsidies of plant and soil organic carbon that are washed into the lake via streams, rivers, groundwater, and overland flows. […] The organic carbon entering lakes from the catchment is referred to as ‘allochthonous’, meaning coming from the outside, and it tends to be relatively old […] In contrast, much younger organic carbon is available […] as a result of recent photosynthesis by the phytoplankton and littoral communities; this carbon is called ‘autochthonous’, meaning that it is produced within the lake.”

“It used to be thought that most of the dissolved organic matter (DOM) entering lakes, especially the coloured fraction, was unreactive and that it would transit the lake to ultimately leave unchanged at the outflow. However, many experiments and field observations have shown that this coloured material can be partially broken down by sunlight. These photochemical reactions result in the production of CO2, and also the degradation of some of the organic polymers into smaller organic molecules; these in turn are used by bacteria and decomposed to CO2. […] Most of the bacterial species in lakes are decomposers that convert organic matter into mineral end products […] This sunlight-driven chemistry begins in the rivers, and continues in the surface waters of the lake. Additional chemical and microbial reactions in the soil also break down organic materials and release CO2 into the runoff and ground waters, further contributing to the high concentrations in lake water and its emission to the atmosphere. In algal-rich ‘eutrophic’ lakes there may be sufficient photosynthesis to cause the drawdown of CO2 to concentrations below equilibrium with the air, resulting in the reverse flux of this gas, from the atmosphere into the surface waters.”

“There is a precarious balance in lakes between oxygen gains and losses, despite the seemingly limitless quantities in the overlying atmosphere. This balance can sometimes tip to deficits that send a lake into oxygen bankruptcy, with the O2 mostly or even completely consumed. Waters that have O2 concentrations below 2mg/L are referred to as ‘hypoxic’, and will be avoided by most fish species, while waters in which there is a complete absence of oxygen are called ‘anoxic’ and are mostly the domain for specialized, hardy microbes. […] In many temperate lakes, mixing in spring and again in autumn are the critical periods of re-oxygenation from the overlying atmosphere. In summer, however, the thermocline greatly slows down that oxygen transfer from air to deep water, and in cooler climates, winter ice-cover acts as another barrier to oxygenation. In both of these seasons, the oxygen absorbed into the water during earlier periods of mixing may be rapidly consumed, leading to anoxic conditions. Part of the reason that lakes are continuously on the brink of anoxia is that only limited quantities of oxygen can be stored in water because of its low solubility. The concentration of oxygen in the air is 209 millilitres per litre […], but cold water in equilibrium with the atmosphere contains only 9ml/L […]. This scarcity of oxygen worsens with increasing temperature (from 4°C to 30°C the solubility of oxygen falls by 43 per cent), and it is compounded by faster rates of bacterial decomposition in warmer waters and thus a higher respiratory demand for oxygen.”
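The 43 per cent figure is easy to reproduce from standard oxygen-solubility values for fresh water in equilibrium with the atmosphere (the numbers below are standard reference values, not taken from the book):

```python
# Dissolved O2 at equilibrium with air, fresh water at 1 atm (standard values, mg/L):
o2_solubility = {4: 13.1, 10: 11.3, 20: 9.1, 30: 7.6}

drop = 1 - o2_solubility[30] / o2_solubility[4]
print(f"solubility drop from 4 C to 30 C: {drop * 100:.0f} %")   # ~42-43 %

hypoxia_threshold = 2.0   # mg/L, the 'hypoxic' cutoff mentioned above
for temp, conc in o2_solubility.items():
    print(f"{temp:>2} C: saturated water holds only {conc / hypoxia_threshold:.1f}x the hypoxia threshold")
```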

“Lake microbiomes play multiple roles in food webs as producers, parasites, and consumers, and as steps into the animal food chain […]. These diverse communities of microbes additionally hold centre stage in the vital recycling of elements within the lake ecosystem […]. These biogeochemical processes are not simply of academic interest; they totally alter the nutritional value, mobility, and even toxicity of elements. For example, sulfate is the most oxidized and also most abundant form of sulfur in natural waters, and it is the ion taken up by phytoplankton and aquatic plants to meet their biochemical needs for this element. These photosynthetic organisms reduce the sulfate to organic sulfur compounds, and once they die and decompose, bacteria convert these compounds to the rotten-egg smelling gas, H2S, which is toxic to most aquatic life. In anoxic waters and sediments, this effect is amplified by bacterial sulfate reducers that directly convert sulfate to H2S. Fortunately another group of bacteria, sulfur oxidizers, can use H2S as a chemical energy source, and in oxygenated waters they convert this reduced sulfur back to its benign, oxidized, sulfate form. […] [The] acid neutralizing capacity (or ‘alkalinity’) varies greatly among lakes. Many lakes in Europe, North America, and Asia have been dangerously shifted towards a low pH because they lacked sufficient carbonate to buffer the continuous input of acid rain that resulted from industrial pollution of the atmosphere. The acid conditions have negative effects on aquatic animals, including by causing a shift in aluminium to its more soluble and toxic form Al3+. Fortunately, these industrial emissions have been regulated and reduced in most of the developed world, although there are still legacy effects of acid rain that have resulted in a long-term depletion of carbonates and associated calcium in certain watersheds.”

“Rotifers, cladocerans, and copepods are all planktonic, that is their distribution is strongly affected by currents and mixing processes in the lake. However, they are also swimmers, and can regulate their depth in the water. For the smallest such as rotifers and copepods, this swimming ability is limited, but the larger zooplankton are able to swim over an impressive depth range during the twenty-four-hour ‘diel’ (i.e. light–dark) cycle. […] the cladocerans in Lake Geneva reside in the thermocline region and deep epilimnion during the day, and swim upwards by about 10m during the night, while cyclopoid copepods swim up by 60m, returning to the deep, dark, cold waters of the profundal zone during the day. Even greater distances up and down the water column are achieved by larger animals. The opossum shrimp, Mysis (up to 25mm in length) lives on the bottom of lakes during the day and in Lake Tahoe it swims hundreds of metres up into the surface waters, although not on moon-lit nights. In Lake Baikal, one of the main zooplankton species is the endemic amphipod, Macrohectopus branickii, which grows up to 38mm in size. It can form dense swarms at 100–200m depth during the day, but the populations then disperse and rise to the upper waters during the night. These nocturnal migrations connect the pelagic surface waters with the profundal zone in lake ecosystems, and are thought to be an adaptation towards avoiding visual predators, especially pelagic fish, during the day, while accessing food in the surface waters under the cover of nightfall. […] Although certain fish species remain within specific zones of the lake, there are others that swim among zones and access multiple habitats. […] This type of fish migration means that the different parts of the lake ecosystem are ecologically connected. For many fish species, moving between habitats extends all the way to the ocean. Anadromous fish migrate out of the lake and swim to the sea each year, and although this movement comes at considerable energetic cost, it has the advantage of access to rich marine food sources, while allowing the young to be raised in the freshwater environment with less exposure to predators. […] With the converse migration pattern, catadromous fish live in freshwater and spawn in the sea.”

“Invasive species that are the most successful and do the most damage once they enter a lake have a number of features in common: fast growth rates, broad tolerances, the capacity to thrive under high population densities, and an ability to disperse and colonize that is enhanced by human activities. Zebra mussels (Dreissena polymorpha) get top marks in each of these categories, and they have proven to be a troublesome invader in many parts of the world. […] A single Zebra mussel can produce up to one million eggs over the course of a spawning season, and these hatch into readily dispersed larvae (‘veligers’), that are free-swimming for up to a month. The adults can achieve densities up to hundreds of thousands per square metre, and their prolific growth within water pipes has been a serious problem for the cooling systems of nuclear and thermal power stations, and for the intake pipes of drinking water plants. A single Zebra mussel can filter a litre a day, and they have the capacity to completely strip the water of bacteria and protists. In Lake Erie, the water clarity doubled and diatoms declined by 80–90 per cent soon after the invasion of Zebra mussels, with a concomitant decline in zooplankton, and potential impacts on planktivorous fish. The invasion of this species can shift a lake from dominance of the pelagic to the benthic food web, but at the expense of native unionid clams on the bottom that can become smothered in Zebra mussels. Their efficient filtering capacity may also cause a regime shift in primary producers, from turbid waters with high concentrations of phytoplankton to a clearer lake ecosystem state in which benthic water plants dominate.”
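To get a feel for what ‘completely strip the water’ means in practice, here is a rough filtration calculation of my own – the mussel density and the water depth are assumed, illustrative values (the density is the low end of the ‘hundreds of thousands per square metre’ range quoted above):

```python
# How fast could a dense Zebra mussel bed filter the water column above it?
# Assumed, illustrative values:
mussels_per_m2 = 100_000           # low end of 'hundreds of thousands per square metre'
litres_per_mussel_per_day = 1.0    # filtration rate quoted above
water_depth_m = 10                 # assumed depth of the overlying water column

litres_above_one_m2 = water_depth_m * 1000                     # 1 m3 = 1000 L
filtered_per_day = mussels_per_m2 * litres_per_mussel_per_day
hours_to_filter_once = litres_above_one_m2 / filtered_per_day * 24
print(f"time to filter the overlying water column once: {hours_to_filter_once:.1f} hours")
```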

“One of the many distinguishing features of H2O is its unusually high dielectric constant, meaning that it is a strongly polar solvent with positive and negative charges that can stabilize ions brought into solution. This dielectric property results from the asymmetrical electron cloud over the molecule […] and it gives liquid water the ability to leach minerals from rocks and soils as it passes through the ground, and to maintain these salts in solution, even at high concentrations. Collectively, these dissolved minerals produce the salinity of the water […] Sea water is around 35ppt, and its salinity is mainly due to the positively charged ions sodium (Na+), potassium (K+), magnesium (Mg2+), and calcium (Ca2+), and the negatively charged ions chloride (Cl-), sulfate (SO42-), and carbonate (CO32-). These solutes, collectively called the ‘major ions’, conduct electrical current, and therefore a simple way to track salinity is to measure the electrical conductance of the water between two electrodes set a known distance apart. Lake and ocean scientists now routinely take profiles of salinity and temperature with a CTD: a submersible instrument that records conductance, temperature, and depth many times per second as it is lowered on a rope or wire down the water column. Conductance is measured in Siemens (or microSiemens (µS), given the low salt concentrations in freshwater lakes), and adjusted to a standard temperature of 25°C to give specific conductivity in µS/cm. All freshwater lakes contain dissolved minerals, with specific conductivities in the range 50–500µS/cm, while salt water lakes have values that can exceed sea water (about 50,000µS/cm), and are the habitats for extreme microbes”.
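The ‘adjusted to a standard temperature of 25°C’ step is typically done with a simple linear temperature compensation; the sketch below uses a coefficient of roughly 2 per cent per °C, a widely used approximation for natural waters rather than a formula given in the book:

```python
def specific_conductivity(conductance_uS_cm, temp_c, alpha=0.0191):
    """Temperature-compensate a raw conductance reading to the 25 C standard.

    alpha ~ 0.019-0.02 per deg C is a commonly used linear coefficient for natural waters.
    """
    return conductance_uS_cm / (1 + alpha * (temp_c - 25.0))

# Example: a raw reading of 180 uS/cm taken in 8 C lake water
print(f"specific conductivity: {specific_conductivity(180, 8):.0f} uS/cm at 25 C")   # ~266 uS/cm
```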

“The World Register of Dams currently lists 58,519 ‘large dams’, defined as those with a dam wall of 15m or higher; these collectively store 16,120km3 of water, equivalent to 213 years of flow of Niagara Falls on the USA–Canada border. […] Around a hundred large dam projects are in advanced planning or construction in Africa […]. More than 300 dams are planned or under construction in the Amazon Basin of South America […]. Reservoirs have a number of distinguishing features relative to natural lakes. First, the shape (‘morphometry’) of their basins is rarely circular or oval, but instead is often dendritic, with a tree-like main stem and branches ramifying out into the submerged river valleys. Second, reservoirs typically have a high catchment area to lake area ratio, again reflecting their riverine origins. For natural lakes, this ratio is relatively low […] These proportionately large catchments mean that reservoirs have short water residence times, and water quality is much better than might be the case in the absence of this rapid flushing. Nonetheless, noxious algal blooms can develop and accumulate in isolated bays and side-arms, and downstream next to the dam itself. Reservoirs typically experience water level fluctuations that are much larger and more rapid than in natural lakes, and this limits the development of littoral plants and animals. Another distinguishing feature of reservoirs is that they often show a longitudinal gradient of conditions. Upstream, the river section contains water that is flowing, turbulent, and well mixed; this then passes through a transition zone into the lake section up to the dam, which is often the deepest part of the lake and may be stratified and clearer due to decantation of land-derived particles. In some reservoirs, the water outflow is situated near the base of the dam within the hypolimnion, and this reduces the extent of oxygen depletion and nutrient build-up, while also providing cool water for fish and other animal communities below the dam. There is increasing attention being given to careful regulation of the timing and magnitude of dam outflows to maintain these downstream ecosystems. […] The downstream effects of dams continue out into the sea, with the retention of sediments and nutrients in the reservoir leaving less available for export to marine food webs. This reduction can also lead to changes in shorelines, with a retreat of the coastal delta and intrusion of seawater because natural erosion processes can no longer be offset by resupply of sediments from upstream.”
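As a sanity check on the Niagara comparison (my own arithmetic; the long-term average flow over Niagara Falls of roughly 2,400 m³ per second is a commonly cited figure, not a number from the book):

```python
# Does 16,120 km3 of reservoir storage really correspond to ~213 years of Niagara flow?
storage_km3 = 16_120
niagara_flow_m3_s = 2_400          # commonly cited long-term average flow (assumed)
seconds_per_year = 365.25 * 24 * 3600

annual_flow_km3 = niagara_flow_m3_s * seconds_per_year / 1e9   # m3 -> km3
print(f"annual Niagara flow ~ {annual_flow_km3:.0f} km3")
print(f"storage / annual flow ~ {storage_km3 / annual_flow_km3:.0f} years")   # ~213
```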

“One of the most serious threats facing lakes throughout the world is the proliferation of algae and water plants caused by eutrophication, the overfertilization of waters with nutrients from human activities. […] Nutrient enrichment occurs both from ‘point sources’ of effluent discharged via pipes into the receiving waters, and ‘nonpoint sources’ such as the runoff from roads and parking areas, agricultural lands, septic tank drainage fields, and terrain cleared of its nutrient- and water-absorbing vegetation. By the 1970s, even many of the world’s larger lakes had begun to show worrying signs of deterioration from these sources of increasing enrichment. […] A sharp drop in water clarity is often among the first signs of eutrophication, although in forested areas this effect may be masked for many years by the greater absorption of light by the coloured organic materials that are dissolved within the lake water. A drop in oxygen levels in the bottom waters during stratification is another telltale indicator of eutrophication, with the eventual fall to oxygen-free (anoxic) conditions in these lower strata of the lake. However, the most striking impact with greatest effect on ecosystem services is the production of harmful algal blooms (HABs), specifically by cyanobacteria. In eutrophic, temperate latitude waters, four genera of bloom-forming cyanobacteria are the usual offenders […]. These may occur alone or in combination, and although each has its own idiosyncratic size, shape, and lifestyle, they have a number of impressive biological features in common. First and foremost, their cells are typically full of hydrophobic protein cases that exclude water and trap gases. These honeycombs of gas-filled chambers, called ‘gas vesicles’, reduce the density of the cells, allowing them to float up to the surface where there is light available for growth. Put a drop of water from an algal bloom under a microscope and it will be immediately apparent that the individual cells are extremely small, and that the bloom itself is composed of billions of cells per litre of lake water.”

“During the day, the [algal] cells capture sunlight and produce sugars by photosynthesis; this increases their density, eventually to the point where they are heavier than the surrounding water and sink to more nutrient-rich conditions at depth in the water column or at the sediment surface. These sugars are depleted by cellular respiration, and this loss of ballast eventually results in cells becoming less dense than water and floating again towards the surface. This alternation of sinking and floating can result in large fluctuations in surface blooms over the twenty-four-hour cycle. The accumulation of bloom-forming cyanobacteria at the surface gives rise to surface scums that then can be blown into bays and washed up onto beaches. These dense populations of colonies in the water column, and especially at the surface, can shade out bottom-dwelling water plants, as well as greatly reduce the amount of light for other phytoplankton species. The resultant ‘cyanobacterial dominance’ and loss of algal species diversity has negative implications for the aquatic food web […] This negative impact on the food web may be compounded by the final collapse of the bloom and its decomposition, resulting in a major drawdown of oxygen. […] Bloom-forming cyanobacteria are especially troublesome for the management of drinking water supplies. First, there is the overproduction of biomass, which results in a massive load of algal particles that can exceed the filtration capacity of a water treatment plant […]. Second, there is an impact on the taste of the water. […] The third and most serious impact of cyanobacteria is that some of their secondary compounds are highly toxic. […] phosphorus is the key nutrient limiting bloom development, and efforts to preserve and rehabilitate freshwaters should pay specific attention to controlling the input of phosphorus via point and nonpoint discharges to lakes.”
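The sinking/floating alternation can be put in rough physical terms with Stokes’ law for a small sphere; the colony radius and the density offsets in the sketch below are illustrative assumptions of mine, not values from the book:

```python
# Stokes velocity for a small sphere: v = (2/9) * (rho_colony - rho_water) * g * r^2 / mu
G = 9.81            # m/s^2
MU = 1.0e-3         # Pa*s, viscosity of water near 20 C
RHO_WATER = 998.0   # kg/m^3

def stokes_velocity(radius_m, colony_density):
    """Positive = sinking, negative = floating upward."""
    return 2 / 9 * (colony_density - RHO_WATER) * G * radius_m**2 / MU

r = 50e-6   # assumed 50-micron colony radius
for label, rho in (("day (sugar ballast added)", 1018.0), ("night (ballast respired)", 978.0)):
    v = stokes_velocity(r, rho)
    print(f"{label:26s}: {abs(v) * 3600:.2f} m/h {'downward' if v > 0 else 'upward'}")
```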

Ultramicrobacteria.
The viral shunt in marine foodwebs.
Proteobacteria. Alphaproteobacteria. Betaproteobacteria. Gammaproteobacteria.
Mixotroph.
Carbon cycle. Nitrogen cycle. Ammonification. Anammox. Comammox.
Methanotroph.
Phosphorus cycle.
Littoral zone. Limnetic zone. Profundal zone. Benthic zone. Benthos.
Phytoplankton. Diatom. Picoeukaryote. Flagellates. Cyanobacteria.
Trophic state (-index).
Amphipoda. Rotifer. Cladocera. Copepod. Daphnia.
Redfield ratio.
δ15N.
Thermistor.
Extremophile. Halophile. Psychrophile. Acidophile.
Caspian Sea. Endorheic basin. Mono Lake.
Alpine lake.
Meromictic lake.
Subglacial lake. Lake Vostok.
Thermus aquaticus. Taq polymerase.
Lake Monoun.
Microcystin. Anatoxin-a.

February 2, 2018 Posted by | Biology, Books, Botany, Chemistry, Ecology, Engineering, Zoology | Leave a comment